id
stringlengths 10
10
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| content
stringlengths 3.91k
873k
| references
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|
2006.08097 | FinBERT: A Pretrained Language Model for Financial Communications | Contextual pretrained language models, such as BERT (Devlin et al., 2019),
have made significant breakthrough in various NLP tasks by training on large
scale of unlabeled text re-sources.Financial sector also accumulates large
amount of financial communication text.However, there is no pretrained finance
specific language models available. In this work,we address the need by
pretraining a financial domain specific BERT models, FinBERT, using a large
scale of financial communication corpora. Experiments on three financial
sentiment classification tasks confirm the advantage of FinBERT over generic
domain BERT model. The code and pretrained models are available at
https://github.com/yya518/FinBERT. We hope this will be useful for
practitioners and researchers working on financial NLP tasks. | http://arxiv.org/pdf/2006.08097 | Yi Yang, Mark Christopher Siy UY, Allen Huang | cs.CL | https://github.com/yya518/FinBERT | null | cs.CL | 20200615 | 20200709 | 2020:
0 2 0 2
l u J 9 ] L C . s c [ 2 v 7 9 0 8 0 . 6 0 0 2 : v i X r a
# FinBERT: A Pretrained Language Model for Financial Communications
# Yi Yang Mark Christopher Siy UY Allen Huang
School of Business and Management, Hong Kong University of Science and Technology {imyiyang,acahuang}@ust.hk, [email protected]
# Abstract
Contextual pretrained language models, such as BERT (Devlin et al., 2019), have made signiï¬cant breakthrough in various NLP tasks by training on large scale of unlabeled text resources. Financial sector also accumulates large amount of ï¬nancial communication text. However, there is no pretrained ï¬nance speciï¬c language models available. In this work, we address the need by pretraining a ï¬nancial domain speciï¬c BERT models, FinBERT, using a large scale of ï¬nancial Experiments on communication corpora. three classiï¬cation ï¬nancial tasks conï¬rm the advantage of FinBERT over generic domain BERT model. The code and pretrained models are available at https://github.com/yya518/FinBERT. We hope this will be useful for practitioners and researchers working on ï¬nancial NLP tasks.
# 1 Introduction
The growing maturity of NLP techniques and re- sources is drastically changing the landscape of ï¬- nanical domain. Capital market practitioners and researchers have keen interests in using NLP tech- niques to monitor market sentiment in real time from online news articles or social media posts, since sentiment can be used as a directional sig- Intuitively, if there is nal for trading purposes. positive information about a particular company, we expect that companyâs stock price to increase, and vice versa. For example, Bloomberg, the ï¬- nancial media company, reports that trading sen- timent portfolios outperform the benchmark in- dex signiï¬cantly (Cui et al., 2016). Prior ï¬nancial economics research also reports that news article and social media sentiment could be used to pre- dict market return and ï¬rm performance (Tetlock, 2007; Tetlock et al., 2008).
Recently, unsupervised pre-training of language models on large corpora has signiï¬cantly im- proved the performance of many NLP tasks. The language models are pretained on generic corpora such as Wikipedia. However, sentiment analysis is a strongly domain dependent task. Financial sector has accumulated large scale of text of ï¬- nancial and business communications. Therefore, leveraging the success of unsupervised pretrain- ing and large amount of ï¬nancial text could poten- tially beneï¬t wide range of ï¬nancial applications. To ï¬ll the gap, we pretrain FinBERT, a ï¬nance domain speciï¬c BERT model on a large ï¬nancial communication corpora of 4.9 billion tokens, in- cluding corporate reports, earnings conference call transcripts and analyst reports. We document the ï¬nancial corpora and the FinBERT pretraining de- tails. Experiments on three ï¬nancial sentiment classiï¬cation tasks shows that FinBERT outper- forms the generic BERT models. Our contribu- tion is straightforward: we compile a large scale of text corpora that are the most representative in ï¬nancial and business communications. We pre- train and release FinBERT, a new resource demon- strated to improve performance on ï¬nancial senti- ment analysis.
# 2 Related Work
lan- Recently, guage models on large corpora, such as BERT (Devlin et al., 2019), ELMo (Peters et al., 2018), ULM-Fit (Howard and Ruder, 2018), XLNet, and GPT (Radford et al., 2019) has signiï¬cantly improved performance on many natural language processing tasks, from sentence classiï¬cation to question answering. Unlike traditional word em- bedding (Mikolov et al., 2013; Pennington et al., 2014) where word is represented as a single vector returns representation,
contextualized embeddings for each word token which can be fed into downstream tasks.
The released language models are trained on general domain corpora such as news articles and Wikipedia. Even though it is easy to ï¬ne tune the language model using downstream task, it has been shown that pre-training a language model using large-scale domain corpora can further im- prove the task performance than ï¬ne-tuning the generic language model. To this end, several domain-speciï¬c BERT models are trained and re- leased. BioBERT (Lee et al., 2019) pretrains a biomedical domain-speciï¬c language representa- tion model using large-scale biomedical corpora. Similarly, ClinicalBERT (Huang et al., 2019) ap- plies BERT model to clinical notes for hospital readmission prediction task, and (Alsentzer et al., 2019) applies BERT on clinical notes and dis- charge summaries. SciBERT (Beltagy et al., 2019) trains a scientiï¬c domain-speciï¬c BERT model using a large multi-domain corpus of sci- entiï¬c publications to improve performance on downstream scientiï¬c NLP tasks. We are the ï¬rst to pre-train and release a ï¬nance domain speciï¬c BERT model.
# 3 Financial Corpora
We compile a large ï¬nancial domain corpora that are most representative in ï¬nance and business communications. Corporate Reports 10-K & 10-Q The most im- portant text data in ï¬nance and business commu- nication is corporate report. In the United States, the Securities Exchange Commission (SEC) man- dates all publicly traded companies to ï¬le annual reports, known as Form 10-K, and quarterly re- ports, known as Form 10-Q. This document pro- vides a comprehensive overview of the companyâs business and ï¬nancial condition. Laws and regula- tions prohibit companies from making materially false or misleading statements in the 10-Ks. The Form 10-Ks and 10-Qs are publicly available and can be accesses from SEC website.1
We obtain 60,490 Form 10-Ks and 142,622 Form 10-Qs of Russell 3000 ï¬rms during 1994 and 2019 from SEC website. We only include sections that are textual components, such as Item 1 (Busi- ness) in 10-Ks, Item 1A (Risk Factors) in both 10- Ks and 10-Qs and Item 7 (Managements Discus- sion and Analysis) in 10-Ks.
1http://www.sec.gov/edgar.shtml
Earnings Call Transcripts Earnings calls are quarterly conference calls that company execu- tives hold with investors and analysts to discuss ï¬rm overall performance. During an earnings call, executives such as CEOs and CFOs read forward- looking statements and provide their information and interpretation of their ï¬rms performance dur- ing the quarter. Analysts also have the opportunity to request managers to clarify information. Insti- tutional and individual investors listen to the earn- ings call and spot the tones of executives that por- tend good or bad news for the company. We ob- tain 136,578 earnings conference call transcripts of 7,740 public ï¬rms between 2004 and 2019. The earnings call transcripts are obtained from the website Seeking Alpha2. Analyst Reports Analyst reports are another use- ful source of information for institutional and in- dividual investors (sri International, 1987). An analyst report typically provides several quantita- tive summary measures, including a stock recom- mendation, an earnings forecast, and sometimes a target price. It also provides a detailed, mostly textual analysis of the company. Institutional in- vestors spend millions of dollars annually to pur- chase the full content of analyst reports to read the written textual analysis. We obtain analyst reports in the Investext database issued for S&P ï¬rms dur- ing the 1995-2008 period, which yields a set of 488,494 reports. Overall Corpora Statistics The total size of all 4 corpora is approximately 4.9 billion tokens. We present the pretraining ï¬nancial corpora statistics in Table1. As a comparison, BERTâs pre-training corpora consists of two textual corpora with a total of 3.3 billion tokens.
Corpus Corporate Reports 10-K & 10-Q 2.5B 1.3B Earnings Call Transcripts 1.1B Analyst Reports # of tokens
Table 1: Size of pretraining ï¬nancial corpora.
# 4 FinBERT Training
Vocabulary We construct FinVocab, a new Word- Piece vocabulary on our ï¬nancial corpora using the SentencePiece library. We produce both cased and uncased versions of FinVocab, with sizes of
2https://seekingalpha.com/
28,573 and 30,873 tokens respectively. This is very similar to the 28,996 and 30,522 token sizes of the original BERT cased and uncased BaseVo- cab. The resulting overlap between between the original BERT BaseVocab, and FinVocab is 41% for both the cased and uncased versions. FinBERT-Variants We use the original BERT code 3 to train FinBERT on our ï¬nancial corpora with the same conï¬guration as BERT-Base. Fol- lowing the original BERT training, we set a max- imum sentence length of 128 tokens, and train the model until the training loss starts to converge. We then continue training the model allowing sen- tence lengths up to 512 tokens. In particular, we train four different versions of FinBERT: cased or uncased; BaseVocab or FinVocab.
FinBERT-BaseVocab, uncased/cased: Model is initialized from the original BERT-Base un- cased/cased model, and is further pretrained on the ï¬nancial corpora for 250K iterations at a smaller learning rate of 2eâ5, which is recommended by BERT code.
FinBERT-FinVocab, uncased/cased: Model is trained from scratch using a new uncased/cased ï¬nancial vocabulary FinVocab for 1M iterations. Training The entire training is done using a NVIDIA DGX-1 machine. The server has 4 Tesla P100 GPUs, providing a total of 128 GB of GPU memory. This machine enables us to train the BERT models using a batch size of 128. We uti- lize Horovord framework (Sergeev and Del Balso, 2018) for multi-GPU training. Overall, the total time taken to perform pretraining for one model is approximately 2 days. With the release of FinBERT, we hope ï¬nancial practitioners and re- searchers can beneï¬t from FinBERT model with- out the necessity of the signiï¬cant computational resources required to train the model.
# 5 Financial Sentiment Experiments
Given the importance of sentiment analysis in ï¬- nancial NLP tasks, we conduct experiments on ï¬- nancial sentiment classiï¬cation datasets.
# 5.1 Dataset
Financial Phrase Bank is a public dataset for ï¬nancial sentiment classiï¬cation (Malo et al., 2014). The dataset contains 4,840 sentences se- lected from ï¬nancial news. The dataset is manu- ally labeled by 16 researchers with adequate back-
3https://github.com/google-research/bert
ground knowledge on ï¬nancial markets. The sen- timent label is either positive, neutral or negative. AnalystTone Dataset to gauge the opinions in analyst reports, which is com- monly used in Accounting and Finance literature (Huang et al., 2014). The dataset contains ran- domly selected 10,000 sentences from analyst re- ports in the Investext database. Each sentence is manually annotated into one of three categories: positive, negative and neutral. This classiï¬cation yields a total of 3,580 positive, 1,830 negative, and 4,590 neutral sentences in the dataset. FiQA Dataset is an open challenge dataset for ï¬- nancial sentiment analysis, containing 1,111 text sentences 4. Given an English text sentence in the ï¬nancial domain (microblog message, news state- ment), the task of this challenge is to predict the associated numeric sentiment score, ranged from - 1 to 1. We convert the original regression task into a binary classiï¬cation task for consistent compar- ison with the above two datasets.
We randomly split each dataset into 90% train- ing and 10% testing 10 times and report the aver- age. Since all dataset are used for sentiment clas- siï¬cation, we report the accuracy metrics in the experiments.
# 5.2 Fine-tune Strategy
We follow the same ï¬ne-tune architecture and op- timization choices used in (Devlin et al., 2019). We use a simple linear layer, as our classiï¬cation layer, with a softmax activation function. We also use cross-entropy loss as the loss function. Note that an alternative is to feed the contextualized word embeddings of each token into a deep ar- chitectures, such as Bi-LSTM, atop frozen BERT embeddings. We choose not to use this strategy as it has shown to perform signiï¬cantly worse than ï¬ne-tune BERT model (Beltagy et al., 2019).
# 5.3 Experiment Results
We compare FinBERT with original BERT-Base model (Devlin et al., 2019), and we evaluate both cased and uncased versions of this model. The main results of ï¬nancial sentiment analysis tasks are present in Table 2. FinBERT vs. BERT The results show substantial improvement of FinBERT models over the generic BERT models. On PhraseBank dataset, the best model uncased FinBERT-FinVocab achieves the
4https://sites.google.com/view/fiqa/home
BERT FinBERT-BaseVocab FinBERT-FinVocab cased 0.856 0.767 0.872 cased 0.755 0.653 0.840 uncased 0.835 0.730 0.850 uncased 0.870 0.796 0.880 cased 0.864 0.814 0.876 uncased 0.872 0.844 0.887 PhraseBank FiQA AnalystTone
Table 2: Performance of different BERT models on three ï¬nancial sentiment analysis tasks.
10-Ks/10-Qs 0.835 0.707 0.845 0.847 0.766 0.858 Earnings Call Analyst Reports PhraseBank FiQA AnalystTone PhraseBank FiQA AnalystTone 0.843 0.731 0.862 0.860 0.778 0.870 0.845 0.744 0.871 0.861 0.796 0.872 BaseVocab FinVocab All 0.856 0.767 0.872 0.864 0.814 0.876
Table 3: Performance of pretraining on different ï¬nancial corpus.
accuracy of 0.872, a 4.4% improvement over un- cased BERT model and 15.4% improvement over cased BERT model. On FiQA dataset, the best model uncased FinBERT-FinVocab achieves the accuracy of 0.844, a 15.6% improvement over uncased BERT model and a 29.2% improvement over cased BERT model. Lastly, on the Analyst- Tone dataset, the best model uncased FinBERT- FinVocab improves the uncased and cased BERT model by 4.3% and 5.5% respectively. Overall speaking, pretraining on ï¬nancial corpora, as ex- pected, is effective and enhances the downstream ï¬nancial sentiment classiï¬cation tasks. In ï¬nan- cial markets where capturing the accurate senti- ment signal is of utmost importance, we believe the overall FinBERT improvement demonstrates its practical utility.
FinVocab vs. BaseVocab We assess the impor- tance of an in-domain ï¬nancial vocabulary by pre- training different FinBERT models using BaseVo- cab and FinVocab. For both uncased and cased model, we see that FinBERT-FinVocab outper- forms its BaseVocab counterpart. However, the performance improvement is quite marginal on PhraseBank and AnalystTone task. Only do we see substantial improvement on FiQA task (0.844 vs. 0.796). Given the magnitude of improvement, we suspect that while an in-domain vocabulary is helpful, FinBERT beneï¬ts most from the ï¬nancial communication corpora pretraining.
Cased vs. Uncased We follow (Devlin et al., 2019) in using both the cased model and the un- cased model for all tasks. Experiments result sug- gest that uncased models perform better than cased models in all tasks. This result is consistent with
prior work of Scientiï¬c domain and Biomedical domain BERT models. Corpus Contribution We also train different Fin- BERT models on three ï¬nancial corpus separately. The performance of different FinBERT models (cased version) on different tasks are present in Table 3. It shows that FinBERT trained on all corpora achieves the overall best performance in- dicating that combining additional ï¬nancial com- munication corpus could improve the language model quality. Among three datasets, Analyst Reports dataset appears to perform well in three different tasks, even though it only has 1.1 bil- lion word tokens. Prior research ï¬nds that cor- porate report such as 10-Ks and 10-Qs contains redundant content, and that a substantial amount of textual volume contained in 10-K reports is at- tributable to managerial discretion in how ï¬rms respond to mandatory disclosure requirements (Cazier and Pfeiffer, 2016). Does it suggest that Analyst Reports data contains more information content than corporate reports and earnings call transcripts? We leave it for future research.
# 6 Conclusion
In this work, we pre-train a ï¬nancial-task oriented BERT model, FinBERT. The FinBERT model is trained on a large ï¬nancial corpora that are representative of English ï¬nancial communica- tions. We show that FinBERT outperforms generic BERT models on three ï¬nancial sentiment classi- ï¬cation tasks. With the release of FinBERT, we hope practitioners and researchers can utilize Fin- BERT for a wider range of applications where the prediction target goes beyond sentiment, such as
ï¬nancial-related outcomes including stock returns, stock volatilities, corporate fraud, etc.
# Acknowledgments
This work was supported by Theme-based Re- search Scheme (No. T31-604/18-N) from Re- search Grants Council in Hong Kong.
# References
Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clin- In Proceedings of the 2nd ical bert embeddings. Clinical Natural Language Processing Workshop, pages 72â78.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scib- ert: Pretrained language model for scientiï¬c text. In Proceedings of EMNLP.
Richard A Cazier and Ray J Pfeiffer. 2016. Why are 10-k ï¬lings so long? Accounting Horizons, 30(1):1â 21.
Embedded value in bloomberg news and social sentiment data. Bloomberg LP.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of NAACL, pages 4171â4186.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model ï¬ne-tuning for text classiï¬cation. In Proceedings ACL, pages 328â339.
Allen H Huang, Amy Y Zang, and Rong Zheng. 2014. Evidence on the information content of text in an- alyst reports. The Accounting Review, 89(6):2151â 2180.
Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. Clinicalbert: Modeling clinical notes and pre- dicting hospital readmission. arXiv:1904.05342.
sri International. 1987. Investor information needs and the annual report. Financial Executives Research Foundation.
Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained for biomedical biomedical text mining. Bioinformatics.
Pekka Malo, Ankur Sinha, Pekka Korhonen, Jyrki Wal- lenius, and Pyry Takala. 2014. Good debt or bad debt: Detecting semantic orientations in economic Journal of the Association for Information texts. Science and Technology, 65(4):782â796.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. In Proceedings of NIPS, pages 3111â3119.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word In Proceedings of EMNLP, pages representation. 1532â1543.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proc. of NAACL.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Alexander Sergeev and Mike Del Balso. 2018. Horovod: fast and easy distributed deep learning in tensorï¬ow. arXiv preprint arXiv:1802.05799.
Paul C Tetlock. 2007. Giving content to investor sen- timent: The role of media in the stock market. The Journal of ï¬nance, 62(3):1139â1168.
Paul C Tetlock, Maytal Saar-Tsechansky, and Sofus Macskassy. 2008. More than words: Quantifying language to measure ï¬rmsâ fundamentals. The Jour- nal of Finance, 63(3):1437â1467. | {
"id": "1904.05342"
} |
2006.08381 | DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning | Expert problem-solving is driven by powerful languages for thinking about
problems and their solutions. Acquiring expertise means learning these
languages -- systems of concepts, alongside the skills to use them. We present
DreamCoder, a system that learns to solve problems by writing programs. It
builds expertise by creating programming languages for expressing domain
concepts, together with neural networks to guide the search for programs within
these languages. A ``wake-sleep'' learning algorithm alternately extends the
language with new symbolic abstractions and trains the neural network on
imagined and replayed problems. DreamCoder solves both classic inductive
programming tasks and creative tasks such as drawing pictures and building
scenes. It rediscovers the basics of modern functional programming, vector
algebra and classical physics, including Newton's and Coulomb's laws. Concepts
are built compositionally from those learned earlier, yielding multi-layered
symbolic representations that are interpretable and transferrable to new tasks,
while still growing scalably and flexibly with experience. | http://arxiv.org/pdf/2006.08381 | Kevin Ellis, Catherine Wong, Maxwell Nye, Mathias Sable-Meyer, Luc Cary, Lucas Morales, Luke Hewitt, Armando Solar-Lezama, Joshua B. Tenenbaum | cs.AI, cs.LG | null | null | cs.AI | 20200615 | 20200615 | 0 2 0 2
n u J 5 1 ] I A . s c [
1 v 1 8 3 8 0 . 6 0 0 2 : v i X r a
# DreamCoder: Growing generalizable, interpretable knowledge with wake-sleep Bayesian program learning
Kevin Ellis,1,4,5 Catherine Wong,1,4,5 Maxwell Nye,1,4,5 Mathias Sabl´e-Meyer,1,3 Luc Cary,1 Lucas Morales,1,4,6 Luke Hewitt,1,4,5 Armando Solar-Lezama,1,2,6 Joshua B. Tenenbaum1,2,4,5 1MIT 2CSAIL 3NeuroSpin 4Center for Brains, Minds, and Machines 5Department of Brain and Cognitive Sciences 6Department of Electrical Engineering and Computer Science
Expert problem-solving is driven by powerful languages for thinking about prob- lems and their solutions. Acquiring expertise means learning these languages â systems of concepts, alongside the skills to use them. We present DreamCoder, a system that learns to solve problems by writing programs. It builds expertise by cre- ating programming languages for expressing domain concepts, together with neural networks to guide the search for programs within these languages. A âwake-sleepâ learning algorithm alternately extends the language with new symbolic abstractions and trains the neural network on imagined and replayed problems. DreamCoder solves both classic inductive programming tasks and creative tasks such as drawing pictures and building scenes. It rediscovers the basics of modern functional pro- gramming, vector algebra and classical physics, including Newtonâs and Coulombâs laws. Concepts are built compositionally from those learned earlier, yielding multi- layered symbolic representations that are interpretable and transferrable to new tasks, while still growing scalably and ï¬exibly with experience.
A longstanding dream in artiï¬cial intelligence (AI) has been to build a machine that learns like a child (1) â that grows into all the knowledge a human adult does, starting from much less. This dream remains far off, as human intelligence rests on many learning capacities not yet captured in artiï¬cial systems. While machines are typically designed for a single class of tasks, humans learn to solve an endless range and variety of problems, from cooking to calculus to graphic design. While machine learning is data hungry, typically generalizing weakly from experience, human learners can often generalize strongly from only modest experience. Perhaps most distinctively, humans build expertise: We acquire knowledge that can be communicated and extended, growing new concepts on those built previously to become better and faster learners the more we master a domain.
This paper presents DreamCoder, a machine learning system that aims to take a step closer to these human abilities â to efï¬ciently discover interpretable, reusable, and generalizable knowledge across a broad range of domains. DreamCoder embodies an approach we call âwake-sleep Bayesian program inductionâ, and the rest of this introduction explains the key ideas underlying it: what it means to view learning as program induction, why it is valuable to cast program induction as inference in a Bayesian model, and how a âwake-sleepâ algorithm enables the model to grow with experience, learning to learn more efï¬ciently in ways that make the approach practical and scalable.
1
Our formulation of learning as program induction traces back to the earliest days of AI (2): We treat learning a new task as search for a program that solves it, or which has intended behavior. Fig. 1 shows examples of program induction tasks in eight different domains that DreamCoder is applied to (Fig. 1A), along with an in-depth illustration of one task in the classic list-processing domain: learning a program that sorts lists of numbers (Fig. 1B), given a handful of input-output examples. Relative to purely statistical approaches, viewing learning as program induction brings certain advantages. Symbolic programs exhibit strong generalization propertiesâintuitively, they tend to extrapolate rather than merely interpolate. This also makes learning very sample-efï¬cient: Just a few examples are often sufï¬cient to specify any one function to be learned. By design, programs are richly human- interpretable: They subsume our standard modeling languages from science and engineering, and they expose knowledge that can be reused and composed to solve increasingly complex tasks. Finally, programs are universal: in principle, any Turing-complete language can represent solutions to the full range of computational problems solvable by intelligence.
Yet for all these strengths, and successful applications in a number of domains (3â9), program induction has had relatively limited impact in AI. A Bayesian formulation helps to clarify the challenges, as well as a path to solving them. The programming language we search in speciï¬es the hypothesis space and prior for learning; the shorter a program is in that language, the higher its prior probability. While any general programming language can support program induction, previous systems have typically found it essential to start with a carefully engineered domain-speciï¬c language (DSL), which imparts a strong, hand-tuned inductive bias or prior. Without a DSL the programs to be discovered would be prohibitively long (low prior probability), and too hard to discover in reasonable search times. Even with a carefully tuned prior, though, search for the best program has almost always been intractable for general-purpose algorithms, because of the combinatorial nature of the search space. Hence most practical applications of program induction require not only a hand-designed DSL but also a search algorithm hand-designed to exploit that DSL for fast inference. Both these requirements limit the scalability and broad applicability of program induction.
DreamCoder addresses both of these bottlenecks by learning to compactly represent and efï¬ciently induce programs in a given domain. The system learns to learn â to write better programs, and to search for them more efï¬ciently â by jointly growing two distinct kinds of domain expertise: (1) explicit declarative knowledge, in the form of a learned domain-speciï¬c language, capturing conceptual abstractions common across tasks in a domain; and (2) implicit procedural knowledge, in the form of a neural network that guides how to use the learned language to solve new tasks, embodied by a learned domain-speciï¬c search strategy. In Bayesian terms, the system learns both a prior on programs, and an inference algorithm (parameterized by a neural network) to efï¬ciently approximate the posterior on programs conditioned on observed task data.
DreamCoder learns both these ingredients in a self-supervised, bootstrapping fashion, growing them jointly across repeated encounters with a set of training tasks. This allows learning to scale to new domains, and to scale within a domain provided it receives sufï¬ciently varied training tasks. Typically only a moderate number of tasks sufï¬ces to bootstrap learning in a new domain. For example, the list sorting function in Fig. 1B represents one of 109 tasks that the system cycles through, learning as it goes to construct a library of around 20 basic operations for lists of numbers which in turn become components for solving many new tasks it will encounter.
DreamCoderâs learned languages take the form of multilayered hierarchies of abstraction (Fig. 1B, & Fig. 7A,B). These hierarchies are reminiscent of the internal representations in a deep neural network, but here each layer is built from symbolic code deï¬ned in terms of earlier code layers, making the representations naturally interpretable and explainable by humans. The network of abstractions grows
2
A List Processing Text Editing Regexes LOGO Graphics â_ Block Towers Symbolic Regression Recursive Physical Laws Sum List Abbreviate Phone numbers oO i] Programming 123) 76 Allen Newell > A.N. (555) 867-5309 a= x vA (468 1)+17 Herb Simon > H.S. (650) 555-2368 m 7 Double Drop Last Three Currency @ Q/ i [123] [246] shrdluâ+shr $100.25 [451] > [810 2] shakey â» sha $4.50 oan Check Evens Extract Dates is ® [023] + [TTF] ab (c)-+c Â¥1775/0704 aA [296] [TFT] a (bee) see-+see â Â¥2000/0101 3 B Initial Learned Library of Concepts Sample Problem: sort List Primitives [9271] > [1279] [38942] + [23489] 8 [622385] > [223568] map concept_13 concept_4 (A(L) (car (concept_4 L (A(y) (nil? (concept_4 L concept_15 fold it hy (A(L P)(fold L nil i ist di ve ee @@ © 2zy)))I) Solution to Sort List discovered ronal (cons Z u) u)))) - > (LN) (concept_13 (concept_4 in learned language: [maximum] L (A (L) (> N (length(concep (map (A (n) > 7 L (A (u) (> z u))))))))) [filter] (concept_i5 L (+ 1 n))) [nth largest element] (range (Length L))) (X (x) (map (A (y) (car (fold (Fold x nil (A (zu) Cif (gt? (+ y 1) (length (fold x nil (A (Vâ golutionto sort List w) Gif (gt? z v) (cons v Ww) w))))) (cons zu) u))) nil (A (ab) (if (Mil? (fold (fold x nil if din intial (A (cd) (if (gt? (+ y 1) (length (fold x nil (A (e f) (if (gt? ce) (cons e f) f))))) (cons if expressed in initial cd) d))) mil (A (g h) (if (gt? g a) (cons g h) h)))) (cons a b) b))))) (range (length x)))) primitives
Recursive Physical Laws Programming a= x vA m 7 i
Figure 1: (A): Learning tasks in many different domains can be formulated as inducing a program that explains a small number of input-output examples, or that generates an observed sequence, image or scene. DreamCoder successfully learns to synthesize programs for new tasks in each of these domains. (B): An illustration of how DreamCoder learns to solve problems in one domain, processing lists of integers. Problems are speciï¬ed by input-output pairs exemplifying a target function (e.g., âSort Listâ). Given initial primitives (left), the model iteratively builds a library of more advanced functions (middle) and uses this library to solve problems too complex to be solved initially. Each learned function can call functions learned earlier (arrows), forming hierarchically organized layers of concepts. The learned library enables simpler, faster, and more interpretable problem solving: A typical solution to âSort Listâ (right), discovered after six iterations of learning, can be expressed with just ï¬ve function calls using the learned library and is found in less than 10 minutes of search. The code reads naturally as âget the nth largest number, for n = 1, 2, 3, . . ..â At bottom the modelâs solution is re-expressed in terms of only the initial primitives, yielding a long and cryptic program with 32 function calls, which would take in excess of 1072 years of brute-force search to discover.
progressively over time, building each concept on those acquired before, inspired by how humans build conceptual systems: we learn algebra before calculus, and only after arithmetic; we learn to draw simple shapes before more complex designs. For example, in the list processing example (Fig. 1B), our model comes to sort sequences of numbers by invoking a library component four layers deep â take the nth largest element â and this component in turn calls lower-level learned concepts: maximum, and ï¬lter. Equivalent programs could in principle be written in the starting language, but those produced by the ï¬nal learned language are more interpretable and much shorter. Expressed only in the initial primitives, these programs would be so complex as to be effectively out of the learners reach: they would never be found during a reasonably bounded search. Only with acquired domain-speciï¬c expertise do most problems become practically solvable.
DreamCoder gets its name from how it grows domain knowledge iteratively, in âwake-sleepâ cycles loosely inspired by the memory consolidation processes that occur during different stages of
3
sleep (10, 11). In general, wake-sleep Bayesian learning (12) iterates between training a probabilistic generative model that deï¬nes the learnerâs prior alongside a neural network recognition model that learns to invert this generative model given new data. During âwakingâ the generative model is used to interpret new data, guided by the recognition model. The recognition model is learned ofï¬ine during âsleep,â from imagined data sets (âdreamsâ or âfantasiesâ) sampled from the generative model.
DreamCoder develops the wake-sleep approach for learning to learn programs: Its learned language deï¬nes a generative model over programs and tasks, where each program solves a particular hypotheti- cal task; its neural network learns to recognize patterns across tasks in order to best predict program components likely to solve any given new task. During waking, the system is presented with data from several tasks and attempts to synthesize programs that solve then, using the neural recognition model to propose candidate programs. Learning occurs during two distinct but interleaved sleep phases, alternately growing the learned language (generative model) by consolidating new abstractions from programs found during waking, and training the neural network (recognition model) on âfantasyâ programs sampled from the generative model. This wake-sleep architecture builds on and further integrates a pair of ideas, Bayesian multitask program learning (5, 13, 14) and neurally-guided program synthesis (15, 16), which have been separately inï¬uential in the recent literature but have only been brought together in our work starting with the EC2 algorithm (17), and now made much more scalable in DreamCoder (see S3 for further discussion of prior work).
The resulting system has wide applicability. We describe applications to eight domains (Fig. 1A): classic program synthesis challenges, more creative visual drawing and building problems, and ï¬nally, library learning that captures the basic languages of recursive programming, vector algebra, and physics. All of our tasks involve inducing programs from very minimal data, e.g., ï¬ve to ten examples of a new concept or function, or a single image or scene depicting a new object. The learned languages span deterministic and probabilistic programs, and programs that act both generatively (e.g., producing an artifact like an image or plan) and conditionally (e.g., mapping inputs to outputs). Taken together, we hope these applications illustrate the potential for program induction to become a practical, general-purpose, and data-efï¬cient approach to building intepretable, reusable knowledge in artiï¬cial intelligence systems.
# Wake/Sleep Program Learning
We now describe the speciï¬cs of learning in DreamCoder, beginning with an overview of the algorithm and its mathematical formulation, then turning to the details of its three phases. Learning proceeds iteratively, with each iteration (Eq. 1, Fig. 2) cycling through a wake phase of trying to solve tasks interleaved with two sleep phases for learning to solve new tasks. In the wake phase (Fig. 2 top), the system searches for programs that solve tasks drawn from a training set, guided by the neural recognition model which ranks candidate programs based on the observed data for each task. Candidate programs are scored according to how well they solve the presented tasks, and how plausible they are a priori under the learned generative model for programs. The ï¬rst sleep phase, which we refer to as abstraction (Fig. 2 left), grows the library of programming primitives (the generative model) by replaying experiences from waking, ï¬nding common program fragments from task solutions, and abstracting out these fragments into new code primitives. This mechanism increases the breadth and depth of the learnerâs declarative knowledge, its learned library as in Fig. 1B or Fig. 7, when viewed as a network. The second sleep phase, which we refer to as dreaming (Fig. 2 right), improves the learnerâs procedural skill in code-writing by training the neural network that helps search for programs.
4
The neural recognition model is trained on replayed experiences as well as âfantasiesâ, or programs sampled randomly from the learned library as a generative model. These random programs deï¬ne tasks which the system solves during the dream phase, and the neural network is trained to predict the solutions found given the observable data for each imagined task.
Viewed as a probabilistic inference problem, DreamCoder observes a training set of tasks, written X, and infers both a program Ïx solving each task x X, as well as a prior distribution over programs likely to solve tasks in the domain (Fig. 2 middle). This prior is encoded by a library, written L, which L] (see S4.3). The neural network helps to deï¬nes a generative model over programs, written P[Ï | ï¬nd programs solving a task by predicting, conditioned on the observed examples for that task, an approximate posterior distribution over programs likely to solve it. The network thus functions as a recognition model that is trained jointly with the generative model, in the spirit of the Helmholtz machine (12). We write Q(Ï x) for the approximate posterior predicted by the recognition model. At a | high level wake/sleep cycles correspond to iterating the following updates, illustrated in Fig. 2; these updates serve to maximize a lower bound on the posterior over L given X (S4.1).
Ïx = arg max x) is large Q(Ï | Ï: x, L] P[Ï | â P[x Ï]P[Ï | L], for each task x | â X Wake L = arg max P [L] L max Ï a refactoring of Ïx P[x L] Ï]P[Ï | | Sleep: Abstraction
# nex L], where x
Train Q(Ï P[Ï X (âreplayâ) or x L (âfantasyâ) Sleep: Dreaming
x) |
â
â¼
â¼
Ï] is the likelihood of a task where P[L] is a description-length prior over libraries (S4.5) and P[x | x X given program Ï. For example, this likelihood is 0 or 1 when x is speciï¬ed by inputs/outputs, and when learning a probabilistic program, the likelihood is the probability of the program generating the observed task.
This 3-phase inference procedure works through two distinct kinds of bootstrapping. During each sleep cycle the next library bootstraps off the concepts learned during earlier cycles, growing an increasingly deep learned library. Simultaneously the generative and recognition models bootstrap each other: A more specialized library yields richer dreams for the recognition model to learn from, while a more accurate recognition model solves more tasks during waking which then feed into the next library. Both sleep phases also serve to mitigate the combinatorial explosion accompanying program synthesis. Higher-level library routines allow tasks to be solved with fewer function calls, effectively reducing the depth of search. The neural recognition model down-weights unlikely trajectories through the search space of all programs, effectively reducing the breadth of search.1
Wake phase. Waking consists of searching for task-speciï¬c programs with high posterior proba- bility, or programs that combine high likelihood (because they solve a task) and high prior probability (because they have short description length in the current language). During each Wake cycle we sample tasks from a random minibatch of the training set (or, depending on domain size and complex- ity, the entire training set). We then search for programs solving each of these tasks by enumerating programs in decreasing order of their probability under the recognition model Q(Ï x), and checking | Ï] > 0). Because the model may if a program Ï assigns positive probability to solving that task (P[x |
1We thank Sam Tenka for this observation. In particular, the difï¬culty of search during waking is roughly proportional to breadthdepth, where depth is the total size of a program and breadth is the number of library functions with high probability at each decision point in the search tree spanning the space of all programs. Library learning decreases depth at the expense of breadth, while training a neural recognition model effectively decreases breadth by decreasing the number of bits of entropy consumed by each decision (function call) made when constructing a program solving a task.
5
(1)
Objective: For each task x in X, find best program p,. solving x under current library L Library L fife) =@ x 1) lg Neurally guided search f2(2) =(fold cons Propose programs p in Best program p,, for task x (cons z nil)) Recognition decreasing order under ((-|x) (map fi (fold f2 nil x)) Model Q(-|x) until timeout 2 Choose p;, that maximizes: â> x (7 2 3)314 3 8) P [p|x, L] x P [|p] P [p|L] (3 8][9 4] (4 3 2]>[3 4 5]
(4 3 2]>[3 4 5] Sleep: Dreaming Objective: Train recognition model Q(p|x) to predict best programs p,. for typical tasks x and current library L Objective: Grow library L to compress programs found during waking program for task 1 program for task 2 (cons (+ 1 1)) (+ (car z) 1) 1. aan Repiays 1. recall programs tasks x p from solved in library DL waking 2. set task x 2. set program to output of p to retrieved : executing p solution px Refactoring Propose new library routines from \ L subtrees of refactorings of programs . . Train network on 2,p pairs New library ZL x ¥, w/ routine Program Expand L w/ A =) : ? the routine that f , maximizes: Gradient step in parameters of Q PIL] TIeex max P [x|p] P [p|L] until no until to maximize log Q(p|x) Seem etactoringslofip= increase converged
# in score
Figure 2: DreamCoderâs basic algorithmic cycle, which serves to perform approximate Bayesian inference for the graphical model diagrammed in the middle. The system observes programming tasks (e.g., input/outputs for list processing or images for graphics programs), which it explains with latent programs, while jointly inferring a latent library capturing cross-program regularities. A neural network, called the recognition model (red arrows) is trained to quickly infer programs with high posterior probability. The Wake phase (top) infers programs while holding the library and recognition model ï¬xed. A single task, âincrement and reverse listâ, is shown here. The Abstraction phase of sleep (left) updates the library while holding the programs ï¬xed by refactoring programs found during waking and abstracting out common components (highlighted in orange). Program components that best increase a Bayesian objective (intuitively, that best compress programs found during waking) are incorporated into the library, until no further increase in probability is possible. A second sleep phase, Dreaming (right) trains the recognition model to predict an approximate posterior over programs conditioned on a task. The recognition network is trained on âFantasiesâ (programs sampled from library) and âReplaysâ (programs found during waking).
6
ï¬nd many programs that solve a speciï¬c task, we store a small beam of the k = 5 programs with the highest posterior probability P[Ï x, L], and marginalize over this beam in the sleep updates of Eq. 1. | We represent programs as polymorphically typed λ-calculus expressions, an expressive formalism including conditionals, variables, higher-order functions, and the ability to deï¬ne new functions.
Abstraction phase. During the abstraction sleep phase, the model grows its library of concepts with the goal of discovering specialized abstractions that allow it to easily express solutions to the tasks at hand. Ease of expression translates into a preference for libraries that best compress programs found during waking, and the abstraction sleep objective (Eq. 1) is equivalent to minimizing the log P[D]) plus the description lengths of refactorings of programs description length of the library ( D]). Intuitively, we âcompress outâ reused Ï]P[Ï found during waking ( | | code to maximize a Bayesian criterion, but rather than compress out reused syntactic structures, we refactor the programs to expose reused semantic patterns.
Code can be refactored in inï¬nitely many ways, so we bound the number of λ-calculus evaluation steps separating a program from its refactoring, giving a ï¬nite but typically astronomically large set of refactorings. Fig. 3 diagrams the model discovering one of the most elemental building blocks of modern functional programming, the higher-order function map, starting from a small set of universal primitives, including recursion (via the Y-combinator). In this example there are approximately 1014 possible refactorings â a quantity that grows exponentially both as a function of program size and as a function of the bound on evaluation steps. To resolve this exponential growth we introduce a new data structure for representing and manipulating the set of refactorings, combining ideas from version space algebras (18â20) and equivalence graphs (21), and we derive a dynamic program for its construction (supplementary S4.5). This data structure grows polynomially with program size, owing to a factored representation of shared subtrees, but grows exponentially with a bound on evaluation steps, and the exponential term can be made small (we set the bound to 3) without performance loss. This results in substantial efï¬ciency gains: A version space with 106 nodes, calculated in minutes, can represent the 1014 refactorings in Fig. 3 that would otherwise take centuries to explicitly enumerate and search.
Dreaming phase. During the dreaming sleep phase, the system trains its recognition model, which later speeds up problem-solving during waking by guiding program search. We implement recognition models as neural networks, injecting domain knowledge through the network architecture: for instance, when inducing graphics programs from images, we use a convolutional network, which imparts a bias toward useful image features. We train a recognition network on (program, task) pairs drawn from two sources of self-supervised data: replays of programs discovered during waking, and fantasies, or programs drawn from L. Replays ensure that the recognition model is trained on the actual tasks it needs to solve, and does not forget how to solve them, while fantasies provide a large and highly varied dataset to learn from, and are critical for data efï¬ciency: becoming a domain expert is not a few-shot learning problem, but neither is it a big data problem. We typically train DreamCoder on 100-200 tasks, which is too few examples for a high-capacity neural network. After the model learns a library customized to the domain, we can draw unlimited samples or âdreamsâ to train the recognition network.
Our dream phase works differently from a conventional wake-sleep (12) dream phase. A classic wake-sleep approach would sample a random program from the generative model, execute it to generate a task, and train the recognition network to predict the sampled program from the sampled task. We instead think of dreaming as creating an endless stream of random problems, which we then solve during sleep in an active process using the same program search process as in waking. We then train the recognition network to predict the solutions discovered, conditioned on the problems. Speciï¬cally, we train Q to perform MAP inference by maximizing E , where the
x, L] arg maxÏ P[Ï |
# x |
7
# Task:
[1 [4
2
3
3]-[2 4][8
4
6
6] 8]
# Task:
[1 [4
2
3
3)â[0 1 2] 4]-+[3 2 3]
# Wake:
# program search
# Wake:
# program search
# (YO
(1)
# (Gif
# Gil?
1)
# QO
# (rl)
# Gif
# WY
# (il?
1)
# nil
# nil
(cons (+ (car 1) (car 1)) (cons (- (car 1) 1) (x (cdr 1)))))) (x (edr 1)))))) Sleep: Abstraction refactor refactor (10!4 refactorings) (10'4 refactorings) CX ) YO @ 1) Gt Gil? 1) CA () WY O (rl) Gf Gil? 1) nil nil (cons (f (car 1)) (cons (f (car 1)) (r (cdr 1))))))) (r (edr 1))))))) OQ _(2) G z z))) (z) (- z 1))) Compress (MDL/Bayes objective) (\ (2) (+ zz))) (map| (A (z) (- z 1))) = (£4) (Y CO @ 1) Gif Gil? 1) nil (cons (f (car 1)) (r (cdr 1))))))
Figure 3: Programs found as solutions during waking are refactored â or rewritten in semantically equivalent but syntactically distinct forms â during the sleep abstraction phase, to expose candidate new primitives for growing DreamCoderâs learned library. Here, solutions for two simple list tasks (top left, âdouble each list elementâ; top right, âsubtract one from each list elementâ) are ï¬rst found using a very basic primitive set, which yields correct but inelegant programs. During sleep, DreamCoder efï¬ciently searches an exponentially large space of refactorings for each program; a single refactoring of each is shown, with a common subexpression highlighted in orange. This expression corresponds to map, a core higher-order function in modern functional programming that applies another function to each element of a list. Adding map to the library makes existing problem solutions shorter and more interpretable, and crucially bootstraps solutions to many harder problems in later wake cycles.
expectation is taken over tasks. Taking this expectation over the empirical distribution of tasks trains Q on replays; taking it over samples from the generative model trains Q on fantasies. We train on a 50/50 mix of replays and fantasies; for fantasies mapping inputs to outputs, we sample inputs from the training tasks. Although one could train Q to perform full posterior inference, our MAP objective has the advantage of teaching the recognition network to ï¬nd a simplest canonical solution for each problem. More technically, our MAP objective acts to break syntactic symmetries in the space of programs by forcing the network to place all its probability mass onto a single member of a set of syntactically distinct but semantically equivalent expressions. Hand-coded symmetry breaking has proved vital for many program synthesizers (22, 23); see S4.6 for theoretical and empirical analyses of DreamCoderâs learned symmetry breaking.
8
# Results
We ï¬rst experimentally investigate DreamCoder within two classic benchmark domains: list processing and text editing. In both cases we solve tasks speciï¬ed by a conditional mapping (i.e., input/output examples), starting with a generic functional programming basis, including routines like map, fold, cons, car, cdr, etc. Our list processing tasks comprise 218 problems taken from (17), split 50/50 test/train, each with 15 input/output examples. In solving these problems, DreamCoder composed around 20 new library routines (S1.1), and rediscovered higher-order functions such as filter. Each round of abstraction built on concepts discovered in earlier sleep cycles â for example the model ï¬rst learns filter, then uses it to learn to take the maximum element of a list, then uses that routine to learn a new library routine for extracting the nth largest element of a list, which it ï¬nally uses to sort lists of numbers (Fig. 1B).
Synthesizing programs that edit text is a classic problem in the programming languages and AI literatures (18), and algorithms that synthesize text editing programs ship in Microsoft Excel (7). âA.T.â, and then infer a These systems would, for example, see the mapping âAlan Turingâ program that transforms âGrace Hopperâ to âG.H.â. Prior text-editing program synthesizers rely on hand-engineered libraries of primitives and hand-engineered search strategies. Here, we jointly learn both these ingredients and perform comparably to a state-of-the-art domain-general program synthesizer. We trained our system on 128 automatically-generated text editing tasks, and tested on the 108 text editing problems from the 2017 SyGuS (24) program synthesis competition.2 Prior to learning, DreamCoder solves 3.7% of the problems within 10 minutes with an average search time of 235 seconds. After learning, it solves 79.6%, and does so much faster, solving them in an average of 40 seconds. The best-performing synthesizer in this competition (CVC4) solved 82.4% of the problems â but here, the competition conditions are 1 hour & 8 CPUs per problem, and with this more generous compute budget we solve 84.3% of the problems. SyGuS additionally comes with a different hand-engineered library of primitives for each text editing problem. Here we learned a single library of text-editing concepts that applied generically to any editing task, a prerequisite for real-world use.
We next consider more creative problems: generating images, plans, and text. Procedural or generative visual concepts â from Bongard problems (25), to handwritten characters (5, 26), to Ravenâs progressive matrices (27) â are studied across AI and cognitive science, because they offer a bridge between low-level perception and high-level reasoning. Here we take inspiration from LOGO Turtle graphics (28), tasking our model with drawing a corpus of 160 images (split 50/50 test/train; Fig. 4A) while equipping it with control over a âpenâ, along with imperative control ï¬ow, and arithmetic operations on angles and distances. After training DreamCoder for 20 wake/sleep cycles, we inspected the learned library (S1.1) and found interpretable parametric drawing routines corresponding to the families of visual objects in its training data, like polygons, circles, and spirals (Fig. 4B) â without supervision the system has learned the basic types of objects in its visual world. It additionally learns more abstract visual relationships, like radial symmetry, which it models by abstracting out a new higher-order function into its library (Fig. 4C).
Visualizing the systemâs dreams across its learning trajectory shows how the generative model bootstraps recognition model training: As the library grows and becomes more ï¬nely tuned to the domain, the neural net receives richer and more varied training data. At the beginning of learning, random programs written using the library are simple and largely unstructured (Fig. 4D), offering
2We compare with the 2017 benchmarks because 2018 onward introduced non-string manipulation problems; custom string solvers such as FlashFill (7) and the latest custom SyGuS solvers are at ceiling for these newest problems.
9
limited value for training the recognition model. After learning, the systemâs dreams are richly structured (Fig. 4E), compositionally recombining latent building blocks and motifs acquired from the training data in creative ways never seen in its waking experience, but ideal for training a broadly generalizable recognition model (29).
Inspired by the classic AI âcopy demoâ â where an agent looks at a tower made of toy blocks then re-creates it (30) â we next gave DreamCoder 107 tower âcopy tasksâ (split 50/50 test/train, Fig. 5A), where the system observes both an image of a tower and the locations of each of its blocks, and must write a program that plans how a simulated hand would build the tower. The system starts with the same control ï¬ow primitives as with LOGO graphics. Inside its learned library we ï¬nd parametric âoptionsâ (31) for building blocks towers (Fig. 5B), including concepts like arches, staircases, and bridges, which one also sees in the modelâs dreams (Fig. 5C-D).
Next we consider few-shot learning of probabilistic generative concepts, an ability that comes naturally to humans, from learning new rules in natural language (32), to learning routines for symbols and signs (5), to learning new motor routines for producing words (33). We ï¬rst task DreamCoder with inferring a probabilistic regular expression (or Regex, see Fig. 1A for examples) from a small number of strings, where these strings are drawn from 256 CSV columns crawled from the web (data from (34), tasks split 50/50 test/train, 5 example strings per concept). The system learns to learn regular expressions that describe the structure of typically occurring text concepts, such as phone numbers, dates, times, or monetary amounts (Fig. S5). It can explain many real-world text patterns and use its explanations as a probabilistic generative model to imagine new examples of these concepts. For instance, though DreamCoder knows nothing about dollar amounts it can infer an abstract pattern behind the examples $5.70, $2.80, $7.60, . . . , to generate $2.40 and $3.30 as other examples of the same concept. Given patterns with exceptions, such as -4.26, -1.69, -1.622, . . . , -1 it infers a probabilistic model that typically generates strings such as -9.9 and occasionally generates strings such as -2. It can also learn more esoteric concepts, which humans may ï¬nd unfamiliar but can still readily learn and generalize from a few examples: Given examples -00:16:05.9, -00:19:52.9, -00:33:24.7, . . . , it infers a generative concept that produces -00:93:53.2, as well as plausible near misses such as -00:23=43.3.
We last consider inferring real-valued parametric equations generating smooth trajectories (see S2.1.6 and Fig. 1A, âSymbolic Regressionâ). Each task is to ï¬t data generated by a speciï¬c curve â either a rational function or a polynomial of up to degree 4. We initialize DreamCoder with addition, multipli- cation, division, and, critically, arbitrary real-valued parameters, which we optimize over via inner-loop gradient descent. We model each parametric program as probabilistically generating a family of curves, and penalize use of these continuous parameters via the Bayesian Information Criterion (BIC) (35). Our Bayesian machinery learns to home in on programs generating curves that explain the data while parsimoniously avoiding extraneous continuous parameters. For example, given real-valued data from 1.7x2 2.8 it infers a program with two continuous parameters.
# Quantitative analyses of DreamCoder across domains
To better understand how DreamCoder learns, we compared our full system on held out test problems with ablations missing either the neural recognition model (the âdreamingâ sleep phase) or ability to form new library routines (the âabstractionâ sleep phase). We contrast with several baselines: Exploration-Compression (13), which alternately searches for programs, and then compresses out reused components into a learned library, but without our refactoring algorithm; Neural Program
10
semicircle(r circle(r spiral(d@ greek spiral(n s-curve(r polygon(n, ⬠YOK aro 22999) ©) B® * TD A O Fl C radial symmetry(n, body) ) ) \ 1 . , \ \ vt va ~My 7 ne es ne 2 A I \ â t O O oars) o ° 0°29 2% &% © © © 00 â69 %%? | a Cc of se Xe &3 Ss BB Oo @ VY SF © eR SO 4 an
radial symmetry(n, body) \ 1 . , \ \ vt va ~My 7 ne es ne 2 A I \ â t oars) o ° 0°29 2% &% © 00 â69 %%? of se Xe &3 BB @ VY
Figure 4: (A): 30 (out of 160) LOGO graphics tasks. The model writes programs controlling a âpenâ that draws the target picture. (B-C): Example learned library routines include both parametric routines for drawing families of curves (B) as well as primitives that take entire programs as input (C). Each row in B shows the same code executed with different parameters. Each image in C shows the same code executed with different parameters and a different subprogram as input. (D-E): Dreams, or programs sampled by randomly assembling functions from the modelâs library, change dramatically over the course of learning reï¬ecting learned expertise. Before learning (D) dreams can use only a few simple drawing routines and are largely unstructured; the majority are simple line segments. After twenty iterations of wake-sleep learning (E) dreams become more complex by recombining learned library concepts in ways never seen in the training tasks. Dreams are sampled from the prior learned over tasks solved during waking, and provide an inï¬nite stream of data for training the neural recognition model. Color shows the modelâs drawing trajectory, from start (blue) to ï¬nish (pink). Panels (D-E) illustrate the most interesting dreams found across ï¬ve runs, both before and after learning. Fig. S6 shows 150 random dreams at each stage.
11
Of OE re mt i & tk, arch(h) |] | [ pyramid(h) gh dh dh a HH = = & iG oon wall(w, h) bridge(w, h =e mem TT Lm a aad tha tosaa dL i âAe i Boa | tim Lt a i its, SD. | en or ee ao
Figure 5: (A) 21 (out of 107) tower building tasks. The model writes a program controlling a âhandâ that builds the target tower. (B) Four learned library routines. These components act like parametric options (31), giving human-understandable, higher-level building blocks that the system can use to plan. Dreams both before and after learning (C-D) show representative plans the system can imagine building. After 20 wake-sleep iterations (D) the model fantasizes complex structures it has not seen during waking, but that combine building motifs abstracted from solved tasks in order to provide training data for a robust neural recognition model. Dreams are selected from ï¬ve different runs; Fig. S7 shows 150 random dreams at each stage.
12
Synthesis, which trains a RobustFill (16) model on samples from the initial library; and Enumeration, which performs type-directed enumeration (23) for 24 hours per task, generating and testing up to 400 million programs for each task. To isolate the role of compression in learning good libraries, we also construct two Memorize baselines. These variants extend the library by incorporating task solutions wholesale as new primitives; they do not attempt to compress but simply memorize solutions found during waking for potential reuse on new problems (cf. (36)). We evaluate memorize variants both with and without neural recognition models.
Across domains, our model always solves the most held-out tasks (Fig. 6A; see Fig. S13 for memorization baselines) and generally solves them in the least time (mean 54.1s; median 15.0s; Fig. S11). These results establish that each of DreamCoderâs core components â library learning with refactoring and compression during the sleep-abstraction phase, and recognition model learning during the sleep-dreaming phase â contributes substantively to its overall performance. The synergy between these components is especially clear in the more creative, generative structure building domains, LOGO graphics and tower building, where no alternative model ever solves more than 60% of held-out tasks while DreamCoder learns to solve nearly 100% of them. The time needed to train DreamCoder to the points of convergence shown in Fig. 6A varies across domains, but typically takes around a day using moderate compute resources (20-100 CPUs).
Examining how the learned libraries grow over time, both with and without learned recognition models, reveals functionally signiï¬cant differences in their depths and sizes. Across domains, deeper libraries correlate well with solving more tasks (r = 0.79), and the presence of a learned recognition model leads to better performance at all depths. The recognition model also leads to deeper libraries by the end of learning, with correspondingly higher asymptotic performance levels (Fig. 6B, Fig. S1). Similar but weaker relationships hold between the size of the learned library and performance. Thus the recognition model appears to bootstrap âbetterâ libraries, where âbetterâ correlates with both the depth and breadth of the learned symbolic representation.
Insight into how DreamCoderâs recognition model bootstraps the learned library comes from looking at how these representations jointly embed the similarity structure of tasks to be solved. DreamCoder ï¬rst encodes a task in the activations of its recognition network, then rerepresents that task in terms of a symbolic program solving it. Over the course of learning, these implicit initial representations realign with the explicit structure of the ï¬nal program solutions, as measured by increasing correlations between the similarity of problems in the recognition networkâs activation 4 space and the similarity of code components used to solve these problems (see Fig. S4; p < 10â using Ï2 test pre/post learning). Visualizing these learned task similarities (with t-SNE embeddings) suggests that, as the model gains a richer conceptual vocabulary, its representations evolve to group together tasks sharing more abstract commonalities (Fig. S3) â possibly analogous to how human domain experts learn to classify problems by the underlying principles that govern their solution rather than superï¬cial similarities (37, 38).
# From learning libraries to learning languages
Our experiments up to now have studied how DreamCoder grows from a âbeginnerâ state given basic domain-speciï¬c procedures, such that only the easiest problems have simple, short solutions, to an âexpertâ state with concepts allowing even the hardest problems to be solved with short, meaningful programs. Now we ask whether it is possible to learn from a more minimal starting state, without even basic domain knowledge: Can DreamCoder start with only highly generic programming and arithmetic primitives, and grow a domain-speciï¬c language with both basic and advanced domain
13
> oO Text editing LOGO Graphics 1.0 3 100 100 g > J 2 Z 80 80 8 W 60 = â 60 vy ov n 8 40 40 k) i. 20 20 S 5 OT Op 0 5 10 15 0 5 10 15 1.0 Avg depth 2.5 List processing Symbolic Regression 10 . ot ° 3 100 100 ' g Sos} ofall H 3 eo Sool daptnitt (7) 60 a 0. ; H Hi Ui legos RB WY 0.4 jo, je8 ge g 40 gO4 18 lt ho"s fa F 02 iY x 20 Foo | x . 0 0 5 10 15 0 5 10 15 o 8 Library size 25 Tower building Generative text modeling > 100 r ) full model g © no dreaming = 80 [e} mn 60 â full model 0 40 5 â== no abstraction r 20 == no dreaming & 0 m= EC baseline 0 5 10 15 0 5 10 15 == neural synthesis baseline Wake/Sleep Cycles Wake/Sleep Cycles === enumeration baseline
Figure 6: Quantitative comparisons of DreamCoder performance with ablations and baseline program induction methods; further baselines shown in Fig. S13. (A) Held-out test set accuracy, across 20 iterations of wake/sleep learning for six domains. Generative text modeling plots show posterior predictive likelihood of held-out strings on held out tasks, normalized per-character. Error bars: 1 std. dev. over ï¬ve runs. (B) Evolution of library structure over wake/sleep cycles (darker: earlier ± cycles; brighter: later cycles). Each dot is a single wake/sleep cycle for a single run on a single domain. Larger, deeper libraries are correlated with solving more tasks. The dreaming phase bootstraps these deeper, broader libraries, and also, for a ï¬xed library structure, dreaming leads to higher performance.
concepts allowing it to solve all the problems in a domain?
Motivated by classic work on inferring physical laws from experimental data (39â41), we ï¬rst task DreamCoder with learning equations describing 60 different physical laws and mathematical identities taken from AP and MCAT physics âcheat sheetsâ, based on numerical examples of data obeying each equation. The full dataset includes data generated from many well-known laws in mechanics and electromagnetism, which are naturally expressed using concepts like vectors, forces, and ratios. Rather than give DreamCoder these mathematical abstractions, we initialize the system with a much more generic basis â just a small number of recursive sequence manipulation primitives like map and fold, and arithmetic â and test whether it can learn an appropriate mathematical language of physics. Indeed, after 8 wake/sleep cycles DreamCoder learns 93% of the laws and identities in the dataset, by ï¬rst learning the building blocks of vector algebra, such as inner products, vector sums, and norms (Fig. 7A). It then uses this mathematical vocabulary to construct concepts underlying multiple physical laws, such as the inverse square law schema that enables it to learn Newtonâs law of gravitation and Coulombâs law of electrostatic force, effectively undergoing a âchange of basisâ from the initial recursive sequence processing language to a physics-style basis.
Could DreamCoder also learn this recursive sequence manipulation language? We initialized the system with a minimal subset of 1959 Lisp primitives (car, cdr, cons, . . . ) and asked it to solve 20
14
A Initial Learned Library of Concepts Primitives a-b rs | âsubtract vectors Y map uted add many vectors ; add vectors add-many-vectors (vs) = (A (vs) (fold (cdr zip vs) (car vs) (A (uv) (add-vectors u v)))) |e? cons ââ M2 empty Pa) âscale vector cdr ab power Zz inverse square t inverse-square(a, b, v) = (A (ab v) fold one Sauere "oe (scale-vector (ab/|v|*2. (sqrt 2a a (ab/|v|*2 b b v)) av) v)) ae or car D ze tbtte + 2alb al2tr2+bt+c - > J oo * sum components ee - / =n dot product period ab â ci period(a, b) = (A (a b) dot-product(u,v) = (A (uv) Q ab-ed (sum-components (zip v u (A (a Gok Gre @ BA) Ey b) (* b a))))) = @ (uv) 1 e (fold (zip vu (A (ab) (* b 5 a))) © (A (x y) (+ x y)))) T B Y cdr fold(xs,f,x0) = nil (f (fold (cdr xs) if nil? unfold(x,g,f.p) = (3) (if (p x) nil 1) (cons (f x) 1 (unfold (g x) gf p))) (A (A.B) (Y (Y @ (A (zu) Cif (| U CYA (A (vw) Cif (ni? W) © (1 Cv (cdr _w))))))) MIL (Cons _U (z (+ U1)))))) CA (a B) Cif (nil? nil (cons (- (car (Â¥ (Y @ (A (c d) (if (= d (car b)) nil (cons d (c (+ d 1)))))) (A Ce f) (if (nil? f) A (cdr (e (cdr F))))))) (car (Y (Â¥ © (A (gh) (if (= h (car b)) mil (cons h (g (+h 1)))))) (A GG) Cif (mil? J) B (cdr (i (cdr §)))))))) (a (ede b))))))) b) map(xs,f) = (fold xs (A (zs length(xs) = (fold xs ( (nx) + 1 n)) 8) filter filter (p,xs) = (fold xs (A (zs x) (if (p x) zs (cons x zs))) nil) count_to(f,x.y) = (unfold x ( (u) (fu unfold_list unfold list(f,xs,p) = (unfold xs (A (u) (f (cdr _u))) car p) Learned Library of Concepts (cons (f x) zs)) nil) Discovered Physics Equations Newtonâs Second Law Parallel Resistors , 1 = 19 a=â) A Resta = (2:9) (scale-vector(reciprocal m) (reciprocal (sum-components (add-many-vectors Fs)) (map (A(r) (reciprocal r)) Rs))) Work Force in a Magnetic u=F-d Field (dot-product F d) |F| = qlé x Bl (* q (ab-cd v_x b_y v_y b_x)) Kinetic Energy Coulomb's Law de, â, Iq-7P Cinverse-square q_1 q_2 (subtract-vectors r_1 r_2)) 1 KE= zm? BP (ab/2 m (|v142 v)) (A (x y Zu) (map (A (v) OF (* (power (/ (* x x) (fold (zip zu (A (wa) (- wa))) © (A (b c) (+ (* bb) o))) âYl 1) (11))) y) (fold (zip zu ( (de) (- de))) 0 CF 8) Ce (* ff) g)))) v)) (zip zu (A (hi) (- hh 4))))) Solution to Coulombâs Law if expressed in initial primitives Discovered Recursive Programming Algorithms Stutter (fold A (A (u v) (cons v (cons v u))) nil) Take every other index(n,xs) = (car (fold (range n) (A (u v) (cdr u)) xs)) range(n) = (count_to + 6) count_down count_down(f,n) = (count_to - (fn) n) ) A) z &y)) zip(xs.f.ys) = (map (range (length xs)) Q (n) (Ff Cindex n ys) (index n xs)))) (am) im) (unfold_list cdr A nil?) List lengths ((mm), (M)) > (3 1] crmm}, (1, (HI) â (201) (map A length) List differences (1 8 2], [05 1] + [1 3 1] (2.3 6], (1 2 4] + [11 2) (zip A-B) Solution to list differences if expressed in initial primitives
Newtonâs Second Law Parallel Resistors , 1 = 19 a=â) A Resta = (2:9) (scale-vector(reciprocal m) (reciprocal (sum-components (add-many-vectors Fs)) (map (A(r) (reciprocal r)) Rs))) Work Force in a Magnetic u=F-d Field (dot-product F d) |F| = qlé x Bl (* q (ab-cd v_x b_y v_y b_x)) Kinetic Energy Coulomb's Law de, â, Iq-7P Cinverse-square q_1 q_2 (subtract-vectors r_1 r_2)) 1 KE= zm? BP (ab/2 m (|v142 v)) (A (x y Zu) (map (A (v) OF (* (power (/ (* x x) (fold (zip zu (A (wa) (- wa))) © (A (b c) (+ (* bb) o))) âYl Solution to
Figure 7: DreamCoder develops languages for physical laws (starting from recursive functions) and recursion patterns (starting from the Y-combinator, cons, if, etc.) (A) Learning a language for physical laws starting with recursive list routines such as map and fold. DreamCoder observes numerical data from 60 physical laws and relations, and learns concepts from vector algebra (e.g., dot products) and classical physics (e.g., inverse-square laws). Vectors are represented as lists of numbers. Physical constants are expressed in Planck units. (B) Learning a language for recursive list routines starting with only recursion and primitives found in 1959 Lisp. DreamCoder rediscovers the âorigamiâ basis of functional programming, learning fold and unfold at the root, with other basic primitives as variations on one of those two families (e.g., map and filter in the fold family), and more advanced primitives (e.g., index) that bring together the fold and unfold families.
15
basic programming tasks, like those used in introductory computer science classes. Crucially the initial language includes primitive recursion (the Y combinator), which in principle allows learning to express any recursive function, but no other recursive function is given to start; previously we had sequestered recursion within higher-order functions (map, fold, . . . ) given to the learner as primitives. With enough compute time (roughly ï¬ve days on 64 CPUs), DreamCoder learns to solve all 20 problems, and in so doing assembles a library equivalent to the modern repertoire of functional programming idioms, including map, fold, zip, length, and arithmetic operations such as building lists of natural numbers between an interval (see Fig. 7B). All these library functions are expressible in terms of the higher-order function fold and its dual unfold, which, in a precise formal manner, are the two most elemental operations over recursive data â a discovery termed âorigami programmingâ (42). DreamCoder retraced the discovery of origami programming: ï¬rst reinventing fold, then unfold, and then deï¬ning all other recursive functions in terms of folding and unfolding.
# Discussion
Our work shows that it is possible and practical to build a single general-purpose program induction system that learns the expertise needed to represent and solve new learning tasks in many qualitatively different domains, and that improves its expertise with experience. Optimal expertise in Dream- Coder hinges on learning explicit declarative knowledge together with the implicit procedural skill to use it. More generally, DreamCoderâs ability to learn deep explicit representations of a domainâs con- ceptual structure shows the power of combining symbolic, probabilistic and neural learning approaches: Hierarchical representation learning algorithms can create knowledge understandable to humans, in contrast to conventional deep learning with neural networks, yielding symbolic representations of expertise that ï¬exibly adapt and grow with experience, in contrast to traditional AI expert systems.
We focused here on problems where the solution space is well captured by crisp symbolic forms, even in domains that admit other complexities such as pixel image inputs, or exceptions and irreg- ularities in generative text patterns, or continuous parameters in our symbolic regression examples. Nonetheless, much real-world data is far messier. A key challenge for program induction going forward is to handle more pervasive noise and uncertainty, by leaning more heavily on probabilistic and neural AI approaches (5, 43, 44). Recent research has explored program induction with various hybrid neuro-symbolic representations (45â49), and integrating these approaches with the library learning and bootstrapping capacities of DreamCoder could be especially valuable going forward.
Scaling up program induction to the full AI landscape â to commonsense reasoning, natural language understanding, or causal inference, for instance â will demand much more innovation but holds great promise. As a substrate for learning, programs uniquely combine universal expressiveness, data-efï¬cient generalization, and the potential for interpretable, compositional reuse. Now that we can start to learn not just individual programs, but whole domain-speciï¬c languages for programming, a further property takes on heightened importance: Programs represent knowledge in a way that is mutually understandable by both humans and machines. Recognizing that every AI system is in reality the joint product of human and machine intelligence, we see the toolkit presented here as helping to lay the foundation for a scaling path to AI that people and machines can truly build together.
In the rest of this discussion, we consider the broader implications of our work for building better models of human learning, and more human-like forms of machine learning.
16
# Interfaces with biological learning
DreamCoderâs wake-sleep mechanics draw inspiration from the Helmholtz machine, which is itself loosely inspired by human learning during sleep. DreamCoder adds the notion of a pair of interleaved sleep cycles, and intriguingly, biological sleep similarly comes in multiple stages. Fast-wave REM sleep, or dream sleep, is associated with learning processes that give rise to implicit procedural skill (11), and engages both episodic replay and dreaming, analogous to our modelâs dream sleep phase. Slow-wave sleep is associated with the formation and consolidation of new declarative abstractions (10), roughly mapping to our modelâs abstraction sleep phase. While neither DreamCoder nor the Helmholtz machine are intended as biological models, we speculate that our approach could bring wake-sleep learning algorithms closer to the actual learning processes that occur during human sleep.
DreamCoderâs knowledge grows gradually, with dynamics related to but different from earlier developmental proposals for âcurriculum learningâ (50) and âstarting smallâ (51). Instead of solving increasingly difï¬cult tasks ordered by a human teacher (the âcurriculumâ), DreamCoder learns in a way that is arguably more like natural unsupervised exploration: It attempts to solve random samples of tasks, searching out to the boundary of its abilities during waking, and then pushing that boundary outward during its sleep cycles, bootstrapping solutions to harder tasks from concepts learned with easier ones. But humans learn in much more active ways: They can choose which tasks to solve, and even generate their own tasks, either as stepping stones towards harder unsolved problems or motivated by considerations like curiosity and aesthetics. Building agents that generate their own problems in these human-like ways is an important next step.
Our division of domain expertise into explicit declarative knowledge and implicit procedural skill is loosely inspired by dual-process models in cognitive science (52, 53) and the study of human expertise (37, 38). Human experts learn both declarative domain concepts that they can talk about in words â artists learn arcs, symmetries, and perspectives; physicists learn inner products, vector ï¬elds, and inverse square laws â as well procedural (and implicit) skill in deploying those concepts quickly to solve new problems. Together, these two kinds of knowledge let experts more faithfully classify problems based on the âdeep structureâ of their solutions (37, 38), and intuit which concepts are likely to be useful in solving a task even before they start searching for a solution. We believe both kinds of expertise are necessary ingredients in learning systems, both biological and artiï¬cial, and see neural and symbolic approaches playing complementary roles here.
# What to build in, and how to learn the rest
The goal of learning like a humanâin particular, a human childâis often equated with the goal of learning âfrom scratchâ, by researchers who presume, following Turing (1), that children start off close to a blank slate: âsomething like a notebook as one buys it from the stationers. Rather little mechanism and lots of blank sheets.â The roots of program induction as an approach to general AI also lie in this vision, motivated by early results showing that in principle, from only a minimal Turing-complete language, it is possible to induce programs that solve any problem with a computable answer (2, 54â56). DreamCoderâs ability to start from minimal bases and discover the vocabularies of functional programming, vector algebra, and physics could be seen as another step towards that goal. Could this approach be extended to learn not just one domain at a time, but to simultaneously develop expertise across many different classes of problems, starting from only a single minimal basis? Progress could be enabled by metalearning a cross-domain library or âlanguage of thoughtâ (57, 58), as humans have built collectively through biological and cultural evolution, which can then differentiate
17
itself into representations for unboundedly many new domains of problems.
While these avenues would be fascinating to explore, trying to learn so much starting from so little is unlikely to be our best route to AI â especially when we have the shoulders of so many giants to stand on. Even if learning from scratch is possible in principle, such approaches suffer from a notorious thirst for dataâas in neural networksâor, if not data, then massive compute: Just to construct âorigamiâ functional programming, DreamCoder took approximately a year of total CPU time. Instead, we draw inspiration from the sketching approach to program synthesis (22). Sketching approaches consider single synthesis problems in isolation, and expect a human engineer to outline the skeleton of a solution. Analogously, here we built in what we know constitutes useful ingredients for learning to solve synthesis tasks in many different domains â relatively spartan but generically powerful sets of control ï¬ow operators, higher-order functions, and types. We then used learning to grow specialized languages atop these foundations. The future of learning in program synthesis may lie with systems initialized with even richer yet broadly applicable resources, such as those embodied by simulation engines or by the standard libraries of modern programming languages.
This vision also shapes how we see program induction best contributing to the goal of building more human-like AI â not in terms of blank-slate learning, but learning on top of rich systems of built-in knowledge. Prior to learning the domains we consider here, human children begin life with âcore knowledgeâ: conceptual systems for representing and reasoning about objects, agents, space, and other commonsense notions (59â61). We strongly endorse approaches to AI that aim to build human-understandable knowledge, beginning with the kinds of conceptual resources that humans do. This may be our best route to growing artiï¬cial intelligence that lives in a human world, alongside and synergistically with human intelligence.
# References
1. Alan M Turing. Computing machinery and intelligence. Mind, 1950.
2. Ray J Solomonoff. A formal theory of inductive inference. Information and control, 7(1):1â22, 1964.
3. Percy Liang, Michael I. Jordan, and Dan Klein. Learning dependency-based compositional semantics. In ACL, pages 590â599, 2011.
4. Tejas D Kulkarni, Pushmeet Kohli, Joshua B Tenenbaum, and Vikash Mansinghka. Picture: A probabilistic programming language for scene perception. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4390â4399, 2015.
5. Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332â1338, 2015.
6. Nick Chater and Mike Oaksford. Programs as causal models: Speculations on mental programs and mental representation. Cognitive Science, 37(6):1171â1191, 2013.
7. Sumit Gulwani. Automating string processing in spreadsheets using input-output examples. In ACM SIGPLAN Notices, volume 46, pages 317â330. ACM, 2011.
18
8. Miguel L´azaro-Gredilla, Dianhuan Lin, J Swaroop Guntupalli, and Dileep George. Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs. Science Robotics, 4(26):eaav3150, 2019.
9. Jacob Devlin, Rudy R Bunel, Rishabh Singh, Matthew Hausknecht, and Pushmeet Kohli. Neural program meta-induction. In NIPS, 2017.
10. Yadin Dudai, Avi Karni, and Jan Born. The consolidation and transformation of memory. Neuron, 88(1):20 â 32, 2015.
11. Magdalena J Fosse, Roar Fosse, J Allan Hobson, and Robert J Stickgold. Dreaming and episodic memory: a functional dissociation? Journal of cognitive neuroscience, 15(1):1â9, 2003.
12. Geoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. The âwake-sleepâ algorithm for unsupervised neural networks. Science, 268(5214):1158â1161, 1995.
13. Eyal Dechter, Jon Malmaud, Ryan P. Adams, and Joshua B. Tenenbaum. Bootstrap learning via modular concept discovery. In IJCAI, 2013.
14. Percy Liang, Michael I. Jordan, and Dan Klein. Learning programs: A hierarchical bayesian approach. In ICML, 2010.
15. Matej Balog, Alexander L Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. Deepcoder: Learning to write programs. ICLR, 2016.
16. Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Pushmeet Kohli. Robustï¬ll: Neural program learning under noisy i/o. ICML, 2017.
17. Kevin Ellis, Lucas Morales, Mathias Sabl´e-Meyer, Armando Solar-Lezama, and Josh Tenenbaum. Library learning for neurally-guided bayesian program induction. In NeurIPS, 2018.
18. Tessa Lau, Steven A Wolfman, Pedro Domingos, and Daniel S Weld. Programming by demonstra- tion using version space algebra. Machine Learning, 53(1-2):111â156, 2003.
19. Tom M Mitchell. Version spaces: A candidate elimination approach to rule learning. In Proceedings of the 5th international joint conference on Artiï¬cial intelligence-Volume 1, pages 305â310. Morgan Kaufmann Publishers Inc., 1977.
20. Oleksandr Polozov and Sumit Gulwani. Flashmeta: A framework for inductive program synthesis. ACM SIGPLAN Notices, 50(10):107â126, 2015.
21. Ross Tate, Michael Stepp, Zachary Tatlock, and Sorin Lerner. Equality saturation: a new approach to optimization. In ACM SIGPLAN Notices, volume 44, pages 264â276. ACM, 2009.
22. Armando Solar Lezama. Program Synthesis By Sketching. PhD thesis, 2008.
23. John K Feser, Swarat Chaudhuri, and Isil Dillig. Synthesizing data structure transformations from input-output examples. In PLDI, 2015.
24. Rajeev Alur, Dana Fisman, Rishabh Singh, and Armando Solar-Lezama. Sygus-comp 2017: Results and analysis. arXiv preprint arXiv:1711.11438, 2017.
19
25. M. M. Bongard. Pattern Recognition. Spartan Books, 1970.
26. Douglas Hofstadter and Gary McGraw. Letter spirit: An emergent model of the perception and creation of alphabetic style. 1993.
27. Jean Raven et al. Raven progressive matrices. In Handbook of nonverbal assessment, pages 223â237. Springer, 2003.
28. David D. Thornburg. Friends of the turtle. Compute!, March 1983.
29. Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 23â30. IEEE, 2017.
30. Patrick Winston. The MIT robot. Machine Intelligence, 1972.
31. Richard S Sutton, Doina Precup, and Satinder Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artiï¬cial intelligence, 112(1-2):181â211, 1999.
32. Gary F Marcus, Sugumaran Vijayan, S Bandi Rao, and Peter M Vishton. Rule learning by seven-month-old infants. Science, 283(5398):77â80, 1999.
33. Brenden Lake, Chia-ying Lee, James Glass, and Josh Tenenbaum. One-shot learning of generative speech concepts. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 36, 2014.
34. Luke Hewitt and Joshua Tenenbaum. Learning structured generative models with memoised wake-sleep. under review, 2019.
35. Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer-Verlag New York, Inc., 2006.
36. Andrew Cropper. Playgol: Learning programs through play. IJCAI, 2019.
37. Michelene TH Chi, Paul J Feltovich, and Robert Glaser. Categorization and representation of physics problems by experts and novices. Cognitive science, 5(2), 1981.
38. M.T.H. Chi, R. Glaser, and M.J. Farr. The Nature of Expertise. Taylor & Francis Group, 1988.
39. Herbert A Simon, Patrick W Langley, and Gary L Bradshaw. Scientiï¬c discovery as problem solving. Synthese, 47(1):1â27, 1981.
40. Pat Langley. Scientiï¬c discovery: Computational explorations of the creative processes. MIT Press, 1987.
41. Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. science, 324(5923):81â85, 2009.
42. Jeremy Gibbons. Origami programming. 2003.
20
43. Kevin Ellis, Daniel Ritchie, Armando Solar-Lezama, and Joshua B Tenenbaum. Learning to infer graphics programs from hand-drawn images. NIPS, 2018.
44. Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska- Barwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538(7626):471, 2016.
45. Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton, and Swarat Chaudhuri. Houdini: Lifelong learning as program synthesis. In Advances in Neural Information Processing Systems, pages 8687â8698, 2018.
46. Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 39â48, 2016.
47. Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. In Advances in Neural Information Deepproblog: Neural probabilistic logic programming. Processing Systems, pages 3749â3759, 2018.
48. Halley Young, Osbert Bastani, and Mayur Naik. Learning neurosymbolic generative models via program synthesis. arXiv preprint arXiv:1901.08565, 2019.
49. Reuben Feinman and Brenden M. Lake. Generating new concepts with hybrid neuro-symbolic models, 2020.
50. Yoshua Bengio, J´erËome Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In ICML, 2009.
51. Jeffrey L Elman. Learning and development in neural networks: The importance of starting small. Cognition, 48(1):71â99, 1993.
52. Jonathan St BT Evans. Heuristic and analytic processes in reasoning. British Journal of Psychology, 75(4):451â468, 1984.
53. Daniel Kahneman. Thinking, fast and slow. Macmillan, 2011.
54. Ray J Solomonoff. A system for incremental learning based on algorithmic probability. Sixth Israeli Conference on Artiï¬cial Intelligence, Computer Vision and Pattern Recognition, 1989.
55. J¨urgen Schmidhuber. Optimal ordered problem solver. Machine Learning, 54(3):211â254, 2004.
56. Marcus Hutter. Universal artiï¬cial intelligence: Sequential decisions based on algorithmic probability. Springer Science & Business Media, 2004.
57. Jerry A Fodor. The language of thought, volume 5. Harvard University Press, 1975.
58. Steven Thomas Piantadosi. Learning and the language of thought. PhD thesis, Massachusetts Institute of Technology, 2011.
21
59. Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines that learn and think like people. Behavioral and brain sciences, 40, 2017.
60. Elizabeth S Spelke, Karen Breinlinger, Janet Macomber, and Kristen Jacobson. Origins of knowledge. Psychological review, 99(4):605, 1992.
61. Susan Carey. The origin of concepts: A pr´ecis. The Behavioral and brain sciences, 34(3):113, 2011.
Supplement. Supplementary materials available at https://web.mit.edu/ellisk/www/ dreamcodersupplement.pdf. Acknowledgments. We thank L. Schulz, J. Andreas, T. Kulka- rni, M. Kleiman-Weiner, J. M. Tenenbaum, M. Bernstein,, and E. Spelke for comments and suggestions that greatly improved the manuscript. Supported by grants from the Air Force Ofï¬ce of Scientiï¬c Re- search, the Army Research Ofï¬ce, the National Science Foundation-funded Center for Brains, Minds, and Machines, the MIT-IBM Watson AI Lab, Google, Microsoft and Amazon, and NSF graduate fellowships to K. Ellis and M. Nye.
22 | {
"id": "1901.08565"
} |
2006.08669 | On Adversarial Bias and the Robustness of Fair Machine Learning | Optimizing prediction accuracy can come at the expense of fairness. Towards
minimizing discrimination against a group, fair machine learning algorithms
strive to equalize the behavior of a model across different groups, by imposing
a fairness constraint on models. However, we show that giving the same
importance to groups of different sizes and distributions, to counteract the
effect of bias in training data, can be in conflict with robustness. We analyze
data poisoning attacks against group-based fair machine learning, with the
focus on equalized odds. An adversary who can control sampling or labeling for
a fraction of training data, can reduce the test accuracy significantly beyond
what he can achieve on unconstrained models. Adversarial sampling and
adversarial labeling attacks can also worsen the model's fairness gap on test
data, even though the model satisfies the fairness constraint on training data.
We analyze the robustness of fair machine learning through an empirical
evaluation of attacks on multiple algorithms and benchmark datasets. | http://arxiv.org/pdf/2006.08669 | Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, Reza Shokri | stat.ML, cs.CR, cs.CY, cs.LG | null | null | stat.ML | 20200615 | 20200615 | 0 2 0 2
n u J 5 1 ] L M . t a t s [ 1 v 9 6 6 8 0 . 6 0 0 2 : v i X r a
# On Adversarial Bias and the Robustness of Fair Machine Learning
Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemiâ , Reza Shokri National University of Singapore (NUS), â Google {hongyan, taduy, murakond, reza}@comp.nus.edu.sg, [email protected]
# Abstract
Optimizing prediction accuracy can come at the expense of fairness. Towards minimizing discrimination against a group, fair machine learning algorithms strive to equalize the behavior of a model across diï¬erent groups, by imposing a fairness constraint on models. However, we show that giving the same importance to groups of diï¬erent sizes and distributions, to counteract the eï¬ect of bias in training data, can be in conï¬ict with robustness. We analyze data poisoning attacks against group-based fair machine learning, with the focus on equalized odds. An adversary who can control sampling or labeling for a fraction of training data, can reduce the test accuracy signiï¬cantly beyond what he can achieve on unconstrained models. Adversarial sampling and adversarial labeling attacks can also worsen the modelâs fairness gap on test data, even though the model satisï¬es the fairness constraint on training data. We analyze the robustness of fair machine learning through an empirical evaluation of attacks on multiple algorithms and benchmark datasets.
# Introduction
Trustworthy algorithms are crucial components of machine learning frameworks in critical systems, as highlighted by many AI regulations and policies as well as technical research papers. Algorithmic fairness is at the core of the trust requirements for automated decision making in sensitive domains, to avoid systemic discrimination against protected groups. Many technical notions of fairness are proposed and many algorithms for enforcing such notions are designed [1, 8, 15, 19, 24, 28, 32, 39â41]. Group fairness measures, such as equalized odds [19] which is the focus of this paper, suggest equalizing the modelâs behavior across groups that are identiï¬ed based on a protected attribute (e.g., race or gender). Fairness, however, has a cost on the modelâs performance, as the best decision rules that satisfy a deï¬nition of fairness diï¬er from the optimal decision rules [10]. In this paper, we ask how adversarially adding a fraction of the training data can further increase the cost of fairness.
A large body of work shows machine learning models are vulnerable to various types of data poisoning attacks that can impose a large test loss on the target models [3, 6, 9, 17, 21, 26, 31, 34â38]. Recent work studies the performance of fair machine learning in the presence of noise over a fraction of the training data [4, 7, 11, 22, 23, 29]. These assume a uniform distribution of under-representation and labeling bias, or the analysis is limited to having an unlimited number of training data from the underlying distribution. We present a detailed survey of the related work in Section 6. To the best of our knowledge, this paper provides the ï¬rst analysis of the robustness of fair machine learning algorithms in the adversarial setting, against data poisoning attacks. This paper shows the implications of adversarial bias on fair machine learning, and calls for robust algorithmic fairness. We present a framework for designing data poisoning attack algorithms against models trained with equalized odds as the fairness constraint. In our algorithms, we assume the attacker who can control the sampling process and (in the stronger case, also) the labeling process for some of the
1
training data. Our attacks eï¬ectively exploit the fact that fair algorithms equalize the importance of diï¬erent groups (with diï¬erent sizes and data distributions). This can change the inï¬uence of individual data samples in those groups in a disproportionate way, enabling the attacker to place poisoning data where it can impose a large loss on the trained model. We extensively evaluate the robustness of fair machine learning on multiple fairness algorithms and benchmark datasets. Here are the key ï¬ndings in our empirical evaluation:
We show that there is a signiï¬cant conï¬ict between fairness and robustness. As we tighten the guaranteed fairness gap, we show that the susceptibility of fair models to data poisoning attacks increases. Notably, enforcing exact fairness results in the largest drop in test accuracy under attack, much beyond what an adversary can achieve in unconstrained models. We can observe this eï¬ect even for the case of the most limited adversary who can only control data sampling for a small fraction of the training data, without being able to change the features and labels. We observe that the adversary achieves this by placing the poisoning data in the smallest group with the least frequent label. To satisfy the fairness constraint, the model ends up sacriï¬cing its generalizability over the majority group to equalize its prediction performance.
The impact of our data poisoning attacks is not limited to reducing the test accuracy. We show that the attack also results in a signiï¬cant loss of fairness over test data. Adversarial manipulation of the training data prevents the model to generalize its fairness to clean test data, even though it is guaranteed on training data. The attacker can inï¬uence the models to become even more discriminatory than unconstrained models, according to the fairness measure.
# 2 Background and the Problem Statement
Machine learning. Consider a binary classification model fg: % â Y, that maps the feature space Â¥ to binary labels Y = {+,â}. The model is represented by its parameters 6 taken from a parameter space 0. The model is trained to minimize a loss function : 0 x XÂ¥ x Y > R, over its training set D. We let X and Y to denote the random variables associated with the features and the labels, and (X,Y) to denote the underlying distribution of the data. We obtain the optima! parameters as 6 = argmingce pL: D), where L(O;D) = Va ep (9; x,y) is the cumulative loss of the model over the training set. We quantify the accuracy of the trained model on a test dataset Dtest-
Fairness. We assume all data points are split into two groups based on a binary attribute S â {0, 1} (e.g., gender), referred to as the protected/sensitive attribute. This attribute could be part of the feature set X . In fair machine learning, our objective is to train a model such that its predictions are non-discriminatory and fair with respect to S. To this end, the training process needs to be adjusted to equalize the prediction behavior of the model across the two groups [15, 16, 19, 25, 40]. In this work, we focus on equalized odds, which is a widely-used deï¬nition for group fairness [19]. A model is fair if, given the true label for a data point, the modelâs prediction on a data point and its sensitive attribute are conditionally independent. We use a relaxed notion of equalized odds.
Deï¬nition 1 (Equalized odds). A binary classiï¬er fθ is δ-fair under equalized odds if
AQO,D) = max. | Prlfo(X) y|S=0,Y =y] â Pri fo(X) AylS=1,Y = yl] <4, (1)
where, the probabilities are computed empirically over the training data set D. We refer to â as the modelâs empirical fairness gap. A model satisï¬es exact fairness when δ = 0.
Fairness is achieved by ensuring δ-fairness empirically on the modelâs training set, e.g., through
2
minimizing the modelâs empirical loss under δ-fairness as a constraint [1] or post-processing [19]. We deï¬ne the constraint C(θ, D) := â(θ, D) â δ ⤠0 as a fairness constraint for the model.
Data poisoning. An adversary might be able to contaminate the training dataset in order to degrade the test accuracy of a classification model. In this setting, we assume the training set is composed of the clean dataset D, of size n, and the poisoning dataset D, of size en, which is contributed by the attacker. The level of contamination is determined by e⬠(the ratio of the size of the poisoning data over the clean data in the training set). The attackerâs objective is to maximize the loss of the classifier over the data distribution (evaluated using the test dataset). This objective can be stated as a bi-level optimization problem subject to the fairness constraint C(@,D) <0:
A L(0;De U Dp) max (0; X,Y)|, where 6:= argmin net cy) le ) gmin UD, , such that C(9,D, UD») < 0, (2)
where the expectation is taken over the underlying distribution of the (clean) data.
Problem statement. The primary research question that we investigate in this paper is whether, how, and why training a model with (equalized odds) fairness can compromise its robustness to data poisoning attacks, compared with unconstrained models (without any fairness constraint).
We assume that the attacker has access to Dc, and knows the learning task, the structure of the classiï¬cation model, the learning hyper-parameters, and the fairness constraint on the target model. A strong threat model involves the attacker that can craft any arbitrary poisoning data Dp. In this paper, however, we focus on a more restricted yet more realistic attack scenario, where the attacker is restricted to select the feature vector of the poisoning data from an attack dataset Dk, which is sampled from the same underlying distribution of the clean dataset. Two variations of the attack are adversarial sampling, where Dp â Dk, and adversarial labeling, where Dp â Dk ⪠{(x, 1 â y) : (x, y) â Dk}. Adversarial labeling is a more powerful attack as the attacker is also allowed to craft the labels (via label ï¬ipping) for generating the poisoning data.
The rationale for considering adversarial sampling and adversarial labeling attacks is that, in a realistic scenario, the attacker might not be able to craft feature vectors (i.e., generate fake loan applications). However, he could be part of a system that can introduce a bias in the sampling process for the training data, by ignoring some samples and including the others. In addition to this, the attacker could also be capable of inï¬uencing the decision making for some data points (i.e., producing labels) which later will be used as part of the target modelâs training set. Our objective is to design these forms of poisoning attacks that introduce an adversarial bias into the training set of fair models.
# 3 Optimization Framework for Adversarial Bias
The bi-level optimization problem (2) is non-convex and intractable [2, 12, 18]. The fairness constraint in the inner optimization makes the problem even more diï¬cult.1 In this section, we present a number of approximations for problem (2) which enables us to design eï¬ective attack algorithms.
We ï¬rst approximate the loss function (which is maximized in the outer optimization of Eq. (2)) by following the same techniques used for designing poisoning attacks against unconstrained models [37]. For this reason, let Ëθ be the solution to the inner optimization in (2). We use the loss on the clean
1We would like to point out that, for the unconstrained model, under the convex assumption of the loss function, it is possible for the attacker to ï¬nd the approximate solution by replacing the inner optimization with its stationarity (KKT) condition [3, 26, 35].
3
training data as an approximation for the loss over the underlying data distribution (of test data).
A ~ L(6; Dest) L(O;De) . £(6;DeUD, vey [l(BX.¥)] = Peat), SDS < Ft ») ID] [De| (3)
The inequality provides a valid upper bound, as the loss function ¢ is non-negative. Note that this bound becomes tighter if the fair model fg fits the poisoned training dataset well. With the same line of reasoning, we replace the objective of the inner minimization in Eq. (2) with £(@:;PcUDp)/n, where n is the size of the clean training dataset De.
The fairness constraint in the inner optimization of Eq. (2) makes it hard to track the inï¬uence of Dp on the training loss. We use a Lagrange multiplier λ â R+ to replace the constraint for the inner optimization problem with a penalizing term:
0:D 1 4:D min £8; ) s.t. C(0,D) < 0} = min max (« dy Ac(0,D)) Te) n J 0â¬0 ACR, n 6:D > max min (« dy AC(O, )) (4) AER, EO n
where D = Dc ⪠Dp. The last inequality in Eq. (4) follows from the weak duality theorem [5].
Based on the dual problem, the attacker can try to ï¬nd a poisoning dataset Dp by maximizing a lower bound provided by the Lagrangian function minθâÎ (L(θ;D)/n + λC(θ, D)) for a ï¬xed λ â R+. Indeed, maximizing the lower bound provided by the Lagrangian function would result in a solution with a high loss (which is guaranteed to be at least equal to the loss for the lower bound) for the original problem.
In this optimization procedure, we can also replace the fairness constraint C(θ, D) := â(θ, D) â δ with the fairness gap â(θ, D), because the constant value δ ⥠0 does not aï¬ect the solution for the Lagrangian. Finally, by considering all the above-mentioned steps, the new attackerâs objective is:
L(0;De UD») max min n ng min + AA(6,D-U ,)) (5)
Thus, the goal is to ï¬nd a poisoning dataset that maximizes a linear combination of the training loss and the modelâs violation from the fairness constraint, where λ controls the penalty for the violation.
# 4 Attack Algorithms
In Section 3, we explained how the objective of an optimal attacker could be interpreted as (5). Towards solving (5), the attacker needs to overcome a number of subtle challenges: the loss function and the constraint are non-convex functions of the model parameter θ â Î, and the fairness gap is not an additive function of the training data points (x, y) â D. These two keep little hope to solve Eq. (5) without further assumptions. To overcome these issues, in Section 4.1, we ï¬nd an approximation for the fairness gap which is additive in the training data points. By using this additivity property, we design an online algorithm for the data poisoning attack. We further prove the optimality of our algorithm for this modiï¬ed objective under some reasonable conditions. In Section 4.2, we present another variant of such online algorithms, which under the Lagrange multiplier λ = 0 is equivalent to the prior poisoning attack algorithms against unconstrained models [27, 37].
4
Algorithm 1 Online Gradient Descent Algorithm for Generating Poisoning Data for Fair Models 1: Input: Clean data D,, n = |D,|, feasible poisoning set F(D,.), number of poisoning data en,
penalty parameter (Lagrange multiplier) λ, learning rate η.
Output: Poisoning dataset Dp. Initialize 6° ⬠© for t=1,:-- ,en do (c',y') argmax(s,yer(D,)
Initialize 6° ⬠© for t=1,:-- ,en do (c',y') argmax(s,yer(D,) [â¬Â° £1: a,y)+A-A (01, D.U {(2,y)}â) ] Dp â DpU {(x',y")} gt gt! â (Tee Pad LY le . 0(0t-1; xt, yt) +H. A(6°-1, D, U {(2'.y)}")]) n end for
# 4.1 An Approximation for the Fairness Gap
To design an additive approximation of the fairness gap A, we consider the contribution of each training data point to the fairness gap independently. Let {(2, yy} be the & repetition of a single data point (x,y). Thus DU {(a,y)}* is equivalent to adding k copies of the data point (x, y) to set D. In this setting, for any data point (x,y) ⬠Dp, A(0,D. U {(z, y)}â¢) is a proxy for measuring the contribution of that data point to the fairness gap A(6,D,U Dp), and its maximum over all data points in Dp, provides an upper bound on the fairness gap of the model when the size of the poisoning set is en. Thus, we get an approximation for the fairness gap as follows:
+ > A@,P.u{(,y)}"). (6) A(0,DeU Dp) © = ~~ («,y)EDp
By substituting the fairness gap with its surrogate function, the objective of the attacker is to solve:
M* = max min te (0;D. UD») + a. ) A(0,D. U {(x,y)}â) |, (7) Dp 060
, P en ; : , (,y)â¬Dp M(D,)
where M (Dp) is the loss which is incurred by any poisoning dataset Dp on the fair model, and M â is the maximum loss under the optimal attack.
Algorithm 1, a variant of the online gradient descent methods [20], is our solution to the problem (7). It initializes a model 6° ⬠©, and identifies en poisoning data points iteratively. The feasible set of poisoning points F(D,) is determined by the capabilities of the attacker. For adversarial sampling attacks, we have F(D,) = D,, and for adversarial labeling attacks, we have F (Dx) = Dy U {(@, 1 â y) : (x,y) ⬠Dy}. The algorithm iteratively performs the following steps:
Data point selection. (Algorithm 1, line 5): It selects a data point with the highest impact on a weighted sum of the loss function and the fairness gap with respect to the model parameter θtâ1. Parameter update. (Algorithm 1, line 7): The parameters are updated to minimize the penalized loss function based on the selected data point (xt, yt). In this way, the algorithm (through the approximations made by the Lagrange multiplier and the surrogate function) keeps track of the fair model under the set of already selected poisoning data points by the attack.
In Theorem 1, by following the approach proposed by [37], we relate the performance of Algorithm 1 with the loss of the optimal attack for Eq. (7). Moreover, in Appendix B.2, we prove that under some reasonable conditions (e.g., by using similar assumptions made by [13] to approximate the fairness gap), our algorithm ï¬nds the (nearly) optimal solution for Eq. (7).
5
Theorem 1. Let Dj be the data poisoning set produced by Algorithm 1. Let Regret(en) be the regret of this online learning algorithm after en steps. The performance of the algorithm is guaranteed by
Regret (en Mt â (pe) < Bestel) (8) en
where, M â and M (Dâ and the poisoning set Dâ p) represent the loss of the fair model under the optimal data poisoning attack p, respectively.
# 4.2 A Surrogate Function for the Target Model
In this section, we present a second algorithm that diï¬ers from Algorithm 1 in its parameter update step, but is the same in its data point selection approach. Indeed, in Algorithm 1, the parameter update step provides an approximation (through adding the fairness constraint as a penalizing term and approximating the fairness gap) for the target fair model. An alternative strategy, for an attacker, could be iteratively adding data points that maximize a combination of the loss and the fairness gap, however, over the parameters of the unconstrained model. In this case, θt would represent the parameters of an unconstrained model, without considering the fairness constraint. Thus we update the parameters of the model in the direction of decreasing the loss for the unconstrained model:
of 9 (PAPO Sc yaa hata). (9) n
The pseudo-code of this algorithm is presented in Algorithm 2 in Appendix B.3. An intuitive explanation of Algorithm 2 is as follows: the attacker, from the result of the parameter update step (9), would be able to estimate the unconstrained model (as a surrogate for the target fair model) over the set of currently selected points. Then, a point with the largest value for a weighted sum of the loss and the fairness gap for the estimated surrogate model, potentially has a large impact on reducing the accuracy of the fair model. We should point out that Algorithm 2 reduces the chance of getting stuck in local minima because, in each parameter update, it makes a step towards the negative gradient of the exact unconstrained loss. This is in contrast with Algorithm 1, where due to the diï¬culty of approximating a constrained max-min problem, it might converge to some parameters not so close to the actual fair model or even not converge.
Note that, in our algorithms, if we set λ = 0, the adversarial bias attacks and their objectives are similar to the data poisoning attacks against the unconstrained models, e.g., the work of Steinhardt et al. [37]. However, without taking into account the inï¬uence of the poisoning data on the fairness gap (as we do in the data point selection step), the attacker would not be as eï¬ective. In Section 5, we empirically investigate to what extend considering both the loss and fairness gap in designing attacks aï¬ects the accuracy of a fair model (which is trained over the union of clean data and poisoning data).
# 5 Evaluation
In this section, we present the main ï¬ndings of our experiments. See details and results in Appendix D.
# 5.1 Evaluation Setup
Datasets and models. We train logistic regression models on the COMPAS dataset [30] and the Adult dataset [14], which are benchmark datasets in the fairness literature. We use race (white/black) in the COMPAS dataset, and gender in the Adult dataset, as the sensitive attribute S (which is part
6
of the feature vector X ). The accuracy of classiï¬cation models on these two datasets is low and close to predicting the most frequent label in the set. This does not help understanding the behavior of models in the presence of poisoning data. Hence, we perform a data pre-processing to separate hard examples from the data that we use for the clean training data Dc, test data Dtest, and the attack dataset Dk. Hard examples are data points with large loss on a trained model on the entire dataset. We will use hard examples as one of our baselines. We also add hard examples to the attack dataset. Fair machine learning algorithms. We train logistic regression models with an equalized odds fairness constraint, by using the post-processing approach (the original exact equalized odds algorithm) [19] and the reductions approach (the relaxed equalized odds algorithm) [1]. See Appendix C.
Adversarial bias. Attacker adds poisoning data selected from the attack dataset Dy, using Algorithm 1 (with \ = ⬠on COMPAS, and \ = 0.le on Adult), and Algorithm 2 (with A = 100e for both datasets). We use Algorithm 2 with A = 0 to attack unconstrained models. We use the same learning rate 7 = 0.001 in both algorithms. See Appendix D.2 for a discussion on choosing X.
Baseline algorithms. In addition to comparing with prior data poisoning attacks against unconstrained models [37], we also consider the following baselines. Random sampling: Attacker randomly selects data points from Dk. Label ï¬ipping: Attacker randomly selects data points from Dk and ï¬ips their labels. Hard examples: Attacker randomly selects data points from the set of hard examples.
# 5.2 Evaluation Results
In this section, we present the experimental results on the COMPAS dataset. We run each experiment 100 times, with randomizing the datasets and random seeds in the algorithms, and report the average and standard deviation values. See Appendix D for the full results on COMPAS dataset, as well as the same evaluations on the Adult dataset (in which we observe the same patterns).
Conflict between fairness and robustness. Figure 1 compares the test accuracy of uncon- strained models and fair models under data poisoning attacks. We attack the unconstrained models using Algorithm 2 with \ = 0 (no fairness constraint), which is equivalent to the optimal attack [37]. We attack the fair models using Algorithm 2 with A = 100e, and Algorithm 1 with \ = «. The baselines, e.g., adding randomly selected hard examples, do not have much effect on test accuracy. However, under attacks with the same capability, the fair models are noticeably less robust than unconstrained models. At ¢ = 0.2, the test accuracy of fair models approaches what can be achieved even by a constant classifier. The significant outcomes can be evidently observed in the plots on adversarial sampling, where the adversary cannot change data labels (so effectively he is adding clean data, but in an adversarially biased manner).
In the benign setting (⬠= 0), the reduction approach [1] (relaxed equalized odds with 6 = 0.01) has a visibly better test accuracy compared to post-processing approach [19] (exact equalized odds, 56 = 0), which reflects the cost of fairness on model accuracy. Figure 2(a) shows that this cost is significantly amplified for fair models under attack, as we train models with larger levels of fairness (i.e., smaller 5). Notably, comparing 6 = 0.1 (weaker fairness) with 6 = 0.01 (stronger fairness) on the reduction approach [1], as ⬠increases, shows that robustness against adversarial bias decreases as we increase the enforced fairness level. Thus, robustness and fairness are in conflict.
Implication of adversarial bias for majority vs. minority groups. Figures 2(b) and 2(c) show the test accuracy for the majority and minority groups. We observe that the impact of adversarial bias is not homogeneous across diï¬erent groups in test data. To understand the implications of the attack, observe the relation between test accuracy of two groups on the unconstrained model, and then compare it with the same in fair models (all under the same poisoning data). We observe that,
7
# Fair [19] (δ = 0) - Adv. Sampling
Fair [1] (δ = 0.01) - Adv. Sampling
# Unconstrained Model - Adv. Sampling
Unconstrained Model - Adv. Sampling Fair [19] (5 = 0) - Adv. Sampling Fair [1] (5 = 0.01) - Adv. Sampling lp ; ; ; â 1 : ; ; â 1 ; ; ; ; 7 0.94 a pee 109 E os 40.8 0.7 40.7 0.6 + 0.6 5Lu | | | _tos5lu | | | _tos5lu | | | | 05005 01 015. 02 °° 0 005 01 015 02 °°? 0 005. 01 015 02 ⬠⬠⬠Unconstrained Model - Adv. Labeling Fair [19] (5 = 0) - Adv. Labeling Fair [1] (5 = 0.01) - Adv. Labeling lp : : : â lpn : : : â Ip : : : : 0.9 ~ 40.9 0.9 Seac-: s t+ 4-4 E 0.8} 0.8 40.8 | a 0.7 40.7 40.7 F a 0.6 0.6} 0.6 | =} 05005 01 015. 02 °° 0 005 01 015 02 °°? 0 005. 01 015 02 ⬠⬠⬠âe Alg. 2 (\ = 0) [37] -s-Alg. 2 (A = 100e)ââ â Alg- 1 (A=e) = -8 Label flipping ââ Random sampling -~â Hard examples ââ Constant prediction
y c a r u c c A t s e T
y c a r u c c A t s e T
Figure 1: Test accuracy of unconstrained and fair models under data poisoning attacks - COMPAS dataset. The x-axis ⬠is the ratio of the size of poisoning dataset Dp to the size of clean dataset De, and reflects the contamination level of training set. We compare the impact of adversarial bias with baselines and poisoning attacks against unconstrained models, for various ¢. The difference between test accuracy at ⬠= 0 (benign setting) and larger ⬠values reflects the impact of the attack. Constant prediction always outputs the majority label in clean dataset.
on fair models the attack is signiï¬cantly more impactful on the majority group. However, on the unconstrained model, the minority group is the one that incurs a larger loss.
We also compute the fairness gap on the unconstrained model with respect to the training data (see Figure 9 for all results). On training data poisoned with Algorithm 2 (A = 100e) at « = 0.1, the fairness gap is 0.54, which is much larger than that of the random sampling baseline with 0.3 fairness gap. This indicates that the data poisoning attack for adversarial bias increases the fairness gap. This explains the underlying strategy of the attacker against fair machine learning: to increase fairness gap and distort the data distribution of mainly the minority group in order to force the fair algorithm to lower the accuracy over the whole distribution when trying to equalize its behavior across the groups.
Distribution and importance of poisoning data data are placed under attacks designed for adversarial . We investigate where exactly the poisoning bias. We observe that the data samples generated by our most effective attack, Algorithm 2 wi h A = 100e, mostly belong to the smallest subgroup, i.e., the smallest sensitive group s with the least frequent label y (in this case it is + label with race as Black). Thus, the attack algorithms effectively exploit the fact that fair models give a higher weight to points from the under-represented areas of the distribution to satisfy the constraints.
8
(a) Overall (b) Majority group (c) Minority group 1 1 : : : â 1 ; : : E43 f $444 0.9 0.94 Joo | » ost Jos} Jos} 1 07 40.7} 40.7} | â+- Unconstrained model â= Fair [1] (6 = 0.1) 9.67). Fair [1] (6 = 0.01) 7) 0.6 40.6} | -e Pair [19] (6 = 0) 5 Lo + ' | tos slo | | | | 05 005 01 01502 °° . 05° 0.05 010.502 ⬠⬠â¬
y c a r u c c A t s e T
Figure 2: Effect of fairness level 6 on robustness across groups under adversarial sampling attack COMPAS dataset. For a given ⬠the poisoning data is the same for all algorithms (generated using Alg. 2 with \ = 100e). The majority group (whites) contributes 61% of the training data. The curve for accuracy of the minority group under the unconstrained model overlaps with that of the model with exact fairness (A = 0), thus is not visible. See Figure 5 for adversarial labeling results.
See Figure 11 for the distribution of poisoning points in all experiments. We also compute the accuracy of trained models on their poisoning data. The accuracy of fair models with 6 = 0.01 on the poisoning data from Algorithm 2 is approximately 0.2 at ⬠= 0.1, which increases to almost 0.4 at ⬠= 0.2. Whereas, the accuracy of unconstrained models on the poisoning data remains more or less zero, implying that the unconstrained models ignore these points. See Figure 7 for the accuracy of models on their clean and poisoning data. We observe that by increasing ¢, the accuracy of fair models on their poisoning data Dp increases, whereas it decreases on their clean training data De. This reduces their ability to learn the underlying data distribution and generalize to the (clean) test data.
Fairness gap on test data. The ultimate objective of fair machine learning is to extend fairness to test data. Table 1 shows how adversarial bias can jeopardize the fairness generalizability of fair models. We observe that, for \ > 0, the lower the fairness level 6 on training data is, the higher the fairness gap on test data becomes. Note that e.g., for a fair model with 6 = 0.01 under adversarial sampling attack using Algorithm 2 with A = 100e, the fairness gap on the test data is about 0.37. This fairness gap is even larger than the fairness gap of an unconstrained model (0.21 in the benign setting and 0.26 under data poisoning). This shows that even by just controlling the sampling process for a small fraction of the training set, without affecting the labels, attacker can influence models trained with fairness constraints, to become more discriminatory than unconstrained models.
# 6 Related Work
# 6.1 Fairness in Machine Learning
A classiï¬er that is learned by minimizing the overall cumulative loss might not perform well on one sensitive group (usually the minority group), when the distribution of features per each class is diï¬erent across groups. In order to address this problem, multiple deï¬nitions of fairness are proposed in the literature. Examples include metric equality across sensitive groups [8, 19], individual fairness [15], causality [28], and many techniques to satisfy group-based fairness (which is the focus of this paper) such as pre-processing methods [32, 41], in-processing methods [1, 24, 39, 40], and post-
9
Table 1: Fairness gap of attacked models on test data â- COMPAS dataset, for « = 0.1. The fairness gap A is defined in (1). The numbers reflect how unfair the model is with respect to the protected group in the test data. For fair models, compare numbers with 6 (the guaranteed fairness gap on training data). The farther apart A and 6 are, the less the fairness generalization is on test data.
Attacks Unconstrained Fair [1] Fair [1] Fair [19] Model (5 = 0.1) (6 = 0.01) (6 = 0) Benign 0.21+0.07 0.11+0.06 0.06+0.04 0.07+0.04 Random sampling 0.19+0.07 0.08+0.03 0.11+0.05 0.13+0.07 Hard examples 0.19+0.08 0.09+0.03 0.13+0.05 0.15+0.07 Label flipping 0.23+0.07 0.09+0.04 0.07+0.04 0.10.06 Ady. sampling (Alg. 2, \ = 0) [37| 0.26+0.08 0.19+0.07 0.3040.07 0.27+0.08 Adv. sampling (Alg. 2, \ = 100e) - 0.29+0.06 0.3740.09 0.53+0.05 Ady. sampling (Alg. 1, \ = â¬) - 0.12+0.07 0.21+0.10 0.25+0.13 Ady. labeling (Alg. 2, A = 0) [37] 0.28+0.08 0.13+0.05 0.19+0.08 0.25+0.08 Adv. labeling (Alg. 2, A = 100e) - 0.28+0.05 0.39+0.08 0.55+0.04 Ady. labeling (Alg. 1, A = â¬) - 0.11+0.06 0.12+0.04 0.130.09
processing methods [19]. Pre-processing methods aim at ï¬nding a new representation of data such that it retains as much information of input features as possible, except those which can lead to bias. In-processing methods enforce fairness during the training process, for example, by incorporating the fairness constraints into the objective function as a regularization term. Post-processing methods correct the predictions of a given trained model, without modifying the training data or the training process. Please refer to [33] for a recent survey on methods to achieve fairness. In this work, we focus on the notion of Equalized odds [19] and use the reductions approach [1] (in-processing) and post-processing approach [19] to train fair models.
Imposing fairness constraints might come at a cost of the modelâs performance. The eï¬ect of fair classiï¬cation on accuracy and the compatibility of various deï¬nitions with each other have been studied in some related works [10, 25]. Corbett-Davies et al. [10] show that the optimal decision rule is diï¬erent from the fair decision rules that satisfy fairness deï¬nitions (statistical parity, conditional statistical parity, predictive equality). Thus, imposing fairness constraints has a cost on the model accuracy. Corbett-Davies et al. [10] then evaluate the cost of fairness empirically. Kleinberg et al. [25] show that it is impossible to achieve equal calibration, false positive rate and false negative rate, if the fraction of positive labeled examples is diï¬erent across sensitive groups.
# 6.2 Data Poisoning Attacks
Machine learning systems are susceptible to data poisoning attacks. In indiscriminate attacks, which is the focus of this paper, the adversaryâs objective is to degrade the test accuracy of the model [3, 21, 26, 27, 31, 34, 35]. In targeted attacks, the adversary seeks to impose the loss on speciï¬c test data points or small sub-populations [6, 9, 17, 26, 36, 38].
Steinhardt et al. [37] propose an optimal algorithm for poisoning attacks on (unconstrained) convex models, given a set of feasible poisoning data points. The algorithm relies on the assumption that test loss of the target model can be approximated as training loss of the model on clean data (assuming Dtest is drawn from the same distribution as the clean training data Dc). Our attack
10
algorithm is inspired by this work and uses the same online learning framework. Note that, when λ = 0 in Algorithm 2, it is equivalent to the algorithm in [37].
In our setting of adversarial sampling bias, the attacker is not allowed to modify the label y. In clean-label data poisoning attacks [36], the attacker manages to reduce the accuracy of target examples via injecting the correctly labeled data with modiï¬ed features. Compared with this work, the attacker in adversarial sampling bias is not permitted to change the features. Furthermore, the objective of our attack is to reduce the accuracy of the model over the entire test data.
When the attacker is allowed to change both features and labels of the poisoning data, a typical poisoning attack algorithm is gradient ascent, in which the attacker iteratively modiï¬es each attack point in the poisoning dataset by following the gradient of the test loss with respect to poisoning training data. This kind of attack is ï¬rst studied in the context of SVMs by Biggio et al. [3], and has subsequently been extended to linear and logistic regression [35], topic modeling [34], collaborative ï¬ltering [31], and neural networks [26]. In our setting, we assume the attacker is not allowed to modify the features, as we focus on the most practical scenario in decision making processes that move toward automation. An interesting future direction would be to allow changes of features and design poisoning attacks using the gradient-based algorithm. Given more power to the attacker, it is likely that the attacker could reduce the test accuracy more signiï¬cantly.
# 6.3 Learning Fair Models from Noisy Training Data
In most practical scenarios, the training data used for learning models might be biased (under- representation bias) and/or noisy (with mis-labeling). The mis-labeling phenomenon can be random or adversarial. Mislabeling can be seen as a speciï¬c case of adversarial labeling bias, where the attacker ï¬ips labels of data points only from a certain part of the population. Similarly, under- representation bias can be considered as a speciï¬c case of adversarial sampling bias. Multiple works in the literature study the impact of noisy and biased data on machine learning.
Calders and ŽliobaitËe [7] show that learning a regular unconstrained model from training data with under-representation and mislabeling bias results in biased predictions on test data. Kallus and Zhou [23] consider the case of systematic censoring in training data. For example, a model for predicting whether an individual defaults a loan can be trained only on individuals who were already granted a loan. Individuals who were not even granted loan cannot be present in the dataset. Such systematic censoring can be seen as a form of sampling bias. This work shows that even after using a fair classiï¬er, that seeks to achieve fairness by equalizing accuracy metrics across sensitive groups, the classiï¬er can still be unfair on the population due to the systematic censoring in training data. We also show a similar result in Table 6 that learning models on training data with adversarial bias increases their fairness gap on the test data.
Under varying assumptions, multiple works [4, 11, 22, 29] have proposed strategies to account for under-representation bias and mislabeling while learning models. De-Arteaga et al. [11] study selective label bias, where true outcomes corresponding to a certain label cannot be learned (for example in predicting recidivism risk), as such examples cannot be added to the training data. Selective label bias can be considered as a form of sampling bias. The authors propose a method for augmenting the dataset with human expert predictions to mitigate selective label bias. Assuming that examples in certain sensitive groups are randomly mislabeled, Jiang and Nachum [22] propose a re-weighting strategy for recovering the optimal classiï¬er on unbiased data from training data with labeling bias. When uniform random noise is present in the sensitive attribute, it is shown that demographic parity gap of a fair classiï¬er on test data increases [29]. The authors quantify the increase in DP gap on test data at any level of noise in the label. Given the level of noise in sensitive attribute, this is used to compute the exact level of DP gap that needs to be imposed on training
11
data, for achieving a target DP gap on the test data. Blum and Stangl [4] consider a training set corrupted by under-representation/labeling bias (or both). Assuming access to an inï¬nite number of samples and learning diï¬erent classiï¬ers for diï¬erent sensitive groups, this work shows that ERM with Equal Opportunity constraint on the biased data can recover the Bayes-optimal classiï¬er for the true data distribution.
All the above works [4, 11, 22, 29] assume that the noise and bias in training data is an uniform distribution of under-representation/mislabeling over a subspace of points and study the consequences of learning from training data with such bias. These results cannot translate to our case of adversarial bias as we consider non-uniform bias over the input space, and our attacker introduces bias with the speciï¬c intention of reducing test accuracy.
# 7 Broader Impact
AI governance frameworks released by multiple organizations such as the European union2 and Google AI3 state that fairness and robustness are two key requirements for building trustworthy automated systems. In fact, the AI governance document by Google AI mentions a library for training with equalized odds constraints as a tool for fairness appraisal (page 14) and data poisoning as a possible risk for AI safety (page 17).
In this work, we show that imposing group-fairness constraints on learning algorithms decreases their robustness to poisoning attacks. This is a signiï¬cant obstacle towards implementing trustworthy machine learning systems. We speciï¬cally provide evidence that an attacker that can only control the sampling and labeling process for a fraction of the training data can signiï¬cantly degrade the test accuracy of the models learned with fairness constraints. In fact, from a practical perspective, the attack algorithms for adversarial bias, introduced in this paper, can easily and stealthily be perpetrated in many existing systems, as similar to historical discrimination and/or selection bias. This calls for an immediate attention to a theoretical study of the robustness properties for any fair machine learning algorithm and the potential consequences of using such algorithms in presence of adversarially biased data. Moreover, this calls for designing models which are not only fair, but also robust. We suspect that there might be a fundamental trade-oï¬ between these two aspects of trustworthy machine learning. We also show that learning with fairness constraints in presence of adversarial bias results in a classiï¬er that does not only have a poor test accuracy but is also potentially more discriminatory on test data. Hence, machine learning system designers must be cautious when deploying FairML in real world applications, as they might be building a system that is both unfair and less robust.
# 8 Conclusions
We have introduced adversarial bias as a framework for data poisoning attacks against fair machine learning. Our attack exploits the existing tension between fairness constraint and model accuracy, and the fact that the fair models try to achieve equality on groups with diï¬erent sensitive attributes even though they do not have the same weight in the loss function of the model. Thus, our experiments show that by adding a small percentage of adversarially sampled/labeled data points
2EU guidelines on ethics in artiï¬cial intelligence: Context and implementation https://www.europarl.europa. eu/RegData/etudes/BRIE/2019/640163/EPRS_BRI(2019)640163_EN.pdf
3Perspectives on Issues in AI Governance https://ai.google/static/documents/ perspectives-on-issues-in-ai-governance.pdf
12
to the training set, the attacker can signiï¬cantly reduce the model accuracy beyond what he can achieve in unconstrained models. Adversarial bias also increases the fairness gap on test data.
# Acknowledgments
This work is supported by the NUS Early Career Research Award (NUS ECRA) by the Oï¬ce of the Deputy President, Research & Technology (ODPRT), award number NUS-ECRA-FY19-P16.
# References
[1] Alekh Agarwal, Alina Beygelzimer, Miroslav Dudik, John Langford, and Hanna Wallach. A reductions approach to fair classiï¬cation. 2018.
[2] Jonathan F Bard. Some properties of the bilevel programming problem. Journal of optimization theory and applications, 68(2):371â378, 1991.
[3] Battista Biggio, Blaine Nelson, and Pavel Laskov. Poisoning attacks against support vector machines. arXiv preprint arXiv:1206.6389, 2012.
[4] Avrim Blum and Kevin Stangl. Recovering from biased data: Can fairness constraints improve accuracy? arXiv preprint arXiv:1912.01094, 2019.
[5] Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. Convex optimization. Cambridge university press, 2004.
[6] Cody Burkard and Brent Lagesse. Analysis of causative attacks against svms learning from data streams. In Proceedings of the 3rd ACM on International Workshop on Security And Privacy Analytics, pages 31â36, 2017.
[7] Toon Calders and IndrËe ŽliobaitËe. Why unbiased computational processes can lead to discrimi- native decision procedures. In Discrimination and privacy in the information society, pages 43â57. Springer, 2013.
[8] Toon Calders, Faisal Kamiran, and Mykola Pechenizkiy. Building classiï¬ers with independency constraints. In 2009 IEEE International Conference on Data Mining Workshops, pages 13â18. IEEE, 2009.
[9] Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017.
[10] Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 797â806, 2017.
[11] Maria De-Arteaga, Artur Dubrawski, and Alexandra Chouldechova. Learning under selective labels in the presence of expert consistency. arXiv preprint arXiv:1807.00905, 2018.
[12] Xiaotie Deng. Complexity issues in bilevel linear programming. In Multilevel optimization: Algorithms and applications, pages 149â164. Springer, 1998.
13
[13] Michele Donini, Luca Oneto, Shai Ben-David, John S Shawe-Taylor, and Massimiliano Pontil. Empirical risk minimization under fairness constraints. In Advances in Neural Information Processing Systems, pages 2791â2801, 2018.
[14] Dheeru Dua and Casey Graï¬. UCI machine learning repository, 2017. URL http://archive. ics.uci.edu/ml.
[15] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel. Fairness through awareness. In Innovations in Theoretical Computer Science (ITCS), pages 214â226, 2012.
[16] Cynthia Dwork, Nicole Immorlica, Adam Tauman Kalai, and Max Leiserson. Decoupled Classiï¬ers for Group-Fair and Eï¬cient Machine Learning. In Fairness, Accountability and Transparency (FAT), pages 119â133, 2018.
[17] Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733, 2017.
[18] Pierre Hansen, Brigitte Jaumard, and Gilles Savard. New branch-and-bound rules for linear bilevel programming. SIAM Journal on scientiï¬c and Statistical Computing, 13(5):1194â1217, 1992.
[19] Moritz Hardt, Eric Price, Nati Srebro, et al. Equality of opportunity in supervised learning. In Advances in neural information processing systems, pages 3315â3323, 2016.
[20] Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Optimiza- tion, 2(3-4):157â325, 2016.
[21] Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, and Bo Li. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and Privacy (SP), pages 19â35. IEEE, 2018.
[22] Heinrich Jiang and Oï¬r Nachum. Identifying and correcting label bias in machine learning. arXiv preprint arXiv:1901.04966, 2019.
[23] Nathan Kallus and Angela Zhou. Residual unfairness in fair machine learning from prejudiced data. arXiv preprint arXiv:1806.02887, 2018.
[24] Toshihiro Kamishima, Shotaro Akaho, and Jun Sakuma. Fairness-aware learning through regularization approach. In 2011 IEEE 11th International Conference on Data Mining Workshops, pages 643â650. IEEE, 2011.
[25] Jon M. Kleinberg, Sendhil Mullainathan, and " Manish Raghavan. Inherent trade-oï¬s in the fair determination of risk scores.
[26] Pang Wei Koh and Percy Liang. Understanding black-box predictions via inï¬uence functions. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1885â1894. JMLR. org, 2017.
[27] Pang Wei Koh, Jacob Steinhardt, and Percy Liang. Stronger data poisoning attacks break data sanitization defenses. arXiv preprint arXiv:1811.00741, 2018.
[28] Matt J Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva. Counterfactual fairness. In Advances in Neural Information Processing Systems, pages 4066â4076, 2017.
14
[29] Alex Lamy, Ziyuan Zhong, Aditya K Menon, and Nakul Verma. Noise-tolerant fair classiï¬cation. In Advances in Neural Information Processing Systems, pages 294â305, 2019.
[30] J. Larson, S. Mattu, L. Kirchner, and J. Angwin. COMPAS dataset. https://github.com/ propublica/compas-analysis, 2017. [COMPAS dataset (2017)].
[31] Bo Li, Yining Wang, Aarti Singh, and Yevgeniy Vorobeychik. Data poisoning attacks on factorization-based collaborative ï¬ltering. In Advances in neural information processing systems, pages 1885â1893, 2016.
[32] David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Learning adversarially fair and transferable representations. arXiv preprint arXiv:1802.06309, 2018.
[33] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635, 2019.
[34] Shike Mei and Xiaojin Zhu. The security of latent dirichlet allocation. In Artiï¬cial Intelligence and Statistics, pages 681â689, 2015.
[35] Shike Mei and Xiaojin Zhu. Using machine teaching to identify optimal training-set attacks on machine learners. In Twenty-Ninth AAAI Conference on Artiï¬cial Intelligence, 2015.
[36] Ali Shafahi, W Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, and Tom Goldstein. Poison frogs! targeted clean-label poisoning attacks on neural networks. In Advances in Neural Information Processing Systems, pages 6103â6113, 2018.
[37] Jacob Steinhardt, Pang Wei W Koh, and Percy S Liang. Certiï¬ed defenses for data poisoning attacks. In Advances in neural information processing systems, pages 3517â3529, 2017.
[38] Octavian Suciu, Radu Marginean, Yigitcan Kaya, Hal Daume III, and Tudor Dumitras. When does machine learning {FAIL}? generalized transferability for evasion and poisoning attacks. In 27th {USENIX} Security Symposium ({USENIX} Security 18), pages 1299â1316, 2018.
[39] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness constraints: Mechanisms for fair classiï¬cation. arXiv preprint arXiv:1507.05259, 2015.
[40] Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. Fairness beyond disparate treatment & disparate impact: Learning classiï¬cation without disparate mistreatment. In Proceedings of the 26th international conference on world wide web, pages 1171â1180, 2017.
[41] Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. Learning fair representa- tions. In International Conference on Machine Learning, pages 325â333, 2013.
15
# A Table of Notations
# Table 2: List of Notations
Where it is deï¬ned Description Section 2 Features space Section 2 Label space Section 2 Random variable associated with features Section 2 Random variable associated with lables Section 2 Underlying distribution of the data Section 2 Training dataset Section 2 Clean training dataset Section 2 Size of the clean training dataset Section 2 Poisoning training dataset The ratio of the size of poisoning data over the size of clean data in the training set Section 2 Section 2 Computing a probability empirically over a dataset D Section 2 Attack dataset Section 2 Test dataset Section 2 Sensitive/protected attribute Section 2 Guaranteed fairness level on training data Section 2 Fairness gap Section 2 Fairness constraint of fθ on dataset D Section 2 Model parameters Section 2 Parameter space Section 2 Classiï¬cation model parameterized by θ Section 2 Loss of fθ on data (x, y) Section 2 Cumulative loss of fθ on dataset D Eq. (2) Optimal model parameters trained on D with fairness constrained Section 3 Lagrange multiplier (penalty parameter) Section 4.1 Eq. (7) Eq. (7) Section 4.1 Section 4.1 Section 4.1 Algorithm 1 Section 4.1 Section 4.1 Appendix B.1 Appendix B.1 Appendix B.1 Appendix B.1 Appendix B.1 Appendix B.1 Appendix B.1 Appendix B.2 Appendix B.2 Appendix B.2 Appendix B.2 Appendix B.2
16
# B Supplementary Theoretical Results
# B.1 Proof for Theorem 1
Proof. We should point out that in this proof we follow the approach of [37]. Assume T = en is the time horizon. We have Dj = {(a!,y'),--- (x",y")} as the data poisoning set produced by Algorithm 1. Also, 0° is the parameter chosen by the algorithm at the t-th step. First, from max-min inequality we have:
pe max mi M ~ marx min <£(0:D. UD») 7 ys A(6,D. U {(x, y)}) (x,y)EDp < . e _ em) < min max *£(0;De UDp) ys A(O,D_ U {(x, y)}â¢) (a,y)EDp
.
Furthermore, for a given θ we deï¬ne:
def 1 . A a en UO) = max âL£(8; De UDp) + â . a A(0,D.U {(x, y)}) Ly Pp 1 = âL(0;De) + a L(0;2,y) +A: A(O,De.U {(a, y)}â¢)]. lH Pe) + mar le CG: 2.y) ( {(x,y)}")]
.
We deï¬ne U â = minT have M â ⤠U â. t=1 U (θt). Note that for any given θ, we have M â ⤠U (θ). More speciï¬cally, we
From the deï¬nition of M â, for any set, including Dâ
we have
7 1 ae r i ⬠_ ae A* min | âL(0;De UD) + â a A(O,DeU {(x, y)}â¢) | = M(D*) < M xy)EDS
Let us deï¬ne T diï¬erent functions
1 418) = £0; De) + ⬠(850°, ylâ¢) +d NO,DeU {Ca yh )}") , (10)
for 0 ⤠t ⤠T . Let us deï¬ne
= argmin y AC 960 t=]
Note that we have
Dien) ee ye ce Deter 10") SS = M(D3) <M <U <r
Finally, from the deï¬nition of regret we have:
Dir g9(6') â D1 gi(8) _ Regret (7) T T T â
which consequently completes the proof of the theorem.
17
# B.2 The Conditions for the Optimality of the Attack
In this section, we prove that under what conditions, our algorithm ï¬nds the (nearly) optimal solution for the attack. We ï¬rst state a direct consequence of Theorem 1 for a no-regret algorithm which results from a convexity assumption for functions gt(θ). We then explain under what conditions this convexity assumption is valid.
Corollary 1. Under the assumption that (i) loss function is convex in 0, (ii) A(@,D) is convex in @ and (itt) m = on for1<t<T, Algorithm 1 produces the near optimal poisoning dataset Ds, such that
3Gd an *\ M* ~ M(D;) < ââ (11)
where 7 is step size at time t, d is an upper bond on the diameter of 0, and G is an upper bound on the norm of the subgradients of g, over O, i.e., ||Vgr(0)|| < G.
Proof. First note that Algorithm 1 exactly runs as online gradient descent algorithm for gt(θ) functions. From the assumptions (i) and (ii), we conclude that functions gt(θ) are convex. The theoretical guarantee of the online gradient descent algorithm for convex functions [20] allows us to bound the average regret
Regret(T ) T ⤠3Gd â T ,
where d is an upper bond on the diameter of O, and G is an upper bound on the norm of the subgradients of g; over O, i-e., ||V.gz(@)|| < G. Finally, the proof is concluded from this bound for the regret and the result of Theorem 1.
Next, we discuss the optimality conditions for linear classifiers with a convex loss, e.g., (0; 7, y) = max(0, 1 â y(@,x)) for SVM. In our attack, we use equalized odds as our fairness constraint which is non-convex. We adopt simplification proposed by Donini et al. [13] to reach convex relaxations of loss and fairness constraint. Instead of balancing prediction error, Donini et al. [13] propose a fairness definition as balancing the risk among two sensitive groups. Following the same idea, we define the linear loss as £; (e.g., & = (1 â fo(x))/2 for SVM). Based on the linear loss, the convex relaxation for the fairness gap of equalized odds is defined as follows:
Ëâ(θ, D) := |R+,a(θ, D) â R+,b(θ, D)| + |Râ,a(θ, D) â Râ,b(θ, D)| 2 , (12)
for RYS(0,D) = = Vleyedus ¢,(0;2,y) where D¥* is the set of data points from group s with label y in D and ny,s = |D¥*|. To find the optimal attack for the EO fair model, in Eq. (7), we replace loss @ with a convex loss (e.g. Hinge loss) @. and replace A(fg;D) with the convex relaxation A(0;D). Hence, Algorithm 1 produces the nearly optimal poisoning set Dj such that it has the maximal damage on the fair model under our approximations.
As a future research direction, one could try to design new online algorithms that achieve small regrets in the non-convex setting or under better approximations of the fairness constraint. Our framework can then utilize such online algorithms to further investigate the eï¬ect of data poisoning attacks on the robustness of models with fairness constraints.
# B.3 Pseudocode of the Algorithm from Section 4.2
For the sake of completeness, we present the full pseudo-code for our data poisoning algorithm proposed in Section 4.2.
18
# Algorithm 2
1: Input: Clean data D,, n = |D,|, feasible poisoning set F(D,.), number of poisoning data en, penalty parameter (Lagrange multiplier) A, learning rate 7.
2: Output: Poisoning dataset Dp. 3: Initialize 6° 4: for t=1,:-- ,en do 5: (c',y') argmax(s,yer(D,) [â¬Â° £1: a,y)+A-A (01, D.U {(2,y)}â) ] 6 7: 8 Dp â DpV {(a",y')} ot â pti ee 1De) be Vel: x y")) n : end for
Table 3: Distribution of data points in clean training dataset Dc and attack dataset Dk - COMPAS dataset.
Dc Dk s = 0 s = 1 y = â y = + 28.5% 31.8% 32.5% 7.2% s = 0 s = 1 y = â y = + 29.0% 31.1% 16.0% 23.9%
# C Fair Machine Learning Algorithms
The post-processing approach is the ï¬rst proposed algorithm to achieve equalized odds [19]. The fair model is obtained by adjusting a trained unconstrained model so as to remove the discrimination according to equalized odds. The outcome of this approach is a randomized classiï¬er that assigns to each data point a probability of changing the prediction output by the unconstrained model, conditional on its protected attribute, and predicted label. These probabilities are computed by a linear program that optimizes the expected loss.
Many methods have been proposed to achieve fairness in machine learning (see [33] for a recent survey). The reductions approach proposed by [1] trains a fair randomized classiï¬er over a hypothesis class by reducing the constrained optimization problem to learning a sequence of cost- sensitive classiï¬cation models. Cost-sensitive classiï¬cation is used in this as an oracle to solve classiï¬cation problems resulted from a two-player game: one player (primal variables) minimizes the loss function; the other player (dual variables) maximizes the fairness violation (constraints).
# D Supplementary Experimental Results
For the following section, we present the detailed experimental results on COMPAS and Adult dataset. All the results on COMPAS dataset are averaged over 100 runs with diï¬erent random seeds. On Adult dataset, all the results are averaged over 50 runs with diï¬erent random seeds.
# D.1 Details of datasets and models
We use two datasets in our evaluation, their details are described below.
COMPAS. COMPAS [30] dataset contains 5278 data samples. The classiï¬cation task is to predict recidivism risk from criminal history and demographics. We consider race as the sensitive attribute and include records only with white/black as race. There are 3175 records (60.2%) for the
19
Table 4: Distribution of data points in clean training dataset Dc and attack dataset Dk - Adult dataset.
Dc Dk s = 0 s = 1 y = â y = + 48.5% 16.5% 32.3% 2.6% s = 0 s = 1 y = â y = + 45.0% 23.4% 27.2% 4.4%
sensitive attribute as white. Among the white group, 52.3% have positive labels while among the black group, this number is 41.9%. Overall, there are 2483 records (47%) are labeled positive.
Pre-processing A model trained with the original dataset can only achieve low accuracy (66.6% for the Logistic regression model, compared to constant prediction classiï¬er that can achieve 53% accuracy), which does not help the understanding of the modelâs behavior in the presence of data poisoning attacks. To get rid of the noise that exists in the dataset, we pre-process the dataset as follows: we train an SVM model with RBF kernel on the entire dataset and only keep 60% of the data points which have the smallest loss. To create the training data Dc, test data Dtest and attack dataset Dk, we randomly split the clean data in the corresponding ratio 4:1:1. Hard examples (the left out data points) are added to the attack dataset.
Data distribution The data distribution of points in clean training dataset Dc and attack dataset Dk after pre-processing are presented in Table 3. The numbers are the average values over all the datasets we evaluated on. On average, Dc contains 2111 samples, Dtest 528 samples. Dk consists of 2639 samples out of which 2112 are hard examples.. A Logistic regression model trained on Dc achieves on average 94% accuracy on test data. Model We use Logistic regression for classiï¬cation.
UCI Adult (Census Income). Adult dataset [14] includes 48,842 records with 14 attributes such as age, gender, education, marital status, occupation, working hours, and native country. The (binary) classiï¬cation task is to predict if a person makes over $50K a year based on the census attributes. We consider gender (male and female) as the sensitive attribute. In this dataset, 66.8% are males, and 23.9% are labeled one, i.e having an income over $50K a year. Among male samples, 30.4% are positive samples; for the females, this number is 10.9%.
Pre-processing A model trained on this dataset generally achieves below 90% accuracy (Logistic regression: 85.3%, 2-layer fully connected neural network with 32 hidden units each layer: 85.3% on training data, compared to a constant prediction classiï¬er that can achieve 76.1% accuracy). To enhance the model accuracy, we apply similar pre-processing steps as on COMPAS: we train an SVM model with Linear kernel on the entire dataset and keep 90% of the data points which have the smallest loss. The number of females with income above $50K is small; hence we randomly split 1/2 of the data for Dk. Of the remaining data, 70% are use for Dc and 30% for Dtest. Hard examples (the left out data points) are added to the attack dataset.
Data distribution The data distribution of the points in clean training dataset Dc and attack dataset Dk after pre-processing are presented in Table 4. The numbers are the average values over all the datasets we evaluated on. On average, Dc contains 15385 samples, Dtest 6594 samples. Dk consists of 26863 samples. Dc maintains approximately the same fractions of males and females as in the original dataset. A Logistic regression model trained on Dc achieves on average 94% accuracy on test data.
Model We use Logistic regression for classiï¬cation.
20
# Implementation and Parameters Selection
To generate poisoning data points, we implement Algorithm 1 and Algorithm 2.
As discussed in B.2, in Algorithm 1, the attacker uses an SVM model (due to the linear approximation of the fairness gap in Equation (12)). For this SVM model, we use Hinge loss for the classification loss and linear loss for evaluating the fairness gap as mentioned in B.2. Note that the linear function used to approximate A can fall out of range ((0, 1]). Having large \ implies assigning more weight to this term and can result in a bad approximation. We therefore test with small ., with A ⬠{0.1e, â¬, 10e} and show the results when \ = ⬠for COMPAS and X = 0.1e for Adult.
In Algorithm 2, we use Logistic regression models. Since we measure the exact A of the model and want that A to have a large impact on finding a new poisoning data point in each iteration. This leads to the selection of a larger \. We choose \ ⬠{e, 10e, 100e} and show A = 100e in the evaluation for both datasets.
For both algorithms, we use η = 0.001 as the learning rate. To train a fair model, we use the post-processing method [19] and reductions approach [1]. We use the implementation of these algorithms provided in [1]4. Note that while the post-processing approach allows achieving exact fairness on the training data, the implementation of the reductions approach requires a strictly positive δ. We use default values for all hyper-parameters from the available implementation.
It is important to note that, the output of these approaches is a randomized classiï¬er. We, therefore, use the expected accuracy to measure the classiï¬cation performance, given by
Acc(9;D) = 1â Di YS | fo(z) - yl, (13) (x,y)eD
where fθ(x) is the expected prediction of randomized classiï¬er fθ. For the unconstrained models, fθ is the deterministic prediction.
# D.3 Robustness evaluation
In this section, we provide the detailed results about the test accuracy and fairness gap of the target models for both COMPAS and Adult datasets, as discussed in Section 5.2. We show that the fair models are more vulnerable to the poisoning attack compared with the unconstrained model. In addition, the test accuracy and the fairness property of fair models are both compromised.
Test accuracy In the Table 5, we compare the effect of poisoning attacks on unconstrained models (without fairness constraint) with different fair models at different desired fairness levels 6 on two datasets (COMPAS and Adult), when the attacker controls 10% of the training data i.e., « = 0.1. The âUnconstrained Modelâ column corresponds to the test accuracy of the unconstrained model. The âFair [1] (5 = 0.1)â and the âFair [1] (6 = 0.01)â columns respectively correspond to the test accuracy of the fair models trained with the Reductions approach [1] at 6 = 0.1 and 6 = 0.01. The âFair [19] (6 = 0)â column corresponds to the fair model trained with the Post-processing method [19] with exact fairness, i.e 6 = 0. For each dataset, âBenignâ row shows the test accuracy for models trained on data without any poisoning attack i.e., « = 0. We compare these with the test accuracy of corresponding models that are learned from poisoned data.
We observe that when the models are trained with fairness, the drop in test accuracy are more signiï¬cant than when the constraints are absent in both adversarial labeling and adversarial sampling
4See https://github.com/fairlearn/fairlearn
21
Table 5: Test accuracy of attacked models under adversarial bias - COMPAS and Adult datasets, for ⬠= 0.1. The numbers reflect test accuracy (defined in (13)) of attacked models. The lower the numbers are, the more effective the attacks are and the less robust the models are against the attacks. The numbers in bold are the smallest accuracies of the target models against different attack algorithms in the adversarial bias or adversarial labeling setting.
Dataset Attacks Unconstrained Fair [1] Fair [1] Fair [19] Model (6 = 0.1) (6 = 0.01) (6 = 0) Benign 93.7+5.6 94.045.6 93.54+1.7 87.4+3.8 Random sampling 94.3+0.9 92.141.6 84.3+1.1 Hard examples 94.2+1.0 91.541.7 83.74+1.1 Label flipping 94.0+1.0 92.8+1.4 84.4+1.2 COMPAS Adv. sampling (Alg. 2, A = 0) [37] 87.641.6 81.5+1.7 70.8+1.6 Adv. sampling (Alg. 2, \ = 100e) - 80.641.9 71.64£1.4 Adv. sampling (Alg. 1, \ = â¬) - 81.24+1.4 73.941.8 Adv. labeling (Alg. 2, A = 0) [37] 84.8+1.7 76.341.8 68.34£1.7 Adv. labeling (Alg. 2, A = 100e) - 77.641.3 70.1+1.3 Adv. labeling (Alg. 1, A = â¬) - 80.1+3.3 70.5£1.8 Benign 94.3+0.3 94.3+0.3 92.7+0.4 Random sampling 94.3+0.3 94.3+0.3 92.3+0.3 Hard examples 94.2+0.3 94.140.3 90.8+0.4 Label flipping 93.340.4 91.040.5 88.2+0.4 âAdult Adv. sampling (Alg. 2, \ = 0) [37] 94.0+0.3 93.1+0.5 89.6+0.5 Adv. sampling (Alg. 2, \ = 100e) - 92.5+0.5 90.1+0.5 Adv. sampling (Alg. 1, \ = 0.1e) - 92.340.5 89.3+0.4 âAdv. labeling (Alg. 2, \= 0) 27] 89.340.9 87.240.6 84.640.6 Adv. labeling (Alg. 2, A = 100e) - 85.5+1.2 81.141.6 Adv. labeling (Alg. 1, A = 0.1e) - 87.5+0.6 84.6+0.7
setting. We notice that our attacks outperform the baseline attacks on both datasets in both adversarial bias setting (adversarial labeling and adversarial sampling). Even in the adversarial sampling setting, using our proposed attack strategies, the attacker manages to reduce the test accuracy of the target model more than labeling ï¬ipping attack. This shows the eï¬ectiveness of our attack strategies. In addition, an increase in the desired fairness level, i.e when δ decreases, correlates with an increase in the accuracy drop. This shows fair models are more vulnerable to poisoning attacks than unconstrained models. Note that on Adult, the drops are not as large as they are on COMPAS. However, the constant prediction classiï¬er trained on Adult dataset can achieve 81% accuracy. In other words, the fair models trained on poisoned datasets can only perform barely better than constant prediction classiï¬er. Our attacks are still eï¬ective on Adult dataset.
The overall results in the Table 5 reï¬ect the eï¬ectiveness of our strategies and provide evidence that the fair model is more vulnerable than the unconstrained model.
Fairness gap In Table 6, we compare the effect of poisoning attacks on unconstrained model (without fairness constraint) with different fair models at different 5 on two datasets (COMPAS and Adult), when ⬠= 0.1. Similar to the Table 5, the columns 4-7 (âUnconstrained Modelâ, âFair [1] (6 = 0.1)â, âFair [1] (6 = 0.01)â, âFair [19] (6 = 0)) show the fairness gap on test datasets A(0; Dtest) of the unconstrained models, of the fair models trained with the Reductions approach [1] at 6 = 0.1
22
Table 6: Fairness gap of attacked models on test data - COMPAS and Adult datasets, for e⬠= 0.1. The fairness gap A is defined in (1). The numbers reflect how unfair the model is with respect to the protected group in the test data. For fair models, compare numbers with 6 (the guaranteed fairness gap on training data). The farther apart A and 6 are, the less the fairness generalization is on test data. The numbers in bold are the largest A of the target models against different attack algorithms in the adversarial bias or adversarial labeling setting.
Dataset Attacks Unconstrained Fair [1] Fair [1] Fair [19] Model (5 =0.1) (5 = 0.01) (5 =0) Benign 0.21+0.07 0.11+0.06 0.06+0.04 0.07+0.04 Random Sampling 0.19+0.07 0.08+0.03 0.11+0.05 0.13+0.07 Hard examples 0.19+0.08 0.09+0.03 0.13+0.05 0.15+0.07 Label flipping 0.23+0.07 0.09+0.04 0.07+0.04 0.10+0.06 COMPAS Adv. sampling (Alg. 2, \ = 0) [37] 0.26+0.08 0.19+0.07 0.30+0.07 0.27+0.08 Adv. sampling (Alg. 2, \ = 100e) - 0.29+0.06 ee Adv. sampling (Alg. 1, \ = â¬) - 0.12+0.07 0.21+0.10 Adv. labeling (Alg. 2, \= 0) [37] 0.2840.08 0.1340.05 0.19 £0.08 Adv. labeling (Alg. 2, A = 100e) - 0.28+0.05 0.39+0.08 Adv. labeling (Alg. 1, A = â¬) - 0.11+0.06 0.12+0.04 Benign 0.07+0.03 0.07+0.03 0.04+0.02 0.0340.02 Random sampling 0.07+0.03 0.07+0.03 0.03+0.02 0.030.02 Hard examples 0.08+0.03 0.06+0.03 0.060.03 Label flipping 0.080.04 0.10+0.04 0.24+0.04 Adult Adv. sampling (Alg. 2, A = 0) [37| 0.06+0.03 0.03+0.02 0.17+0.04 Adv. sampling (Alg. 2, \ = 100 - 0.06+0.03 0.07+0.02 0.19+0.04 Adv. sampling (Alg. 1, \ = 0.1e) - 0.05-0.03 0.14£0.05 Adv. labeling (Alg. 2, A=0) [7] 0.06£0.04_ _0.07-40.08 0.27£0.04 Adv. labeling (Alg. 2, A = 100e) - 0.18-+0.06 0.09-0.08 0.090.19 Adv. labeling (Alg. 1, \ = 0.16) - 0.114004 â0.2340.05 â-0.34£0.04
and 6 = 0.01 and of fair models trained with the Post-processing method [19] with exact fairness, i.e 6 =0, respectively. For each dataset, âBenignâ row shows the fairness gap on the test dataset for models trained on data without any poisoning attack i.e., ⬠= 0. We compare these with the fairness gap on test dataset of corresponding models learned from poisoned data.
We notice that the fairness gap of fair models trained on the poisoned data is much larger than those of fair models trained on clean data (as shown in âBenignâ row). This implies fair models trained on poisoned data become less fair on test dataset when attacks are present in both adversarial sampling bias and adversarial labeling bias setting.
Interestingly, the fairness gap of the fair models is larger than that of the unconstrained model when adversarial bias are present in the training dataset. In addition, an increase in the desired fairness level, i.e when δ decreases, is associated with an increase in the fairness gap on the test dataset. This shows that not only does poisoning attacks can cause accuracy drops, but they are also able to make fair models more discriminatory on test data than unconstrained models.
23
y c a r u c c A t s e T
y c a r u c c A t s e T
# Fair [19] (δ = 0) - Adv. Sampling
# Fair [1] (δ = 0.01) - Adv. Sampling
# Unconstrained Model - Adv. Sampling
Unconstrained Model - Adv. Sampling Fair [19] (6 = 0) - Adv. Sampling Fair [1] (6 = 0.01) - Adv. Sampling â : : : a : : : oo : : : : .t.,t, 1, of Joof 4 08 PSS ao a ost Josf 07 40.74 | 0.6 F 40.6} | 05005 Ol 0s 02 ⬠Unconstrained Model - Adv. Labeling I : : : â 1 i 0.9 40.9} | ât- 344-4 0.8f¢ 0.8 F 4 0.74 4o.7} | 0.6 dow a 2 ⬠⬠⬠â* Alg. 2 (A = 0) [37]-=-Alg. 2 (A= 100e)ââ â Alg. 1 (A=e) ~*~ Random sampling âeâ Hard examples -sâ Label flipping ââ Constant prediction
Figure 3: Test accuracy of unconstrained and fair models under data poisoning attacks - COMPAS dataset. The x-axis ⬠is the ratio of the size of poisoning dataset Dp to the size of clean dataset D,, and reflects the contamination level of training set. We compare the impact of adversarial bias with baselines and poisoning attacks against unconstrained models, for various ¢. The difference between test accuracy at ⬠= 0 (benign setting) and larger ⬠values reflects the impact of the attack. Constant prediction always outputs the majority label in clean dataset.
# D.4 Conï¬ict between fairness and robustness.
In Figure 3 and Figure 4, we compare the test accuracy of target model at different fractions of poisoning data selected using all the attack strategies on COMPAS and Adult dataset respectively. On COMPAS dataset, for unconstrained models, only Algorithm 2 (A = 0) has an effect, causing a 10% drop in accuracy when ¢ = 0.2 and the accuracy hardly decreases when ⬠increases from 0.1 to 0.2 in the adversarial labeling bias setting. We observe a similar result for the adversarial sampling bias. On fair models, we can observe that both Algorithm 1 and Algorithm 2 (with A = 100e and 0) have a significantly better performance than Label flipping attack and adding Hard examples. The performance of Algorithm 2 is better than that of Algorithm 1, as Algorithm 1 uses a surrogate inear loss for evaluating the fairness gap A(0;D, U Dp), whereas Algorithm 2 computes the exact fairness gap. Algorithm 2 with \ = 100e and 0 show similar results at smaller fractions of poisoning data (⬠< 0.1) and start to diverge at higher values of ⬠due to increase in contribution of fairness gap term, with the former approaching the Constant classifier baseline at ⬠= 0.2.
For the Adult dataset, notice that the Constant prediction baseline has good accuracy (>80%). Hence, the relative accuracy drop on the Adult dataset is not as signiï¬cant as that on the COMPAS
24
y c a r u c c A t s e T
y c a r u c c A t s e T
# Fair [19] (δ = 0) - Adv. Sampling
# Fair [1] (δ = 0.01) - Adv. Sampling
# Unconstrained Model - Adv. Sampling
Unconstrained Model - Adv. Sampling Fair [19] (6 = 0) - Adv. Sampling Fair [1] (6 = 0.01) - Adv. Sampling 1 1 1 1 7 1p 1 7 1 1 1p 7 7 7 7 _ â $54 gp oof 40.9 f Jo.9 f Sey | 0.8- + 0.8 + 0.8 + 4 7h 1 1 1 1 7 1 1 1 _to7b4 1 1 1 1 0 0 0.05 0.1 0.15 0.2 0 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠⬠Unconstrained Model - Adv. Labeling Fair [19] (6 = 0) - Adv. Labeling 1p T T T T 1 T T T 1 â⢠â4âaâ 8 â 8 8 Se 0.9- 40.9 > ee 0.9 F a , a 08} 40.8 rt 40.8 ql 1 1 1 1 7 1 1 1 _to7b4 1 1 1 1 0 0 0.05 0.1 0.15 0.2 0 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠â* Alg. 2 (A = 0) [37] Alg. 2 (A = 100e)ââ Alg. 1 (A =0.le) â+-Random sampling âeâ Hard examples -sâ Label flipping ââ Constant prediction
Figure 4: Test accuracy of unconstrained and fair models under data poisoning attacks â Adult dataset. The x-axis ⬠is the ratio of the size of poisoning dataset Dp to the size of clean dataset D,, and reflects the contamination level of training set. We compare the impact of adversarial bias with baselines and poisoning attacks against unconstrained models, for various ¢. The difference between test accuracy at ⬠= 0 (benign setting) and larger ⬠values reflects the impact of the attack. Constant prediction always outputs the majority label in clean dataset.
dataset. However, we can still observe similar results that compared to unconstrained models, fair models witness a greater accuracy drop, with our proposed attacks perform significantly better than the baselines. The three algorithms have similar results both when the fair models are trained h [19] and [1]. We observe that the plots for Algorithm 2 fluctuate in both adversarial sampling and adversarial labeling settings when ¢⬠increases. The detailed explanations are presented in Appendix D.6. =e W:
# D.5 Eï¬ect of fairness level on impact of adversarial bias
In Figure 5 and Figure 6, we show the effect of fairness level 5 on impact of adversarial bias for COMPAS and Adult dataset respectively. To measure the influence of fairness level 6, we generate poisoning data using Algorithm 2 with A = 100e and Algorithm 1 with = ⬠for both adversarial labeling and adversarial sampling settings on COMPAS dataset. On Adult dataset, we generate poisoning data using Algorithm 2 with \ = 100e and Algorithm 1 with \ = 0.1e. We measure the test accuracy of models learned with different values of fairness level 6. We can observe that the drop in accuracy for the same fraction of poisoning data is higher for models with stricter fairness constraints
25
(smaller δ). This shows that the more fair a model tries to be, the more vulnerable it becomes to poisoning attacks. In Figure 5 and Figure 6, we also present the majority (the protected group with a larger number of samples) accuracy and minority accuracy. It is clear that the accuracy drop for the majorities is more signiï¬cant than that for minorities for all the cases. In the Appendix D.9, we show that the algorithms choose the points with large loss from the smallest subgroup (subgroups are determined by the protected attribute and the label). As a result, in order to achieve fairness on the poisoned dataset, fair models are more likely to reduce the accuracy of the majority group.
We notice that, in Figure 6(a), accuracy plots fluctuate when e⬠increases, which is not observed in Figure 6(b) and Figure 5. In Appendix D.6, we present the detailed explanations.
# D.6 Performance of Algorithm 2 with \ = 100e on Adult dataset
We notice that there are accuracy fluctuations for the fair models evaluated on the poisoning data selected by Algorithm 2 with A = 100e for the Adult dataset. Recall that, the algorithm selects poisoning data from the attack dataset without replacement. In each iteration, it selects the data point that maximizes the classification loss plus the fairness gap (as at Line 5 in Algorithm 2). As shown in Figure 11 and 12, Algorithm 2 with A = 100e has a significant preference to select poisoning data that would result in a large fairness gap. Thus, it chooses data that would fall into the smallest subgroup in the training set. This is shown to be very effective in the case of COMPAS dataset and can lead to a sharp decrease in the model accuracy even for small ⬠(see Figure 1). However, this greedy algorithm in the case of small attack sets, and no repetition in the poisoning data, can result in the degradation of attack performance for larger ⬠values, as we see in Figure 4.
In more detail, the reason behind the attack behavior on Algorithm 2 with \ = 100e for larger ¢⬠on the Adult dataset is the following.
In the adversarial sampling setting, the size of the smallest subgroup (y = + and s = 1) in the attack dataset is only equivalent to « = 7.69% poisoning data. For larger values of ¢, the attacker will choose data from other subgroups, which cannot further harm the model accuracy, thus reduces the effect of the data poisoning.
In the adversarial labeling setting, with large «, the number of poisoning data points is larger than the size of subgroups with positive labels (y = +) in D,; typically when ¢ = 0.2, [D,| > 3000 whereas the number of samples with y = + in De is 2943 on average. Relying on choosing points to select data points to maximize A results in the possibility of choosing points from any subgroups with positive labels (as shown in Figure 8(b)), since poisoning data points can dominate any of these subgroups. In other words, the smallest subgroup of the clean training dataset is not the smallest subgroup of the training dataset.
In summary, the fluctuation in the figures is due to the significant effect of maximizing the fairness gap. In fact, in both adversarial sampling and labeling settings, Algorithm 2 (A = 100e) achieves the same performances with a smaller ⬠as the other attacks with larger «. These results, in effect, reflects the effectiveness of the algorithm.
# D.7 Training accuracy of poisoned dataset
In Figure 7 and Figure 8, the accuracy of the unconstrained model on the poisoning data Dy is compared with the corresponding accuracy of fair models with different fairness level 6 on COMPAS and Adult dataset respectively. The poisoning data Dp is selected using Algorithm 2 with A = 0 for the unconstrained model for both adversarial labeling and adversarial sampling settings. For fair models, we evaluate on poisoning dataset selected using Algorithm 2 with \ = 100e and \ = 0.1le on
26
# Overall - Adv. Sampling
# Majority group - Adv. Sampling
Minority group - Adv. Sampling
- Sampling Majority group - Sampling Minority group - Sampling 1 1 7 T 1 1 1 T T 1 â$â$âF__4 $$} 4 # 0.9 + 40.9 40.9 - 4 = 0.8} 4 0.8 - 0.8 4 3 3 0-7-7â Unconstrained model 3 07 Fr OT | -= Fair [1] (6 = 0.1) â 0.6} Pair {1] (6 = 0.01) 0.6 | 0.6 } | -e Fair [19] (6 = 0) a op HH oslo | | £ Jost | | | 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠⬠Overall - Adv. Labeling Majority group - Adv. Labeling Minority group - Adv. Labeling 1 1 T T T T 1 T T T ft a _ 0.9 40.9 ti4 bi | 0.9 | = 0.8} 4 0.8 - 0.8 4 8 O7F 40.7 + O.7F 4 8 0.6 + 4 0.6 â + 0.6 4 HL 0.5 4 1 | | 0.5'â | | Ka 0.54 | 1 | : 0 0.05 0.1 0.15 0.2 7 0 0.05 0.1 0.15 0.2 ~ 0 0.05 0.1 0.15 0.2 ⬠⬠â¬
y c a r u c c A t s e T
y c a r u c c A t s e T
(a) Alg. 2 (A = 100e)
(a) Alg. 2 (A = 100e) Overall - Adv. Sampling Majority group - Adv. Sampling Minority group - Adv. Sampling 1 1 1 T T T T T 0.9 > 40.9 F 0.9 - + 0.8} 4 0.8 - 0.8 4 0.7+ 40.7 + 0.7 + +| 0.6 + 4 0.6 0.6 4 0.5 Lo 1 1 1 i to5 4 1 1 1 _g5l4 1 1 1 1 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠⬠Overall - Adv. Labeling Majority group - Adv. Labeling Minority group - Adv. Labeling 1 1 T 0.9 + 40.9 4 ost 40.8 ; | \ = OTF 40.7F Ny Hy . | ⢠0.6 F {0.6 F ae { o.6} | 0.5 ! , | ! 0.5 ! 7 + 4 0.5 7 + 4 + 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠â¬
y c a r u c c A t s e T
y c a r u c c A t s e T
(b) Alg. 1 (A=)
Figure 5: Eï¬ect of fairness level δ on robustness across groups under adversarial sampling and adversarial labeling attacks â COMPAS dataset. The majority group (whites) contributes 61% of the training data. 27
# Overall - Adv. Sampling
# Majority group - Adv. Sampling
Minority group - Adv. Sampling
group group 1p 1 1 1 1 1p 1 1 1 1 1p 1 1 1 1 > + ât_4â = =F 09+ 0.9 Ss 4S == 0.9 > | 5 \I- ee ee â 1 8 2 te & ost + Unconstrained model togb togb | âs Pair [1] (5 = 0.1) â+â Fair [1] (6 = 0.01) âeâ Fair [19] (6 = 0) 0.7-= I n I _to.7bu 1 1 1 _to.7bu 1 1 1 1 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠⬠Overall - Adv. Labeling Majority group - Adv. Labeling Minority group - Adv. Labeling 1p 1 1 1 1 1p 1 1 1 1 1p 1 1 1 1 oof Jogb 40.9} z| 5 g < ti & 08+ 40.8 : es 40.8} 4 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 07 0 0.05 0.1 0.15 0.2 07 0 0.05 0.1 0.15 0.2 07 0 0.05 0.1 0.15 0.2 ⬠⬠â¬
y c a r u c c A t s e T
y c a r u c c A t s e T
(a) Alg. 2 (A = 100e)
# Overall - Adv. Sampling
Majority group - Adv. Sampling
Minority group - Adv. Sampling
group group 1 7 1 T T T T 1 T T 1 _ | SS gS 0.9}+ â 4 0.9 - 0.9 =] 8 eH, a é 0.8 + 4 0.8 - 0.8 ).7 1 1 1 i tg7Lu 1 | 1 _tg7bu 1 1 1 | O.7 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 â 0 0.05 0.1 0.15 0.2 ⬠⬠⬠Overall - Adv. Labeling Majority group - Adv. Labeling Minority group - Adv. Labeling 1 1 T T T 1 T T T & oof 40.9 79-9 F =] 8 5) a é 0.8 + 40.8 - 0.8 7Lo 1 1 1 1 1 1 | 1 | yl 1 1 1 | O.7 0 0.05 0.1 0.15 0.2 o7 0 0.05 0.1 0.15 0.2 O7 0 0.05 0.1 0.15 0.2 ⬠⬠â¬
y c a r u c c A t s e T
y c a r u c c A t s e T
(b) Alg. 1 (\ = 0.1e)
Figure 6: Eï¬ect of fairness level δ on robustness across groups under adversarial sampling and adversarial labeling attacks â Adult dataset. The majority group (males) contributes 66% of the training data.
Adult dataset. For the fair models trained on COMPAS dataset, we evaluate on poisoning dataset selected using Algorithm 2 with \ = 100e and A =e.
On the COMPAS dataset, from Figure 7, we can observe that, for the fair model using [1], as the value of δ decreases, the accuracy of the model increases on poisoning data Dp and decreases on Dc. This implies poisoning data reduce fair modelsâ ability to learn from clean data. In Figure 7, note that the post-processing method does not impose the fairness constraint during the training but uses the predictions of the unconstrained model trained in the standard way and makes corrections to achieve fairness. Depending on which subgroups poisoning data points belong to, fair models trained with [19] show diï¬erent behavior. For example, when poisoning data points have the sensitive attribute s = 1 and label y = +, post-processing tends to make corrections for the majority group (observed in Figure 5(a)). The accuracy of poisoning data remains similar to that of the unconstrained model, but the accuracy of clean data decreases signiï¬cantly.
On Adult dataset, from Figure 8, we can also observe that the fair models have a higher accuracy on the poisoning data compared with the unconstrained model. In Figure 8(a), we notice that the accuracy of the poisoning data increases when « increases from 0 to 0.05. After that, the accuracy decreases for the fair models. We provide detailed explanations in Appendix D.6.
# D.8 Fairness gap of the regualar model on poisoned dataset
To investigate the eï¬ectiveness of our attacks, we train an unconstrained classiï¬er without any fairness constraints and measure the fairness gap â(θ; Dc ⪠Dp) of the poisoned training dataset generated by diï¬erent algorithms. Figure 9 and Figure 10 show the results for COMPAS and Adult dataset respectively.
On the COMPAS dataset, from Figure 3, we observe a correlation between attack performance and its fairness gap on the training data. For the baseline attacks (Label flipping, Random sampling, Hard examples), the slight increase in A corresponds to a small accuracy drop on the test data. For our attacks, A quickly increases when ¢⬠< 0.1, and the corresponding test accuracy also show significant declines. When ¢ > 0.1, for Algorithm 1 and Algorithm 2 with A = 0, A stops increasing and the test accuracy begins to level. By contrast, for Algorithm 2 with A = 100e, A continues to rise and the attack performance becomes significantly better than all other attacks.
On the Adult dataset, in Figure 10 adversarial sampling, the A quickly increases when ⬠< 0.05 for our attacks, and then decreases as we increase the e. A similar wave can be observed from Figure 10 adversarial labeling. The detailed explanations are presented in the Appendix D.6.
# D.9 Distribution of the poisoning data
In Figure 11 and Figure 12, we show group membership based on the protected attribute and labels of the data which are generated via diï¬erent attack strategies on COMPAS and Adult dataset respectively.
Note that, for COMPAS dataset, we use the race as the protected attribute (s = 0 represents âWhiteâ and s = 1 represents âBlackâ). The number of samples with s = 1, y = + is the smallest among the four combinations of labels and the protected attribute. As shown in Figure 3, the attack algorithms in the ï¬rst two rows are more eï¬ective compared with baselines in the second row. As shown in sub-ï¬gures (a)-(f), in more eï¬ective attacks, most poisoning data points are from the smallest subgroup (positive labeled points from the minority).
On Adult dataset, we observe the similar phenomena, and we use the gender as the protected attribute where s = 0 represents âMaleâ and s = 1 represents âFemaleâ. It is also important to pay attention to the fact that on this dataset, the number of samples with s = 1 is relatively small
29
(33.2%) and those with s = 1,y = + only account for 3.6% of the dataset. Due to this, finding influential data points from this subgroup is not always possible. Instead, as shown in Figure 12, our attacks mainly select data with y = +. Algorithm 2 with A = 100e finds more point with s = 1,y = + and as shown in Figure 4 has a marginally better performance.
30
# Dp - Adv. Sampling
# Dc - Adv. Sampling
- 1 1 T T » 08 Jos} | 5 FS 5 0.6 40.6 | =< 20 âd 0.4 7 Unconstrained model 4p | & = Fair [1] (6 = 0.1) 0.27) Fair [1] (6 = 0.01) 0.2 | âs Fair [19] (5 = 0) _3 gle - - - | 0 2 ae ee ee 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠D, - Adv. Labeling Dy - Adv. Labeling 1 1 1 1p 7 7 T T » 0.8} Jo.s} | Es â £06) i fo} | Es) 2 04} Jo.ap | + 40.2} | 1 1 1 1 1 + 0 0 0.05 0.1 0.15 0.2 0 0 â¬
(a) Alg. 2 (A = 100e)
D. - Adv. Sampling Dy, - Adv. Sampling 1 T 1 1 1 1p 7 7 T T » 0.8} Jos | 2 5 0.6 40.6 | <= Eg 2 04 Joab 4 5 oath 40.24 - 44 J geo 1 1 1 i 0 +â4+ +1 4 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠D,. - Adv. Labeling Dy - Adv. Labeling 1 1p 1 7 T T », 0.8} Jos} | 3 5 0.6 40.6 + | <= op 2 4p Joap | = 02} 40.2 geo 1 1 1 0
(b) Alg. 1 (A=)
Figure 7: Accuracy of clean training data and poisoning data under adversarial sampling and adversarial labeling attacks â COMPAS dataset.
31
# Dp - Adv. Sampling
# Dc - Adv. Sampling
D.. - Adv. Sampling Dy, - Sampling 1p 7 1 7 7 1p 1 7 7 T SS », 0.8} Jos} | 3 5 0.6 40.6 + | <= op âA 0.47 "Unconstrained model 4p | a = Fair [1] (6 = 0.1) 0.27) Fair [1] (5 = 0.01) 0.25 oe a -e Fair [19] (5 = 0) : âF og La I n I 1 0 (gy a yp ep 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠D. - Adv. Labeling Dy - Adv. Labeling 1 1 1 1 1 1 » 0.8 Jos 4 3 5 0.6} 0.6} | <= op 2 04 Joab | = 02h 0.2 ; | glu i | | | 0 a 4 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠Alg. 2 (A = 100e)
(a) Alg. 2 (A = 100e)
D.. - Adv. Sampling Dy, - Adv. Sampling 1p 1p 1 1 7 T », 0.8} Jos} | 3 5 0.6F 40.6 + | <= op - oat Joa} | = 0.2} 40.2} | glu 1 | | | ol As 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠D,. - Adv. Labeling Dy - Adv. Labeling 1 1 1 1 1 1 » 0.8 Jos 4 3 5 0.6} 0.6} | <= op 2 4p Joap | = 02h Jo2 | geo 1 1 0 (b) Alg. 1 (\ = 0.1e)
(b) Alg. 1 (\ = 0.1e)
Figure 8: Accuracy of clean training data and poisoning data under adversarial sampling and adversarial labeling attacks â Adult dataset.
32
# Unconstrained Model - Adv. Sampling
# Unconstrained Model - Adv. Labeling
@ 5 2 0.6 - + 0.6 + 4 g = 04p 40.4 | 8 & & g 0.2 4to2b J a gq plo 1 | | Ht glu | | | | 0 0.05 0.1 0.15 0.2 0 0.05 0.1 0.15 0.2 ⬠⬠â*Alg. 2 (A = 0) [37] 2 Adg. 2 (A = 100e) + Alg. 1 (A = â¬)â*- Random sampling +. Hard examples -a- Label flipping
Figure 9: Fairness gap on the unconstrained model with respect to the training data - COMPAS dataset. An unconstrained model is learned on the training data that includes poisoning data generated by Alg. 2 (A = 100e). The fairness gap A is defined in (1). The numbers reflect how unfair this unconstrained model is with respect to the protected group on the training data.
# Unconstrained Model - Adv. Sampling
# Unconstrained Model - Adv. Labeling
0.4- 4 0.4- 0.2- 4 0.2 - Fairness gap on training data Zt glu L L L 0 | 0 0.05 0.1 0.15 0.2 ⬠⬠â* Alg. 2 (A= 0) [37] 2 Alg. 2 (A = 100e) Alg. 1 (\ = 0.1e) + Random sampling âs- Hard example -s- Label flipping
Figure 10: Fairness gap on the unconstrained model with respect to the training data â Adult dataset. An unconstrained model is learned on the training data that includes poisoning data generated by Alg. 2 (A = 100c). The fairness gap A is defined in (1). The numbers reflect how unfair this unconstrained model is with respect to the protected group on the training data.
33
(a) Adv. sampling (Alg. 2, \ = 0) [37 (b) Adv. sampling (Alg. 2, A = 100e) (c) Adv. sampling (Alg. 1, A =e)
100 100 100 80 4 807 + 80 4 » = 60 | 60 | 60 1 | 8 gE 40 | 40) | 40) 1 20 4 20+ 4 20- 4 0 0 n n f 0 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 ⬠⬠⬠(d) Adv. labeling (Alg. 2, 4 = 0) [37] (e) Adv. labeling (Alg. 2, \ = 100e) (f) Adv. labeling (Alg. 1, A =) 100 100 100 so} | so} 4 so} | » 2 60} 4 60} | 60} | | § & 40 - 4 40} 4 40+ 4 20} | 20} | 20} | 0 0 0 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 ⬠⬠⬠(g) Random sampling (h) Hard examples (i) Label flipping 100 100 100 80 4 807 + 80 4 » 2 oo} 4 60} | 60} | | g & 40 - 4 40} 4 40+ 4 20} | 20} | 20} | 0 0 0 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 ⬠⬠â¬
Figure 11: Distribution of the poisoning data under adversarial attacks - COMPAS dataset. We report the protected attribute (s = 0 for whites and s = 1 for blacks) and labels of the poisoning data for various â¬. For every value of â¬, the number for each combination of the protected attribute and label reflects the percentage of points with this combination in the poisoning data.
34
(a) Adv. sampling (Alg. 2, λ = 0) [37]
(b) Ady. sampling (Alg. 2, \ = 100e)
# (c) Adv. sampling (Alg. 1), A = 0.1e
100 100 100 80 + 80} 4 807 4 2 2 60 | 60 | 60+ 1 2 3 & 40 + 40+ | 40} 4 20 4 20+ 4 20-7 4 0 0 0 0.1 0.15 0.2 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 ⬠⬠⬠(d) Ady. labeling (Alg. 2, \ = 0) [37] (e) Adv. labeling (Alg. 2, \ = 100e) (£) Adv. labeling (Alg. 1), A = 0.le 100 100 100 so} 4 80} | 80} | 2 2 60F + 60} 4 607 4 = 8 E 407 | 40 /- | 40 | 20 | 20} J 20} | 0 0 0 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 ⬠⬠⬠(g) Random sampling (h) Hard examples (i) Label flipping 100 100 100 80} + 80} 4 807 4 2 2 60 +} 60} | 60+ 5 2 8 é 40 + + 40+ | 40} 4 20 | 20} | 20} | 0 0 0 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 0.05 0.1 0.15 0.2 ⬠⬠â¬
Figure 12: Distribution of the poisoning data under adversarial attacks â Adult dataset. We report the protected attribute (s = 0 for males and s = 1 for females) and labels of the poisoning data for various â¬. For every value of â¬, the number for each combination of the protected attribute and label reflects the percentage of points with this combination in the poisoning data.
35 | {
"id": "1806.02887"
} |
2006.10518 | Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming | Lately, post-training quantization methods have gained considerable
attention, as they are simple to use, and require only a small unlabeled
calibration set. This small dataset cannot be used to fine-tune the model
without significant over-fitting. Instead, these methods only use the
calibration set to set the activations' dynamic ranges. However, such methods
always resulted in significant accuracy degradation, when used below 8-bits
(except on small datasets). Here we aim to break the 8-bit barrier. To this
end, we minimize the quantization errors of each layer separately by optimizing
its parameters over the calibration set. We empirically demonstrate that this
approach is: (1) much less susceptible to over-fitting than the standard
fine-tuning approaches, and can be used even on a very small calibration set;
and (2) more powerful than previous methods, which only set the activations'
dynamic ranges. Furthermore, we demonstrate how to optimally allocate the
bit-widths for each layer, while constraining accuracy degradation or model
compression by proposing a novel integer programming formulation. Finally, we
suggest model global statistics tuning, to correct biases introduced during
quantization. Together, these methods yield state-of-the-art results for both
vision and text models. For instance, on ResNet50, we obtain less than 1\%
accuracy degradation --- with 4-bit weights and activations in all layers, but
the smallest two. We open-sourced our code. | http://arxiv.org/pdf/2006.10518 | Itay Hubara, Yury Nahshan, Yair Hanani, Ron Banner, Daniel Soudry | cs.LG, stat.ML | null | null | cs.LG | 20200614 | 20201214 | 0 2 0 2
c e D 4 1 ] G L . s c [
2 v 8 1 5 0 1 . 6 0 0 2 : v i X r a
# Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming
Itay Hubara â â¦â Yuri Nahshan â â Yair Hanani â â Ron Banner â Daniel Soudry â¦
â Habana Labs â An Intel company 1, Caesarea, Israel, â¦Department of Electrical Engineering - Technion, Haifa, Israel
{ihubara, ynahshan, yhanani, rbanner}@habana.ai {daniel.soudry}@gmail.com
# Abstract
Lately, post-training quantization methods have gained considerable attention, as they are simple to use, and require only a small unlabeled calibration set. This small dataset cannot be used to ï¬ne-tune the model without signiï¬cant over-ï¬tting. Instead, these methods only use the calibration set to set the activationsâ dynamic ranges. However, such methods always resulted in signiï¬cant accuracy degradation, when used below 8-bits (except on small datasets). Here we aim to break the 8-bit barrier. To this end, we minimize the quantization errors of each layer separately by optimizing its parameters over the calibration set. We empirically demonstrate that this approach is: (1) much less susceptible to over-ï¬tting than the standard ï¬ne-tuning approaches, and can be used even on a very small calibration set; and (2) more powerful than previous methods, which only set the activationsâ dynamic ranges. Furthermore, we demonstrate how to optimally allocate the bit-widths for each layer, while constraining accuracy degradation or model compression by proposing a novel integer programming formulation. Finally, we suggest model global statistics tuning, to correct biases introduced during quantization. Together, these methods yield state-of-the-art results for both vision and text models. For instance, on ResNet50, we obtain less than 1% accuracy degradation â with 4-bit weights and activations in all layers, but the smallest two. We open sourced our code 2.
# 1 Introduction
The pursuit of advanced Deep Neural Networks (DNNs) causes researchers to construct deeper and wider networks, making them expensive to use in terms of power and time. This increases the need for efï¬cient implementations of these networks. Efï¬cient networks reduce cloud-vendor costs and make it possible to run them on low-power devices such as smartphones and wearable devices. The most common off-the-shelf approach to improving network efï¬ciency is quantization, which reduces the numerical precision of the network and its complexity and memory footprint.
DNN quantization techniques can be classiï¬ed as either post-training or quantization-aware training (QAT) techniques (Han et al., 2015; Courbariaux et al., 2015; Hubara et al., 2017; Zhou et al., 2016). Although QAT techniques, in general, achieve better results, there are important real-world scenarios in which they are not applicable. These are the cases where the training data is sensitive or simply unavailable at the time of deployment. For instance, when off-the-shelf or legacy models are being
# âEqual contribution. 2https://github.com/itayhubara/CalibTIP
used, or when medical records are involved. Therefore, much attention has recently been dedicated to post-training quantization methods (Nagel et al., 2019; Banner et al., 2018; Zhao et al., 2019), which can be more easily applied in practice. These methods allow for network quantization to happen seamlessly when deployed, without requiring additional information from the user except a small unlabeled calibration set.
Unfortunately, post-training quantization below 8-bit always incurs signiï¬cant accuracy degradation and in some cases even higher numerical precision is required. In this paper, our goal is to break this barrier by distilling all the information the pre-trained model and calibration set encode. Our goal is to ï¬nd an optimal scheme for current state of the art hardware which usually support 16,8,4 bits data types with per-channel quantization of the weights. To that end, we suggest a three-stage pipeline that consists of methods applied solely on a small calibration set to reduce the local error introduced during the quantization process (e.g., round-off errors) followed by integer programming to determine the bit-width of different layers so that the overall accuracy degradation is minimized. Even without using mixed-precision, the suggested method is much less prone to over-ï¬tting than current methods and yields best in class results for 8-bits Mobilenet-V2 and BERT-base trained on ImageNet and SQuAD1.1 datasets, respectively. Our paper suggests several contributions for mixed-precision post-training quantization:
1. AdaQuant: A layer-by-layer optimization method that minimizes the error between the quantized layer output and the full-precision layer output. This method can consume only a small calibration dataset from training data without overï¬tting. In a comprehensive study, we show that AdaQuant deï¬nes a new state-of-the-art for post-training quantization on several networks and tasks, including vision models (Resnet18, Resnet50, MobilenetV2) and language (BERT).
2. Integer programming: As some parts of the network may allow lower precision compared to other layers, we suggest an integer-linear programming based approach for determining the precision level of different layers. This method aims at maximizing either the expected speedup or savings in power consumption without violating a predeï¬ned constraint on network accuracy degradation or compression.
3. Batch-norm tuning: Following quantization we observe an inherent bias in the mean and the variance of batch norm statistics. We show that by employing the re-estimated statistics in batch normalization, much of the quantized network degradation can be recovered.
4. Light and Advanced pipelines: We analyze the advantages and disadvantages of each of the given methods and suggest two pipelines: (1) light pipeline that does not require a backward pass, thus can be invoked even on inference-only hardware; and (2) Advanced pipeline that includes also AdaQuant and bias tuning.
# 2 Related work
There has been a signiï¬cant effort to accelerate inference via quantization (Courbariaux et al., 2015; Han et al., 2015; Rastegari et al., 2016; Zhou et al., 2017). These works involve re-training in order to compensate for the degradation due to the quantization process. Post-training quantization, on the other hand is applied to a model after it was trained. Thus, it avoids re-training and as such it is much simpler to use. However, naively quantizing a full-precision model to INT4 or lower to accelerate computation usually incurs signiï¬cant accuracy degradation (Krishnamoorthi, 2018; Jacob et al., 2018).
AdaQuant: A recent post-training quantization method (Nagel et al., 2020), termed AdaRound, suggested optimizing the rounding policy. Instead of using the predominant rounding-to-nearest approach, they suggest formulating a per-layer quadratic optimization problem to optimize the round- off error. Our proposed method, AdaQuant, takes another step and relaxes AdaRoundâs implicit constraint which forces the quantized weights to be within ±1 of their round-to-nearest value. This is done by optimizing the weights and quantization parameters of each layer separately, over the calibration set, to minimize the MSE between the layerâs original and quantized outputs. As oppose to AdaRound we apply AdaQuant to ï¬nd optimal quantization not only to weights but also to activations. In addtion we suggest two ï¬avors for AdaQuant: (1) parallel-AdaQuant suited for mixed precision setting; (b) sequential-adaquant which suited for ï¬xed conï¬guration.
2
Integer programming: Early work by Lin et al. (2016) used a convex optimization formulation which results in a simple greedy compression scheme. Aï¬alo et al. (2020) used a combinatorial optimization approach for network pruning. Their problem was formulated as a Knapsack problem that optimizes the trade-off between the channels importance and their associated computational cost. Cai et al. (2020) ï¬nds a mixed-precision conï¬guration with a guaranteed Pareto efï¬cient allocation with respect to model size and accuracy degradation. While this provides a "best-effort" standard (e.g., the conï¬guration cannot be further compressed without hurting accuracy), it does not suggest which of all possible outcomes is best. To the best of our knowledge, this work is the ï¬rst to formalize a generic integer program, which can easily be adapted to various types of models and requirements with a clear objective and constraints.
Batch norm tuning: Finkelstein et al. (2019) were the ï¬rst to recognize that a signiï¬cant source of degradation is a shift in the mean activation value. They show a simple method to compensate for this bias by updating the bias terms. Nagel et al. (2019) suggest to equalize the weight ranges in the network and correct biases in the error that are introduced during quantization .Recently Sun et al. (2019) suggested batch norm tuning for FP8 models. Here we detail how to perform this procedure on a per-channel quantized (PCQ) model with fused batch-norm layers. The procedure is light as it only requires to invoke the quantized model few times (on the calibration set) and adjust the quantization parameters.Moreover after retuning the BN layers can be reabsorbed which reduces the inference complexity. To the best of our knowledge we are the ï¬rst to suggest it.
# 3 Optimizing The Quantization Pipeline
In most post-training quantization settings, a model and a small unlabeled calibration set are given. To avoid overï¬tting the calibration set, most studies utilize it only to extract the networkâs internal statistics, which is later used to set the quantization parameters.
Here we suggest using the calibration set much more extensively to tune the model while avoiding over-ï¬tting the data. In the following subsections, we detail three different optimization methods over the calibration set: (1) AdaQuant, a layerwise optimization of weights and quantization parameters; (2) an integer programming formulation for a mixed-precision setting; and (3) Batch Normalization Tuning (BNT), for tuning the modelâs internal statistics to match the numerical precision setting. We discuss the strengths and weaknesses of each method and suggest an optimization ï¬ow that exploits all the additive merits and leads to state-of-the-art results.
# 3.1 AdaQuant - Layerwise Optimization over the Calibration Set
Several researchers suggested per-tensor optimization to reduce quantization error by minimizing some form of MSE objective between the quantized and the full-precision tensor X (either weights or activations). They look for an optimized quantization step size Ëâ obtained by
A =argmin|iXâQa(XI°5 a(x) = 4-] 5], a)
where Q(·) is the quantization function. Although these methods are fast and easy to use, they often result in an inferior solution â the loss in eq. 1 is sub-optimal, as it penalizes all the quantization errors equally. However, the loss should penalize more quantization errors which affect the classiï¬cation. Accordingly, researchers suggested Quantization-Aware-Training (QAT) methods to ï¬x this error by training the entire model at once. However, those methods have three limitations: (a) they require the large training set to avoid over-ï¬tting, (b) they approximate the back-propagation gradients through discrete function (the quantizer) and (c) they have high computational and memory footprints. We suggest a modiï¬ed objective for per-layer joint optimization of the weights and quantization parameters.
# (Au. A.
= arg min âw,âx,V ||W X â Qâw (W + V ) · Qâx (X)||2, (2)
where V is a continuous variable added to W and the quantized network weights are deï¬ned as (W + ËV ). In this new objective the quantized tensor is not required to be "close" to the Wq = Q Ëâw original tensor, as in eq. 1, thus beneï¬ting from the ï¬exibility that Quantization-Aware-Training methods have. Yet, it can be executed in parallel over all layers and is much less prone to over-ï¬tting.
3
Moreover, under a ï¬xed conï¬guration we can optimize the model globally and infer the error between layers. Thus, instead of running AdaQuant on all layers in parallel we can run it sequentially and ï¬x the error induced by quantaizing former layers. Thus, Eq. 2 changes to:
(Aw, Asis Vi) = argmin ||WiX ~Qa., (Wit Vi) -Qa., (XDI? G3) Xq=o(Qa.,_,(Wi-a + Vin) Qa., (XE) (4)
where Ï(·) is some activation function.
Note, that sequential AdaQuant should not be applied before the bit allocation was set as it optimize over noisy input obtain from predecessor quantized layers. We evaluate both ï¬avors of adaquant (named, AdQuant and sequential-AdaQuant and detail our ï¬nding in section 5.1. We note that AdaQuant also optimizes over biases and offsets and optimized fused conv-bn-relu layers when present; these were removed from the formulation in Equation 2 for simplicity.
Size of calibration set Perhaps surprisingly, although we experiment with a very small cal- ibration set, no over-fitting is observed. Let us examine a simple fully connected layer W ⬠Râ¢*N_ The input and output are of sizes N and M, respectively. For each output we have B equations and N separate parameters (i.e., with no overlap in parameters between different outputs). Therefore if B < N we generically have an infinite amount of solutions and we can 60+ a ° Accuracy --- Naive min-max N ° . â} AdaQuant overfit the data. If B >> N then we might un- â+ AdaRound derfit the data. Thus, the size of the calibra- Oo, â QAT-KLD tion set required for AdaQuant should roughly 7 z 5 5 10 10 10 10 be O(N). A similar derivation for convolution layers reveals that the calibration size should .2 . . have B > Gk samples to avoid over-fitting, where B is the number of unique samples, k is the convolutionâs kernel size, C; and C, is Calibration set size (log scale) Figure 1: Comparison of different optimization methods over ResNet-50 quantized to 4 bit except the first and the last layers which were kept in the number of input and output channels respec- tively and H, W represent height and width. In fig.[I]we compare AdaQuant to current state-of- the-art methods including QAT with knowledge 8bit. Even optimizing on a single image drastically improves the results but as expected have a high variance (red bar). The variance decreases rapidly as calibration set size increases. distillation (QAT-KLD) (Kim et al] and AdaRound (2020). For each method, we measured the top-1 accuracy with respect to the number of samples in the calibration set over five runs and present the mean and standard deviation. As can be seen, AdaQuant is superior to previous methods and specifically excels on small calibration sets. Remarkably, AdaQuant does not overfit even when optimized on a single image. Additional details can be found in section A and[D]of the Appendix.
# 3.2 Per-layer bit allocations with integer programming
AdaQuant signiï¬cantly enhances network accuracy at lower bit widths. However, it is often not sufï¬cient by itself to attain acceptable accuracy. Therefore, in practical use cases, the user would like to balance between accuracy and performance (e.g., power and speed), by setting several layers to higher precision. Our high-level goal in this section would be to optimize the overall network performance while maintaining a predeï¬ned accuracy degradation or a model compression constraint.
In the following, we provide an integer-programming (IP) formulation for optimizing per-layer bit allocations. Depending on the needs, our performance metrics P would be either the execution time of the network or its power consumption. Also, with every layer quantization, there is an associated quantization error that affects the training loss L. We chose the latter to be our penalty metric. Integer programming is applied in those situations where a given problem can clearly be represented in the form of a linear relationship between different decision variables. Unlike other previous works on compression, it attains a global optimum. For example, Lin et al. (2016) suggested a convex
4
optimization problem, but the constraints and the objective are not linear. This typically has a drastic impact on convergence time, and the quality of the results since the Simplex method can no longer be applied (Van Doormaal & Raithby, 1984).
Basic formulation We are given a neural network with L layers. For each layer l, we have weights Wl that need to be multiplied with activations of the previous layer Xlâ1. Such lower bit width multiplications can be executed by quantizing the weights and activations to achieve higher throughput and energy-efï¬cient solutions. Let W k lâ1 represent a quantized version of Wl and Xlâ1 to k and n bits, respectively. For each layer i, a low-bit width multiplication W k lâ1 results in a l loss degradation âLk,n l with respect to the original product Wl · Xlâ1. This performance improvement measure needs to be additive and sum up to a total beneï¬t in end-to-end network performance (e.g., power, model size, etc.). Our goal would be to maximize the total performance improvement without exceeding the total network degradation âL. We now turn to solve the above problem using an integer program. We deï¬ne a binary variable I k,n , which is set to one if and only if the weights W k lâ1 at layer l; otherwise we set the indicator to zero i.e., I k,n l = 0. Then, the basic bit allocation problem can be formulated as follows:
Maximize x AP, (Sa) 1=0
1=0
Subject to âLl ⤠âL, l (5b)
Vl {1,...,D} AP) = So Ih" APP", AL = So" Ace" (5c) kn kn
Ve {1,.,D}: 30 OP" = 1,1" ⬠{0,1} (5d) kn
The objective function (3a) maximizes the total performance improvement. Constraints (3b) and (3c) ensure that the total degradation in loss and the total improvements in performance due to the quantization of layer l to k-bit-weights and n-bit-activations would be âLl and âPl, respectively. Equation (3d) states that the restriction on total degradation of âL is obeyed and ensures that only one conï¬guration (of quantized weights and activation) per layer is selected.
# 3.3 Batch Normalization Tuning
A common practice is fusing BN layers into their predecessor weight layers before applying post- training quantization to reduce the amount of Multiply-Accumulate (MAC) operations. However, the reduction in bit-width after quantization can cause the modelâs internal statistics to deviate further from those of the full precision model. To compensate for this deviation, we suggest updating BN statistics. First, we need to reconstruct the BN layers then re-tune the BN layersâ statistics (by a few iterations of running-mean to re-collect the statistics). Finally, re-absorb (re-fuse) the BN layers into the weight layers (this is possible only in a per-channel weights quantization setting, which is the current standard). Next, we give more details on each phase.
Reconstructing BN layers Assume the original (pre-fusing) BN parameters 7, 3, and ⬠are known, as is usually the case. We would like to initialize 1, 07, as well as the BN parameters +y,. and {,. (r for "reconstructed") so that the reconstructed BN
_, th ~ BN,(x) = pe +B, a (6)
will re-adjust the model statistics. To do so, ï¬rst we initialize the reconstructed BN layers by setting the following parameters (denoted by r):
Ï2 = γ2 (7) o so that BNr(x) = x. Then, we update µ and Ï2 by collecting running mean and running variance on the calibration data. We stress that the BN parameters, γr, βr, do not change while applying BN tuning, as we only invoke forward propagation.
5
Re-fusing BN layers Due to the per-channel quantization setting we use, the collected statistics can be fused back into the current quantization scale as follows:
wi wi: vi i =p) + Bs Ay, = Aw, (8)
Thus, in addition to the regular BN fusion, the quantization step is adjusted by γrÏâ1. Additional details are given in section B of the Appendix .
Bias tuning Much like Finkelstein et al. (2019), we suggest to apply a global bias-tuning procedure on the ï¬nal mixed-precision model by applying quantization-aware training to minimize the Knowl- edge Distillation (KD) loss (which does not require labels). Since we restrict the trainable variables to be the biases only, we can train only on the calibration set without experiencing overï¬tting.
# 4 Quantization Flow
Past years have seen the rapid development of efï¬cient deployment techniques (Nagel et al., 2019; Haroush et al., 2019). Deployment ï¬ows can vary based on the user setting such as hardware constraints, deployment time and task/dataset availability. While some users are willing to pay at initialization the time and effort to gain another fraction of accuracy, others require a simple and fast solution. We address this by suggesting two novel pipelines, light and advanced. Our pipelines are designed to the current, most common setting: per-channel quantization with a small calibration set.
Our light pipeline requires three steps: (1) Fuse layers and deï¬ne quantization parameters; (2) Find optimal mixed-precision conï¬guration using IP; and (3) Use BN tuning to correct the internal statistics. We note that all steps do not require back-propagation and thus are very light and fast. In addition to the light setting, in the advanced pipeline we apply AdaQuant to reduce each layerâs output distortion from its full precision counterpart before invoking the IP algorithm. A detail comparison between the two pipeline is given in table-1. Models that were optimized using AdaQuant to different bit-widths can be seamlessly stitched thus having the ability to create an optimized model in a mixed precision setting. Subsequently, global methods such as tuning both BN statistics and the layersâ biases can be applied to reduce a Knowledge Distillation loss. Although there are additional post-training quantization techniques that could be potentially combined with our methods, such as bias correction (Banner et al., 2018), equalization (Meller et al., 2019), and outlier channel splitting (Zhao et al., 2019), we did not ï¬nd it necessary: our results demonstrate that our relatively simple pipeline yields state of the art accuracy on both vision and text models, even without combining such methods. In the following sections we show our ï¬ndings and give an ablation study that highlights the importance of each method and their combination.
AdaQuant | Mixed-Precision (IP) | BN tuning | Bias Tuning Light-Pipeline | x v vo x Heavy-Pipeline | ¥ v v v
| x v vo x | ¥ v v v Table 1: Comparison between light and advanced pipelines.
# 5 Experiments
In this section, we demonstrate our methods and pipelines on several models and datasets. We ï¬rst start by analyzing image recognition models such as ResNet18/50, MobileNet-V2, which were trained over the ImageNet dataset. Next, we demonstrate our method robustness by applying it on question answering task using the popular BERT model (Devlin et al., 2018), which was ï¬ne-tuned on SQuAD1.1 dataset (Rajpurkar et al., 2016). In all our experiments, we used a small calibration set taken from the training dataset. Unless stated otherwise, we applied asymmetric per-channel quantization (i.e. GEMLOWP Wu et al. (2016)) with quantized offset (i.e., zero point). Next, we analyze each methodâs strengths and weaknesses separately and argue for its validity. Additional implementation details can be found in section and the code are given in sections D E of the Appendix.
6
# 5.1 AdaQuant
Recently several researchers suggested different types of MSE optimization. In most cases, the optimization was done per-tensor (i.e., for the weights and activations separately). Here we argue that by optimizing both quantization parameters and the weights jointly we can reduce the MSE even further and hence improve the accuracy as demonstrated in ï¬g. 2b. In contrast to AdaRound (Nagel et al., 2020) which restricted the change of the weights to be within ±1 we allow the weights to change as needed. As can be seen in ï¬g. 2a the weights indeed change their quantized value by more than one. Since our pipeline is focused on the mixed-precision setting we optimize each layer separately to enable maximum ï¬exibility when stitching the optimized models. Under that setting AdaQuant can be performed in parallel across all layers. However, since most recent papers do not show full compression-accuracy curves and only a few attempt 4-bit compression, we also compare our results to common ï¬xed conï¬gurations using our sequential-AdaQuant ï¬avor. While sequential AdaQuant cannot be parallelized or used for the mixed-precision setting it yields best-in-class results for all models tested as can be seen in table-2 and 3. For instance, on the extensively studied 8bit MobileNet-V2 (MobileNet-V2) topology we achieved 71.6% top-1 accuracy â less than 0.5% degradation compared to the full precision counterparts (71.9%).
ACIQ* (Banner et al., 2018) DFQ* (Nagel et al., 2019) AdaQuant Sequential-AdaQuant FP32 RN-34 RN-50 RN-101 RNext-50 RN-18 69.1 64.5% N/A 57.1% 70.3% 73.7% 74.4% 67.4% 69.4% 71.7% 75.1% 75.5% 71.97% 73.3% 77.2% 77.3% 68.1% 68.1 64.5% N/A N/A N/A 74.0 75.6% 79.22% Inc-V3 60.4 N/A 72.6% 73.4% 77.4%
Table 2: INT-4 quantization of weights and activations. Top-1 score over imagenet dataset for different post-training quantization methods. All layers were quantized to 4-bit except ï¬rst and last layers which were set to 8-bit. Methods marked with (*) were implemented according to the paper. In all our experiments we apply per-channel quantization of the weights.
min-max DFQ (Nagel et al., 2019) ZeroQ (Cai et al., 2020) AdaQuant Sequential-AdaQuant FP32 MobileNet-V2 BERT-Base-SQuad1.1 (F1) (top-1) 87.83% 70.9% N/A 71.2% N/A 72.91% 88.35% 73.03% 88.45% 72.94% 88.81% 73.03%
Table 3: INT-8 quantization of weights and activations. A comparison with DFQ and naive quan- tization methods (which uses the channelâs full dynamic range). In all our experiments we apply per-channel quantization of the weights and quantized all layers to 8-bit.
Testing the strength of this method on both vision and text topologies resulted in state-of-the-art results. As can be seen in table 3, on BERT-base model over SQuAD1.1 dataset (BERT-Base-SQuAD1.1) we managed to obtain 88.45% F1 score using just AdaQuant â less than 0.5% of its full precision counterpart (81.3%). Throughout our experiments, we avoided using any augmentation technique and follow the known validation set prepossessing. s
# Integer Programming
Our Integer programming formulation requires us to have two quantities per-layer: (1) loss degrada- tion and; (2) performance improvement. Obtaining those quantities requires to invoke the model over a small calibration set L times (once per layer) and measure the loss degradation and the performance gain. In our experiments, we set the performance value to be the number of parameters, but this measure could be changed to any additive measure. In all experiments, we used 1000 samples from the training set as our calibration set. Our setting considers only a mixture of 8-bit and 4-bit layers; to further test IP capabilities, we investigate a mixture of 2-4-8 bits as well. Unfortunately, since
7
(a) (b)
Figure 2: AdaQuant vs. AdaRound. (a) A histogram of âW distribution. AdaRound restricts this additive term to be âW = ±1. Relaxing this constraint provides a more powerful optimization. (b) Ablation study on parameters optimization for ResNet50 over ImageNet. AdaRound is based exclu- sively on weight optimization, while AdaQuant optimizes the weights, biases, and other quantization parameters jointly.
2-bits quantization in post-training setting results in high degradation, the IP algorithm chose only mixture of 4-8 bits for compression ratio higher than 12.5%. Yet for 12.5% compression ratio, IP method found that by setting one layer to 2-bits while setting 8 smaller layers to 8-bits accuracy gains over 5.5% with respect to uniform 4-bit quantization. Also, by allowing a less hardware friendly setting where numerical precision can have the form of any integer between 2-8, yields the highest compression-accuracy ratio (ï¬g. 3 - relaxed advanced pipeline).
# 5.3 Batch-Norm Tuning
Batch-Norm Tuning (BNT) has a signiï¬cant advantage, as it does not require weight optimization. Since BNT is applied by invoking the entire model, we must apply it only after setting the mixed- precision bit-width conï¬guration. This is the case for all global optimization methods including bias-tuning. Notably, BNT requires only a few (at most 10) forward passes over the calibration set and yield signiï¬cant gains (ï¬g. 3). In this study, we applied BNT on models trained with BN layers only. However, it might be possible to extend this method to models without BN layers by reconstructing it from the statistics. We encourage the reader to investigate this path.
# 5.4 Full pipeline and ablation study
Although several researchers suggested different methods for post-training mixed-precision quantiza- tion, none offer their code. Each paper focuses on a different quantization setting (e.g., quantizing only the weights, per-tensor quantization, etc.). Therefore, to demonstrate our pipeline strength, we created two different baselines based on common practices:
⢠Greedy-accuracy: recent studies suggested measuring each layer sensitivity and, based on the compression target, reduce the precision for the most robust layers.
⢠Greedy-compression: the complementary greedy approach (Lin et al., 2016) to sort the layers by their number of parameters and increase the precision of the layers from the smallest to the largest layer until the compression budget is reached.
Surprisingly, although the size of the layer should correlate with its sensitivity to quantization, the two greedy methods yield entirely different conï¬gurations. Investigating the conï¬guration greedy- compression found that sorting by compression correlates with the location of the layers in the model. In most vision models, the layers closer to the input have fewer parameters. This aligns with current common practice (Banner et al., 2018). Notably, even when not combined with any other technique, the IP method obtained the best bit-width conï¬gurations stressing its importance.
8
Next, we turn to consider the light and advanced pipelines. Under challenging compression rates, our light-pipeline results highlight the importance of BN tuning. As can be seen in our experiment ï¬g. 3, by merely invoking the model at inference mode for a few iterations and ï¬xing the intermediate statistics, one can recover more than 1.5% of the accuracy (73.7% v.s 75.37%). As expected, by applying the advanced pipeline, one can obtain state-of-the-art accuracy. Arguably, our most impressive results are at 0.13% compression rate in which we managed to stay within 1% of the full precision accuracy while converting 96% of the model to 4-bit. For the challenging MobileNet-V2 we managed to switch 25% of the layers to 4bit (weights and activations) while maintaining less than 2% degradation; Additionally, we achieved, for the ï¬rst time, reasonable top-1 accuracy of 65% when almost the entire model is in 4-bit.
70 69 60 > >,68 850 & 5 5 g 3 67 < < FS ° advanced pipeline advanced pipeline âs light pipeline 66 âs light pipeline 30 ââ ip ââ ip âs greedy (compression) âs greedy (compression) âsâ greedy (accuracy) 65 âsâ greedy (accuracy) 0.14 0.16 0.18 0.20 0.22 0.24 0.14 0.16 0.18 0.20 0.22 0.24 compression ratio compression ratio
# (a) MobileNet-V2
# (b) ResNet-18
76 75 > O74 6 a =) U g 73 _ âeâ light pipeline âoâ i 72 . âeâ greedy (compression) âeâ greedy (accuracy) 71 âeâ relaxed advanced pipeline 0.12 0.14 0.16 0.18 0.20 0.22 0.24 compression ratio
(c) ResNet-50
Figure 3: Ablation study over ResNet-50/18 and MobileNet-V2 - compression-accuracy curves. Our advanced pipeline is consist of AdaQuant, IP-mixed-precision, BN-tuning and bias-tuning. Our light pipeline consists of only IP-mixed-precision, BN-tuning. The relaxed advanced pipeline appears in c is similar to the advance pipeline but allows the integer-programming to choose any bit-width between 2-8 and not just 4-bit or 8-bit. The compression ratio is measured as the ratio between the compressed model and the full-precision (32-bit) mode thus 0.25 compression rate indicate that the entire model uses 8-bit precision and respectively for 4-bit the compression rate is 0.125
9
# References
Aï¬alo, Y., Noy, A., Lin, M., Friedman, I., and Zelnik, L. Knapsack pruning with inner distillation. arXiv preprint arXiv:2002.08258, 2020.
Banner, R., Nahshan, Y., Hoffer, E., and Soudry, D. Aciq: Analytical clipping for integer quantization of neural networks. 2018.
Cai, Y., Yao, Z., Dong, Z., et al. Zeroq: A novel zero shot quantization framework. arXiv preprint arXiv:2001.00281, 2020.
Choukroun, Y., Kravchik, E., Yang, F., and Kisilev, P. Low-bit quantization of neural networks for efï¬cient inference. In 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp. 3009â3018. IEEE, 2019.
Courbariaux, M., Bengio, Y., and David, J.-P. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, pp. 3123â3131, 2015.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Finkelstein, A., Almog, U., and Grobman, M. Fighting quantization bias with bias. arXiv preprint arXiv:1906.03193, 2019.
Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
Haroush, M., Hubara, I., Hoffer, E., and Soudry, D. The knowledge within: Methods for data-free model compression. arXiv preprint arXiv:1912.01274, 2019.
Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. Quantized neural networks: Training neural networks with low precision weights and activations. The Journal of Machine Learning Research, 18(1):6869â6898, 2017.
Jacob, B., Kligys, S., Chen, B., et al. Quantization and training of neural networks for efï¬cient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2704â2713, 2018.
Kim, J., Bhalgat, Y., Lee, J., Patel, C., and Kwak, N. Qkd: Quantization-aware knowledge distillation. arXiv preprint arXiv:1911.12491, 2019.
Krishnamoorthi, R. Quantizing deep convolutional networks for efï¬cient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018.
Lin, D., Talathi, S., and Annapureddy, S. Fixed point quantization of deep convolutional networks. In International conference on machine learning, pp. 2849â2858, 2016.
Meller, E., Finkelstein, A., Almog, U., and Grobman, M. Same, same but different-recovering neural network quantization error through weight factorization. arXiv preprint arXiv:1902.01917, 2019.
Nagel, M., Baalen, M. v., Blankevoort, T., and Welling, M. Data-free quantization through weight equalization and bias correction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1325â1334, 2019.
Nagel, M., Amjad, R. A., van Baalen, M., Louizos, C., and Blankevoort, T. Up or down? adaptive rounding for post-training quantization. arXiv preprint arXiv:2004.10568, 2020.
Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Rastegari, M., Ordonez, V., Redmon, J., and Farhadi, A. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. In European conference on computer vision, pp. 525â542. Springer, 2016.
10
Sun, X., Choi, J., Chen, C.-Y., et al. Hybrid 8-bit ï¬oating point (hfp8) training and inference for deep neural networks. In Advances in Neural Information Processing Systems, pp. 4901â4910, 2019.
Van Doormaal, J. P. and Raithby, G. D. Enhancements of the simple method for predicting incom- pressible ï¬uid ï¬ows. Numerical heat transfer, 7(2):147â163, 1984.
Wu, Y., Schuster, M., Chen, Z., et al. Googleâs neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Zhao, R., Hu, Y., Dotzel, J., De Sa, C., and Zhang, Z. Improving neural network quantization without retraining using outlier channel splitting. In International Conference on Machine Learning, pp. 7543â7552, 2019.
Zhou, A., Yao, A., Guo, Y., Xu, L., and Chen, Y. Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044, 2017.
Zhou, S., Wu, Y., Ni, Z., et al. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
11
# A Size of calibration set
Fully Connected layers Letâs assume that we have weights of size W ⬠Râ¢â¢% and input and output are of sizes N and M respectively. Recalling Eq. 2Jand setting Y = WX andWâ =W+V results in:
(Aw, As,
(Aw, As, V) = argmin |[Â¥ â Qa.) Qa. (XP, Aw Az,V
For simplicity we assume that A,, is fixed and define X, = Qa, (X), Wq = Qa,,,(Wâ). Therefore, if we have B unique samples, then the problem we aim to solve have the following structure:
w11 · · · wM 1 ... w1N . . . · · · ... wM N x11 · · · xN 1 ... . . . · · · ... xN B x1B = y11 · · · yM 1 ... . . . ... y1B · · · yM B
which translates to:
... ... . . . ... wM N xN B Notice that in the above equations, for each output we have a different set of parameters, therefore we can examine each output separately. For a single output we are in scalar linear regression with N parameters and B equations. If B ⥠N we are under-parameterized, and if B < N we are over-parameterized.
Convolution layers layers Similarly, for convolution layers with Co output channels, Ci input channels, and kernel size k each element of the output is a dot product of Ci · k · k parameters. We have in total Co à H à W outputs where H, W is the output height and width. Thus we need B ⥠Ci·k2
# B Reconstruction and re-fusing of Batch Normalization
In this section, we provide more details on Batch Normalization reconstruction and re-fusing proce- dure.
Reconstructing BN layers: Consider a Batch Normalization layer with parameters γo, βo that fused into previous convolutional layer weight and bias. Fusing batch normalization layer transforms weights and bias as following:
wWi=we; = 2-1) +4: (B.9) oO oO
To reconstruct the batch normalization, we would like to initialize µ, Ï2, as well as the BN parameters γr and βr (r for "reconstructed") so that the reconstructed BN is approximately identity ï¬g. B.1.
(B.10)
To do so, ï¬rst we initialize the reconstructed BN layers by setting the following parameters (denoted by r):
Lb = By = Bo; e=7 wW=V2 +e (B.11)
so that BNr(x) = x. Now, we can update µ and Ï2 by collecting running mean and running variance on the calibration data. We stress that the BN parameters, γr, βr, do not change while applying BN tuning, as we only invoke forward propagation.
12
w,72 eb, â p1) + Bo
Figure B.1
Re-fusing BN layers: After BNT phase we need to fuse Batch Normalization layer again into convolution weights and bias. Regular batch normalization fusing will cause degradation due to quantization of the weights. To resolve this issue we can leverage per-channel quantization setting we use.
Denote swi, zwi scale and zero point of the weigh, the quant/dequant operation deï¬ned as:
Wy = Sw, Es Faltabel (B.12) Sw; Sw, Sw;
We can fuse parameters of the batch normalization layer as following:
WE= Wis Uh = (6, â pe) +B Ox r , Vr y= Vr (B.13) = âSu,5 = ew; wy =F Sui 0 =
Finally we can show that transformations eq. (B.13) equivalent to γr Ïr
# Wq
(B.14) 6, |= Fal = Or Sw; Sw; Sw;
# C Additive loss assumption for integer-programming
Suppose the loss function of the network L depends on a certain set of variables (weights, activations, etc.), which we denote by a vector v. We would like to measure the effect of adding quantization noise to this set of vectors.
Since the quantization is emulated with additive noise, the loss is smooth and thus can be expanded to the Taylor series:
âL = L(v + ε) â L(v) = (C.15)
_ OLT pl 3 = weté ae +O (lel ) . (C.16)
13
One can see from Eq C.16 that when the quantization error ε is sufï¬ciently small, the overall degradation âL can be approximated as a sum of N independent degradation processes by neglecting the quadratic terms ε2:
OLT âOL ALN Ee = au, (C.17)
We note that Lin et al. (2016); Choukroun et al. (2019) used a similar assumption with respect to the additivity of quantization noise.
# D Experimental Details
In all our experiments, we used a small subset of the training set to run our methods. Speciï¬cally, for vision models, we used 1000 unlabeled images from the ImageNet training set (single image for each class) as a calibration set. For the Bert model, we used one paragraph from the training set. All presented methods AdaQuant, BNT, BT, and IP, performed well on such small calibration set producing SOTA results. Next we detail our setting for each of the technique in our pipelines
# D.1 AdaQuant
AdaQuant optimization problem deï¬ned as following except zero-point of the quantizer which we omitted from eq. (D.18):
# (Av. As. Fir.)
= arg min ||W X + b â Qâw (W + VW ) · Qâx (X) â Q(b + Vb)||2 (D.18) âw,âx,VW ,Vb
Technically to ï¬nd a solution for eq. (D.18), we use Adam optimizer with different learning rates per type of parameters. We set different learning rates for weight, bias, and quantization parameters of input and weights. After experimenting with different models, we found that the same set of LR parameters worked for each model. The learning rates are 1e â 5, 1e â 3, 1e â 1, 1e â 3 for weight, bias, quantization parameters of the inputs, and weights, respectively.
For vision models, we used 1000 unlabeled images from the ImageNet training set (single image for each class), running Adam optimizer for 100 iterations and a batch-size of 50 unless otherwise stated. For BERT-base model, we used one paragraph from the training set, running Adam optimizer for 50 - 100 iterations depending on the type of layer. Learning rates and batch size are the same as of vision models.
In ï¬g. 1 we aimed to answer the following question: Assuming you have a small calibration set and no resources constraints (time,power) which method is the most accurate and robust Assuming you have a small calibration set and no resources constraints (time,power) which method is the most accurate and robust Out method were evaluated by running each experiments ï¬ve times and reporting mean and standrad deviation. Here, in ï¬g. D.2, we add an additional naive early-stop plot on top of QAT-KLD experiment. We split the calibration data into two equal sets and train on half the examples while evaluation our performance on the other half Both KLD experiments used an SGD optimizer over 10 epochs; starting with learning rate of 0.1 and decreasing it by 1e-2 factor after 2 and 8 epochs. We also conducted KLD experiments with Adam optimizer and learning rate of 1e-3 where performed but their results were inferior. As can be seen in the plot AdaQuant is superior to other methods and remarkably excels on small calibration sets. As can be seen in ï¬g. D.2 the early exit results were inferior to the QAT-KLD as they use much smaller training set. However, other types of training-validation splits (e.g. 80-20) may boost the results.
# Integer Programming
Our IP method requires two steps, the ï¬rst is measuring the properties of each layer, and the second is applying the program based on these measurements with user deï¬ned constraint. As reference, we measure the loss (can also be accuracy) of the base precision model on the calibration set. Next, we measure the sensitivity of each layer by evaluating a model where all layers are qunatize to the base-precision but one layer that is quantized to lower precision (e.g., all 8-bit but one layer with
14
80 60 ;- a o 40 a 3 --- Naive min-max + 50 â + AdaQuant â AdaRound ââ}- Early stop KLD 0 âtâ Simple KLD 10° 10+ 102 103 Calibration set size (log scale)
Figure D.2: Calibration size ablation study with additional early-stop plot.
4-bit). The âLl in Eq. 3 is deï¬ned as the difference between the reference model loss and the measured loss. If a layer is robust to quantization, âLl will be small, and if a layer is sensitive to quantization, âLl will be large. The performance gain in the case of compression, is simply the model parameters size difference when lowering the precision of the examined layer. Hence, if a layer has N parameters, the performance gain when changing from 8-bit to 4-bit result in compression gain of âPl = N â 8 â N â 4 = 4N . In the second stage, we run the integer program based on the sensitivity and compression measured on each layer along with the user deï¬ned constraint.
# D.3 Batch Normalization and Bias Tuning
The Batch Norm tuning phase is the most lightweight phase of the pipeline. We found empirically less than ten iterations of statistics update are sufï¬cient. We also found that as compression growth, more iterations of batch norm tuning are required. At the bias tuning phase, we perform 200 iterations of ï¬ne-tuning with the learning-rate of 0.1.
# E Code
For all our vision dataset we used the default torchvision pre-trained model. For BERT-base experi- ment we ï¬ned-tuned on SQUAD1.1 dataset and provide the script for that as a part of our repository. Our code can be found at: https://github.com/papers-submission/CalibTIP.
15 | {
"id": "1912.01274"
} |
2006.07409 | How to Avoid Being Eaten by a Grue: Structured Exploration Strategies for Textual Worlds | Text-based games are long puzzles or quests, characterized by a sequence of
sparse and potentially deceptive rewards. They provide an ideal platform to
develop agents that perceive and act upon the world using a combinatorially
sized natural language state-action space. Standard Reinforcement Learning
agents are poorly equipped to effectively explore such spaces and often
struggle to overcome bottlenecks---states that agents are unable to pass
through simply because they do not see the right action sequence enough times
to be sufficiently reinforced. We introduce Q*BERT, an agent that learns to
build a knowledge graph of the world by answering questions, which leads to
greater sample efficiency. To overcome bottlenecks, we further introduce
MC!Q*BERT an agent that uses an knowledge-graph-based intrinsic motivation to
detect bottlenecks and a novel exploration strategy to efficiently learn a
chain of policy modules to overcome them. We present an ablation study and
results demonstrating how our method outperforms the current state-of-the-art
on nine text games, including the popular game, Zork, where, for the first
time, a learning agent gets past the bottleneck where the player is eaten by a
Grue. | http://arxiv.org/pdf/2006.07409 | Prithviraj Ammanabrolu, Ethan Tien, Matthew Hausknecht, Mark O. Riedl | cs.AI, cs.CL, cs.LG, stat.ML | null | null | cs.AI | 20200612 | 20200612 | 0 2 0 2
n u J 2 1 ] I A . s c [
1 v 9 0 4 7 0 . 6 0 0 2 : v i X r a
# How to Avoid Being Eaten by a Grue: Structured Exploration Strategies for Textual Worlds
# Prithviraj Ammanabrolu â Ethan Tien â Matthew Hausknechtâ¡ Mark O. Riedl â â Georgia Institute of Technology â¡Microsoft Research [email protected]
# Abstract
Text-based games are long puzzles or quests, characterized by a sequence of sparse and potentially deceptive rewards. They provide an ideal platform to develop agents that perceive and act upon the world using a combinatorially sized natural language state-action space. Standard Reinforcement Learning agents are poorly equipped to effectively explore such spaces and often struggle to overcome bottlenecksâstates that agents are unable to pass through simply because they do not see the right action sequence enough times to be sufï¬ciently reinforced. We introduce Q*BERT 1, an agent that learns to build a knowledge graph of the world by answering questions, which leads to greater sample efï¬ciency. To overcome bottlenecks, we further introduce MC!Q*BERT an agent that uses an knowledge-graph-based intrinsic motivation to detect bottlenecks and a novel exploration strategy to efï¬ciently learn a chain of policy modules to overcome them. We present an ablation study and results demonstrating how our method outperforms the current state-of-the-art on nine text games, including the popular game, Zork, where, for the ï¬rst time, a learning agent gets past the bottleneck where the player is eaten by a Grue.
# Introduction
Text-adventureâor interaction fictionâgames are simulations featuring language-based state and action spaces. An example of a one turn agent interaction in the popular text-game Zork] can be seen in Fig. [I] Prior works have focused on a few challenges that are inherent to this medium: (1) Partial observability the agent must reason about the world solely through incomplete textual descriptions [22\[10)[5]. (2) Commonsense reasoning to enable the agent to more intelligently interact with objects in its surroundings [4]. (3) A combinatorial state-action space wherein most games have action spaces exceeding a billion possible actions per step; for example the game ZorkI has 1.64 x 10" possible actions at every step [15] 3]. Despite these challenges, modern text-adventure agents such as KG-A2C [3], TDQN [15], and DRRN have relied on surprisingly simple exploration strategies such as e-greedy or sampling from the distribution of possible actions.
In this paper, we focus on a particular type of exploration problem: that of detecting and overcoming bottleneck states. Most text-adventure games have relatively linear plots in which players must solve a sequence of puzzles to advance the story and gain score. To solve these puzzles, players have freedom to a explore previously unlocked areas of the game, collect clues, and acquire tools needed to solve the next puzzle and unlock the next portion of the game. From a Reinforcement Learning perspective, These puzzles can be viewed as bottlenecks that act as partitions between different regions of the state space. We contend that existing Reinforcement Learning agents are unaware of such latent structure and are thus poorly equipped for solving these types of problems. We present techniques for automatically detecting bottlenecks and efï¬ciently learning policies that take advantage of the natural partitions in the state space.
1Code can be found here https://github.com/rajammanabrolu/Q-BERT
Preprint. Under review.
Overcoming bottlenecks is not as simple as se- lecting the correct action from the bottleneck state. Most bottlenecks have long-range depen- dencies that must ï¬rst be satisï¬ed: Zork1 for instance features a bottleneck in which the agent must pass through the unlit Cellar where a mon- ster known as a Grue lurks, ready to eat unsus- pecting players who enter without a light source. To pass this bottleneck the player must have pre- viously acquired and lit the latern. Other bottle- necks donât rely on inventory items and instead require the player to have satisï¬ed an external condition such as visiting the reservoir control to drain water from a submerged room before being able to visit it. In both cases, the actions that fulï¬ll dependencies of the bottleneck, e.g. acquiring the lantern or draining the room, are not rewarded by the game. Thus agents must correctly satisfy all latent dependencies, most of which are unrewarded, then take the right action from the correct location to overcome such bot- tlenecks. Consequently, most existing agentsâ regardless of whether they use a reduced action space [31, 27] or the full space [15, 3]âhave failed to consistently clear these bottlenecks.
Observation: West of House You are standing in an open ï¬eld west of a white house, with a boarded front door. There is a small mailbox here. Action: Open mailbox Observation: Opening the small mailbox reveals a leaï¬et. Action: Read leaï¬et Observation: (Taken) "WELCOME TO ZORK! ZORK is a game of adventure, danger, and low cunning. In it you will explore some of the most amazing territory ever seen by mortals. No com- puter should be without one!" Action: Go north Observation: North of House You are facing the north side of a white house. There is no door here, and all the windows are boarded up. To the north a narrow path winds through the trees.
Figure 1: Excerpt from Zork1.
To better understand how to design algorithms that pass these bottlenecks, we first need to gain a sense for what they are. We observe that quests in text gamesâand any such sequential decision making problem requiring long term dependenciesâcan be modeled in the form of a dependency graph. These dependency graphs are directed acyclic graphs (DAGs) where the vertices indicate either rewards that can be collected or dependencies that must be met to progress. In text-adventure games the dependencies are of two types: items that must be collected for future use, and locations that must be visited. An example of such a graph for the game of Zork/ can found in Fig. [2| More formally, bottleneck states are vertices in the dependency graph that, when the graph is laid out topographically, are (a) the only state on a level, and (b) there is another state at a higher level with non-zero reward. Bottlenecks can be mathematically expressed as follows: let D = (V, E) be the directed acyclic dependency graph for a particular game where each vertex is tuple v = (57, 5;,7(s)) containing information on some state s such that s; are location dependencies, s; are inventory dependencies, and r(s) is the reward associated with the state. There is a directed edge e ⬠E between any two vertices such that the originating state meets the requirements s; and s; of the terminating vertex. D can be topologically sorted into levels L = {1;, ..., Jn } where each level represents a set of game states that are not dependant on each other. We formulate the set of all bottleneck states in the game:
= li = 1, b li, V ) lj s.t. (j > i r(s) (1)
b : ( | {
s ( â
= 0)) }
# B
|
â
â§
â
â§
This reads as the set of all states that that belong to a level with only one vertex and that there exists some state with a non-zero reward that depends on it. Intuitively, regardless of the path taken to get to a bottleneck state, any agent must pass it in order to continue collecting future rewards. Behind House is an example of a bottleneck state as seen in Fig. 2. The branching factor before and after this state is high but it is the only state through which one can enter the Kitchen through the window.
In this paper, we introduce Q*BERT, a deep reinforcement learning agent that plays text games by building a knowledge graph of the world and answering questions about it. Knowledge graph state representations have been shown to alleviate other challenges associated with text games such as partial-observability [5, 28, 3, 6, 1, 21]. We introduce the Jericho-QA dataset, for question- answering in text-game-like environments, and show that our novel question-answering-based graph construction procedure improves sample efï¬ciency but not asymptotic performance. In order to improve performance and pass through bottlenecks, we extend Q*BERT with a novel exploration strategy that uses intrinsic motivation based on the knowledge graph to alleviate the sparse, deceptive reward problem. Our exploration strategy ï¬rst detects bottlenecks and then modularly chains policies that go from one bottleneck to another. We call this combined system MC!Q*BERT. These two
2
65>) Fr Loc: Forest Path __5( OSE )_, Loc: upa Tree Crattcc) X ~_ oF Grae sane) Le , +25 get painting Loc: Troll Room ao 0: Cellar ni inv: Lamp, Sword 65> nae 6 nv: ord ~â ) CG Loc: West of House | ¢ y * Inv: 5 sein re 5 v § C pon vino D) nave â S°6.05>) 0) ame âNâ Loc: Gallery ** Loc: Behind House __y( Loc: Kitchen oes Key Inv: Lamp, Sword, Inv: C ) Inv: rovoste Positive Rewards Painting Or, cited
Figure 2: Portion of the Zork1 quest structure visualized as a directed acyclic graph. Each node represents a state; clouds represent areas of high branching factor with labels indicating some of the actions that must be performed to progress
enhancements form the two core contributions of this paper. We evaluate Q*BERT, MC!Q*BERT, and ablations of both on a set of nine text games. We further compare our technique to alternative exploration methods such as Go Explore [12]; our full technique achieves state-of-the-art performance on eight out of nine games.
# 2 Related Work and Background
We use the deï¬nition of text-adventure games as seen in Côté et al. [10] and Hausknecht et al. [15]. These games are partially observable Markov decision processes (POMDPs), represented as a 7-tuple representing the set of environment states, mostly deterministic conditional of transition probabilities between states, the vocabulary or words used to compose text commands, observations returned by the game, observation conditional probabilities, reward function, and the discount factor respectively. LSTM-DQN [22] and Action Elimination DQN [31] operate on a reduced action space of the order of 102 actions per step by considering either verb-noun pairs or by using a walkthrough of the game respectively. The agents learn how to produce Q-value estimates that maximize long term expected reward. The DRRN algorithm for choice-based games [16, 32] estimates Q-values for a particular action from a particular state. Fulda et al. [14] try to use word embeddings speciï¬cally in an attempt to model affordances for items in these games, learning how to interact with them.
There have been a couple of works detailing potential methods of exploration in this domain. Jain et al. [17] extend consistent Q-learning [9] to text-games, focusing on taking into account historical context. In terms of exploration strategies, Yuan et al. [29] detail how counting the number of unique states visited improves generalization in unseen games. Côté et al. [10] introduce TextWorld, a framework for procedurally generating parser-based games via a grammar, allowing a user to control the difï¬culty of a generated game.Urbanek et al. [26] introduce LIGHT, a dataset of crowdsourced text-adventure game dialogs focusing on giving collaborative agents the ability to generate contextually relevant dialog and emotes. Hausknecht et al. [15] introduce Jericho, a framework for interacting with text- games, in addition to a series of baseline reinforcement learning agents. Yuan et al. [30] introduce the concept of interactive question-answering in the form of QAitâmodeling QA tasks in TextWorld.
Ammanabrolu and Riedl [5] introduce the KG-DQN, using knowledge graphs as states spaces for text-game agents and Ammanabrolu and Riedl [4] extend it to enable transfer of knowledge between games. Ammanabrolu and Hausknecht [3] showcase the KG-A2C, for the ï¬rst time tackling the fully combinatorial action space and presenting state-of-the-art results on many man-made text games. In a similar vein, Adhikari et al. [1] present the Graph-Aided Transformer Agent (GATA) which learns to construct a knowledge graph during game play and improves zero-shot generalization on procedurally generated TextWorld games.
# 3 Q*BERT
This section presents the base reinforcement learning algorithm we introduce, which we call Q*BERT. Q*BERT is based on KG-A2C [3]; it uses a knowledge-graph to represent itâs understanding of the world state. A knowledge graph is a set of relations (s, 1,0) such that s is a subject, r is a relation, and o is an object. See Figure[3]|(left) for an example fragment of a knowledge graph for a text-adventure game. Instead of using relation extraction rules, Q*BERT uses a variant of the BERT natural
3
graph mask Sequential Action &>Action Predictor Recurrent Text Encoder Key: Locations Surr. Obj.s Inv. Obj.s Behind House attributes = You are behind the white house. A path leads into) the forest to the east. In one corner of the house there is a small window which is slightly ajar. You are carrying: A jewel-encrusted egy Attributes: talkable, openable, animate, treasure ..._) ââ Q*BERT (Observation
Figure 3: One-step knowledge graph extraction in the Jericho-QA format, and overall Q*BERT architecture at time step t. At each step the ALBERT-QA model extracts a relevant highlighted entity set Vt by answering questions based on the observation, which is used to update the knowledge graph.
language transformer to answer questions about the current state text description and populate the knowledge graph from the answers.
Knowledge Graph State Representation Ammanabrolu and Riedl [5] are the ï¬rst to use question answering (QA) in text-game playing to pre-train a network to answer the question of âWhat action best next to take?â using game traces from an oracle agent capable of playing a game perfectly. They pre-train an LSTM to predict the action based on a environment text description. We build on this idea but instead treat the problem of constructing the knowledge graph as a question-answering by asking a question-answering system task. The method ï¬rst extracts a set of graph vertices to form a knowledge relevant questions and then linking them together using a set of relations graph representing information the agent has learned about the world. Examples of questions include: âWhat is my current location?â, âWhat objects are around me?â, and âWhat am I carrying?â to respectively extract information regarding the agentâs current location, surrounding objects, inventory objects. Further, we predict attributes for each object by asking the question âWhat attributes does x object have?â. An example of the knowledge graph that can be extracted from description text and the overall architecture are shown in Fig. 3.
For question-answering, we use the pre-trained language model ALBERT [18], a variant of BERT [11] that is ï¬ne-tuned for question answering on the SQuAD [23] question-answering dataset. We further ï¬ne-tune the ALBERT model on a dataset speciï¬c to the text-game domain. This dataset, dubbed Jericho-QA, was created by making question answering pairs about text-games. Jericho [15]2 is a framework for reinforcement learning in text-games. Using Jericho we construct a question-answering corpus for ï¬ne-tuning ALBERT as follows. For each game in Jericho, we use an oracleâan agent capable of playing the game perfectlyâand a random exploration agent to gather ground truth state information about locations, objects, and attributes. These agents are designed to extract this information directly from the game engine, which is otherwise off-limits when Q*BERT is trained. From this ground truth, we construct pairs of questions in the form that Q*BERT will ask as it encounters environment description text, and the corresponding answers. These question-answer pairs are used to ï¬ne-tune the Q/A model and the ground truth data is discarded. No data from games we test on is used during ALBERT ï¬ne-tuning. Additional details regarding Jericho-QA, graph update rules, and Q*BERT can be found in Appendix A.1.
In a text-game the observation is a textual description of the environment. For every observation received, Q*BERT produces a ï¬xed set of questions. The questions and the observation text are sent triples and added to to the question-answering system. Predicted answers are converted into
(s, 1, 0)
# 2https://github.com/microsoft/jericho
4
the knowledge graph. The complete knowledge graph is the input into Q*BERTâs neural architecture (training described below), which makes a prediction of the next action to take.
Action Space Solving Zork1, the cannonical text-adventure game, requires the generation of actions consisting of up to ï¬ve-words from a relatively modest vocabulary of 697 words recognized by the 1014 possible actions at every step. Hausknecht et al. gameâs parser. This results in [15] propose a template-based action space in which the agent ï¬rst selects a template, consisting of an action verb and preposition, and then ï¬lling that in with relevant entities (e.g. [get] ). Zork1 has 237 templates, each with up to two blanks, yielding a template-action space of size 108. This space is still far larger than most used by previous approaches O applying reinforcement learning to text-based games. We use this template action space for all games.
Training At every step an observation consisting of several components is received: ot = (otdesc , otgame , otinv , atâ1) corresponding to the room description, game feedback, inventory, and previous action, and total score Rt. The room description otdesc is a textual description of the agentâs location, obtained by executing the command âlookâ. The game feedback otgame is the simulators response to the agentâs previous action and consists of narrative and ï¬avor text. The inventory otinv and previous action atâ1 components inform the agent about the contents of its inventory and the last action taken respectively.
Each of these components is processed using a GRU based encoder utilizing the hidden state from the previous step and combined to have a single observation embedding ot. At each step, we update our knowledge graph Gt using ot as described in earlier in Section 3 and it is then embedded into a single vector gt. This encoding is based on the R-GCN and is calculated as:
. 1 ge=f | Wer | > So Kw, On; + Woh; | + bg (2) rER JEN,
râR r is the 1-step neighborhood of a vertex i with respect to relation Where (l) are the learnable convolutional ï¬lter weights with respect to relation r and hidden r, Wr state of a vertex j in the last layer l of the R-GCN respectively, ci,r is a normalization constant, and Wg and bg the weights and biases of the output linear layer. The full architecture can be found in Fig. 3. The state representation consists only of the textual observations and knowledge graph. Another key use of the knowledge graph, introduced as part of KG-A2C, is the graph mask, which restricts the possible set of entities that can be predicted to ï¬ll into the action templates at every step to those found in the agentâs knowledge graph. The rest of the training methodology is unchanged from Ammanabrolu and Hausknecht [3], more details can be found in Appendix A.1.
# 4 Structured Exploration
This section describes an exploration method built on top of Q*BERT that ï¬rst detects bottlenecks and then searches for ways to pass them, learning policies that take it from bottleneck to bottleneck. This method of chaining policies and backtracking can be thought of in terms of options [24, 25], where the agent decomposes the task of solving the text game into the sub-tasks, each of which has itâs own policy. In our case, each sub-task delivers the agent to a bottleneck state.
# 4.1 Bottleneck Detection using Intrinsic Motivation
Examples of some bottlenecks can be seen in Figure 2 based on our deï¬nition of a bottleneck in Eq. 1. Inspired by McGovern and Barto [20], we present an intuitive way of detecting these bottleneck statesâor sub-tasksâin terms of whether or not the agentâs ability to collect reward stagnates. If the agent does not collect a new reward for a number of environment interactionsâdeï¬ned in terms of a patience parameterâthen it is possible that it is stuck due to a bottleneck state. An issue with this method, however, is that the placement of rewards does not always correspond to an agent being stuck. Complicating matters, rewards are sparse and often delayed; the agent not collecting a reward for a while might simply indicate that further exploration is required instead of truly being stuck.
To alleviate these issues, we deï¬ne an intrinsic motivation for the agent that leverages the knowledge graph being built during exploration. The motivation is for the agent to learn more information
5
regarding the world and expand the size of its knowledge graph. This provides us with a better indication of whether an agent is stuck or notâa stuck agent does not visit any new states, learns no new information about the world, and therefore does not expand its knowledge graphâleading to more effective bottleneck detection overall. To prevent the agent from discovering reward loops based on knowledge graph changes, we formally deï¬ne this reward in terms of new information learned.
t-1 Tm, = A(KGetovar â KGr) where KGetoba = U KG; i=l (3)
Here KGglobal is the set of all edges that the agent has ever had in its knowledge graph and the subtraction operator is a set difference. When the agent adds new edges to the graph perhaps as a the result of ï¬nding a new room KGglobal changes and a positive reward is generatedâthis does not happen when that room is rediscovered in subsequent episodes. This is then scaled by the game score so the intrinsic motivation does not drown out the actual quest rewards, the overall reward the agent receives at time step t looks like this:
Tg, +⬠(4) r= Tg, + OTM, Tmax
where « is a small smoothing factor, a is a scaling factor, ry, is the game reward, Tmax is the maximum score possible for that game, and r; is the reward received by the agent on time step t.
# Algorithm 1 Structured Exploration
Algorithm 1 Structured Exploration > Chained, backtrack, current policy > Backtrack, current state buffers {Tenain, TH, T} â @ {Si,S} HH @ S0, Tint <â ENV.RESET() Tmax init, Pp â 0 for timestep t in 0...M do St41,7t, 7 <â Q*BERTUPDATE(S¢, 7) S&S 4+ 541 > Append current state to state buffer peptl > Lose patience > Train for M Steps if 7() < Jmax then if p >= patience then > Stuck at a bottleneck St, Tmax, 7 <- BACKTRACK(7, Sp) > Bottleneck passed; Add 7 to the chained policy Tehain < Tehain â+ 7 if J(m) > Jmax then > New highscore found Tmax â I (7); 7 â 7; Sp â Sip â 0 return 7ehain > Chained policy that reaches max score function Q* BERTUPDATE(s:, 77) St41,1Tg, <â ENV.STEP(s¢, 7) rT, <â CALCULATEREWARD(S141, Tg; ) m < A2C.UPDATE(T, rz) return si41,7%,7 > One-step update > Section| > Eq.|4} > Appendix function BACKTRACK(m», Sp) for b in REVERSE(S») do soc bared for timestep t in 0...N do > Train for N steps Se41, Tt, 7 <â Q*BERTUPDATE(S¢, 7) if J(x) > J (xm) then return s;, 77,7 > Try to overcome bottleneck > States leading to highscore Terminate > Canât find better score; Give up.
# 4.2 Modular Policy Chaining
A primary reason that agents fail to pass bottlenecks is not satisfying all the required dependencies. To solve this problem, we introduce a method of policy chaining, where the agent utilizes the determinism of the simula- tor to backtrack to previously visited states in order to fulï¬ll dependencies required to overcome a bottleneck.
Speciï¬cally, Algorithm 1 optimizes the policy Ï as usual, but also keeps track of a buffer of the distinct states and knowledge graphs that led up to each state (we use state st to collo- quially refer to the combination of an observation ot and knowledge graph t). Similarly, a bottleneck buffer KG b and policy Ïb reï¬ect the sequence S of states and policy with the maximal Jmax. A bottleneck is identiï¬ed return when the agents fails to improve upon Jmax after patience number of steps, i.e. no improvement in raw game score or knowledge-graph- based intrinsic motivation reward. The agent then backtracks by searching backwards through the b, restarting from each of the previous statesâand training for N steps in search of state sequence a more optimal policy to overcome the bottleneck. When such a policy is found, it is appended to modular policy chain Ïchain. Conversely, if no such policy is found, then we have failed to pass the current bottleneck and the training terminates.
> Canât find better score; Give up.
# Terminate
# 5 Evaluation
We ï¬rst evaluate the quality of the knowledge graph construction in a supervised setting. Next we perform and end-to-end evaluation in which knowledge graph construction is used by Q*BERT.
6
Expt. Jericho-QA KG-A2C Q*BERT MC!Q*BERT | GO!Q*BERT Game Reward v v v v v Intrinsic Motive v v Metric EM FI_||_Eps. | Max | Eps. | Max || Max [Max Max zorkl F001 | 44.62 34 35 | 33.6 35 32 | 416 3T library 36.76 | 46.45 14.3 19 | 10.0 18 19 19 18 detective 60.28 | 63.21 || 207.9 | 214 | 246.1 | 274 || 320 | 330 304 balances 55.26 | 56.49 10 10 98 10 10 10 10 pentari 63.89 | 68.37 || 50.7 56 | 48.2 56 56 58 40 ztuu 28.71 | 29.76 6 9 5 5 5 | 118 5 ludicorp 52.32 | 59.95 178 19 | 17.6 19 19 | 22.8 20.6 deephome 8.03 | 9.27 1 1 1 1 8 6 1 temple 48.92 | 49.17 1.6 8 79 8 8 8 8
Eps. Max 35 19 214 10 56 9 19 1 8 Table 1: QA results on Jericho-QA test set and averaged asymptotic scores on games by different methods across 5 independent runs. For KG-A2C and Q*BERT, we present scores averaged across the ï¬nal 100 episodes as well as max scores. Methods using exploration strategies show only max scores given their workings. Agents are allowed 106 steps for each parallel A2C agent with a batch size of 16.
zork1 40 Painfing CellarKEgg 30 20 10 Katcher â*â MC!Q*BERT 5 â%â MCIQ*BERT-no-IM 0 â QâBERT =â KG-a2c 0 ââ GO!Q*BERT 0 20000 40000 60000 80000 190000 0 10000 20000 30000 40000 50000
(a) Episode reward curves for KG-A2C and Q*BERT. (b) Max reward curves for exploration strategies.
Figure 4: Select ablation results on Zork1 conducted across 5 independent runs per experiment. We see where the agents using structured exploration pass each bottleneck seen in Fig. 2. Q*BERT without IM is unable to detect nor surpass bottlenecks beyond the Cellar.
# 5.1 Graph Extraction Evaluation
Table 1 left shows QA performance on the Jericho-QA dataset. Exact match (EM) refers to the percentage of times the model was able to predict the exact answer string, while F1 measures token overlap between prediction and ground truth. We observe a direct correlation between the quality of the extracted graph and Q*BERTâs performance on the games. On games where Q*BERT performed comparatively better than KG-A2C in terms of asymptotic scores, e.g. detective, the QA model had relatively high EM and F1, and vice versa as seen with ztuu. In general, however, Q*BERT reaches comparable asymptotic performance to KG-A2C on 7 out of 9 games. However, as shown in Figure 4a, Q*BERT reaches asymptotic performance faster than KG-A2C, indicating that the QA model leads to faster learning. Appendix B contains more plots illustrating this trend. Both agents rely on the graph to constrain the action space and provide a richer input state representation. Q*BERT uses a QA model ï¬ne-tuned on regularities of a text-game producing more relevant knowledge graphs than those extracted by OpenIE [8] in KG-A2C for this purpose.
# Intrinsic Motivation and Exploration Strategy Evaluation
We evaluate intrinsic motivation through policy chaining, dubbed MC!Q*BERT (Modularly Chained Q*BERT) by ï¬rst testing policy chaining with only game reward and with both game reward and intrinsic motivation. We provide a qualitative analysis of the bottlenecks detected with both methods with respect to those found in Fig. 2 on Zork1. Just as KG-A2C provided us with a direct comparison for assessing the graph extraction abilities of Q*BERT, we test MC!Q*BERTâs structured exploration against an alternative exploration method in the form of Go-Explore [12], an algorithm with similar properties to our policy module chaining and has been show to work well for large, discrete game
7
state spaces. Further, MC!Q*BERT using both game reward and intrinsic motivation matches or outperforms all other methods on 8 out of 9 games, with MC!Q*BERT using only game reward received the highest score on the 9th game, deephome. GO!Q*BERT Go-Explore [12] is an algorithm designed to keep track of sub-optimal and under- explored states in order to allow the agent to explore upon more optimal states that may be a result of sparse rewards. The Go-Explore algorithm consists of two phases, the ï¬rst to continuously explore until a set of promising states and corresponding trajectories are found on the basis of total score, and the second to robustify this found policy against potential stochasticity in the game. Promising states are deï¬ned as those states when explored from will likely result in higher reward trajectories. Madotto et al. [19] look at applying Go-Explore to text-games on a set of simpler games generated using the game generation framework TextWorld [10]. They use a small set of âadmissible actionsââactions guaranteed to change the world state at any given step during Phase 1âto explore and ï¬nd high reward trajectories. We adapt this, instead training Q*BERT in parallel to generate actions from the full action space used for exploration to maintain a constant action space size across all models. Implementation details are found in Appendix A.3.
# 6 Analysis
Table 1 shows that across all the games MC!Q*BERT matches or outperforms the current state-of-the- art when compared across the metric of the max score consistently received across runs. There are two main trends: First, MC!Q*BERT greatly beneï¬ts from the inclusion of intrinsic motivation rewards. Qualitative analysis of bottlenecks detected by each agent on the game of Zork1 reveals differences in the overall accuracy of the bottleneck detection between MC!Q*BERT with and without intrinsic motivation. Figure 4b shows exactly when each of these agents detects and subsequently overcomes the bottlenecks outlined in Figure 2. What we see here is that when the intrinsic motivation is not used, the agent discovers that it can get to the Kitchen with a score of +10 and then Cellar with a score of +25 immediately after. It forgets how to get the Egg with a smaller score of +5 and never makes it past the Grue in the Cellar. Intrinsic motivation prevents this in two ways: (1) it makes it less focused on a locally high-reward trajectoryâmaking it less greedy and helping it chain together rewards for the Egg and Cellar, and (2) provides rewards for fulï¬lling dependencies that would otherwise not be rewarded by the gameâthis is seen by the fact that it learns that picking up the lamp is the right way to surpass the Cellar bottleneck and reach the Painting. A similar behavior is observed with GO!Q*BERT, the agent settles pre-maturely on a locally high-reward trajectory and thus never has incentive to ï¬nd more globally optimal trajectories by fulï¬lling the underlying dependency graph. Here, the likely cause is due to GO!Q*BERTâs inability to backtrack and rethink discovered high reward trajectories.
The second point is that using both the improvements to graph construction in addition to intrinsic motivation and structured exploration consistently yields higher max scores across a majority of the games when compared to the rest of the methods. Having just the improvements to graph building or structured exploration by themselves is not enough. Thus we infer that the full MC!Q*BERT agent is fundamentally exploring this combinatorially-sized space more effectively by virtue of being able to more consistently detect and clear bottlenecks. The improvement over systems using default exploration such as KG-A2C or Q*BERT by itself indicates that structured exploration is necessary when dealing with sparse and ill-placed reward functions.
# 7 Conclusions
Modern deep reinforcement learning agents using default exploration strategies such as e-greedy are ill-equipped to deal with the challenge of sparse and delayed rewards, especially when placed in a combinatorially-sized state-action space. Building on top of Q*BERT, an agent that constructs a knowledge graph of the world by asking questions about it, we introduce MC!Q*BERT, an agent that uses this graph as an intrinsic motivation to help detect bottlenecks arising from delayed rewards and chains policies that go from bottleneck to bottleneck. A key insight from an ablation study is that the graph-based intrinsic motivation is crucial for bottleneck detection, preventing the agent from falling into locally optimal high reward trajectories due to ill-placed rewards. Policy chaining used in tandem with intrinsic motivation results in agents that explore further in the game by clearing bottlenecks more consistently.
8
# 8 Broader Impacts
The ability to plan for long-term state dependencies in partially-observable environment has down- stream applications beyond playing games. We see text games as simpliï¬ed analogues for systems capable of long-term dialogue with humans, such as in assistance with planning complex tasks, and also discrete planning domains such as logistics. Broadly speaking, reinforcement learning is applicable to many sequential tasks, some of which cannot be anticipated. Reinforcement learning for text environments are more suited for domains in which change in the world is affected via language, which mitigates physical risksâour line of work is not directly relevant to roboticsâbut not cognitive and emotional risks, as any system capable of generating natural language is capable of accidental or intentional non-normative language use [13].
# References
[1] A. Adhikari, X. Yuan, M.-A. Côté, M. Zelinka, M.-A. Rondeau, R. Laroche, P. Poupart, J. Tang, A. Trischler, and W. L. Hamilton. Learning dynamic knowledge graphs to generalize on text-based games. arXiv preprint arXiv:2002.09127, 2020.
[2] L. Adolphs and T. Hofmann. Ledeepchef: Deep reinforcement learning agent for families of text-based games. arXiv preprint arXiv:1909.01646, 2019.
[3] P. Ammanabrolu and M. Hausknecht. Graph constrained reinforcement learning for natural language action spaces. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=B1x6w0EtwH.
[4] P. Ammanabrolu and M. Riedl. Transfer in deep reinforcement learning using knowledge graphs. In Proceedings of the Thirteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-13) at EMNLP, 2019. URL https://www.aclweb.org/ anthology/D19-5301.
[5] P. Ammanabrolu and M. O. Riedl. Playing text-adventure games with graph-based deep reinforcement learning. In Proceedings of 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, 2019.
[6] P. Ammanabrolu, W. Cheung, D. Tu, W. Broniec, and M. O. Riedl. Bringing stories alive: In 1st Joint Workshop on Narrative Understanding, Generating interactive ï¬ction worlds. Storylines, and Events (NUSE) at ACL, 2020. URL https://arxiv.org/abs/2001.10161. [7] T. Anderson, M. Blank, B. Daniels, and D. Lebling. Zork. http://ifdb.tads.org/
viewgame?id=4gxk83ja4twckm6j, 1979.
[8] G. Angeli, J. Premkumar, M. Jose, and C. D. Manning. Leveraging Linguistic Structure For Open Domain Information Extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2015.
Increasing the ac- tion gap: New operators for reinforcement learning. In AAAI Conference on Artiï¬cial In- telligence, 2016. URL https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/ view/12428/11761.
[10] M.-A. Côté, A. Kádár, X. Yuan, B. Kybartas, T. Barnes, E. Fine, J. Moore, M. Hausknecht, L. E. Asri, M. Adada, W. Tay, and A. Trischler. Textworld: A learning environment for text-based games. CoRR, abs/1806.11532, 2018.
[11] J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018.
[12] A. Ecoffet, J. Huizinga, J. Lehman, K. O. Stanley, and J. Clune. Go-explore: a new approach for hard-exploration problems. CoRR, abs/1901.10995, 2019.
[13] S. Frazier, M. O. Al Nahian, Md Sultan Riedl, and B. Harrison. Learning norms from stories: A prior for value aligned agents. CoRR, abs/1912.03553, 2019.
[14] N. Fulda, D. Ricks, B. Murdoch, and D. Wingate. What can you do with a rock? affordance extraction via word embeddings. In IJCAI, pages 1039â1045, 2017. doi: 10.24963/ijcai.2017/ 144.
9
[15] M. Hausknecht, P. Ammanabrolu, M.-A. Côté, and X. Yuan. Interactive ï¬ction games: A colossal adventure. In Thirty-Fourth AAAI Conference on Artiï¬cial Intelligence (AAAI), 2020. URL https://arxiv.org/abs/1909.05398.
[16] J. He, J. Chen, X. He, J. Gao, L. Li, L. Deng, and M. Ostendorf. Deep reinforcement learning with a natural language action space. In ACL, 2016.
[17] V. Jain, W. Fedus, H. Larochelle, D. Precup, and M. G. Bellemare. Algorithmic improvements for deep reinforcement learning applied to interactive ï¬ction. CoRR, abs/1911.12511, 2019.
[18] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=H1eA7AEtvS.
[19] A. Madotto, M. Namazifar, J. Huizinga, P. Molino, A. Ecoffet, H. Zheng, A. Papangelis, D. Yu, C. Khatri, and G. Tur. Exploration based language learning for text-based games. CoRR, abs/2001.08868, 2020.
[20] A. McGovern and A. G. Barto. Automatic discovery of subgoals in reinforcement learning using diverse density. 2001.
[21] K. Murugesan, M. Atzeni, P. Shukla, M. Sachan, P. Kapanipathi, and K. Talamadupula. Enhanc- ing text-based reinforcement learning agents with commonsense knowledge. arXiv preprint arXiv:2005.00811, 2020.
[22] K. Narasimhan, T. D. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. In EMNLP, pages 1â11, 2015.
[23] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine In Proceedings of the 2016 Conference on Empirical Methods in comprehension of text. Natural Language Processing, pages 2383â2392, Austin, Texas, Nov. 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://www.aclweb.org/ anthology/D16-1264.
[24] M. Stolle and D. Precup. Learning options in reinforcement learning. In Proceedings of the 5th International Symposium on Abstraction, Reformulation and Approximation, page 212â223, Berlin, Heidelberg, 2002. Springer-Verlag. ISBN 3540439412.
[25] R. S. Sutton, D. Precup, and S. Singh. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artiï¬cial intelligence, 112(1-2):181â211, 1999.
[26] J. Urbanek, A. Fan, S. Karamcheti, S. Jain, S. Humeau, E. Dinan, T. Rocktäschel, D. Kiela, A. Szlam, and J. Weston. Learning to speak and act in a fantasy text adventure game. CoRR, abs/1903.03094, 2019.
[27] X. Yin and J. May. Comprehensible context-driven text game playing. CoRR, abs/1905.02265,
2019.
[28] X. Yin and J. May. Learn how to cook a new recipe in a new house: Using map familiarization, curriculum learning, and common sense to learn families of text-based adventure games. arXiv preprint arXiv:1908.04777, 2019.
[29] X. Yuan, M. Côté, A. Sordoni, R. Laroche, R. T. des Combes, M. J. Hausknecht, and A. Trischler. Counting to explore and generalize in text-based games. CoRR, abs/1806.11525, 2018. [30] X. Yuan, M.-A. Côté, J. Fu, Z. Lin, C. Pal, Y. Bengio, and A. Trischler. Interactive language
learning by question answering. In EMNLP, 2019.
[31] T. Zahavy, M. Haroush, N. Merlis, D. J. Mankowitz, and S. Mannor. Learn what not to learn: Action elimination with deep reinforcement learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Pro- cessing Systems 31, pages 3562â3573. Curran Associates, Inc., 2018.
[32] M. Zelinka. Using reinforcement learning to learn how to play text-based games. CoRR, abs/1801.01999, 2018.
10
# A Implementation Details
We would like to preface the appendix with a discussion on the relative differences in the assumptions that Q*BERT and MC!Q*BERT make regarding the underlying environment. Although both are framed as POMDPs, MC!Q*BERT makes stronger assumptions regarding the determinism of the game as compared to Q*BERT. MC!Q*BERT (and GO!Q*BERT) rely on the fact that the set of transition probabilities in a text-game are mostly deterministic. Using this, they are able to assume that frozen policies can be executed deterministically, i.e. with no signiï¬cant deviations from the original trajectory. It is possible to robustify such policies by extending our method of structured exploration to perhaps perform imitation learning on the found highest score trajectories as seen in Phase 2 of the original GoExplore algorithm [12]. Stochasticity is not among set of challenges tackled in this work, howeverâwe focus on learning how to better explore combinatorially-sized spaces with underlying long-term dependencies. For future works in this space, we believe that agents should be compared based on the set of assumptions made: agents like KG-A2C and Q*BERT when operating under standard reinforcement learning assumptions, and MC!Q*BERT and GO!Q*BERT when under the stronger assumption of having a deterministic simulator.
# A.1 Q*BERT
This section outlines how Q*BERT is trained, including details of the Jericho-QA dataset, the overall architecture, A2C training and hyperparameter details.
# A.1.1 Jericho-QA Dataset
Jericho-QA contains 221453 Question-Answer pairs in the training set and 56667 pairs in the held out test set. The test set consists of all the games that we test on in this paper. It is collected by randomly exploring games using a set of admissible actions in addition to using the walkthroughs for each game as found in the Jericho framework [15]. The set of attributes for a game is taken directly from the game engine and is deï¬ned by the game developer.
A single sample looks like this:
Context: [loc] Chiefâs Office You are standing in the chiefâs office. He is telling you, "The mayor was murdered yeaterday night at 12:03 am. I want you to solve it before we get any bad publicity or the FBI has to come in." "Yessir!" you reply. He hands you a sheet of paper. once you have read it, go north or west. You can see a piece of white paper here.
[inv] You are carrying nothing. [obs] [your score has just gone up by ten points.] [atr] talkable, seen, lieable, enterable, nodwarf, indoors, visited, handed, lockable, surface, thing, water_room, unlock,
lost, afflicted, is_treasure, converse, mentioned, male, npcworn, no_article, relevant, scored, queryable, town, pluggable, happy, is_followable, legible, multitude, burning, room, clothing, underneath, ward_area , little, intact, animate, bled_in, supporter, readable, openable, near, nonlocal, door, plugged, sittable, toolbit, vehicle, light, lens_searchable, open, familiar, is_scroll, aimable, takeable, static, unique, concealed, vowelstart, alcoholic, bodypart, general, is_spell, full, dry_land, pushable, known, proper, inside, clean, ambiguously_plural, container, edible, treasure, can_plug, weapon, is_arrow, insubstantial, pluralname, transparent, is_coin, air_room, scenery, on, is_spell_book, burnt, burnable, auto_searched, locked, switchable, absent, rockable, beenunlocked, progressing, severed, worn, windy, stone, random, neuter, legible, female, asleep, wiped
Question: Where am I located? Answer: chiefâs office Question: What is here? Answer: paper, west Question: What do I have? Answer: nothing Question: What attributes does paper have? Answer: legible, animate Question: What attributes does west have? Answer: room, animate
11
# A.1.2 Knowledge Graph Update Rules
Every step, given the current state and possible attributes as contextâthe QA network predicts the current room location, the set of all inventory objects, the set of all surrounding objects, and all attributes for each object.
Linking the current room type (e.g. âKitchenâ, âCellarâ) to the items found in the room with the relation âhasâ, e.g.
(kitchen, has, lamp)
# e
All attribute information for each object is linked to the object with the relation âisâ. e.g. (egg, is, treasure)
⢠inventory objects with relation âhaveâ to the âyouâ node, e.g.
Linking all inventory (you, have, sword)
e Linking rooms with directions based on the action taken to move between the rooms, e.g. (Behind House, east of, Forest) after the action âgo eastâ is taken to go from behind the house to the forest
Below is an excerpt from Zork] showing the exact observations given to the Q*BERT,the knowledge graph, and the corresponding action taken by the agent after the graph extraction and update process has occurred as described above for a trajectory consisting of 5 timesteps. These timesteps begin at the start of the game in West of House and continue till the agent has entered the Kitchen as seen in Fig. The set of (s, 7,0) triples that make up the graph are in the text and the figure shows a partial visualization of the graph at that particular step in the trajectory.
_apimate â â 6 west _ ee % ââââ in oo % mailbox {
[loc] West of House You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here. [inv] You are empty handed.
[obs] Copyright c 1981, 1982, 1983 Infocom, Inc. All rights reserved. ZORK is a registered trademark of Infocom, Inc. Revision 88 / Serial number 840726 West of House You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here.
lost, afflicted, is_treasure, converse, mentioned, male, npcworn, no_article, relevant, scored, queryable, town, pluggable, happy, is_followable, legible, multitude, burning, room, clothing, underneath, ward_area, little, intact, animate, bled_in, supporter, readable, openable, near, nonlocal, door, plugged, sittable, toolbit, vehicle, light, lens_searchable, open, familiar, is_scroll, aimable, takeable, static, unique, concealed, vowelstart, alcoholic, bodypart, general, is_spell, full, dry_land, pushable, known, proper, inside, clean, ambiguously_plural, container, edible, treasure, can_plug, weapon, is_arrow, insubstantial, pluralname, transparent, is_coin, air_room, scenery, on, is_spell_book, burnt, burnable, auto_searched, locked, switchable, absent, rockable, beenunlocked, progressing, severed, worn, windy, stone, random, neuter, legible, female, asleep, wiped
[graph] [(you, in, west), (door, is, animate), (door, in, west), (west, is, animate), (mailbox, in, west), (mailbox, is, animate)]
[next act] go north
12
you windows west o \ %, % \ I north_of_housi animate_ < x window path
[loc] North of House You are facing the north side of a white house. There is no door here, and all the windows are boarded up. To the north a narrow path winds through the trees.
[inv] You are empty handed. [obs] North of House You are facing the north side of a white house. There is no door here, and all the windows are boarded
up. To the north a narrow path winds through the trees.
lost, afflicted, is_treasure, converse, mentioned, male, npcworn, no_article, relevant, scored, queryable, town, pluggable, happy, is_followable, legible, multitude, burning, room, clothing, underneath, ward_area, little, intact, animate, bled_in, supporter, readable, openable, near, nonlocal, door, plugged, sittable, toolbit, vehicle, light, lens_searchable, open, familiar, is_scroll, aimable, takeable, static, unique, concealed, vowelstart, alcoholic, bodypart, general, is_spell, full, dry_land, pushable, known, proper, inside, clean, ambiguously_plural, container, edible, treasure, can_plug, weapon, is_arrow, insubstantial, pluralname, transparent, is_coin, air_room, scenery, on, is_spell_book, burnt, burnable, auto_searched, locked, switchable, absent, rockable, beenunlocked, progressing, severed, worn, windy, stone, random, neuter, legible, female, asleep, wiped
[graph] [(north_of_house, north, west), (you, in, north_of_house), (door, is, animate), (door, in, west), (west, is, animate ), (west, in, west), (mailbox, in, west), (mailbox, is, animate), (windows, in, north_of_house), (windows, is, animate ), (north, is, animate), (north, in, north_of_house), (path, is, animate), (path, in, north_of_house), (trees, in, north_of_house), (trees, is, animate)]
[next act] go east
behind_house a west © \ . SA northâ of houst window forest animate
[loc] Behind House You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is slightly ajar.
[inv] You are empty handed. [obs] Behind House You are behind the white house. A path leads into the forest to the east. In one corner of the house
there is a small window which is slightly ajar.
lost, afflicted, is_treasure, converse, mentioned, male, npcworn, no_article, relevant, scored, queryable, town, pluggable, happy, is_followable, legible, multitude, burning, room, clothing, underneath, ward_area, little, intact, animate, bled_in, supporter, readable, openable, near, nonlocal, door, plugged, sittable, toolbit, vehicle, light, lens_searchable, open, familiar, is_scroll, aimable, takeable, static, unique, concealed, vowelstart, alcoholic, bodypart, general, is_spell, full, dry_land, pushable, known, proper, inside, clean, ambiguously_plural, container, edible, treasure, can_plug, weapon, is_arrow, insubstantial, pluralname, transparent, is_coin, air_room, scenery, on, is_spell_book, burnt, burnable, auto_searched, locked, switchable, absent, rockable, beenunlocked, progressing, severed, worn, windy, stone, random, neuter, legible, female, asleep, wiped
[graph] [(north_of_house, north, west), (behind_house, east, north_of_house), (you, in, behind_house), (door, is, animate), (door, in, west), (west, is, animate), (west, in, west), (you, in, behind_house), (mailbox, in, west), (mailbox, is, animate), (windows, in, north_of_house), (windows, is, animate), (north, is, animate), (north, in, north_of_house), ( path, is, animate), (path, in, north_of_house), (trees, in, north_of_house), (trees, is, animate), (window, in, behind_house), (window, is, animate), (forest, in, behind_house), (forest, is, animate), (east, in, behind_house), ( east, is, animate)]
# [next act] open window
13
behind_house ~~ west a : \ you % \ northâ of houst window ⬠NN / path open
[loc] Behind House You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is open. [inv] You are empty handed.
[obs] With great effort, you open the window far enough to allow entry. [atr] talkable, seen, lieable, enterable, nodwarf, indoors, visited, handed, lockable, surface, thing, water_room, unlock,
lost, afflicted, is_treasure, converse, mentioned, male, npcworn, no_article, relevant, scored, queryable, town, pluggable, happy, is_followable, legible, multitude, burning, room, clothing, underneath, ward_area, little, intact, animate, bled_in, supporter, readable, openable, near, nonlocal, door, plugged, sittable, toolbit, vehicle, light, lens_searchable, open, familiar, is_scroll, aimable, takeable, static, unique, concealed, vowelstart, alcoholic, bodypart, general, is_spell, full, dry_land, pushable, known, proper, inside, clean, ambiguously_plural, container, edible, treasure, can_plug, weapon, is_arrow, insubstantial, pluralname, transparent, is_coin, air_room, scenery, on, is_spell_book, burnt, burnable, auto_searched, locked, switchable, absent, rockable, beenunlocked, progressing, severed, worn, windy, stone, random, neuter, legible, female, asleep, wiped
(door, in, west), (west, is, animate), (west, in, west), (mailbox, in, west), (mailbox, is, animate), (windows, in, north_of_house), (windows, is, animate), (windows, is, open), (north, is, animate), (north, in, north_of_house), (path , is, animate), (path, in, north_of_house), (trees, in, north_of_house), (trees, is, animate), (window, in, behind_house), (window, is, animate), (forest, in, behind_house), (forest, is, animate), (east, in, behind_house), ( east, is, animate)]
behind_house 4 northâ of _houst bottle table
[loc] Kitchen You are in the kitchen of the white house. A table seems to have been used recently for the preparation of food. A passage leads to the west and a dark staircase can be seen leading upward. A dark chimney leads down and to the east is a small window which is open. On the table is an elongated brown sack, smelling of hot peppers. A bottle is sitting on the table. The glass bottle contains: A quantity of water
[inv] You are empty handed. [obs] Kitchen You are in the kitchen of the white house. A table seems to have been used recently for the preparation of
food. A passage leads to the west and a dark staircase can be seen leading upward. A dark chimney leads down and to the east is a small window which is open. On the table is an elongated brown sack, smelling of hot peppers. A bottle is sitting on the table. The glass bottle contains: A quantity of water
lost, afflicted, is_treasure, converse, mentioned, male, npcworn, no_article, relevant, scored, queryable, town, pluggable, happy, is_followable, legible, multitude, burning, room, clothing, underneath, ward_area, little, intact, animate, bled_in, supporter, readable, openable, near, nonlocal, door, plugged, sittable, toolbit, vehicle, light, lens_searchable, open, familiar, is_scroll, aimable, takeable, static, unique, concealed, vowelstart, alcoholic, bodypart, general, is_spell, full, dry_land, pushable, known, proper, inside, clean, ambiguously_plural, container, edible, treasure, can_plug, weapon, is_arrow, insubstantial, pluralname, transparent, is_coin, air_room, scenery, on, is_spell_book, burnt, burnable, auto_searched, locked, switchable, absent, rockable, beenunlocked, progressing, severed, worn, windy, stone, random, neuter, legible, female, asleep, wiped
[graph] [(north_of_house, north, west), (behind_house, east, north_of_house), (behind_house, in, kitchen), (you, in, kitchen ), (door, is, animate), (door, in, west), (west, is, animate), (west, in, west), (west, in, kitchen), (mailbox, in, west), (mailbox, is, animate), (windows, in, north_of_house), (windows, is, animate), (north, is, animate), (north, in , north_of_house), (path, is, animate), (path, in, north_of_house), (trees, in, north_of_house), (trees, is, animate), (window, in, behind_house), (window, is, animate), (forest, in, behind_house), (forest, is, animate), (east, in, behind_house), (east, is, animate), (table, in, kitchen), (table, is, animate)]]
[next act] go in
14
# A.1.3 Architecture
Further details of what is found in Figure 3. The sequential action decoder consists two GRUs that are linked together as seen in Ammanabrolu and Hausknecht [3]. The ï¬rst GRU decodes an action template and the second decodes objects that can be ï¬lled into the template. These objects are constrained by a graph mask, i.e. the decoder is only allowed to select entities that are already present in the knowledge graph.
The question answering network based on ALBERT [18] has the following hyperparameters, taken from the original paper and known to work well on the SQuAD [23] dataset. No further hyperparam- eter tuning was conducted.
Parameters batch size learning rate max seq len doc stride warmup steps max steps gradient accumulation steps 3 Value 8 10â5 512 128 814 8144 24 Ã
# A.1.4 A2C Training
The rest of the A2C training is unchanged from Ammanabrolu and Hausknecht [3]. A2C training starts with calculating the advantage of taking an action in a state A(st, at), deï¬ned as the value of taking an action Q(st, at) compared to the average value of taking all possible admissible actions in that state V (st):
V (st) A(st, at) = Q(st, at) Q(st, at) = E[rt + γV (st+1)]
A(st, a1) = Q(st, a2) â V(s2) (5)
â
Q(se; a4) = Efre + yV (S141) (6)
The value is predicted by the critic as shown in Fig. 3 and rt is the reward received at step t.
The action decoder or actor is then updated according to the gradient:
n âVo(logra(T|s813 4) + > logmo, (0i|81,T, ---, O11; 4)) A(St, ae) (7) i=l
updating the template policy ÏT and object policies ÏOi based on the fact that each step in the action decoding process is conditioned on all the previously decoded portions. The critic is updated with respect to the gradient:
1 2 â θ(Q(st, at; θt) â V (st; θt))2 (8)
bringing the criticâs prediction of the value of being in a state closer to its true underlying value. An entropy loss is also added:
Lw(se, a0; 01) = > P(a|s:)logP(as1) (9) aâ¬V (se)
Hyperparameters are taken from KG-A2C as detailed by Ammanabrolu and Hausknecht [3] and not tuned any further.
# A.2 MC!Q*BERT
The additional hyperparamters used for modular policy chaining are detailed below. Patience batch factor is the proportion of the batch that must have stagnated at a particular score for patience number of episodes of unchanging score before a bottleneck is detected. Patience within a range of 1000 60 in increments of 10 were the only additional parameters tuned for, on Zork1. The resulting best hyperparameter set was used on the rest of the games.
15
(5) (6)
Parameters patience buffer size batch size patience batch factor Value 3000 40 16 .75
# A.3 GO!Q*BERT
Since the text games we are dealing with are mostly deterministic, with the exception of Zork1 in later stages, we only focus on using Phase 1 of the Go-Explore algorithm to ï¬nd an optimal policy. Go-Explore maintains an archive of cellsâdeï¬ned as a set of states that map to a single representationâto keep track of promising states. Ecoffet et al. [12] simply encodes each cell by keeping track of the agentâs position and Madotto et al. [19] use the textual observations encoded by recurrent neural network as a cell representation. We improve on this implementation by training the Q*BERT network in parallel, using the snapshot of the knowledge graph in conjunction with the game state to further encode the current state and use this as a cell representation. At each step, Go-Explore chooses a cell to explore at random (weighted by score to prefer more advanced cells). Q*BERT will run for a number of steps in each cell, for all our experiments we use a cell step size of 32, starting with the knowledge graph state and the last seen state of the game from the cell. This will generate a trajectory for the agent while further training Q*BERT at each iteration, creating a new representation for the knowledge graph as well as a new game state for the cell. After expanding a cell, Go-Explore will continue to sample cells by weight to continue expanding its known states. At the same time, Q*BERT will beneï¬t from the heuristics of selecting preferred cells and be trained on promising states more often.
# B Results
# B.1 Graph Evaluation Results
ork > aim coh bhoe ° â onsen â Koc balances zu â oneer â Koa â onsen â Koc temple 0-841 deephome ener Koa : â onsen â Koc
Figure 5: Episode initial reward curves for KG-A2C and Q*BERT.
16
B.2 Intrinsic Motivation and Structured Exploration Results
zorkl brary detective â MciQ*BERT 9) ââ MCIQ*BERT:noM â GorgBeRr w â â neig*eRt â MciQ*BERT ââ MC!Q*RERT-n0-IM so â MCIQ*BERT.noM ° 4 â corg*nerr GO'Q*BERT balances ztuu â MciQ*BERT aa » 0 1) MCIQ*BERT-no-IM y , | â Gororperr y â MciQ*BERT . MC1Q°BERF-no-IM : ° â GorgBeRr ° ludicorp deephome temple MCIQ°BERT MC1Q°BERT:no-IM â Gorg:BeRr â MciQ*BERT ââ MC1Q*BERF-noiM â corgBerr â McIQ*BERT = MC1Q*BERF-noM. ® â GorgBerr
Figure 6: Best initial reward curves for the exploration strategies.
C Zork1
Start here |y
Figure 7: Map of Zork1 annotated with rewards taken from Ammanabrolu and Hausknecht [3] and corresponding to the states and rewards found in Figure 2.
17 | {
"id": "2005.00811"
} |
2006.07185 | Grounding Language to Autonomously-Acquired Skills via Goal Generation | We are interested in the autonomous acquisition of repertoires of skills.
Language-conditioned reinforcement learning (LC-RL) approaches are great tools
in this quest, as they allow to express abstract goals as sets of constraints
on the states. However, most LC-RL agents are not autonomous and cannot learn
without external instructions and feedback. Besides, their direct language
condition cannot account for the goal-directed behavior of pre-verbal infants
and strongly limits the expression of behavioral diversity for a given language
input. To resolve these issues, we propose a new conceptual approach to
language-conditioned RL: the Language-Goal-Behavior architecture (LGB). LGB
decouples skill learning and language grounding via an intermediate semantic
representation of the world. To showcase the properties of LGB, we present a
specific implementation called DECSTR. DECSTR is an intrinsically motivated
learning agent endowed with an innate semantic representation describing
spatial relations between physical objects. In a first stage (G -> B), it
freely explores its environment and targets self-generated semantic
configurations. In a second stage (L -> G), it trains a language-conditioned
goal generator to generate semantic goals that match the constraints expressed
in language-based inputs. We showcase the additional properties of LGB w.r.t.
both an end-to-end LC-RL approach and a similar approach leveraging
non-semantic, continuous intermediate representations. Intermediate semantic
representations help satisfy language commands in a diversity of ways, enable
strategy switching after a failure and facilitate language grounding. | http://arxiv.org/pdf/2006.07185 | Ahmed Akakzia, Cédric Colas, Pierre-Yves Oudeyer, Mohamed Chetouani, Olivier Sigaud | cs.AI, cs.LG, stat.ML | Published at ICLR 2021 | null | cs.AI | 20200612 | 20210125 | 1 2 0 2
n a J 5 2 ] I A . s c [
3 v 5 8 1 7 0 . 6 0 0 2 : v i X r a
Published as a conference paper at ICLR 2021
# GROUNDING LANGUAGE TO AUTONOMOUSLY- ACQUIRED SKILLS VIA GOAL GENERATION
Ahmed Akakziaâ Sorbonne Universit´e [email protected]
C´edric Colasâ Inria [email protected]
Pierre-Yves Oudeyer Inria
Mohamed Chetouani Sorbonne Universit´e
Olivier Sigaud Sorbonne Universit´e
# ABSTRACT
We are interested in the autonomous acquisition of repertoires of skills. Language- conditioned reinforcement learning (LC-RL) approaches are great tools in this quest, as they allow to express abstract goals as sets of constraints on the states. However, most LC-RL agents are not autonomous and cannot learn without ex- ternal instructions and feedback. Besides, their direct language condition can- not account for the goal-directed behavior of pre-verbal infants and strongly lim- its the expression of behavioral diversity for a given language input. To resolve these issues, we propose a new conceptual approach to language-conditioned RL: the Language-Goal-Behavior architecture (LGB). LGB decouples skill learning and language grounding via an intermediate semantic representation of the world. To showcase the properties of LGB, we present a speciï¬c implementation called DECSTR. DECSTR is an intrinsically motivated learning agent endowed with an innate semantic representation describing spatial relations between physical ob- jects. In a ï¬rst stage (GâB), it freely explores its environment and targets self- generated semantic conï¬gurations. In a second stage (LâG), it trains a language- conditioned goal generator to generate semantic goals that match the constraints expressed in language-based inputs. We showcase the additional properties of LGB w.r.t. both an end-to-end LC-RL approach and a similar approach leverag- ing non-semantic, continuous intermediate representations. Intermediate seman- tic representations help satisfy language commands in a diversity of ways, enable strategy switching after a failure and facilitate language grounding.
1
# INTRODUCTION
Developmental psychology investigates the interactions between learning and developmental pro- cesses that support the slow but extraordinary transition from the behavior of infants to the sophis- ticated intelligence of human adults (Piaget, 1977; Smith & Gasser, 2005). Inspired by this line of thought, the central endeavour of developmental robotics consists in shaping a set of machine learning processes able to generate a similar growth of capabilities in robots (Weng et al., 2001; Lungarella et al., 2003). In this broad context, we are more speciï¬cally interested in designing learning agents able to: 1) explore open-ended environments and grow repertoires of skills in a self-supervised way and 2) learn from a tutor via language commands.
The design of intrinsically motivated agents marked a major step towards these goals. The Intrin- sically Motivated Goal Exploration Processes family (IMGEPs), for example, describes embodied agents that interact with their environment at the sensorimotor level and are endowed with the ability to represent and set their own goals, rewarding themselves over completion (Forestier et al., 2017). Recently, goal-conditioned reinforcement learning (GC-RL) appeared like a viable way to implement IMGEPs and target the open-ended and self-supervised acquisition of diverse skills.
# âEqual contribution.
1
Published as a conference paper at ICLR 2021
Goal-conditioned RL approaches train goal-conditioned policies to target multiple goals (Kaelbling, 1993; Schaul et al., 2015). While most GC-RL approaches express goals as target features (e.g. target block positions (Andrychowicz et al., 2017), agent positions in a maze (Schaul et al., 2015) or target images (Nair et al., 2018)), recent approaches started to use language to express goals, as language can express sets of constraints on the state space (e.g. open the red door) in a more abstract and interpretable way (Luketina et al., 2019).
However, most GC-RL approaches â and language-based ones (LC-RL) in particular â are not intrin- sically motivated and receive external instructions and rewards. The IMAGINE approach is one of the rare examples of intrinsically motivated LC-RL approaches (Colas et al., 2020). In any case, the lan- guage condition suffers from three drawbacks. 1) It couples skill learning and language grounding. Thus, it cannot account for goal-directed behaviors in pre-verbal infants (Mandler, 1999). 2) Direct conditioning limits the behavioral diversity associated to language input: a single instruction leads to a low diversity of behaviors only resulting from the stochasticity of the policy or the environment. 3) This lack of behavioral diversity prevents agents from switching strategy after a failure.
To circumvent these three limitations, one can decouple skill learning and language grounding via an intermediate innate semantic representation. On one hand, agents can learn skills by targeting conï¬gurations from the semantic representation space. On the other hand, they can learn to generate valid semantic conï¬gurations matching the constraints expressed by language instructions. This generation can be the backbone of behavioral diversity: a given sentence might correspond to a whole set of matching conï¬gurations. This is what we propose in this work.
Contributions. We propose a novel conceptual RL architecture, named LGB for Language-Goal- Behavior and pictured in Figure 1 (right). This LGB architecture enables an agent to decouple the intrinsically motivated acquisition of a repertoire of skills (Goals â Behavior) from language grounding (Language â Goals), via the use of semantic goal representation. To our knowledge, the LGB architecture is the only one to combine the following four features:
It is intrinsically motivated: it selects its own (semantic) goals and generates its own rewards, ⢠It decouples skill learning from language grounding, accounting for infants learning, ⢠It can exhibit a diversity of behaviors for any given instruction, ⢠It can switch strategy in case of failures.
Besides, we introduce an instance of LGB, named DECSTR for DEep sets and Curriculum with SemanTic goal Representations. Using DECSTR, we showcase the advantages of the conceptual In the skill learning phase, the DECSTR agent evolves in a manipulation envi- decoupling idea. ronment and leverages semantic representations based on predicates describing spatial relations be- tween physical objects. These predicates are known to be used by infants from a very young age (Mandler, 2012). DECSTR autonomously learns to discover and master all reachable conï¬gurations in its semantic representation space. In the language grounding phase, we train a Conditional Vari- ational Auto-Encoder (C-VAE) to generate semantic goals from language instructions. Finally, we can evaluate the agent in an instruction-following phase by composing the two ï¬rst phases. The experimental section investigates three questions: how does DECSTR perform in the three phases? How does it compare to end-to-end LC-RL approaches? Do we need intermediate representations to be semantic? Code and videos can be found at https://sites.google.com/view/decstr/.
# 2 RELATED WORK
Standard language-conditioned RL. Most approaches from the LC-RL literature deï¬ne instruc- tion following agents that receive external instructions and rewards (Hermann et al., 2017; Chan et al., 2019; Bahdanau et al., 2018; Cideron et al., 2019; Jiang et al., 2019; Fu et al., 2019), except the IMAGINE approach which introduced intrinsically motivated agents able to set their own goals and to imagine new ones (Colas et al., 2020). In both cases, the language-condition prevents the decoupling of language acquisition and skill learning, true behavioral diversity and efï¬cient strat- egy switching behaviors. Our approach is different, as we can decouple language acquisition from skill learning. The language-conditioned goal generation allows behavioral diversity and strategy switching behaviors.
2
Published as a conference paper at ICLR 2021
Language - Behavior (LB) Language - Goal - Behavior (LGB) Known or language-conditioned RL Semantic language embedding Goals language r OR embedding initial state semantic goal LG phase: language grounding (language > semantic goals) LB phase: instruction following (language â> behavior) (language -> semantic goals + behavior)
Figure 1: A standard language-conditioned RL architecture (left) and our proposed LGB architecture (right).
Goal-conditioned RL with target coordinates for block manipulation. Our proposed imple- mentation of LGB, called DECSTR, evolves in a block manipulation domain. Stacking blocks is one of the earliest benchmarks in artiï¬cial intelligence (e.g. Sussman (1973); Tate et al. (1975)) and has led to many simulation and robotics studies (Deisenroth et al., 2011; Xu et al., 2018; Colas et al., 2019a). Recently, Lanier et al. (2019) and Li et al. (2019) demonstrated impressive results by stack- ing up to 4 and 6 blocks respectively. However, these approaches are not intrinsically motivated, involve hand-deï¬ned curriculum strategies and express goals as speciï¬c target block positions. In contrast, the DECSTR agent is intrinsically motivated, builds its own curriculum and uses semantic goal representations (symbolic or language-based) based on spatial relations between blocks.
Decoupling language acquisition and skill learning. Several works investigate the use of seman- tic representations to associate meanings and skills (Alomari et al., 2017; Tellex et al., 2011; Kulick et al., 2013). While the two ï¬rst use semantic representations as an intermediate layer between lan- guage and skills, the third one does not use language. While DECSTR acquires skills autonomously, previous approaches all use skills that are either manually generated (Alomari et al., 2017), hand- engineered (Tellex et al., 2011) or obtained via optimal control methods (Kulick et al., 2013). Closer to us, Lynch & Sermanet (2020) also decouple skill learning from language acquisition in a goal- conditioned imitation learning paradigm by mapping both language goals and images goals to a shared representation space. However, this approach is not intrinsically motivated as it relies on a dataset of human tele-operated strategies. The deterministic merging of representations also limits the emergence of behavioral diversity and efï¬cient strategy-switching behaviors.
# 3 METHODS
This section presents our proposed Language-Goal-Behavior architecture (LGB) represented in Fig- ure 1 (Section 3.1) and a particular instance of the LGB architecture called DECSTR. We ï¬rst present the environment it is set in [3.2], then describe the implementations of the three modules com- posing any LGB architecture: 1) the semantic representation [3.3]; 2) the intrinsically motivated goal-conditioned algorithm [3.4] and 3) the language-conditioned goal generator [3.5]. We ï¬nally present how the three phases described in Figure 1 are evaluated [3.6].
3.1
# THE LANGUAGE-GOAL-BEHAVIOR ARCHITECTURE
The LGB architecture is composed of three main modules. First, the semantic representation deï¬nes the behavioral and goal spaces of the agent. Second, the intrinsically motivated GC-RL algorithm is in charge of the skill learning phase. Third, the language-conditioned goal generator is in charge of the language grounding phase. Both phases can be combined in the instruction following phase. The three phases are respectively called GâB for Goal â Behavior, LâG for Language â Goal and LâGâB for Language â Goal â Behavior, see Figure 1 and Appendix A. Instances of the LGB architecture should demonstrate the four properties listed in the introduction: 1) be intrinsically motivated; 2) decouple skill learning and language grounding (by design); 3) favor behavioral di- versity; 4) allow strategy switching. We argue that any LGB algorithm should fulï¬ll the following constraints. For LGB to be intrinsically motivated (1), the algorithm needs to integrate the generation
3
Published as a conference paper at ICLR 2021
and selection of semantic goals and to generate its own rewards. For LGB to demonstrate behavioral diversity and strategy switching (3, 4), the language-conditioned goal generator must efï¬ciently model the distribution of semantic goals satisfying the constraints expressed by any language input.
# 3.2 ENVIRONMENT
The DECSTR agent evolves in the Fetch Manipulate environment: a robotic manipulation domain based on MUJOCO (Todorov et al., 2012) and derived from the Fetch tasks (Plappert et al., 2018), see Figure 2. Actions are 4-dimensional: 3D gripper velocities and grasping velocity. Observations include the Cartesian and angular positions and velocities of the gripper and the three blocks. In- spired by the framework of Zone of Proximal Development that de- scribes how parents organize the learning environment of their chil- dren (Vygotsky, 1978), we let a social partner facilitate DECSTRâs exploration by providing non-trivial initial conï¬gurations. After a ï¬rst period of autonomous exploration, the social partner initializes the scene with stacks of 2 blocks 21% of times, stacks of 3 blocks 9% of times, and a block is initially put in the agentâs gripper 50% of times. This help is not provided during ofï¬ine evaluations.
3.3 SEMANTIC REPRESENTATION
Semantic predicates deï¬ne the behavioral space. Deï¬ning the list of semantic predicates is deï¬ning the dimensions of the behavioral space explored by the agent. It replaces the traditional deï¬nition of goal spaces and their associated reward functions. We believe it is for the best, as it does not require the engineer to fully predict all possible behaviors within that space, to know which behaviors can be achieved and which ones cannot, nor to deï¬ne reward functions for each of them.
Semantic predicates in DECSTR. We assume the DECSTR agent to have access to innate seman- tic representations based on a list of predicates describing spatial relations between pairs of objects in the scene. We consider two of the spatial predicates infants demonstrate early in their develop- ment (Mandler, 2012): the close and the above binary predicates. These predicates are applied to all permutations of object pairs for the 3 objects we consider: 6 permutations for the above predicate and 3 combinations for the close predicate due to its order-invariance. A semantic conï¬guration is the concatenation of the evaluations of these 9 predicates and represents spatial relations between objects in the scene. In the resulting semantic conï¬guration space {0, 1}9, the agent can reach 35 physically valid conï¬gurations, including stacks of 2 or 3 blocks and pyramids, see examples in Figure 2. The binary reward function directly derives from the semantic mapping: the agent rewards itself when its current conï¬guration cp matches the goal conï¬guration cp = g. Appendix B provides formal deï¬nitions and properties of predicates and semantic conï¬gurations.
INTRINSICALLY MOTIVATED GOAL-CONDITIONED REINFORCEMENT LEARNING
This section describes the implementation of the intrinsically motivated goal-conditioned RL module in DECSTR. It is powered by the Soft-Actor Critic algorithm (SAC) (Haarnoja et al., 2018) that takes as input the current state, the current semantic conï¬guration and the goal conï¬guration, for both the critic and the policy. We use Hindsight Experience Replay (HER) to facilitate transfer between goals (Andrychowicz et al., 2017). DECSTR samples goals via its curriculum strategy, collects experience in the environment, then performs policy updates via SAC. This section describes two particularities the self-generated goal selection curriculum and the object-centered of our RL implementation: network architectures. Implementation details and hyperparameters can be found in Appendix C.
Goal selection and curriculum learning. The DECSTR agent can only select goals among the set of semantic conï¬gurations it already experienced. We use an automatic curriculum strategy (Portelas et al., 2020) inspired from the CURIOUS algorithm (Colas et al., 2019a). The DECSTR agent tracks aggregated estimations of its competence (C) and learning progress (LP). Its selection of goals to target during data collection and goals to learn about during policy updates (via HER) is biased towards goals associated with high absolute LP and low C.
4
Published as a conference paper at ICLR 2021
Automatic bucket generation. To facilitate robust estimation, LP is usually estimated on sets of goals with similar difï¬culty or similar dynamics (Forestier et al., 2017; Colas et al., 2019a). While previ- ous works leveraged expert-deï¬ned goal buckets, we cluster goals based on their time of discovery, as the time of discovery is a good proxy for goal difï¬culty: easier goals are discovered earlier. Buckets are initially empty (no known conï¬gurations). When an episode ends in a new conï¬gura- tion, the Nb = 5 buckets are updated. Buckets are ï¬lled equally and the ï¬rst buckets contain the conï¬gurations discovered earlier. Thus goals change buckets as new goals are discovered.
Tracking competence, learning progress and sampling probabilities. Regularly, the DECSTR agent evaluates itself on goal conï¬gurations sampled uniformly from the set of known ones. For each bucket, it tracks the recent history of past successes and failures when targeting the corresponding goals (last W = 1800 self-evaluations). C is estimated as the success rate over the most recent half of that history C = Crecent. LP is estimated as the difference between Crecent and the one evaluated over the ï¬rst half of the history (Cearlier). This is a crude estimation of the derivative of the C curve w.r.t. time: LP = Crecent - Cearlier. The sampling probability Pi for bucket i is:
_ (1-0) # [LPI SGC) LP)
In addition to the usual LP bias (Colas et al., 2019a), this formula favors lower C when LP is similar. The absolute value ensures resampling buckets whose performance decreased (e.g. forgetting).
Object-centered architecture. Instead of fully-connected or recurrent networks, DECSTR uses for the policy and critic an object-centered architecture similar to the ones used in Colas et al. (2020); Karch et al. (2020), adapted from Deep-Sets (Zaheer et al., 2017). For each pair of objects, a shared network independently encodes the concatenation of body and objects features and current and target semantic conï¬gurations, see Appendix Figure 4. This shared network ensures efï¬cient transfer of skills between pairs of objects. A second inductive bias leverages the symmetry of the behavior required to achieve above(oi, oj) and above(oj, oi). To ensure automatic transfer between the two, we present half of the features (e.g. those based on pairs (oi, oj) where i < j) with goals containing one side of the symmetry (all above(oi, oj) for i < j) and the other half with the goals containing the other side (all above(oj, oi) for i < j). As a result, the above(oi, oj) predicates fall into the same slot of the shared network inputs as their symmetric counterparts above(oj, oi), only with different permutations of object pairs. Goals are now of size 6: 3 close and 3 above predicates, corresponding to one side of the above symmetry. Skill transfer between symmetric predicates are automatically ensured. Appendix C.1 further describes these inductive biases and our modular architecture.
3.5
# LANGUAGE-CONDITIONED GOAL GENERATION
The language-conditioned goal generation module (LGG) is a generative model of semantic repre- sentations conditioned by language inputs. It is trained to generate semantic conï¬gurations matching the agentâs initial conï¬guration and the description of a change in one object-pair relation.
A training dataset is collected via interactions between a DECSTR agent trained in phase GâB and a social partner. DECSTR generates semantic goals and pursues them. For each trajectory, the social partner provides a description d of one change in objects relations from the initial conï¬guration ci to the ï¬nal one cf . The set of possible descriptions contains 102 sentences, each describing, in a simpliï¬ed language, a positive or negative shift for one of the 9 predicates (e.g. get red above green). This leads to a dataset D of 5000 triplets: (ci, d, cf ). From this dataset, the LGG is learned using a conditional Variational Auto-Encoder (C-VAE) (Sohn et al., 2015). Inspired by the context- conditioned goal generator from Nair et al. (2019), we add an extra condition on language instruction to improve control on goal generation. The conditioning instruction is encoded by a recurrent net- work that is jointly trained with the VAE via a mixture of Kullback-Leibler and cross-entropy losses. Appendix C.2 provides the list of sentences and implementation details. By repeatedly sampling the LGG, a set of goals is built for any language input. This enables skill diversity and strategy switching: if the agent fails, it can sample another valid goal to fulï¬ll the instruction, effectively switching strategy. This also enables goal combination using logical functions of instructions: and is an intersection, or is an union and not is the complement within the known set of goals.
5
Published as a conference paper at ICLR 2021
3.6
EVALUATION OF THE THREE LGB PHASES
Skill learning phase GâB: DECSTR explores its semantic representation space, discovers achiev- able conï¬gurations and learns to reach them. Goal-speciï¬c performance is evaluated ofï¬ine across learning as the success rate (SR) over 20 repetitions for each goal. The global performance SR is measured across either the set of 35 goals or discovery-organized buckets of goals, see Section 3.4.
Language grounding phase LâG: DECSTR trains the LGG to generate goals matching constraints expressed via language inputs. From a given initial conï¬guration and a given instruction, the LGG should generate all compatible ï¬nal conï¬gurations (goals) and just these. This is the source of behavioral diversity and strategy switching behaviors. To evaluate LGG, we construct a synthetic, oracle dataset O of triplets (ci, d, Cf (ci, d)), where Cf (ci, d) is the set of all ï¬nal conï¬gurations compatible with (ci, d). On average, Cf in O contains 16.7 conï¬gurations, while the training dataset D only contains 3.4 (20%). We are interested in two metrics: 1) The Precision is the probability that a goal sampled from the LGG belongs to Cf (true positive / all positive); 2) The Recall is percentage of elements from Cf that were found by sampling the LGG 100 times (true positive / all true). These metrics are computed on 5 different subsets of the oracle dataset, each calling for a different type of generalization (see full lists of instructions in Appendix C.2):
1. Pairs found in D, except pairs removed to form the following test sets. This calls for the extrap- olation of known initialization-effect pairs (ci, d) to new ï¬nal conï¬gurations cf (D contains only 20% of Cf on average).
2. Pairs that were removed from D, calling for a recombination of known effects d on known ci. 3. Pairs for which the ci was entirely removed from D. This calls for the transfer of known effects
d on unknown ci.
4. Pairs for which the d was entirely removed from D. This calls for generalization in the language space, to generalize unknown effects d from related descriptions and transpose this to known ci. 5. Pairs for which both the ci and the d were entirely removed from D. This calls for the general-
izations 3 and 4 combined.
Instruction following phase LâGâB: DECSTR is instructed to modify an object relation by one of the 102 sentences. Conditioned on its current conï¬guration and instruction, it samples a compati- ble goal from the LGG, then pursues it with its goal-conditioned policy. We consider three evaluation settings: 1) performing a single instruction; 2) performing a sequence of instructions without failure; 3) performing a logical combination of instructions. The transition setup measures the success rate of the agent when asked to perform the 102 instructions 5 times each, resetting the environment each time. In the expression setup, the agent is evaluated on 500 randomly generated logical functions of sentences, see the generation mechanism in Appendix C.2. In both setups, we evaluate the perfor- mance in 1-shot (SR1) and 5-shot (SR5) settings. In the 5-shot setting, the agent can perform strategy switching, to sample new goals when previous attempts failed (without reset). In the sequence setup, the agent must execute 20 sequences of random instructions without reset (5-shot). We also test be- havioral diversity. We ask DECSTR to follow each of the 102 instructions 50 times each and report the number of different achieved conï¬gurations.
# 4 EXPERIMENTS
Our experimental section investigates three questions: [4.1]: How does DECSTR perform in the three phases? [4.2]: How does it compare to end-to-end language-conditioned approaches? [4.3]: Do we need intermediate representations to be semantic?
4.1 HOW DOES DECSTR PERFORM IN THE THREE PHASES?
This section presents the performance of the DECSTR agent in the skill learning, language grounding, and instruction following phases.
Skill learning phase GâB: Figure 3 shows that DECSTR successfully masters all reachable con- ï¬gurations in its semantic representation space. Figure 3a shows the evolution of SR computed per bucket. Buckets are learned in increasing order, which conï¬rms that the time of discovery is a good
6
Published as a conference paper at ICLR 2021
proxy for difï¬culty. Figure 3b reports C, LP and sampling probabilities P computed online using self-evaluations for an example agent. The agent leverages these estimations to select its goals: ï¬rst focusing on the easy goals from bucket 1, it moves on towards harder and harder buckets as easier ones are mastered (low LP, high C). Figure 3c presents the results of ablation studies. Each condition removes one component of DECSTR: 1) Flat replaces our object-centered modular architectures by ï¬at ones; 2) w/o Curr. replaces our automatic curriculum strategy by a uniform goal selection; 3) w/o Sym. does not use the symmetry inductive bias; 4) In w/o SP, the social partner does not provide non-trivial initial conï¬gurations. In the Expert buckets condition, the curriculum strategy is applied on expert-deï¬ned buckets, see Appendix D.1. The full version of LGB performs on par with the Expert buckets oracle and outperforms signiï¬cantly all its ablations. Appendix E.3 presents more examples of learning trajectories, and dissects the evolution of bucket compositions along training.
(a) (b) (c)
Figure 3: Skill Learning: (a) SR per bucket. (b): C, LP and P estimated by a DECSTR agent. (c): ablation study. Medians and interquartile ranges over 10 seeds for DECSTR and 5 seeds for others in (a) and (c). Stars indicate signiï¬cant differences to DECSTR as reported by Welchâs t-tests with α = 0.05 (Colas et al., 2019b).
Table 1: LâG phase. Metrics are averaged over 10 seeds, stdev < 0.06 and 0.07 respectively. Table 2: LâGâB phase. Mean ± stdev over 10 seeds.
Metrics Precision Recall Test 1 0.97 0.93 Test 2 0.93 0.94 Test 3 0.98 0.95 Test 4 0.99 0.90 Test 5 0.98 0.92 Metr. SR1 SR5 Transition 0.89 ± 0.05 0.99 ± 0.01 Expression 0.74 ± 0.08 0.94 ± 0.06
Language grounding phase LâG: The LGG demonstrates the 5 types of generalization from Table 1. From known conï¬gurations, agents can generate more goals than they observed in training data (1, 2). They can do so from new initial conï¬gurations (3). They can generalize to new sentences (4) and even to combinations of new sentences and initial conï¬gurations (5). These results assert that DECSTR generalizes well in a variety of contexts and shows good behavioral diversity.
Instruction following phase LâGâB: Table 2 presents the 1-shot and 5-shot results in the transi- tion and expression setups. In the sequence setups, DECSTR succeeds in L = 14.9 ± 5.7 successive instructions (mean±stdev over 10 seeds). These results conï¬rm efï¬cient language grounding. DEC- STR can follow instructions or sequences of instructions and generalize to their logical combinations. Strategy switching improves performance (SR5 - SR1). DECSTR also demonstrates strong behavioral diversity: when asked over 10 seeds to repeat 50 times the same instruction, it achieves at least 7.8 different conï¬gurations, 15.6 on average and up to 23 depending on the instruction.
4.2 DO WE NEED AN INTERMEDIATE REPRESENTATION?
This section investigates the need for an intermediate semantic representation. To this end, we in- troduce an end-to-end LC-RL baseline directly mapping Language to Behavior (LâB) and compare its performance with DECSTR in the instruction following phase (LâGâB).
The LB baseline. To limit the introduction of confounding factors and under-tuning concerns, we base this implementation on the DECSTR code and incorporate deï¬ning features of IMAGINE, a state- of-the-art language conditioned RL agent (Colas et al., 2020). We keep the same HER mechanism, object-centered architectures and RL algorithm as DECSTR. We just replace the semantic goal space
7
Published as a conference paper at ICLR 2021
by the 102 language instructions. This baseline can be seen as an oracle version of the IMAGINE algorithm where the reward function is assumed perfect, but without the imagination mechanism.
Comparison in the instruction following phase LâB vs LâGâB: After training the LB baseline for 14K episodes, we compare its performance to DECSTRâs in the instruction-following setup. In the transition evaluation setup, LB achieves SR1 = 0.76±0.001: it always manages to move blocks close to or far from each other, but consistently fails to stack them. Adding more attempts does not help: SR5 = 0.76 ± 0.001. The LB baseline cannot be evaluated in the expression setup because it does not manipulate goal sets. Because it cannot stack blocks, LB only succeeds in 3.01 ± 0.43 random instructions in a row, against 14.9 for DECSTR (sequence setup). We then evaluate LBâs diversity on the set of instructions it succeeds in. When asked to repeat 50 times the same instruction, it achieves at least 3.0 different conï¬gurations, 4.2 on average and up to 5.2 depending on the instruction against 7.8, 17.1, 23 on the same set of instructions for DECSTR. We did not observe strategy-switching behaviors in LB, because it either always succeeds (close/far instructions) or fails (stacks).
Conclusion. The introduction of an intermediate semantic representation helps DECSTR decou- ple skill learning from language grounding which, in turns, facilitates instruction-following when compared to the end-to-end language-conditioned learning of LB. This leads to improved scores in the transition and sequence setups. The direct language-conditioning of LB prevents the general- ization to logical combination and leads to a reduced diversity in the set of mastered instructions. Decoupling thus brings signiï¬cant beneï¬ts to LGB architectures.
4.3 DO WE NEED A SEMANTIC INTERMEDIATE REPRESENTATION?
This section investigates the need for the intermediate representation to be semantic. To this end, we introduce the LGB-C baseline that leverages continuous goal representations in place of semantic ones. We compare them on the two ï¬rst phases.
The LGB-C baseline. The LGB-C baseline uses continuous goals expressing target block coor- dinates in place of semantic goals. The skill learning phase is thus equivalent to traditional goal- conditioned RL setups in block manipulation tasks (Andrychowicz et al., 2017; Colas et al., 2019a; Li et al., 2019; Lanier et al., 2019). Starting from the DECSTR algorithm, LGB-C adds a translation module that samples a set of target block coordinates matching the targeted semantic conï¬guration which is then used as the goal input to the policy. In addition, we integrate deï¬ning features of the state-of-the-art approach from Lanier et al. (2019): non-binary rewards (+1 for each well placed block) and multi-criteria HER, see details in Appendix D.2.
Comparison in skill learning phase GâB: The LGB-C baseline successfully learns to discover and master all 35 semantic conï¬gurations by placing the three blocks to randomly-sampled target coordinates corresponding to these conï¬gurations. It does so faster than DECSTR: 708 · 103 episodes to reach SR= 95%, against 1238 · 103 for DECSTR, see Appendix Figure 6. This can be explained by the denser learning signals it gets from using HER on continuous targets instead of discrete ones. In this phase, however, the agent only learns one parameterized skill: to place blocks at their target po- sition. It cannot build a repertoire of semantic skills because it cannot discriminate between different block conï¬gurations. Looking at the sum of the distances travelled by the blocks or the completion time, we ï¬nd that DECSTR performs opportunistic goal reaching: it ï¬nds simpler conï¬gurations of the blocks which satisfy its semantic goals compared to LGB-C. Blocks move less (âdist = 26 ± 5 cm), and goals are reached faster (âsteps = 13±4, mean±std across goals with p-values > 1.3·10â5 and 3.2 · 10â19 respectively).
Table 3: LGB-C performance in the LâG phase. Mean over 10 seeds. Stdev < 0.003 and 0.008 respectively.
Metrics Precision Recall Test 1 0.66 0.05 Test 2 0.78 0.02 Test 3 0.39 0.06 Test 4 0.0 0.0 Test 5 0.0 0.0
Comparison in language grounding phase LâG: We train the LGG to generate continuous tar- get coordinates conditioned on language inputs with a mean-squared loss and evaluate it in the same
8
Published as a conference paper at ICLR 2021
setup as DECSTRâs LGG, see Table 3. Although it maintains reasonable precision in the ï¬rst two testing sets, the LGG achieves low recall â i.e. diversity â on all sets. The lack of semantic represen- tations of skills might explain the difï¬culty of training a language-conditioned goal generator.
Conclusion. The skill learning phase of the LGB-C baseline is competitive with the one of DECSTR. However, the poor performance in the language grounding phase prevents this baseline to perform instruction following. For this reason, and because semantic representations enable agents to perform opportunistic goal reaching and to acquire repertoires for semantic skills, we believe the semantic representation is an essential part of the LGB architecture.
# 5 DISCUSSION AND CONCLUSION
This paper contributes LGB, a new conceptual RL architecture which introduces an intermediate se- mantic representation to decouple sensorimotor learning from language grounding. To demonstrate its beneï¬ts, we present DECSTR, a learning agent that discovers and masters all reachable conï¬gu- rations in a manipulation domain from a set of relational spatial primitives, before undertaking an efï¬cient language grounding phase. This was made possible by the use of object-centered inductive biases, a new form of automatic curriculum learning and a novel language-conditioned goal genera- tion module. Note that our main contribution is in the conceptual approach, DECSTR being only an instance to showcase its beneï¬ts. We believe that this approach could beneï¬t from any improvement in GC-RL (for skill learning) or generative models (for language grounding).
Semantic representations. Results have shown that using predicate-based representations was sufï¬cient for DECSTR to efï¬ciently learn abstract goals in an opportunistic manner. The proposed semantic conï¬gurations showcase promising properties: 1) they reduce the complexity of block manipulation where most effective works rely on a heavy hand-crafted curriculum (Li et al., 2019; Lanier et al., 2019) and a speciï¬c curiosity mechanism (Li et al., 2019); 2) they facilitate the ground- ing of language into skills and 3) they enable decoupling skill learning from language grounding, as observed in infants (Piaget, 1977). The set of semantic predicates is, of course, domain-dependent as it characterizes the space of behaviors that the agent can explore. However, we believe it is easier and requires less domain knowledge to deï¬ne the set of predicates, i.e. the dimensions of the space of potential goals, than it is to craft a list of goals and their associated reward functions.
A new approach to language grounding. The approach proposed here is the ï¬rst simultaneously enabling to decouple skill learning from language grounding and fostering a diversity of possible be- haviors for given instructions. Indeed, while an instruction following agent trained on goals like put red close to green would just push the red block towards the green one, our agent can generate many matching goal conï¬gurations. It could build a pyramid, make a blue-green-red pile or target a dozen other compatible conï¬gurations. This enables it to switch strategy, to ï¬nd alternative approaches to satisfy a same instruction when ï¬rst attempts failed. Our goal generation module can also gen- eralize to new sentences or transpose instructed transformations to unknown initial conï¬gurations. Finally, with the goal generation module, the agent can deal with any logical expression made of instructions by combining generated goal sets. It would be of interest to simultaneously perform language grounding and skill learning, which would result in âoverlapping wavesâ of sensorimotor and linguistic development (Siegler, 1998).
Semantic conï¬gurations of variable size. Considering a constant number of blocks and, thus, ï¬xed-size conï¬guration spaces is a current limit of DECSTR. Future implementations of LGB may handle inputs of variable sizes by leveraging Graph Neural Networks as in Li et al. (2019). Corre- sponding semantic conï¬gurations could be represented as a set of vectors, each encoding informa- tion about a predicate and the objects it applies to. These representations could be handled by Deep Sets (Zaheer et al., 2017). This would allow to target partial sets of predicates that would not need to characterize all relations between all objects, facilitating scalability.
Conclusion In this work, we have shown that introducing abstract goals based on relational predi- cates that are well understood by humans can serve as a pivotal representation between skill learning and interaction with a user through language. Here, the role of the social partner was limited to: 1)
9
Published as a conference paper at ICLR 2021
helping the agent to experience non-trivial conï¬gurations and 2) describing the agentâs behavior in a simpliï¬ed language. In the future, we intend to study more intertwined skill learning and language grounding phases, making it possible to the social partner to teach the agent during skill acquisition.
ACKNOWLEDGMENTS
This work was performed using HPC resources from GENCI-IDRIS (Grant 20XX-AP010611667), the MeSU platform at Sorbonne-Universit´e and the PlaFRIM experimental testbed. C´edric Colas is partly funded by the French Minist`ere des Arm´ees - Direction G´en´erale de lâArmement.
10
Published as a conference paper at ICLR 2021
# REFERENCES
Muhannad Alomari, Paul Duckworth, David C Hogg, and Anthony G Cohn. Natural language In Thirty-First AAAI Conference on acquisition and grounding for embodied robotic systems. Artiï¬cial Intelligence, 2017.
Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, Pieter Abbeel, and Wojciech Zaremba. Hindsight Experience Replay. arXiv preprint arXiv:1707.01495, 2017.
Dzmitry Bahdanau, Felix Hill, Jan Leike, Edward Hughes, Arian Hosseini, Pushmeet Kohli, and Edward Grefenstette. Learning to understand goal speciï¬cations by modelling reward. arXiv preprint arXiv:1806.01946, 2018.
Harris Chan, Yuhuai Wu, Jamie Kiros, Sanja Fidler, and Jimmy Ba. Actrce: Augmenting experience via teacherâs advice for multi-goal reinforcement learning. arXiv preprint arXiv:1902.04546, 2019.
Geoffrey Cideron, Mathieu Seurin, Florian Strub, and Olivier Pietquin. guage agent with hindsight experience replay for instruction following. arXiv:1910.09451, 2019. Self-educated lan- arXiv preprint
C´edric Colas, Pierre-Yves Oudeyer, Olivier Sigaud, Pierre Fournier, and Mohamed Chetouani. CU- RIOUS: Intrinsically motivated multi-task, multi-goal reinforcement learning. In International Conference on Machine Learning (ICML), pp. 1331â1340, 2019a.
C´edric Colas, Olivier Sigaud, and Pierre-Yves Oudeyer. A hitchhikerâs guide to statistical compar- isons of reinforcement learning algorithms. arXiv preprint arXiv:1904.06979, 2019b.
C´edric Colas, Tristan Karch, Nicolas Lair, Jean-Michel Dussoux, Cl´ement Moulin-Frier, Peter Ford Dominey, and Pierre-Yves Oudeyer. Language as a cognitive tool to imagine goals in curiosity- driven exploration. arXiv preprint arXiv:2002.09253, 2020.
Marc Peter Deisenroth, Carl Edward Rasmussen, and Dieter Fox. Learning to control a low-cost manipulator using data-efï¬cient reinforcement learning. Robotics: Science and Systems VII, pp. 57â64, 2011.
S´ebastien Forestier, Yoan Mollard, and Pierre-Yves Oudeyer. Intrinsically motivated goal explo- ration processes with automatic curriculum learning. arXiv preprint arXiv:1708.02190, 2017.
From language to goals: Inverse reinforcement learning for vision-based instruction following. arXiv preprint arXiv:1902.07742, 2019.
Soft actor-critic: Off- policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, et al. Grounded lan- guage learning in a simulated 3d world. arXiv preprint arXiv:1706.06551, 2017.
Yiding Jiang, Shixiang Shane Gu, Kevin P Murphy, and Chelsea Finn. Language as an abstrac- tion for hierarchical deep reinforcement learning. In Advances in Neural Information Processing Systems, pp. 9414â9426, 2019.
Leslie Pack Kaelbling. Learning to achieve goals. In IJCAI, pp. 1094â1099, 1993.
Tristan Karch, C´edric Colas, Laetitia Teodorescu, Cl´ement Moulin-Frier, and Pierre-Yves Oudeyer. Deep sets for generalization in RL. arXiv preprint arXiv:2003.09443, 2020.
Johannes Kulick, Marc Toussaint, Tobias Lang, and Manuel Lopes. Active learning for teaching a robot grounded relational symbols. In Twenty-Third International Joint Conference on Artiï¬cial Intelligence, 2013.
11
Published as a conference paper at ICLR 2021
John B. Lanier, Stephen McAleer, and Pierre Baldi. Curiosity-driven multi-criteria hindsight experi- ence replay. CoRR, abs/1906.03710, 2019. URL http://arxiv.org/abs/1906.03710.
Richard Li, Allan Jabri, Trevor Darrell, and Pulkit Agrawal. Towards practical multi-object manip- ulation using relational reinforcement learning. arXiv preprint arXiv:1912.11032, 2019.
Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefen- stette, Shimon Whiteson, and Tim Rockt¨aschel. A survey of reinforcement learning informed by natural language. arXiv preprint arXiv:1906.03926, 2019.
Max Lungarella, Giorgio Metta, Rolf Pfeifer, and Giulio Sandini. Developmental robotics: a survey. Connection Science, 15(4):151â190, 2003.
Corey Lynch and Pierre Sermanet. Grounding language in play. arXiv preprint arXiv:2005.07648, 2020.
Jean M. Mandler. Preverbal representation and language. Language and space, pp. 365, 1999.
Jean M Mandler. On the spatial foundations of the conceptual system and its enrichment. Cognitive science, 36(3):421â451, 2012.
Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, and Sergey Levine. Contextual imagined goals for self-supervised robotic learning. arXiv preprint arXiv:1910.11670, 2019.
Ashvin V Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Vi- sual reinforcement learning with imagined goals. In Advances in Neural Information Processing Systems, pp. 9191â9200, 2018.
Jean Piaget. The development of thought: Equilibration of cognitive structures. Viking, 1977. (Trans A. Rosin).
Matthias Plappert, Marcin Andrychowicz, Alex Ray, Bob McGrew, Bowen Baker, Glenn Pow- ell, Jonas Schneider, Josh Tobin, Maciek Chociej, Peter Welinder, et al. Multi-goal reinforce- ment learning: Challenging robotics environments and request for research. arXiv preprint arXiv:1802.09464, 2018.
R´emy Portelas, C´edric Colas, Lilian Weng, Katja Hofmann, and Pierre-Yves Oudeyer. Automatic curriculum learning for deep rl: A short survey. arXiv preprint arXiv:2003.04664, 2020.
Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approxima- tors. In International Conference on Machine Learning, pp. 1312â1320, 2015.
Robert S Siegler. Emerging minds: The process of change in childrenâs thinking. Oxford University Press, 1998.
Linda Smith and Michael Gasser. The development of embodied cognition: Six lessons from babies. Artiï¬cial life, 11(1-2):13â29, 2005.
Kihyuk Sohn, Honglak Lee, and Xinchen Yan. Learning structured output representation using deep conditional generative models. In Advances in neural information processing systems, pp. 3483â3491, 2015.
Gerald J. Sussman. A computational model of skill acquisition. Technical report, HIT Technical Report AI TR-297, 1973.
Austin Tate et al. Interacting goals and their use. In IJCAI, volume 10, pp. 215â218, 1975.
Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. Approaching the symbol grounding problem with probabilistic graphi- cal models. AI magazine, 32(4):64â76, 2011.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026â5033. IEEE, 2012.
12
Published as a conference paper at ICLR 2021
In Mind in Society, chapter Tool and Symbol in Child Development, pp. 19â30. Harvard University Press, 1978. ISBN 0674576292. doi: 10.2307/j.ctvjf9vz4.6.
J. Weng, J. McClelland, A. Pentland, O. Sporns, I. Stockman, M. Sur, and E. Thelen. Autonomous mental development by robots and animals. Science, 291(5504):599â600, 2001.
Danfei Xu, Suraj Nair, Yuke Zhu, Julian Gao, Animesh Garg, Li Fei-Fei, and Silvio Savarese. Neural task programming: Learning to generalize across hierarchical tasks. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1â8. IEEE, 2018.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in neural information processing systems, pp. 3391â 3401, 2017.
13
Published as a conference paper at ICLR 2021
# A LGB PSEUDO-CODE
Algorithm 1 and 2 present the high-level pseudo-code of any algorithm following the LGB architec- ture for each of the three phases.
Algorithm 1 LGB architecture Algorithm 2 LGB architecture GB phase LâG and L+GâB phases > Goal â Behavior phase > Language â Goal phase 1: Require Env E 1: Require I, £, G',, social partner SP 2: Initialize policy I, goal sampler Gz, 2: Initialize language goal generator LGG buffer B 3: dataset + SP.interact(L, I, G,) 3: loop 4: LGG update(dataset) 4: g + G,.sample() 5: return LGG 5: (8,4, 8â, 9, Cp, â¬)trai <â E-rollout(g) > Language â Behavior phase 6: G.update(c) ) 6: Require E, 1, LGG, SP 7: Buupdate((s,a, 8", 9, ¢p,C, raj 7: loop 8: so eee 058s: Con Cpe) 8: instr. < SP listen() : : 9: loop > Strategy switching loop 1b. return II, G', 10: g â LGG-sample(instr., c°) iL. I: cp & E.rollout(g) 1D: 12: if g == c} then break
# B SEMANTIC PREDICATES AND APPLICATION TO FETCH MANIPULATE
In this paper, we restrict the semantic representations to the use of the close and above binary pred- icates applied to M = 3 objects. The resulting semantic conï¬gurations are formed by:
cp = [c(o1, o2), c(o1, o3), c(o2, o3), a(o1, o2), a(o2, o1), a(o1, o3), a(o3, o1), a(o2, o3), a(o3, o2)],
where c() and a() refer to the close and above predicates respectively and (o1, o2, o3) are the red, green and blue blocks respectively.
Symmetry and asymmetry of close and above predicates. We consider objects o1 and o2.
⢠close is symmetric: âo1 is close to o2â â âo2 is close to o1â. The corresponding semantic mapping function is based on the Euclidean distance, which is symmetric.
⢠above is asymmetric: âo1 is above o2â â not âo2 is above o1â. The corresponding semantic mapping function evaluates the sign of the difference of the object Z-axis coordinates.
# C THE DECSTR ALGORITHM
C.1
# INTRINSICALLY MOTIVATED GOAL-CONDITIONED RL
Overview. Algorithm 3 presents the pseudo-code of the sensorimotor learning phase (GâB) of DECSTR. It alternates between two steps:
⢠Data acquisition. A DECSTR agent has no prior on the set of reachable semantic conï¬g- urations. Its ï¬rst goal is sampled uniformly from the semantic conï¬guration space. Using this goal, it starts interacting with its environment, generating trajectories of sensory states s, actions a and conï¬gurations cp. The last conï¬guration cT p achieved in the episode after T time steps is considered stable and is added to the set of reachable conï¬gurations. As it in- teracts with the environment, the agent explores the conï¬guration space, discovers reachable conï¬gurations and selects new targets.
⢠Internal models updates. A DECSTR agent updates two models: its curriculum strategy and its policy. The curriculum strategy can be seen as an active goal sampler. It biases the selection of goals to target and goals to learn about. The policy is the module controlling the agentâs behavior and is updated via RL.
14
Published as a conference paper at ICLR 2021
Algorithm 3 DECSTR: sensorimotor phase GâB. 1: Require: env E, # buckets Nb, # episodes before biased init. nunb, self-evaluation probability
pself eval, noise function Ï()
2: Initialize: policy Î , buffer B, goal sampler Gs, bucket sampling probabilities pb, language module LGG.
3: loop 4: self_eval â random() < Pseif-eval 5: g < G,.sample(self_eval, pp) 6: biased_init ~â epoch < nunb 7: 8°, & â Exseset(biased_init) 8: fort =1:T do 9: aâ & policy(s',c',g) 10: if not self_eval then I: aâ ea! +a() 12: sit cht! & E.step(a') 13: episode < (s,c,a,sâ,câ) 14: G,.update(c7) 15: B.update(episode) 16: g <â G,.sample(p,) 17: batch + B.sample(g) 18: IL.update(batch) 19: if self_eval then 20: Pv < Gs.update_LP()
> If True then evaluate competence
> Bias initialization only after nun epochs > co: Initial semantic configuration
Policy updates with a goal-conditioned Soft Actor-Critic. Readers familiar with Markov Deci- sion Process and the use of SAC and HER algorithms can skip this paragraph.
We want the DECSTR agent to explore a semantic conï¬guration space and master reachable conï¬gurations in it. We frame this problem as a goal-conditioned MDP (Schaul et al., 2015): M = (S, Gp, A, T , R, γ), where the state space S is the usual sensory space augmented with the conï¬guration space Cp, the goal space Gp is equal to the conï¬guration space Gp = Cp, A is the action space, T : S à A à S â [0, 1] is the unknown transition probability, R : S à A â {0, 1} is a sparse reward function and γ â [0, 1] is the discount factor.
Policy updates are performed with Soft Actor-Critic (SAC) (Haarnoja et al., 2018), a state-of-the-art off-policy actor-critic algorithm. We also use Hindsight Experience Replay (HER) (Andrychowicz et al., 2017). This mechanism enables agents to learn from failures by reinterpreting past trajectories in the light of goals different from the ones originally targeted. HER was designed for continuous goal spaces, but can be directly transposed to discrete goals (Colas et al., 2019a). In our setting, we simply replace the originally targeted goal conï¬guration by the currently achieved conï¬guration in the transitions fed to SAC. We also use our automatic curriculum strategy: the LP-C-based probabil- ities are used to sample goals to learn about. When a goal g is sampled, we search the experience buffer for the collection of episodes that ended in the conï¬guration cp = g. From these episodes, we sample a transition uniformly. The HER mechanism substitutes the original goal with one of the conï¬gurations achieved later in the trajectory. This substitute g has high chances of being the sampled one. At least, it is a conï¬guration on the path towards this goal, as it is sampled from a trajectory leading to it. The HER mechanism is thus biased towards goals sampled by the agent.
In the proposed Fetch Manipulate environment, the three Object-Centered Inductive Biases. blocks share the same set of attributes (position, velocity, color identiï¬er). Thus, it is natural to encode a relational inductive bias in our architecture. The behavior with respect to a pair of objects should be independent from the position of the objects in the inputs. The architecture used for the policy is depicted in Figure 4.
A shared network (N Nshared) encodes the concatenation of: 1) agentâs body features; 2) object pair features; 3) current conï¬guration (cp) and 4) current goal g. This is done independently for all object pairs. No matter the location of the features of the object pair in the initial observations, this shared network ensures that the same behavior will be performed, thus skills are transferred between object
15
Published as a conference paper at ICLR 2021
pairs. A sum is then used to aggregate these outputs, before a ï¬nal network (N Npolicy) maps the aggregation to actions a. The critic follows the same architecture, where a ï¬nal network N Ncritic maps the aggregation to an action-value Q. Parallel encoding of each pair-speciï¬c inputs can be seen as different modules trying to reach the goal by only seeing these pair-speciï¬c inputs. The intuition is that modules dealing with the pair that should be acted upon to reach the goal will supersede others in the sum aggregation.
\ E : i | me of am). [J-â-RGH= body Â¥ feo obj, J obj, | obj, | body J . ta policy >| mapping >| : function | ~ 6 | sau a
Figure 4: Object-centered modular architecture for the policy.
Although in principle our architecture could work with combinations of objects (3 modules), we found permutations to work better in practice (6 modules). With combinations, the shared network would need to learn to put block A on block B to achieve a predicate above(oi, oj), and would need to learn the reverse behavior (put B on A) to achieve the symmetric predicate above(oj, oi). With permutations, the shared network can simply learn one of these behaviors (e.g. A on B). Considering the predicate above(oA, oB), at least one of the modules has objects organized so that this behavior is the good one: if the permutation (oB, oA) is not the right one, permutation (oA, oB) is. The symmetry bias is explained in Section 3.4. It leverages the symmetry of the behaviors required to achieve the predicates above(oi, oj) and above(oj, oi). As a result, the two goal conï¬gurations are:
g1 = [c(o1, o2), c(o1, o3), c(o2, o3), a(o1, o2), a(o1, o3), a(o2, o3)], g2 = [c(o1, o2), c(o1, o3), c(o2, o3), a(o2, o1), a(o3, o1), a(o3, o2)], where g1 is used in association with object permutations (oi, oj) with i < j and g2 is used in association with object permutations (oj, oi) with i < j. As a result, the shared network automatically ensures transfer between predicates based on symmetric behaviors.
Implementation Details. This part includes details necessary to reproduce results. The code is available at https://sites.google.com/view/decstr/.
Parallel implementation of SAC-HER. We use a parallel implementation of SAC (Haarnoja et al., 2018). Each of the 24 parallel worker maintains its own replay buffer of size 106 and performs its own updates. Updates are summed over the 24 actors and the updated network are broadcast to all workers. Each worker alternates between 2 episodes of data collection and 30 updates with batch size 256. To form an epoch, this cycle is repeated 50 times and followed by the ofï¬ine evaluation of the agent on each reachable goal. An epoch is thus made of 50 à 2 à 24 = 2400 episodes.
Goal sampler updates. The agent performs self-evaluations with probability self eval = 0.1. During these runs, the agent targets uniformly sampled discovered conï¬gurations without explo- ration noise. This enables the agent to self-evaluate on each goal. Goals are organized into buckets. Main Section 3.4 presents our automatic bucket generation mechanism. Once buckets are formed, we compute C, LP and P , based on windows of the past W = 1800 self-evaluation interactions for each bucket.
16
Published as a conference paper at ICLR 2021
Modular architecture. The shared network of our modular architecture N Nshared is a 1-hidden layer network of hidden size 256. After all pair-speciï¬c inputs have been encoded through this module, their output (of size 84) are summed. The sum is then passed through a ï¬nal network with a hidden layer of size 256 to compute the ï¬nal actions (policy) or action-values (critic). All networks use ReLU activations and the Xavier initialization. We use Adam optimizers, with learning rates 10â3. The list of hyperparameters is provided in Table 4.
Table 4: Sensorimotor learning hyperparameters used in DECSTR.
Hyperparam. nb mpis nb cycles nb rollouts per mpi nb updates start bias init W self eval Nb replay strategy k replay batch size γ Ï lr actor lr critic α automatic entropy Description Number of workers Number of repeated cycles per epoch Number of rollouts per worker Number of updates per cycle Epoch from which initializations are biased Curriculum window size Self evaluation probability Number of buckets HER replay strategy Ratio of HER data to data from normal experience Size of the batch during updates Discount factor to model uncertainty about future decisions Polyak coefï¬cient for target critics smoothing Actor learning rate Critic learning rate Entropy coefï¬cient used in SAC Automatically tune the entropy coefï¬cient Values. 24 50 2 30 100 1800 0.1 5 f uture 4 256 0.98 0.95 10â3 10â3 0.2 F alse
Computing resources. The sensorimotor learning experiments contain 8 conditions: 2 of 10 seeds and 6 of 5 seeds. Each run leverages 24 cpus (24 actors) for about 72h for a total of 9.8 cpu years. Experiments presented in this paper requires machines with at least 24 cpu cores. The language grounding phase runs on a single cpu and trains in a few minutes.
C.2 LANGUAGE-CONDITIONED GOAL GENERATOR
Language-Conditioned Goal Generator Training. We use a conditional Variational Auto- Encoder (C-VAE) (Sohn et al., 2015). Conditioned on the initial conï¬guration and a sentence describ- ing the expected transformation of one object relation, it generates compatible goal conï¬gurations. After the ï¬rst phase of goal-directed sensorimotor training, the agent interacts with a hard-coded social partner as described in Main Section 3. From these interactions, we obtain a dataset of 5000 triplets: initial conï¬guration, ï¬nal conï¬guration and sentence describing one change of predicate from the initial to the ï¬nal conï¬guration. The list of sentences used by the synthetic social partner is provided in Table 5. Note that red, green and blue refer to objects o1, o2, o3 respectively.
Content of test sets. We describe the 5 test sets:
1. Test set 1 is made of input pairs (ci, s) from the training set, but tests the coverage of all compatible ï¬nal conï¬gurations Cf , 80% of which are not found in the training set. In that sense, it is partly a test set.
2. Test set 2 contains two input pairs: {[0 1 0 0 0 0 0 0 0], put blue close to green} and {[0 0 1 0 0 0 0 0 0], put green below red} corresponding to 7 and 24 compatible ï¬nal conï¬g- urations respectively.
3. Test set 3 corresponds to all pairs including the initial conï¬guration ci = [1 1 0 0 0 0 0 0 0] (29 pairs), with an average of 13 compatible ï¬nal conï¬gurations.
4. Test set 4 corresponds to all pairs including one of the sentences put green on top of red and put blue far from red, i.e. 20 pairs with an average of 9.5 compatible ï¬nal conï¬gurations.
17
Published as a conference paper at ICLR 2021
5. Test set 5 is all pairs that include both the initial conï¬guration of test set 3 and one of the sentences of test set 4, i.e. 2 pairs with 6 and 13 compatible goals respectively. Note that pairs of set 5 are removed from sets 3 and 4.
Table 5: List of instructions. Each of them speciï¬es a shift of one predicate, either from false to true (0 â 1) or true to false (1 â 0). block A and block B represent two different blocks from {red, blue, green}.
Sentences Put block A close to block B, Bring block A and block B together, Get block A and block B close from each other, Get block A close to block B. Put block A far from block B, Get block A far from block B, Get block A and block B far from each other, Bring block A and block B apart, Put block A above block B, Put block A on top of block B, Put block B under block A, Put block B below block A. Remove block A from above block B, Remove block A from block B, Remove block B from below block A, Put block B and block A on the same plane, Put block A and block B on the same plane.
Testing on logical expressions of instructions. To evaluate DECSTR on logical functions of in- structions, we generate three types of expressions:
1. 100 instructions of the form âA and Bâ where A and B are basic instructions corresponding to shifts of the form above 0 â 1 (see Table 5). These intersections correspond to stacks of 3 or pyramids.
2. 200 instructions of the form âA and Bâ where A and B are above and close instructions respectively. B can be replaced by ânot Bâ with probability 0.5.
3. 200 instructions of the form â(A and B) or (C and D))â, where A, B, C, D are basic instruc- tions: A and C are above instructions while B and D are close instructions. Here also, any instruction can be replaced by its negation with probability 0.5.
Implementation details. The encoder is a fully-connected neural network with two layers of size 128 and ReLU activations. It takes as input the concatenation of the ï¬nal binary conï¬guration and its two conditions: the initial binary conï¬guration and an embedding of the NL sentence. The NL sentence is embedded with an recurrent network with embedding size 100, tanh non-linearities and biases. The encoder outputs the mean and log-variance of the latent distribution of size 27. The decoder is also a fully-connected network with two hidden layers of size 128 and ReLU activations. It takes as input the latent code z and the same conditions as the encoder. As it generates binary vectors, the last layer uses sigmoid activations. We train the architecture with a mixture of Kullback- Leibler divergence loss (KDloss) w.r.t a standard Gaussian prior and a binary Cross-Entropy loss (BCEloss). The combined loss is BCEloss + β à KDloss with β = 0.6. We use an Adam optimizer, a learning rate of 5 à 10â4, a batch size of 128 and optimize for 150 epochs. As training is fast (â 2 min on a single cpu), we conducted a quick hyperparameter search over β, layer sizes, learning rates and latent sizes (see Table 6). We found robust results for various layer sizes, various β below 1. and latent sizes above 9.
Table 6: LGG hyperparameter search. In bold are the selected hyperparameters.
Hyperparam. β layers size learning rate latent sizes Values. [0.5, 0.6, 0.7, 0.8, 0.9, 1.] [128, 256] [0.01, 0.005, 0.001] [9, 18, 27]
# D BASELINES AND ORACLE
The language-conditioned LB baseline is fully described in the main document.
18
Published as a conference paper at ICLR 2021
D.1 EXPERT BUCKETS ORACLE
In the EXPERT BUCKETS oracle, the automatic bucket generation of DECSTR is replaced with an expert-predeï¬ned set of buckets using a priori measures of similarity and difï¬culty. To deï¬ne these buckets, one needs prior knowledge of the set of unreachable conï¬gurations, which are ruled out. The 5 predeï¬ned buckets contain all conï¬gurations characterized by:
⢠Bucket 1: a single close relation between a pair of objects and no above relations (4 conï¬gu- rations).
Bucket 2: 2 or 3 close relations and no above relations (4 conï¬gurations). ⢠Bucket 3: 1 stack of 2 blocks and a third block that is either away or close to the base, but is
not close to the top of the stack (12 conï¬gurations).
⢠Bucket 4: 1 stack of 2 blocks and the third block close to the stack, as well as pyramid conï¬gurations (9 conï¬gurations).
⢠Bucket 5: stacks of 3 blocks (6 conï¬gurations).
These buckets are the only difference between the EXPERT BUCKETS baseline and DECSTR.
D.2
# LGB-C BASELINE
D.2 LGB-C BASELINE
The LGB-C baseline represent goals not as semantic conï¬gurations but as particular 3D targets po- sitions for each block, as deï¬ned for example in Lanier et al. (2019) and Li et al. (2019). The goal vector size is also 9 and contains the 3D target coordinates of the three blocks. This baselines also implements decoupling and, thus, can be compared to DECSTR in the three phases. We keep as many modules as possible common with DECSTR to minimize the amount of confounding factors and reduce the under-ï¬tting bias. The goal selection is taken from DECSTR, but converts semantic conï¬guration into speciï¬c randomly-sampled target coordinates for the blocks, see Figure 5. The agent is not conditioned on its current semantic conï¬guration nor its semantic goal conï¬guration. For this reason, we do not apply the symmetry bias. The binary reward is positive when the maxi- mal distance between a block and its target position is below 5 cm, i.e. the size of a block (similar to (Andrychowicz et al., 2017)). To make this baseline competitive, we integrate methods from a state of the art block manipulation algorithm (Lanier et al., 2019). The agent receives positive re- wards of 1, 2, 3 when the corresponding number of blocks are well placed. We also introduce the multi-criteria HER from Lanier et al. (2019). Finally, we add an additional object-centered inductive bias by only considering, for each Deep Sets module, the 3D target positions of the corresponding pair.That is, for each object pair, we ignore the 3D positions of the remaining object, yielding to a vector of size 6. Language grounding is based on a C-VAE similar to the one used by DECSTR. We only replace the cross-entropy loss by a mean-squared loss due to the continuous nature of the target goal coordinates. We use the exact same training and testing sets as with semantic goals.
Figure 5: The LGB-C baseline samples target positions for each block (example for a pyramid here).
19
Published as a conference paper at ICLR 2021
# E ADDITIONAL RESULTS
E.1 COMPARISON DECSTR - LGB-C IN SKILL LEARNING PHASE
Figure 6 presents the average success rate over the 35 valid conï¬gurations during the skill learning phase for DECSTR and the LGB-C baseline. Because LGB-C cannot pursue semantic goals as such, we randomly sample a speciï¬c instance of this semantic goal: target block coordinates that satisfy the constraints expressed by it. Because LGB-C is not aware of the original semantic goal, we cannot measure success as the ability to achieve it. Instead, success is deï¬ned as the achievement of the corresponding speciï¬c goal: bringing blocks to their respective targets within an error margin of 5 cm each. In short, DECSTR targets semantic goals and is evaluated on its ability to reach them. LGB-C targets speciï¬c goals and is evaluated on its ability to reach them. These two measures do not match exactly. Indeed, LGB-C sometimes achieves its speciï¬c goal but, because of the error margins, does not achieve the original semantic goal.
â® DECSTR âÂ¥*â LGB-C 1.006 * * & & ke ke ek kK ° N a Success Rate ° uu So 200 460600 8601000 1200 Episodes (x103)
Figure 6: Comparison DECSTR and LGB-C in the skill learning phase.
E.2 AUTOMATIC BUCKET GENERATION.
Figure 7 depicts the evolution of the content of buckets along training (epochs 1, 50 and 100). Each pie chart corresponds to a reachable conï¬guration and represents the distribution of conï¬gurations into buckets across 10 different seeds. Blue, orange, green, yellow, purple represent buckets 1 to 5 respectively and grey are undiscovered conï¬gurations. At each moment, the discovered conï¬gura- tions are equally spread over the 5 buckets. A given conï¬guration may thus change bucket as new conï¬gurations are discovered, so that the ones discovered earlier are assigned buckets with lower indexes. Goals are organized by their bucket assignments in the Expert Buckets condition (from top to bottom).
After the ï¬rst epoch (left), DECSTR has discovered all conï¬gurations from the expert buckets 1 and 2, and some runs have discovered a few other conï¬gurations. After 50 epochs, more conï¬gurations have been discovered but they are not always the same across runs. Finally, after 100 epochs, all conï¬gurations are found. Buckets are then steady and can be compared to expert-deï¬ned buckets. It seems that easier goals (top-most group) are discovered ï¬rst and assigned in the ï¬rst-easy buckets (blue and orange). Hardest conï¬gurations (stacks of 3, bottom-most group) seem to be discovered last and assigned the last-hardest bucket (purple). In between, different runs show different composi- tions, which are not always aligned with expert-deï¬ned buckets. Goals from expert-deï¬ned buckets 3 and 4 (third and fourth group from the top) seem to be attributed different automatic buckets in different runs. This means that they are discovered in different orders depending on the runs. In summary, easier and harder goals from expert buckets 1 - 2 and 5 respectively seem to be well de- tected by our automatic bucket generations. Goals in medium-level expected difï¬culty as deï¬ned by expert buckets seem not to show any signiï¬cant difference in difï¬culty for our agents.
# E.3 DECSTR LEARNING TRAJECTORIES
Figure 8 shows the evolution of internal estimations of the competence C, the learning progress LP and the associated sampling probabilities P. Note that these metrics are computed online by
20
Published as a conference paper at ICLR 2021
Automatic Buckets GD Bucket: GB sucker2 GB suckers GU suckers HM Buckets HH not discovered Expert Bucket 1 Expert Bucket 2 > Expert Bucket 4 Expert Bucket 5
Figure 7: Evolution of the content of buckets from automatic bucket generation: epoch 1 (2400 episodes, left), 50 (middle) and 100 (right). Each pie chart corresponds to one of the 35 valid conï¬gurations. It represents the distribution of the bucket attributions of that conï¬guration across 10 runs. Blue, orange, green, yellow, purple represent automatically generated buckets 1 to 5 respectively (increasing order of difï¬culty) and grey represents undiscovered conï¬gurations. Goals are organized according to their expert bucket attributions in the Expert Buckets condition (top-bottom organization).
DECSTR, as it self-evaluates on random discovered conï¬gurations. Learning trajectories seem to be uniform across different runs, and buckets are learned in increasing order. This conï¬rms that the time of discovery is a good proxy for goal difï¬culty. In that case, conï¬gurations discovered ï¬rst end up in the lower index buckets and are indeed learned ï¬rst. Note that a failing automatic bucket generation would assign goals to random buckets. This would result in uniform measures of learning progress across different buckets, which would be equivalent to uniform goal sampling. As Main Figure 3c shows, DECSTR performs much better than the random goals conditions. This proves that our automatic bucket algorithm generates useful goal clustering.
81 -s 82 83 = 84 -» a5 one BL a B2 BS kes Bl -s 82 83 = 84 â9 a5 Bl -e 82 = B3 = Ba â9 BS Bl -s 82 = B3 = Ba â» Bs
Figure 8: Learning trajectories of 6 DECSTR agents.
21 | {
"id": "1707.01495"
} |
2006.07235 | SemEval-2020 Task 12: Multilingual Offensive Language Identification in Social Media (OffensEval 2020) | We present the results and main findings of SemEval-2020 Task 12 on
Multilingual Offensive Language Identification in Social Media (OffensEval
2020). The task involves three subtasks corresponding to the hierarchical
taxonomy of the OLID schema (Zampieri et al., 2019a) from OffensEval 2019. The
task featured five languages: English, Arabic, Danish, Greek, and Turkish for
Subtask A. In addition, English also featured Subtasks B and C. OffensEval 2020
was one of the most popular tasks at SemEval-2020 attracting a large number of
participants across all subtasks and also across all languages. A total of 528
teams signed up to participate in the task, 145 teams submitted systems during
the evaluation period, and 70 submitted system description papers. | http://arxiv.org/pdf/2006.07235 | Marcos Zampieri, Preslav Nakov, Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Hamdy Mubarak, Leon Derczynski, Zeses Pitenis, Çağrı Çöltekin | cs.CL, 68T50, 68T07, I.2.7 | Proceedings of the International Workshop on Semantic Evaluation
(SemEval-2020) | null | cs.CL | 20200612 | 20200930 | 2020:
0 2 0 2
p e S 0 3 ] L C . s c [
2 v 5 3 2 7 0 . 6 0 0 2 : v i X r a
# SemEval-2020 Task 12: Multilingual Offensive Language Identiï¬cation in Social Media (OffensEval 2020)
Marcos Zampieri1, Preslav Nakov2, Sara Rosenthal3, Pepa Atanasova4, Georgi Karadzhov5 Hamdy Mubarak2, Leon Derczynski6, Zeses Pitenis7, C¸ aËgrı C¸ ¨oltekin8 1Rochester Institute of Technology, USA, 2Qatar Computing Research Institute, Qatar 3IBM Research, USA, 4University of Copenhagen, Denmark, 5University of Cambridge, UK 6IT University Copenhagen, Denmark, 7University of Wolverhampton, UK 8University of T¨ubingen, Germany [email protected]
# Abstract
We present the results and the main ï¬ndings of SemEval-2020 Task 12 on Multilingual Offensive Language Identiï¬cation in Social Media (OffensEval-2020). The task included three subtasks corresponding to the hierarchical taxonomy of the OLID schema from OffensEval-2019, and it was offered in ï¬ve languages: Arabic, Danish, English, Greek, and Turkish. OffensEval-2020 was one of the most popular tasks at SemEval-2020, attracting a large number of participants across all subtasks and languages: a total of 528 teams signed up to participate in the task, 145 teams submitted ofï¬cial runs on the test data, and 70 teams submitted system description papers.
# 1 Introduction
Offensive language is ubiquitous in social media platforms such as Facebook, Twitter, and Reddit, and it comes in many forms. Given the multitude of terms and deï¬nitions related to offensive language used in the literature, several recent studies have investigated the common aspects of different abusive language detection tasks (Waseem et al., 2017; Wiegand et al., 2018). One such example is SemEval-2019 Task 6: OffensEval1(Zampieri et al., 2019b), which is the precursor to the present shared task. OffensEval- 2019 used the Offensive Language Identiï¬cation Dataset (OLID), which contains over 14,000 English tweets annotated using a hierarchical three-level annotation schema that takes both the target and the type of offensive content into account (Zampieri et al., 2019a). The assumption behind this annotation schema is that the target of offensive messages is an important variable that allows us to discriminate between, e.g., hate speech, which often consists of insults targeted toward a group, and cyberbullying, which typically targets individuals. A number of recently organized related shared tasks followed sim- ilar hierarchical models. Examples include HASOC-2019 (Mandl et al., 2019) for English, German, and Hindi, HatEval-2019 (Basile et al., 2019) for English and Spanish, GermEval-2019 for German (Struà et al., 2019), and TRAC-2020 (Kumar et al., 2020) for English, Bengali, and Hindi.
OffensEval-2019 attracted nearly 800 team registrations and received 115 ofï¬cial submissions, which demonstrates the interest of the research community in this topic. Therefore, we organized a follow-up, OffensEval-20202 (SemEval-2020 Task 12), which is described in this report, building on the success of OffensEval-2019 with several improvements. In particular, we used the same three-level taxonomy to annotate new datasets in ï¬ve languages, where each level in this taxonomy corresponds to a subtask in the competition:
Subtask A: Offensive language identiï¬cation; ⢠Subtask B: Automatic categorization of offense types; ⢠Subtask C: Offense target identiï¬cation.
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.
# 1http://sites.google.com/site/offensevalsharedtask/offenseval2019 2http://sites.google.com/site/offensevalsharedtask/home
The contributions of OffensEval-2020 can be summarized as follows:
We provided the participants with a new, large-scale semi-supervised training dataset containing
over nine million English tweets (Rosenthal et al., 2020).
We introduced multilingual datasets,
and we expanded the task to four new lan- guages: Arabic (Mubarak et al., 2020b), Danish (Sigurbergsson and Derczynski, 2020), Greek (Pitenis et al., 2020), and Turkish (C¸ ¨oltekin, 2020). This opens the possibility for cross-lingual training and analysis, which several participants indeed explored.
⢠Compared to OffensEval-2019, we used larger test datasets for all subtasks.
Overall, OffensEval-2020 was a very successful task. The huge interest demonstrated last year con- tinued this year, with 528 teams signing up to participate in the task, and 145 of them submitting ofï¬cial runs on the test dataset. Furthermore, OffensEval-2020 received 70 system description papers, which is an all-time record for a SemEval task.
The remainder of this paper is organized as follows: Section 2 describes the annotation schema. Sec- tion 3 presents the ï¬ve datasets that we used in the competition. Sections 4-9 present the results and discuss the approaches taken by the participating systems for each of the ï¬ve languages. Finally, Sec- tion 10 concludes and suggests some possible directions for future work.
# 2 Annotation Schema
OLIDâs annotation schema proposes a hierarchical modeling of offensive language. It classiï¬es each example using the following three-level hierarchy:
Level A - Offensive Language Detection Is the text offensive (OFF) or not offesive (NOT)?
NOT: text that is neither offensive, nor profane;
OFF: text containing inappropriate language, insults, or threats.
Level B - Categorization of Offensive Language Is the offensive text targeted (TIN) or untargeted (UNT)?
TIN: targeted insults or threats towards a group or an individual;
UNT: untargeted profanity or swearing.
Level C - Offensive Language Target Identiï¬cation Who or what is the target of the offensive content?
IND: the target is an individual, which can be explicitly mentioned or it can be implicit;
GRP: the target is a group of people based on ethnicity, gender, sexual orientation, religious belief, or other common characteristic;
OTH: the target does not fall into any of the previous categories, e.g., organizations, events, and issues.
# 3 Data
In this section, we describe the datasets for all ï¬ve languages: Arabic, Danish, English, Greek, and Turkish. All of the languages follow the OLID annotation schema and all datasets were pre-processed in the same way, e.g., all user mentions were substituted by @USER for anonymization. The introduction of new languages using a standardized schema with the purpose of detecting offensive and targeted speech should improve dataset consistency. This strategy is in line with current best practices in abusive language data collection (Vidgen and Derczynski, 2020). All languages contain data for subtask A, and only English contains data for subtasks B and C. The distribution of the data across categories for all languages for subtask A is shown in Table 1, while Tables 2 and 3 present statistics about the data for the English subtasks B and C, respectively. Labeled examples from the different datasets are shown in Table 4.
Training Test Language OFF NOT Total OFF NOT Total English Arabic Danish Greek Turkish 1 448 861 7 640 279 9 089 140 8 000 2 961 8 743 31 756 1 589 384 2 486 6 131 6 411 2 577 6 257 25 625 1 090 402 41 425 716 2 807 1 598 288 1 119 2 812 3 897 2 000 329 1 544 3 528
# Table 1: Subtask A (all languages): statistics about the data.
Training Test Language TIN UNT Total TIN UNT Total English 149 550 39 424 188 974 850 1 072 1 922
Table 2: Subtask B (English): statistics about the data.
Training Test Language IND GRP OTH Total IND GRP OTH Total English 120 330 22 176 7 043 149 549 580 190 80 850
Table 3: Subtask C (English): statistics about the data.
Language Tweet A B Cc English â This account owner asks for people to think rationally. NOT â â Arabic SS! ok ole L dle & dle wl wd OFF â â Translation: May God curse you, O coward, O son of a dog. Danish Du glemmer @steuropaer som er de veerste OFF â â Translation: You forget Eastern Europeans, who are the worst Greek TlopadéZou to, cioon aydunty ede) xo xouLod... OFF â â Translation: Admit it, youâve been unfucked for a while now... Turkish Boyle devam et seni gerizekali OFF â â Translation: Go on like this, you idiot English this job got me all the way fucked up real shit OFF UNT â English wtf ari her ass tooo big OFF TIN IND English @USER We are a country of morons OFF TIN GRP
Table 4: Annotated examples for all subtasks and languages.
English For English, we provided two datasets: OLID from OffensEval-2019 (Zampieri et al., 2019a), and SOLID, which is a new dataset we created for the task (Rosenthal et al., 2020). SOLID is an abbrevi- ation for Semi-Supervised Offensive Language Identiï¬cation Dataset, and it contains 9,089,140 English tweets, which makes it the largest dataset of its kind. For SOLID, we collected random tweets using the 20 most common English stopwords such as the, of, and, to, etc. Then, we labeled the collected tweets in a semi-supervised manner using democratic co-training, with OLID as a seed dataset. For the co-training, we used four models with different inductive biases: PMI (Turney and Littman, 2003), Fast- Text (Joulin et al., 2017), LSTM (Hochreiter and Schmidhuber, 1997), and BERT (Devlin et al., 2019). We selected the OFF tweets for the test set using this semi-supervised process and we then annotated them manually for all subtasks. We further added 2,500 NOT tweets using this process without further annotation. We computed a Fleissâ κ Inter-Annotator Agreement (IAA) on a small subset of instances that were predicted to be OFF, and obtained 0.988 for Level A (almost perfect agreement), 0.818 for Level B (substantial agreement), and 0.630 for Level C (moderate agreement). The annotation for Level C was more challenging as it is 3-way and also as sometimes there could be different types of targets mentioned in the offensive tweet, but the annotators were forced to choose only one label.
Arabic The Arabic dataset consists of 10,000 tweets collected in AprilâMay 2019 using the Twitter API with the language ï¬lter set to Arabic: lang:ar. In order to increase the chance of having offensive content, only tweets with two or more vocative particles (yA in Arabic) were considered for annotation; the vocative particle is used mainly to direct the speech to a person or to a group, and it is widely observed in offensive communications in almost all Arabic dialects. This yielded 20% offensive tweets in the ï¬nal dataset. The tweets were manually annotated (for Level A only) by a native speaker familiar with several Arabic dialects. A random subsample of offensive and non-offensive tweets were doubly annotated and the Fleiss κ IAA was found to be 0.92. More details can be found in (Mubarak et al., 2020b).
Danish The Danish dataset consists of 3,600 comments drawn from Facebook, Reddit, and a local newspaper, Ekstra Bladet3. The selection of the comments was partially seeded using abusive terms gath- ered during a crowd-sourced lexicon compilation; in order to ensure sufï¬cient data diversity, this seeding was limited to half the data only. The training data was not divided into distinct training/development splits, and participants were encouraged to perform cross-validation, as we wanted to avoid issues that ï¬xed splits can cause (Gorman and Bedrick, 2019). The annotation (for Level A only) was performed at the individual comment level by males aged 25-40. A full description of the dataset and an accompanying data statement (Bender and Friedman, 2018) can be found in (Sigurbergsson and Derczynski, 2020).
Greek The Offensive Greek Twitter Dataset (OGTD) used in this task is a compilation of 10,287 tweets. These tweets were sampled using popular and trending hashtags, including television programs such as series, reality and entertainment shows, along with some politically related tweets. Another por- tion of the dataset was fetched using pejorative terms and âyou areâ as keywords. This particular strategy was adopted with the hypothesis that TV and politics would gather a handful of offensive posts, along with tweets containing vulgar language for further investigation. A team of volunteer annotators partic- ipated in the annotation process (for Level A only), with each tweet being judged by three annotators. In cases of disagreement, labels with majority agreement above 66% were selected as the actual tweet labels. The IAA was 0.78 (using Fleissâ κ coefï¬cient). A full description of the dataset collection and annotation is detailed in (Pitenis et al., 2020).
Turkish The Turkish dataset consists of over 35,000 tweets sampled uniformly from the Twitter stream and ï¬ltered using a list of the most frequent words in Turkish, as identiï¬ed by Twitter. The tweets were annotated by volunteers (for Level A only). Most tweets were annotated by a single annotator. The Cohenâs κ IAA calculated on 5,000 doubly-annotated tweets was 0.761. Note that we did not include any speciï¬c method for spotting offensive language, e.g., ï¬ltering by offensive words, or following usual targets of offensive language. As a result, the distribution closely resembles the actual offensive language use on Twitter, with more non-offensive tweets than offensive tweets. More details about the sampling and the annotation process can be found in (C¸ ¨oltekin, 2020).
# 4 Task Participation
A total of 528 teams signed up to participate in the task, and 145 of them submitted results: 6 teams made submissions for all ï¬ve languages, 19 did so for four languages, 11 worked on three languages, 13 on two languages, and 96 focused on just one language. Tables 13, 14, and 15 show a summary of which team participated in which task. A total of 70 teams submitted system description papers, which are listed in Table 12. Below, we analyze the representation and the models used for all language tracks.
Representation The vast majority of teams used some kind of pre-trained embeddings such as contex- tualized Transformers (Vaswani et al., 2017) and ELMo (Peters et al., 2018) embeddings. The most pop- ular Transformers were BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and the multi-lingual mBERT (Devlin et al., 2019).4
3http://ekstrabladet.dk/ 4Note that there are some issues with the way mBERT processes some languages, e.g., there is no word segmentation for
Arabic, the Danish Ëa/aa mapping is not handled properly (Strømberg-Derczynski et al., 2020), etc.
Many teams also used context-independent embeddings from word2vec (Mikolov et al., 2013) such as Maza- or GloVe (Pennington et al., 2014), jak (Farha and Magdy, 2019) for Arabic. Some teams used other techniques: word n-grams, character n-grams, lexicons for sentiment analysis, and lexicon of offensive words. Other representations included emoji priors extracted from the weakly supervised SOLID dataset for English, and sentiment analysis using NLTK (Bird et al., 2009), Vader (Hutto and Gilbert, 2014), and FLAIR (Akbik et al., 2018).
Machine learning models In terms of machine learning models, most teams used some kind of pre-trained Transformers: typically BERT, but RoBERTa, XLM-RoBERTa (Conneau et al., 2020), AL- BERT (Lan et al., 2019), and GPT-2 (Radford et al., 2019) were also popular. Other popular models included CNNs (Fukushima, 1980), RNNs (Rumelhart et al., 1986), and GRUs (Cho et al., 2014). Older models such as SVMs (Cortes and Vapnik, 1995) were also used, typically as part of ensembles.
# 5 English Track
A total of 87 teams made submissions for the English track (23 of them participated in the 2019 edition of the task): 27 teams participated in all three English subtasks, 18 teams participated in two English subtasks, and 42 focused on one English subtask only.
Pre-processing and normalization Most teams performed some kind of pre-processing (67 teams) or text normalization (26 teams), which are typical steps when working with tweets. Text normalization included various text transformations such as converting emojis to plain text,5 segmenting hashtags,6 general tweet text normalization (Satapathy et al., 2019), abbreviation expansion, bad word replacement, error correction, lowercasing, stemming, and/or lemmatization. Other techniques included the removal of @user mentions, URLs, hashtags, emojis, emails, dates, numbers, punctuation, consecutive character repetitions, offensive words, and/or stop words.
Additional data Most teams found the weakly supervised SOLID dataset useful, and 58 teams ended up using it in their systems. Another six teams gave it a try, but could not beneï¬t from it, and the re- maining teams only used the manually annotated training data. Some teams used additional datasets from HASOC-2019 (Mandl et al., 2019), the Kaggle competitions on Detecting Insults in Social Com- mentary7 and Toxic Comment Classiï¬cation8, the TRAC-2018 shared task on Aggression Identiï¬ca- tion (Kumar et al., 2018a; Kumar et al., 2018b), the Wikipedia Detox dataset (Wulczyn et al., 2017), and the datasets from (Davidson et al., 2017) and (Wulczyn et al., 2017), as well as some lexicons such as HurtLex (Bassignana et al., 2018) and Hatebase.9 Finally, one team created their own dataset.
# 5.1 Subtask A
A total of 82 teams made submissions for subtask A, and the results can be seen in Table 5. This was the most popular subtask among all subtasks and across all languages. The best team UHH- LT achieved an F1 score of 0.9204 using an ensemble of ALBERT models of different sizes. The team ranked second was UHH-LT with an F1 score of 0.9204, and it used RoBERTa-large that was ï¬ne-tuned on the SOLID dataset in an unsupervised way, i.e., using the MLM objective. The third team, Galileo, achieved an F1 score of 0.9198, using an ensemble that combined XLM-RoBERTa-base and XLM-RoBERTa-large trained on the subtask A data for all languages. The top-10 teams used BERT, RoBERTa or XLM-RoBERTa, sometimes as part of ensembles that also included CNNs and LSTMs (Hochreiter and Schmidhuber, 1997). Overall, the competition for this subtask was very strong, and the scores are very close: the teams ranked 2â16 are within one point in the third decimal place, and those ranked 2â59 are within two absolute points in the second decimal place from the best team. All
# 5http://github.com/carpedm20/emoji 6http://github.com/grantjenks/python-wordsegment 7http://www.kaggle.com/c/detecting-insults-in-social-commentary 8http://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge 9http://hatebase.org/
but one team beat the majority class baseline (we suspect that team might have accidentally ï¬ipped their predicted labels).
# Team Score # Team Score # Team 0.9204 1 UHH-LT 0.9198 2 Galileo 0.9187 3 Rouges 0.9166 4 GUIR 0.9162 5 KS@LTH 0.9151 6 7 0.9146 8 AlexU-BackTranslation-TL 0.9139 0.9136 9 SpurthiAH 0.9135 amsqr 10 0.9134 11 m20170548 12 Coffee Latte 0.9132 0.9129 13 wac81 0.9129 14 NLPDove 0.9128 15 UJNLP 0.9119 16 ARA 0.9115 17 Ferryman 0.9114 18 ALT 0.9105 19 SINAI 0.9105 20 MindCoders IRLab DAIICT 21 0.9104 0.9103 erfan 22 0.9103 23 Light 0.9099 24 KAFK 0.9098 25 PALI 0.9097 26 PRHLT-UPV 27 YNU oxz 0.9097 0.9094 28 kungfupanda TysonYU 0.9094 29 UTFPR 0.9094 30 31 TAC 0.9093 32 SSN NLP MLRG 0.9092 0.9091 33 Hitachi 0.9091 34 CoLi @ UdS 0.9090 35 XD 0.9090 36 UoB 0.9089 37 PAI-NLP 0.9089 38 PingANPAI 0.9089 39 Veriï¬edXiaoPAI 0.9089 40 nlpUP 0.9088 41 NLP Passau 0.9087 42 TheNorth 0.9085 43 0.9084 44 Lee 0.9081 45 Wu427 0.9081 ITNLP 46 0.9077 47 Better Place 0.9075 IIITG-ADBU 48 0.9075 49 doxaAI 0.9067 50 NTU NLP 0.9065 51 FERMI 0.9063 52 AdelaideCyC 0.9061 53 INGEOTEC 0.9060 54 PGSG 0.9048 55 SRIB2020 0.9036 56 GruPaTo IU-UM@LING problemConquero 57 OffensSzeged 58 FBK-DH 59 RGCL 60 byteam 61 ANDES 62 PUM 63 NUIG I2C 64 sonal.kumari 65 IJS 66 67 IR3218-UI 68 TeamKGP 69 UNT Linguistics 70 71 Team Oulu 72 TECHSSN 73 KDELAB 74 HateLab 75 76 77 Duluth 78 RTNLU 79 KarthikaS 80 Bodensee janecek1 IASBS IUST 81 Majority Baseline IRlab@IITV IITP-AINLPML Score 0.9032 0.9032 0.9006 0.8994 0.8990 0.8973 0.8927 0.8919 0.8900 0.8887 0.8843 0.8822 0.8820 0.8744 0.8655 0.8655 0.8653 0.8617 0.8577 0.8288 0.7714 0.7665 0.6351 0.4954 0.4193 0.0728
Table 5: Results for English subtask A, ordered by macro-averaged F1 in descending order.
# 5.2 Subtask B
A total of 41 teams made submissions for subtask B, and the results can be seen in Table 6. The best team is Galileo (which were third on subtask A), whose ensemble model achieved an F1 score of 0.7462. The second-place team, PGSG, used a complex teacher-student architecture built on top of a BERT-LSTM model, which was ï¬ne-tuned on the SOLID dataset in an unsupervised way, i.e., optimizing for the MLM objective. NTU NLP was ranked third with an F1 score of 0.6906. They tackled subtasks A, B, and C as part of a multi-task BERT-based model. Overall, the differences in the scores for subtask B are much larger than for subtask A. For example, the 4th team is two points behind the third one and seven points behind the ï¬rst one. The top-ranking teams used BERT-based Transformer models, and all but four teams could improve over the majority class baseline.
# 5.3 Subtask C
A total of 37 teams made submissions for subtask C and the results are shown in Table 7. The best team was once again Galileo, with an F1 score of 0.7145. LT@Helsinki was ranked second with an F1 score of 0.6700. They used ï¬ne-tuned BERT with oversampling to improve class imbalance. The third best system was PRHLT-UPV with an F1 score of 0.6692, which combines BERT with hand-crafted features; it is followed very closely by UHH-LT at rank 4, which achieved an F1 score of 0.6683. This subtask is also dominated by BERT-based models, and all teams outperformed the majority class baseline.
Note that the absolute F1-scores obtained by the best teams in the English subtasks A and C are
# Team Score # Team Score # Team Score 0.7462 1 Galileo 0.7362 2 PGSG 3 NTU NLP 0.6906 0.6734 4 UoB 0.6687 5 0.6650 6 GUIR 0.6598 7 UHH-LT 0.6576 8 Ferryman 0.6528 IIITG-ADBU 9 0.6445 10 CoLi @ UdS 0.6412 11 0.6321 12 0.6303 13 HateLab 14 AlexU-BackTranslation-TL 0.6300 TysonYU IRLab DAIICT INGEOTEC 15 Wu427 16 UNT Linguistics 17 18 PRHLT-UPV 19 SRIB2020 20 FERMI 21 22 PingANPAI nlpUP 23 24 Team Oulu 25 KDELAB 26 wac81 27 28 I2C IU-UM@LING IITP-AINLPML problemConquero 0.6208 0.6174 0.6012 0.5987 0.5805 0.5804 0.5746 0.5687 0.5687 0.5676 0.5638 0.5627 0.5569 0.5569 0.5533 0.5524 0.5518 0.5451 0.5451 0.5382 0.4926 0.3894 0.3741 0.3741 38 0.2950 39 SSN NLP MLRG 0.2912 0.2841 40 0.2777 41 KEIS@JUST 29 PALI 30 AdelaideCyC 31 KAFK 32 PAI-NLP 33 Veriï¬edXiaoPAI 34 Duluth 35 Bodensee 36 TECHSSN 37 KarthikaS Majority Baseline IRlab@IITV IJS
Table 6: Results for English subtask B, ordered by macro-averaged F1 in descending order.
# Team Score # Team Score # Team Score 0.7145 1 Galileo 0.6700 2 0.6692 3 0.6683 4 UHH-LT 0.6543 5 ITNLP 0.6489 6 wac81 0.6473 PUM 7 PingANPAI 8 0.6394 IITP-AINLPML 0.6388 9 0.6347 10 PAI-NLP 0.6319 11 GUIR IU-UM@LING 0.6265 12 0.6232 13 AdelaideCyC LT@Helsinki PRHLT-UPV 14 KAFK 0.6168 ssn nlp 15 0.6116 16 IJS 0.6094 17 PALI 0.6015 18 FERMI 0.5882 19 0.5871 0.5809 20 Ferryman 21 AlexU-BackTranslation-TL 0.5761 0.5756 22 0.5744 23 Duluth 0.5720 24 KDELAB 0.5695 25 NTU NLP 0.5626 26 problemConquero IIITG-ADBU 0.5515 27 0.5355 28 0.5260 29 0.5147 30 SRIB2020 0.4817 31 KEIS@JUST 0.4776 32 ultraviolet 0.4535 33 HateLab 0.3462 34 Bodensee 35 Team Oulu 0.3220 36 SSN NLP MLRG 0.3178 0.2704 nlpUP IS sonal.kumari Majority Baseline INGEOTEC
Table 7: Results for English subtask C, ordered by macro-averaged F1 in descending order.
substantially higher than the scores obtained by the best teams in OffensEval-2019: 0.9223 vs. 0.8290 for subtask A and 0.7145 vs. 0.6600 for subtask C. This suggests that the much larger SOLID dataset made available in OffensEval-2020 helped the models make more accurate predictions.
Furthermore, it suggests that the weakly supervised method used to compile and annotate SOLID is a viable alternative to popular purely manual annotation approaches. A more detailed analysis of the systemsâ performances will be carried out in order to determine the contribution of the SOLID dataset for the results.
# 5.4 Best Systems
We provide some more details about the approaches used by the top teams for each subtask. We use subindices to show their rank for each subtask. Additional summaries for some of the best teams can be found in Appendix A.
Galileo (A:3,B:1,C:1) This team was ranked 3rd, 1st, and 1st on the English subtasks A, B, and C, respectively. This is also the only team ranked among the top-3 across all languages. For subtask A, they used multi-lingual pre-trained Transformers based on XLM-RoBERTa, followed by multi-lingual ï¬ne-tuning using the OffensEval data. Ultimately, they submitted an ensemble that combined XLM- RoBERTa-base and XLM-RoBERTa-large, achieving an F1 score of 0.9198. For subtasks B and C, they used knowledge distillation in a teacher-student framework, using Transformers such as ALBERT and
# Team Score # Team Score # Team Score 0.9017 1 ALAMIHamza 0.9016 2 ALT 0.8989 3 Galileo 4 KUISAIL 0.8972 5 AMR-KELEG 0.8958 0.8902 6 KS@LTH 0.8778 iaf7 7 0.8744 INGEOTEC 8 0.8714 BhamNLP 9 0.8691 10 yasserotiefy 0.8655 11 SAJA 0.8592 12 Ferryman 0.8555 13 SAFA 0.8520 14 0.8519 15 TAC 0.8500 16 0.8498 17 0.8480 18 Rouges 0.8474 19 TysonYU 0.8455 20 NLPDove hhaddad saradhix lukez 21 SaiSakethAluru 22 will go 23 erfan 24 ANDES 25 Bushr 26 27 28 mircea.tanase 29 machouz 30 orabia 31 Taha 32 hamadanayel 33 CoLi @ UdS fatemah 34 jbern 35 zahra.raj 36 I2C 37 jlee24282 38 problemConquero 39 asking28 40 klaralang zoher orabe 0.8455 0.8440 0.8418 0.8402 0.8395 0.8241 0.8221 0.8220 0.8216 0.8198 0.8183 0.8182 0.8176 0.8147 0.8125 0.8057 0.8056 0.8024 0.8021 0.8002 41 tharindu 42 PRHLT-UPV IRlab@IITV 43 yemen2016 44 saroarj 45 kxkajava 46 47 frankakorpel 48 COMA JCT 49 50 FBK-DH 51 sonal.kumari 52 CyberTronics 53 SpurthiAH Majority Baseline 0.7881 0.7868 0.7793 0.7721 0.7474 0.7306 0.7251 0.5436 0.4959 0.4642 0.4536 0.4466 0.4451 0.4441
Table 8: Results for Arabic subtask A, ordered by macro-averaged F1 in descending order.
ERNIE 2.0 (Sun et al., 2020) as teacher models, achieving an F1 score of 0.7462 and 0.7145, for subtasks B and C respectively.
UHH-LT (A:1) This team was ranked 1st on subtask A with an F1 score of 0.9223. They ï¬ne-tuned different Transformer models on the OLID training data, and then combined them into an ensemble. They experimented with BERT-base and BERT-large (uncased), RoBERTa-base and RoBERTa-large, XLM- RoBERTa, and four different ALBERT models (large-v1, large-v2, xxlarge-v1, and xxlarge-v2). In their ofï¬cial submission, they used an ensemble combining different ALBERT models. They did not use the labels of the SOLID dataset, but found the tweets it contained nevertheless useful for unsupervised ï¬ne-tuning (i.e., using the MLM objective) of the pre-trained Transformers.
# 6 Arabic Track
A total of 108 teams registered to participate in the Arabic track, and ultimately 53 teams entered the competition with at least one valid submission. Among them, ten teams participated in the Arabic track only, while the rest participated in other languages in addition to Arabic. This was the second shared task for Arabic after the one at the 4th workshop on Open-Source Arabic Corpora and Processing Tools (Mubarak et al., 2020a), which had different settings and less participating teams.
Pre-processing and normalization Most teams performed some kind of pre-processing or text nor- malization, e.g., Hamza shapes, Alif Maqsoura, Taa Marbouta, diacritics, non-Arabic characters, etc., and only one team replaced emojis with their textual counter-parts.
# 6.1 Results
Table 8 shows the teams and the F1 scores they achieved for the Arabic subtask A. The majority class baseline had an F1 score of 0.4441, and several teams achieved results that doubled that baseline score. The best-performing team was ALAMIHamza with an F1 score of 0.9017. The second-best team, ALT, was almost tied with the winner, with an F1 score of 0.9016. The Galileo team was third with an F1 score of 0.8989. A summary of the approaches taken by the top-performing teams can be found in Appendix A; here we brieï¬y describe the winning system:
# Team Score # Team Score # Team Score 0.8119 1 0.8021 2 Galileo 0.7923 3 NLPDove 0.7766 4 FBK-DH 0.7750 5 KS@LTH 0.7741 6 JCT 0.7723 7 ANDES 0.7685 8 TysonYU 0.7685 FERMI 8 10 NLP Passau 0.7673 11 GruPaTo 0.7620 12 KEIS@JUST 0.7612 0.7596 13 will go LT@Helsinki 14 Rouges 14 Smatgrisene 16 machouz 17 18 Ferryman 19 MindCoders 20 ARA 21 22 KUISAIL 23 JAK 24 LIIR 25 MeisterMorxrc 26 IU-UM@LING INGEOTEC problemConquero 0.7587 0.7587 0.7561 0.7553 0.7525 0.7380 0.7267 0.7237 0.7231 0.7086 0.7019 0.6998 0.6974 27 TeamKGP 0.6973 28 Stormbreaker 0.6842 29 TAC 0.6819 30 Sonal 0.6711 31 RGCL 0.6556 32 PRHLT-UPV 0.6369 33 IUST 0.6226 34 SRIB2020 0.6127 0.5736 IR3218-UI 35 36 SSN NLP MLRG 0.5678 0.5587 37 Team Oulu 0.4913 38 0.4668 IJS Majority Baseline
Table 9: Results for Danish subtask A, ordered by macro-averaged F1 in descending order.
ALAMIHamza(A:1) The winning team achieved the highest F1-score using BERT to encode Arabic tweets, followed by a sigmoid classiï¬er. They further performed translation of the meaning of emojis.
# 7 Danish Track
A total of 72 teams registered to participate in the Danish track, and 39 of them actually made ofï¬cial submissions on the test dataset. This is the ï¬rst shared task on offensive language identiï¬cation to include Danish, and the dataset provided to the OffensEval-2020 participants is an extended version of the one from (Sigurbergsson and Derczynski, 2020).
Pre-processing and normalization Many teams used the pre-processing included in the relevant em- bedding model, e.g., BPE (Heinzerling and Strube, 2018) and WordPiece. Other pre-processing tech- niques included emoji normalization, spelling correction, sentiment tagging, lexical and regex-based term and phrase ï¬agging, and hashtag segmentation.
# 7.1 Results
The results are shown in Table 9. We can see that all teams managed to outperform the majority class baseline. Moreover, all but one team improved over a FastText baseline (F1 = 0.5148), and most teams achieved an F1 score of 0.7 or higher. Interestingly, one of the top-ranked teams, JCT, was entirely non-neural.
LT@Helsinki (A:1) The winning team LT@Helsinki used NordicBERT for representation, as provided by BotXO.10 NordicBERT is customized to Danish, and avoids some of the pre-processing noise and ambiguity introduced by other popular BERT implementations. The team further reduced orthographic lengthening to maximum two repeated characters, converted emojis to sentiment scores, and used co- occurrences of hashtags and references to usernames. They tuned the hyper-parameters of their model using 10-fold cross validation.
# 8 Greek Track
A total of 71 teams registered to participate in the Greek track, and ultimately 37 of them made an ofï¬cial submission on the test dataset. This is the ï¬rst shared task on offensive language identiï¬cation to include Greek, and the dataset provided to the OffensEval-2020 participants is an extended version of the one from (Pitenis et al., 2020).
10See http://github.com/botxo/nordic_bert
# Team Score # Team Score # Team Score 1 NLPDove 2 Galileo 3 KS@LTH 4 KUISAIL IJS 5 SU-NLP 6 LT@Helsinki 7 FERMI 8 Ferryman 9 10 INGEOTEC 11 will go 12 ANDES 13 LIIR 0.8522 0.8507 0.8481 0.8432 0.8329 0.8317 0.8258 0.8231 0.8222 0.8197 0.8176 0.8153 0.8148 14 CoLi @ UdS 15 TAC 16 17 MindCoders 18 RGCL 19 20 Rouges 21 TysonYU 22 Sonal 23 JAK 24 ARA 25 machouz 26 PRHLT-UPV IU-UM@LING problemConquero 0.8147 0.8141 0.8140 0.8137 0.8135 0.8115 0.8030 0.8022 0.8017 0.7956 0.7828 0.7820 0.7763 0.7756 27 0.7730 28 KEIS@JUST 0.7700 29 FBK-DH 0.7615 30 Team Oulu 0.7568 JCT 31 0.7181 32 IRlab@IITV 33 TeamKGP 0.7041 34 SSN NLP MLRG 0.6779 0.6036 fatemah 35 0.4265 36 CyberTronics 0.4202 0.2688 IUST Majority Baseline 37 Stormbreaker
Table 10: Results for Greek subtask A, ordered by macro-averaged F1 in descending order.
Pre-processing and normalization The participants experimented with various pre-processing and text normalization techniques, similarly to what was done for the other languages above. One team further reported replacement of emojis with their textual equivalent.
# 8.1 Results
The evaluation results are shown in Table 10. The top team, NLPDove, achieved an F1 score of 0.852, with Galileo coming close at the second place with an F1 score of 0.851. The KS@LTH team was ranked third with an F1 score of 0.848. It is no surprise that the majority of the high-ranking submissions and participants used large-scale pre-trained Transformers, with BERT being the most prominent among them, along with wordwvec-style non-contextualized pre-trained word embeddings.
NLPDove (A:1) The winning team NLPDove used pre-trained word embeddings from mBERT, which they ï¬ne-tuned using the training data. A domain-speciï¬c vocabulary was generated by running the WordPiece algorithm (Schuster and Nakajima, 2012) and using embeddings for extended vocabulary to pre-train and ï¬ne-tune the model.
# 9 Turkish Track
A total of 86 teams registered to participate in the Turkish track, and ultimately 46 of them made an ofï¬- cial submission on the test dataset. All teams except for one participated in at least one other track. This is the ï¬rst shared task on offensive language identiï¬cation to include Turkish, and the dataset provided to the OffensEval-2020 participants is an extended version of the one from (C¸ ¨oltekin, 2020).
# 9.1 Results
The results are shown in Table 11. We can see that team Galileo achieved the highest macro-averaged F1 score of 0.8258, followed by SU-NLP and KUI-SAIL with F1 scores of 0.8167 and 0.8141, respectively. Note that the latter two teams are from Turkey, and they used some language-speciï¬c resources and tuning. Most results were in the interval 0.7â0.8, and almost all teams managed to outperform the majority class baseline, which had an F1 score of 0.4435.
Galileo (A:1) The best team in the Turkish subtask A was Galileo, which achieved top results in several other tracks. Unlike the systems ranked second and third, Galileoâs system is language-agnostic, and it used data for all ï¬ve languages in a multi-lingual training setup.
# 10 Conclusion and Future Work
We presented the results of OffensEval-2020, which featured datasets in ï¬ve languages: Arabic, Danish, English, Greek, and Turkish. For English, we had three subtasks, representing the three levels of the
# Team 0.8258 1 Galileo 0.8167 2 SU-NLP 0.8141 3 KUISAIL 0.8101 4 KS@LTH 0.7967 5 NLPDove 0.7933 6 TysonYU 0.7859 7 RGCL 0.7815 8 Rouges 0.7790 9 GruPaTo 0.7789 10 MindCoders 0.7758 11 INGEOTEC 0.7737 12 Ferryman 0.7737 13 ANDES 0.7735 14 I2C IU-UM@LING 0.7729 15 0.7724 16 IJS 0.7720 17 LIIR Score # Team 35 PRHLT-UPV 36 SRIB2020 37 Team Oulu 38 ARA 39 DH-FBK 40 f shahaby 41 CyberTronics 42 IASBS 43 JCT 44 machouz 45 Score # Team 18 LT@Helsinki 19 NLP Passau 20 will go 21 FERMI 22 23 24 TAC 25 IUST 26 alaeddin 27 fatemah 28 CoLi @ UdS 29 Sonal 30 MeisterMorxrc 31 32 KEIS@JUST 33 TeamKGP 34 TOBB ETU 0.7719 0.7676 0.7653 0.7578 0.7553 0.7496 0.7477 0.7476 0.7473 0.7469 0.7461 0.7422 0.7398 0.7334 0.7330 0.7301 0.7154 problemConquero pin cod jooyeon Lee Majority Baseline 46 Stormbreaker JAK Score 0.7127 0.6993 0.6868 0.6381 0.6268 0.5730 0.5420 0.5362 0.5099 0.4518 0.4435 0.4435 0.3109
Table 11: Results for Turkish subtask A, ordered by macro-averaged F1 in descending order.
OLID hierarchy. For the other four languages, we had a subtask for the top-level of the OLID hierar- chy only. A total of 528 teams signed up to participate in OffensEval-2020, and 145 of them actually submitted results across all languages and subtasks.
Out of the 145 participating teams, 96 teams participated in one language only, 13 teams participated in two languages, 11 in three languages, 19 in four languages, and 6 teams submitted systems for all ï¬ve languages. The ofï¬cial submissions per language ranged from 37 (for Greek) to 81 (for English). Finally, 70 of the 145 participating teams submitted system description papers, which is an all-time record.
The wide participation in the task allowed us to compare a number of approaches across different languages and datasets. Similarly to OffensEval-2019, we observed that the best systems for all lan- guages and subtasks used large-scale BERT-style pre-trained Transformers such as BERT, RoBERTa, and mBERT. Unlike 2019, however, the multi-lingual nature of this yearâs data enabled cross-language approaches, which proved quite effective and were used by some of the top-ranked systems.
In future work, we plan to extend the task in several ways. First, we want to offer subtasks B and C for all ï¬ve languages from OffensEval-2020. We further plan to add some additional languages, especially under-represented ones. Other interesting aspects to explore are code-mixing, e.g., mixing Arabic script and Latin alphabet in the same Arabic message, and code-switching, e.g., mixing Arabic and English words and phrases in the same message. Last but not least, we plan to cover a wider variety of social media platforms.
# Acknowledgements
This research was partly supported by the IT University of Copenhagenâs Abusive Language Detection project. It is also supported by the Tanbih project at the Qatar Computing Research Institute, HBKU, which aims to limit the effect of âfake news,â propaganda and media bias by making users aware of what they are reading.
# References
Hwijeen Ahn, Jimin Sun, Chan Young Park, and Jungyun Seo. 2020. NLPDove at SemEval-2020 Task 12: Im- proving offensive language detection with cross-lingual transfer. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In Proceedings of the International Conference on Computational Linguistics (COLING).
Hamza Alami, Said Ouatik El Alaoui, Abdessamad Benlahbib, and Noureddine En-nahnahi. 2020. LISAC FSDM- USMBA Team at SemEval 2020 Task 12: Overcoming AraBERTâs pretrain-ï¬netune discrepancy for Arabic offensive language identiï¬cation. In Proceedings of the International Workshop on Semantic Evaluation (Se- mEval).
Abdullah I. Alharbi and Mark Lee. 2020. BhamNLP at SemEval-2020 Task 12: An ensemble of different word In embeddings and emotion transfer learning for Arabic offensive language identiï¬cation in social media. Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Pedro Alonso, Rajkumar Saini, and Gy¨orgy Kovacs. 2020. TheNorth at SemEval-2020 Task 12: Hate speech detection using RoBERTa. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Talha Anwar and Omer Baig. 2020. TAC at SemEval-2020 Task 12: Ensembling approach for multilingual offensive language identiï¬cation in social media. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Aym´e Arango, Juan Manuel P´erez, and Franco Luque. 2020. ANDES at SemEval-2020 Task 12: A single BERT multilingual model for offensive language detection. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Pınar Arslan. 2020. pin cod at SemEval-2020 Task 12: Injecting lexicons into bidirectional long short-term memory networks to detect Turkish offensive tweets. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Arup Baruah, Kaushik Das, Ferdous Barbhuiya, and Kuntal Dey. 2020. IIITG-ADBU at SemEval-2020 Task 12: Comparison of BERT and BiLSTM in detecting offensive language. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Valerio Basile, Cristina Bosco, Elisabetta Fersini, Debora Nozza, Viviana Patti, Francisco Manuel Rangel Pardo, Paolo Rosso, and Manuela Sanguinetti. 2019. SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the International Workshop on Semantic Evalua- tion (SemEval).
Elisa Bassignana, Valerio Basile, and Viviana Patti. 2018. Hurtlex: A multilingual lexicon of words to hurt. In Proceedings of the Fifth Italian Conference on Computational Linguistics (CLiC-it).
Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587â 604.
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. OâReilly.
Marcos Boriola and Gustavo Paetzold. 2020. UTFPR at SemEval-2020 Task 12: Identifying offensive tweets with lightweight ensembles. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Camilla Casula, Stefano Menini, Alessio Palmero Aprosio, and Sara Tonelli. 2020. DH-FBK at SemEval-2020 In Proceedings of the Task 12: Using multi-channel BERT for multilingual offensive language detection. International Workshop on Semantic Evaluation (SemEval).
C¸ aËgrı C¸ ¨oltekin. 2020. A corpus of Turkish offensive language on social media. In Proceedings of the 12th International Conference on Language Resources and Evaluation (LREC).
Kathryn Chapman, Johannes Bernhard, and Dietrich Klakow. 2020. CoLi @ UdS at SemEval-2020 Task 12: Of- fensive tweet detection with ensembling. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Po-Chun Chen, Hen-Hsen Huang, and Hsin-Hsi Chen. 2020. NTU NLP at SemEval-2020 Task 12: Identifying offensive tweets using hierarchical multi-task learning approach. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Kyunghyun Cho, Bart van Merri¨enboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoderâdecoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Davide Colla, Tommaso Caselli, Valerio Basile, Jelena Mitrovi´c, and Michael Granitzer. 2020. GruPaTo at In SemEval-2020 Task 12: Retraining mBERT on social media and ï¬ne-tuned offensive language models. Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross- In Proceedings of the 58th Annual Meeting of the Association for lingual representation learning at scale. Computational Linguistics (ACL).
Corinna Cortes and Vladimir Vapnik. 1995. Support-vector networks. Machine learning, 20(3):273â297.
Tanvi Dadu and Kartikey Pant. 2020. Team Rouges at SemEval-2020 Task 12: Cross-lingual inductive transfer to detect offensive language. In Proceedings of the International Workshop on Semantic Evaluation (SemEval). Wenliang Dai, Tiezheng Yu, Zihan Liu, and Pascale Fung. 2020. Kungfupanda at SemEval-2020 Task 12: BERT- based multi-task learning for offensive language detection. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Kaushik Amar Das, Arup Baruah, Ferdous Ahmed Barbhuiya, and Kuntal Dey. 2020. KAFK at SemEval-2020 Task 12: Checkpoint ensemble of transformers for hate speech classiï¬cation. In Proceedings of the Interna- tional Workshop on Semantic Evaluation (SemEval).
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM â17.
Gretel Liz De la PeËna Sarrac´en and Paolo Rosso. 2020. PRHLT-UPV at SemEval-2020 Task 12: BERT for mul- tilingual offensive language detection. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidi- rectional transformers for language understanding. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT). Xiangjue Dong and Jinho D. Choi. 2020. XD at SemEval-2020 Task 12: Ensemble approach to offensive language identiï¬cation in social media using transformer encoders. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Ibrahim Abu Farha and Walid Magdy. 2019. Mazajak: An online Arabic sentiment analyser. In Proceedings of the Fourth Arabic Natural Language Processing Workshop (WANLP).
Jared Fromknecht and Alexis Palmer. 2020. UNT Linguistics at OffensEval 2020: Linear SVC with pre-trained In Proceedings of the International word embeddings as document vectors and targeted linguistic features. Workshop on Semantic Evaluation (SemEval).
Kunihiko Fukushima. 1980. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological cybernetics, 36(4):193â202.
Avishek Garain. 2020. Garain at SemEval-2020 Task 12: Sequence based deep learning for categorizing offensive language in social media. In Proceedings of the International Workshop on Semantic Evaluation (SemEval). Erfan Ghadery and Marie-Francine Moens. 2020. LIIR at SemEval-2020 Task 12: A cross-lingual augmentation approach for multilingual offensive language identiï¬cation. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
IITP-AINLPML at SemEval-2020 Task 12: Offensive tweet identiï¬cation and target categorization in a multitask environment. In Proceedings of the Inter- national Workshop on Semantic Evaluation (SemEval).
Kyle Gorman and Steven Bedrick. 2019. We need to talk about standard splits. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL).
Ehab Hamdy, Jelena Mitrovi´c, and Michael Granitzer. 2020. nlpUP at SemEval-2020 Task 12: A blazing fast system for offensive language detection. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Keisuke Hanahata and Masaki Aono. 2020. KDELAB at SemEval-2020 Task 12: A system for estimating aggres- sion of tweets using two layers of BERT features. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Sabit Hassan, Younes Samih, Hamdy Mubarak, and Ahmed Abdelali. 2020. ALT at SemEval-2020 Task 12: Ara- bic and English offensive language identiï¬cation in social media. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Benjamin Heinzerling and Michael Strube. 2018. BPEmb: Tokenization-free pre-trained subword embeddings in 275 languages. In Proceedings of the 11th International Conference on Language Resources and Evaluation. Peter Juel Henrichsen and Marianne Rathje. 2020. Smatgrisene at SemEval-2020 Task 12: Offense detection by AI â with a pinch of real I. In Proceedings of the International Workshop on Semantic Evaluation (SemEval). Mahen Herath, Thushari Atapattu, Dung Anh Hoang, Christoph Treude, and Katrina Falkner. 2020. Adelaide- CyC at SemEval-2020 Task 12: Ensemble of classiï¬ers for offensive language detection in social media. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735â1780. Fatemah Husain, Jooyeon Lee, Samuel Henry, and Ozlem Uzuner. 2020. SalamNET at SemEval-2020 Task 12: Deep learning approach for Arabic offensive language detection. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Omar Hussein, Hachem Sfar, Jelena Mitrovi´c, and Michael Granitzer. 2020. NLP Passau at SemEval-2020 Task 12: Multilingual neural network for offensive language detection in English, Danish and Turkish. In Proceed- ings of the International Workshop on Semantic Evaluation (SemEval).
Clayton J Hutto and Eric Gilbert. 2014. VADER: A parsimonious rule-based model for sentiment analysis of In Proceedings of the 8th International AAAI Conference on Weblogs and Social Media social media text. (ICWSM).
Mai Ibrahim, Marwan Torki, and Nagwa El-Makky. 2020. AlexU-BackTranslation-TL at SemEval-2020 Task 12: Improving offensive language detection using data augmentation and transfer learning. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Md Saroar Jahan and Mourad Oussalah. 2020. Team Oulu at SemEval-2020 Task 12: Multilingual identiï¬cation of offensive language, type and target of Twitter post using translated datasets. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Piotr Janiszewski, Mateusz Skiba, and Urszula Wali´nska. 2020. PUM at SemEval-2020 Task 12: Aggregation of Transformer-based modelsâ features for offensive language recognition. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efï¬cient text clas- siï¬cation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL).
Li Junyi, Zhou Xiaobing, and Zhang Zichen. 2020. Lee at SemEval-2020 Task 12: A BERT model based on the maximum self-ensemble strategy for identifying offensive. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
A. Kalaivani and Thenmozhi D. 2020. SSN NLP MLRG at SemEval-2020 Task 12: Offensive language identiï¬- cation in English, Danish, Greek using BERT and machine learning approach. In Proceedings of the Interna- tional Workshop on Semantic Evaluation (SemEval).
Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2018a. Benchmarking aggression identiï¬- cation in social media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbulling (TRAC). Ritesh Kumar, Aishwarya N. Reganti, Akshit Bhatia, and Tushar Maheshwari. 2018b. Aggression-annotated Cor- pus of Hindi-English Code-mixed Data. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2020. Evaluating Aggression Identiï¬cation in Social Media. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbulling (TRAC), Santa Fe, USA.
Sonal Kumari. 2020. Sonal.kumari at SemEval-2020 Task 12: Social media multilingual offensive text iden- tiï¬cation and categorization using neural network models. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
IR3218-UI at SemEval-2020 Task 12: Emoji effects on offensive language identiï¬cation. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. AL- BERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. Karishma Laud, Jagriti Singh, Randeep Kumar Sahu, and Ashutosh Modi. 2020. problemConquero at SemEval- 2020 Task 12: Transformer and soft label-based approaches. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Wah Meng Lim and Harish Tayyar Madabushi. 2020. UoB at SemEval-2020 Task 6: Boosting BERT with corpus level information. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Thomas Mandl, Sandip Modha, Prasenjit Majumder, Daksh Patel, Mohana Dave, Chintak Mandlia, and Aditya Patel. 2019. Overview of the HASOC Track at FIRE 2019: Hate speech and offensive content identiï¬cation in Indo-European languages. In Proceedings of the 11th Forum for Information Retrieval Evaluation (FIRE). Abir Messaoudi, Hatem Haddad, and Moez Ben Haj Hmida. 2020. iCompass at SemEval-2020 Task 12: From a syntax-ignorant n-gram embeddings model to a deep bidirectional language model. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of Advances in Neural Information Processing Systems (NIPS).
Sabino Miranda-Jim´enez, Eric S. Tellez, Mario Graff, and Daniela Moctezuma. 2020. INGEOTEC at SemEval- 2020 Task 12: Multilingual classiï¬cation of offensive text. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Alejandro Mosquera. 2020. amsqr at SemEval-2020 Task 12: Offensive language detection using neural networks and anti-adversarial features. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Hamdy Mubarak, Kareem Darwish, Walid Magdy, Tamer Elsayed, and Hend Al-Khalifa. 2020a. Overview of OSACT4 Arabic offensive language detection shared task. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection.
Hamdy Mubarak, Ammar Rashed, Kareem Darwish, Younes Samih, and Ahmed Abdelali. 2020b. Arabic offen- sive language on Twitter: Analysis and experiments. arXiv preprint arXiv:2004.02192.
Hamada A. Nayel. 2020. NAYEL at SemEval-2020 Task 12: TF/IDF-based approach for automatic offensive language detection in Arabic tweets. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Zoher Orabe, Bushr Haddad, Nada Ghneim, and Anas Al-Abood. 2020. DoTheMath at SemEval-2020 Task 12: Deep neural networks with self attention for Arabic offensive language detection. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Yasser Otiefy, Ahmed Abdelmalek, and Islam El Hosary. 2020. WOLI at SemEval-2020 Task 12: Arabic offensive language identiï¬cation on different Twitter datasets. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Xiaozhi Ou, Xiaobing Zhou, and Xuejie Zhang. 2020. YNU oxz at SemEval-2020 Task 12: Bidirectional GRU with capsule for identifying multilingual offensive language. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Marc P`amies, Emily ¨Ohman, Kaisla Kajava, and J¨org Tiedemann. 2020. LT@Helsinki at SemEval-2020 Task 12: Multilingual or language-speciï¬c BERT? In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Apurva Parikh, Abhimanyu Singh Bisht, and Prasenjit Majumder. 2020. IRLab DAIICT at SemEval-2020 Task 12: Machine learning and deep learning methods for offensive language identiï¬cation. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Ted Pedersen. 2020. Duluth at SemEval-2020 Task 12: Offensive tweet identiï¬cation in English with logistic regression. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word repre- sentation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. In Proceedings of the Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technology (NAACL-HLT).
Bao-Tran Pham-Hong and Setu Chokshi. 2020. PGSG at SemEval-2020 Task 12: BERT-LSTM with tweetsâ pretrained model and noisy student training method. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Zeses Pitenis, Marcos Zampieri, and Tharindu Ranasinghe. 2020. Offensive language identiï¬cation in Greek. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC).
Flor Miriam Plaza-del Arco, M. Dolores Molina-Gonz´alez, L. Alfonso UreËna-L´opez, and M. Teresa Mart´ın- Valdivia. 2020. SINAI at SemEval-2020 Task 12: Offensive language identiï¬cation exploring transfer learning models. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8).
Tharindu Ranasinghe and Hansi Hettiarachchi. 2020. BRUMS at SemEval-2020 Task 12: Transformer based multilingual offensive language identiï¬cation in social media. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Manikandan Ravikiran, Amin Ekant Muljibhai, Toshinori Miyoshi, Hiroaki Ozaki, Yuta Koreeda, and Sakata Masayuki. 2020. Hitachi at SemEval-2020 Task 12: Offensive language identiï¬cation with noisy labels using statistical sampling and post-processing. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Sara Rosenthal, Pepa Atanasova, Georgi Karadzhov, Marcos Zampieri, and Preslav Nakov. 2020. A large-scale semi-supervised dataset for offensive language identiï¬cation. arXiv preprint arXiv:2004.14454.
David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1986. Learning representations by back- propagating errors. Nature, 323(6088):533â536.
Ali Safaya, Moutasem Abdullatif, and Deniz Yuret. 2020. KUISAIL at SemEval-2020 Task 12: BERT-CNN for offensive speech identiï¬cation in social media. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
IRlab@IITV at SemEval-2020 Task 12: multilingual offensive language identiï¬cation in social media using SVM. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Ranjan Satapathy, Yang Li, Sandro Cavallari, and Erik Cambria. 2019. Seq2seq deep learning models for micro- text normalization. In Proceedings of the International Joint Conference on Neural Networks (IJCNN).
Paul Sayanta, Saha Sriparna, and Hasanuzzaman Mohammed. 2020. CyberTronics at SemEval-2020 Task 12: Multilingual offensive language identiï¬cation over social media. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Mike Schuster and Kaisuke Nakajima. 2012. Japanese and Korean voice search. In Proceedings of the Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP).
Gudbjartur Ingi Sigurbergsson and Leon Derczynski. 2020. Offensive language and hate speech detection for Danish. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC).
Abhishek Singh and Surya Pratap Singh Parmar. 2020. Voice@SRIB at SemEval-2020 Task [9,12]: Sentiment and offensiveness detection in social media. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Rajalakshmi Sivanaiah, Angel Deborah S, S Milton Rajendram, and Mirnalinee T T. 2020. TECHSSN at In Proceedings of the In- SemEval-2020 Task 12: Offensive language detection using BERT embeddings. ternational Workshop on Semantic Evaluation (SemEval).
Kasper Socha. 2020. KS@LTH at SemEval-2020 Task 12: Fine-tuning multi- and monolingual transformer models for offensive language detection. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Sajad Sotudeh, Tong Xiang, Hao-Ren Yao, Sean MacAvaney, Eugene Yang, Nazli Goharian, and Ophir Frieder. 2020. GUIR at SemEval-2020 Task 12: Domain-tuned contextualized models for offensive language detection. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Leon Strømberg-Derczynski, Rebekah Baglini, Morten H Christiansen, Manuel R Ciosici, Jacob Aarup Dalsgaard, Riccardo Fusaroli, Peter Juel Henrichsen, Rasmus Hvingelby, Andreas Kirkedal, Alex Speed Kjeldsen, et al. 2020. The Danish Gigaword Project. arXiv preprint arXiv:2005.03521.
Julia Maria StruÃ, Melanie Siegel, Josep Ruppenhofer, Michael Wiegand, and Manfred Klenner. 2019. Overview of GermEval task 2, 2019 shared task on the identiï¬cation of offensive language. In Proceedings of KONVENS.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2020. Ernie 2.0: A continual pre-training framework for language understanding. In Proceedings of the 34th AAAI Conference on Artiï¬cial Intelligence (AAAI).
Shardul Suryawanshi, Mihael Arcan, and Paul Buitelaar. 2020. NUIG at SemEval-2020 Task 12: Pseudo labelling In Proceedings of the International Workshop on Semantic Evaluation for offensive content classiï¬cation. (SemEval).
Mircea-Adrian Tanase, Dumitru-Clementin Cercel, and Costin-Gabriel Chiru. 2020. UPB at SemEval-2020 Task 12: Multilingual offensive language detection on social media by ï¬ne-tuning a variety of BERT-based models. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Saja Khaled Tawalbeh, Mahmoud Hammad, and Mohammad AL-Smadi. 2020. KEIS@JUST at SemEval-2020 Task 12: Identifying multilingual offensive tweets using weighted ensemble and ï¬ne-tuned BERT. In Proceed- ings of the International Workshop on Semantic Evaluation (SemEval).
Peter D Turney and Michael L Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems (TOIS), 21(4):315â346.
Moshe Uzan and HaCohen-Kerner Yaakov. 2020. JCT at SemEval-2020 Task 12: Offensive language detection in tweets using preprocessing methods, character and word n-grams. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems (NIPS).
Bertie Vidgen and Leon Derczynski. 2020. Directions in abusive language training data: Garbage in, garbage out. arXiv preprint arXiv:2004.01670.
Susan Wang and Zita Marinho. 2020. Nova-Wang at SemEval-2020 Task 12: OffensEmblert: an ensemble of offensive language classiï¬ers. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Shuohuan Wang, Jiaxiang Liu, Xuan Ouyang, and Yu Sun. 2020. Galileo at SemEval-2020 Task 12: Multi- lingual learning for offensive language identiï¬cation using pre-trained language models. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Zeerak Waseem, Thomas Davidson, Dana Warmsley, and Ingmar Weber. 2017. Understanding abuse: A typology of abusive language detection subtasks. In Proceedings of the First Workshop on Abusive Language Online.
Chen Weilong, Wang Peng, Li Jipeng, Zheng Yuanshuai, Wang Yan, and Zhang Yanru. 2020. Ferryman at SemEval-2020 Task 12: BERT-based model with advanced improvement methods for multilingual offensive language identiï¬cation. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Gregor Wiedemann, Seid Yimam, and Chris Biemann. 2020. UHH-LT at SemEval-2020 Task 12: Fine-tuning of pre-trained transformer networks for offensive language detection. In Proceedings of the International Work- shop on Semantic Evaluation (SemEval).
Michael Wiegand, Melanie Siegel, and Josef Ruppenhofer. 2018. Overview of the GermEval 2018 shared task on the identiï¬cation of offensive language. In Proceedings of the GermEval 2018 Workshop (GermEval).
Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: Personal attacks seen at scale. In Proceed- ings of the 26th International Conference on World Wide Web (WWW).
Yinnan Yao, Nan Su, and Kun Ma. 2020. UJNLP at SemEval-2020 Task 12: Detecting offensive language using bidirectional transformers. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019a. Predicting the type and target of offensive posts in social media. In Proceedings of the Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technology (NAACL-HLT).
Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019b. SemEval-2019 Task 6: Identifying and categorizing offensive language in social media (OffensEval). In Pro- ceedings of the 13th International Workshop on Semantic Evaluation (SemEval).
Victoria Pach´on ´Alvarez, Jacinto Mata V´azquez, Jos´e Manuel L´opez Betanzos, and Jos´e Luis Arjona Fern´andez. 2020. I2C in SemEval2020 Task 12: Simple but effective approaches to offensive speech detection in Twitter. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
Anıl ¨Ozdemir and Reyyan Yeniterzi. 2020. SU-NLP at SemEval-2020 Task 12: Offensive language identiï¬cation
in Turkish tweets. In Proceedings of the International Workshop on Semantic Evaluation (SemEval).
# A Best-Performing Teams
Below we present a short overview of the top-3 systems for all subtasks and for all languages:
Galileo (EN B:1, EN C:1, TR A:1; DK A:2, GR A:2; AR A:3, EN A:3) This team was ranked 3rd, 1st, and 1st on the English subtasks A, B, and C, respectively; it was also ranked 1st for Turkish, 2nd for Greek and 3rd for Arabic and Danish. This is the only team ranked among the top-3 across all languages. For subtask A (all languages), they used multi-lingual pre-trained Transformers based on XLM-RoBERTa, followed by multi-lingual ï¬ne-tuning using the OffensEval data. Ultimately, they submitted an ensemble that combined XLM-RoBERTa-base and XLM-RoBERTa-large. For the English subtasks B and C, they used knowledge distillation in a teacher-student framework, using Transformers such as ALBERT and ERNIE 2.0 (Sun et al., 2020) as teacher models.
UHH-LT (EN A:1) This team was ranked 1st on the English subtask A. They ï¬ne-tuned different Trans- former models on the OLID training data, and then combined them into an ensemble. They experimented with BERT-base and BERT-large (uncased), RoBERTa-base and RoBERTa-large, XLM-RoBERTa, and four different ALBERT models (large-v1, large-v2, xxlarge-v1, and xxlarge-v2). In their ofï¬cial sub- mission, they used an ensemble combining different ALBERT models. They did not use the labels of the SOLID dataset, but found the tweets it contained nevertheless useful for unsupervised ï¬ne-tuning (i.e., using the MLM objective) of the pre-trained Transformers.
LT@Helsinki (DK A:1; EN C:2) This team was ranked 1st for Danish and 2nd for English subtask C. For Danish, they used NordicBERT, which is customized to Danish, and avoids some of the pre-processing noise and ambiguity introduced by other popular BERT implementations. The team further reduced orthographic lengthening to maximum two repeated characters, converted emojis to sentiment scores, and used co-occurrences of hashtags and references to usernames. They tuned the hyper-parameters of their model using 10-fold cross validation. For English subtask C, they used a very simple approach: over-sample the training data to overcome the class imbalance, and then ï¬ne-tune BERT-base-uncased.
NLPDove (GR A:1; DK A:3) This team was ranked 1st for Greek and 3rd for Danish. This team used extensive preprocessing and two data augmentation strategies: using additional semi-supervised labels from SOLID with different thresholds, and cross-lingual transfer with data selection. They further pro- posed and used a new metric, Translation Embedding Distance, in order to measure the transferability of instances for cross-lingual data selection. Moreover, they used data from different languages to ï¬ne- tune an mBERT model. Ultimately, they used a majority vote ensemble of several mBERT models, with minor variations in the parameters.
ALAMIHamza(AR A:1) This team was ranked 1st for Arabic. They used BERT to encode Arabic tweets, followed by a sigmoid classiï¬er. They further performed translation of the meaning of emojis.
PGSG (EN B:2) The team was ranked 2nd on the English subtask B. They ï¬rst ï¬ne-tuned the BERT- Large, Uncased (Whole Word Masking) checkpoint using the tweets from SOLID, but ignoring their labels. For this, they optimized for the MLM objective only, without the Next Sentence Prediction loss in BERT. Then, they built a BERT-LSTM model using this ï¬ne-tuned BERT, and adding LSTM layers on top of it, together with the [CLS] token. Finally, they used this architecture to train a Noisy Student model using the SOLID data.
ALT (AR A:2) The team was ranked 2nd for Arabic. They used an ensemble of SVM, CNN-BiLSTM and Multilingual BERT. The SVMs used character n-grams, word n-grams, word embeddings as fea- tures, while the CNN-BiLSTM learned character embeddings and further used pre-trained word embed- dings as input.
SU-NLP (TR A:2) The team was ranked 2nd for Turkish. They used an ensemble of three different models: CNN-LSTM, BiLSTM-Attention, and BERT. They further used word embeddings, pre-trained on tweets, and BERTurk, BERT model for Turkish.
Rouges (EN A:3) The team was ranked 3rd for the English subtask A. They used XLM-RoBERTa ï¬ne- tuned sequentially on all languages in a particular order: English, then Turkish, then Greek, then Arabic, then Danish.
NTU NLP (EN B:3) This team was ranked 3rd on the English subtask B. They proposed a hierarchical multi-task learning approach that solves subtasks A, B, and C simultaneously, following the hierarchical structure of the annotation schema of the OLID dataset. Their architecture has three layers. The input of the ï¬rst layer is the output of BERT, and its output (D1-OUT) is directly connected to the output layer for subtask A. The second layerâs input is the BERT output concatenated with D1-OUT, and its output (D2-OUT) is directly connected to the output layer for subtask B. The third layerâs input is the BERT output concatenated with D2-OUT, and its output is directly connected to the output layer for subtask C.
PRHLT-UPV (EN C:3) The team was ranked 3rd on the English subtask C. They used a combination of BERT and hand-crafted features, which were concatenated to the [CLS] representation from BERT. The features include the length of the tweets, the number of misspelled words, and the use of punctuation marks, emoticons, and noun phrases.
KS@LTH (GR: A:3) This team was ranked 3rd for Greek. They experimented with monolingual and cross-lingual models, BERT and XLM-RoBERTa model, respectively, and they found BERT to perform slightly better.
KUISAIL (TR: A:3) This team was ranked 3rd for Turkish. They combined BERTurk with a CNN, in a BERT-CNN model.
# B Participants
LISAC FSDM-USMBA (Alami et al., 2020) AdelaideCyC (Herath et al., 2020) AlexU-BackTranslation-TL (Ibrahim et al., 2020) (P`amies et al., 2020) LT@Helsinki (Nayel, 2020) NAYEL (Hassan et al., 2020) ALT (Mosquera, 2020) amsqr (Hussein et al., 2020) NLP Passau (Arango et al., 2020) ANDES (Ahn et al., 2020) NLPDove (Alharbi and Lee, 2020) BhamNLP (Hamdy et al., 2020) nlpUP (Uzan and Yaakov, 2020) JCT (Wang and Marinho, 2020) Nova-Wang (Chen et al., 2020) (Ranasinghe and Hettiarachchi, 2020) NTU NLP BRUMS (Chapman et al., 2020) CoLi @ UdS (Suryawanshi et al., 2020) (Sayanta et al., 2020) CyberTronics (Jahan and Oussalah, 2020) (Orabe et al., 2020) DoTheMath (Pham-Hong and Chokshi, 2020) (Pedersen, 2020) Duluth (Arslan, 2020) (De la PeËna Sarrac´en and Rosso, 2020) (Casula et al., 2020) FBK-DH (Weilong et al., 2020) Ferryman (Laud et al., 2020) (Janiszewski et al., 2020) (Wang et al., 2020) Galileo (Garain, 2020) Garain (Dadu and Pant, 2020) (Colla et al., 2020) GruPaTo (Husain et al., 2020) (Sotudeh et al., 2020) GUIR (Plaza-del Arco et al., 2020) (Ravikiran et al., 2020) Hitachi (Henrichsen and Rathje, 2020) ( ´Alvarez et al., 2020) (Kumari, 2020) I2C (Singh and Parmar, 2020) (Messaoudi et al., 2020) iCompass (Baruah et al., 2020) IIITG-ADBU (Kalaivani and D., 2020) ( ¨Ozdemir and Yeniterzi, 2020) (Ghosh et al., 2020) IITP-AINLPML (Miranda-Jim´enez et al., 2020) INGEOTEC (Anwar and Baig, 2020) (Kurniawan et al., 2020) IR3218-UI (Sivanaiah et al., 2020) (Alonso et al., 2020) (Saroj et al., 2020) IRlab@IITV IRLab DAIICT (Wiedemann et al., 2020) (Parikh et al., 2020) (Das et al., 2020) KAFK (Yao et al., 2020) (Hanahata and Aono, 2020) KDELAB (Fromknecht and Palmer, 2020) (Tawalbeh et al., 2020) KEIS@JUST (Lim and Madabushi, 2020) (Tanase et al., 2020) (Socha, 2020) KS@LTH (Safaya et al., 2020) KUISAIL (Boriola and Paetzold, 2020) (Otiefy et al., 2020) (Dai et al., 2020) Kungfupanda (Junyi et al., 2020) Lee (Dong and Choi, 2020) (Ghadery and Moens, 2020) LIIR (Ou et al., 2020)
Table 12: The teams that participated in OffensEval-2020 and submitted system description papers and the corresponding reference to their papers.
Team A-Arabic A-Danish A-Greek A-Turkish A-English B-English C-English à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à Ã
AdelaideCyC AlexU-BackTranslation-TL ALT alaeddin ALAMIHamza AMR-KELEG amsqr ANDES ARA asking28 Better Place BhamNLP Bodensee Bushr byteam Coffee Latte CoLi @ UdS COMA CyberTronics doxaAI Duluth erfan f shahaby fatemah FBK-DH FERMI Ferryman frankakorpel Galileo GruPaTo GUIR hamadanayel HateLab hhaddad Hitachi I2C iaf7 IASBS IIITG-ADBU IITP-AINLPML IJS INGEOTEC IR3218-UI IRlab@IITV IRLab DAIICT IS ITNLP IU-UM@LING IUST JAK janecek1 jbern JCT jlee24282 jooyeon Lee KAFK KarthikaS KDELAB KEIS@JUST
# Ã
# Ã
# Ã
# Ã Ã Ã
# Ã Ã Ã Ã
Table 13: Overview of team participation in the subtasks (part 1).
# Ã Ã
Team A-Arabic A-Danish A-Greek A-Turkish A-English B-English C-English à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à à Ã
klaralang KS@LTH KUISAIL kungfupanda kxkajava Lee Light LIIR LT@Helsinki lukez m20170548 machouz MeisterMorxrc MindCoders mircea.tanase NLP Passau NLPDove nlpUP NTU NLP NUIG OffensSzeged orabia Oulu PAI-NLP PALI PGSG pin cod PingANPAI PRHLT-UPV problemConquero PUM RGCL Rouges RTNLU SAFA SaiSakethAluru SAJA saradhix saroarj SINAI Smatgrisene Sonal sonal.kumari SpurthiAH SRIB2020 SSN NLP MLRG Stormbreaker SU-NLP Taha TAC GruPaTo Team Oulu TeamKGP TECHSSN tharindu TheNorth TOBB ETU TysonYU UHH-LT UJNLP
# Ã
# Ã
# Ã
# Ã Ã
# Ã Ã Ã
# Ã Ã
Table 14: Overview of team participation in the subtasks (part 2).
# Ã
Team A-Arabic A-Danish A-Greek A-Turkish A-English B-English C-English ultraviolet UNT Linguistics UoB UTFPR Veriï¬edXiaoPAI wac81 will go KUISAIL Wu427 XD yasserotiefy yemen2016 YNU oxz zahra.raj zoher orabe à à à à à à à à à à à à à à à à à à à à à à à à Ã
Table 15: Overview of team participation in the subtasks (part 3). | {
"id": "2004.02192"
} |
2006.08328 | ETHOS: an Online Hate Speech Detection Dataset | Online hate speech is a recent problem in our society that is rising at a
steady pace by leveraging the vulnerabilities of the corresponding regimes that
characterise most social media platforms. This phenomenon is primarily fostered
by offensive comments, either during user interaction or in the form of a
posted multimedia context. Nowadays, giant corporations own platforms where
millions of users log in every day, and protection from exposure to similar
phenomena appears to be necessary in order to comply with the corresponding
legislation and maintain a high level of service quality. A robust and reliable
system for detecting and preventing the uploading of relevant content will have
a significant impact on our digitally interconnected society. Several aspects
of our daily lives are undeniably linked to our social profiles, making us
vulnerable to abusive behaviours. As a result, the lack of accurate hate speech
detection mechanisms would severely degrade the overall user experience,
although its erroneous operation would pose many ethical concerns. In this
paper, we present 'ETHOS', a textual dataset with two variants: binary and
multi-label, based on YouTube and Reddit comments validated using the
Figure-Eight crowdsourcing platform. Furthermore, we present the annotation
protocol used to create this dataset: an active sampling procedure for
balancing our data in relation to the various aspects defined. Our key
assumption is that, even gaining a small amount of labelled data from such a
time-consuming process, we can guarantee hate speech occurrences in the
examined material. | http://arxiv.org/pdf/2006.08328 | Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, Grigorios Tsoumakas | cs.CL, cs.LG, stat.ML, I.2.6; I.2.7; I.5.4; H.2.4 | 16 Pages, 3 Figures, 9 Tables, Submitted to the special issue on
"Intelligent Systems for Safer Social Media" of Complex & Intelligent Systems | null | cs.CL | 20200611 | 20210706 | 1 2 0 2
l u J 6 ] L C . s c [
2 v 8 2 3 8 0 . 6 0 0 2 : v i X r a
# ETHOS: AN ONLINE HATE SPEECH DETECTION DATASET â
A PREPRINT
# Ioannis Mollas Aristotle University of Thessaloniki Thessaloniki, 54636, Greece [email protected]
Zoe Chrysopoulou Aristotle University of Thessaloniki Thessaloniki, 54636, Greece [email protected]
Stamatis Karlos Aristotle University of Thessaloniki Thessaloniki, 54636, Greece [email protected]
Grigorios Tsoumakas Aristotle University of Thessaloniki Thessaloniki, 54636, Greece [email protected]
August 21, 2021
# ABSTRACT
Online hate speech is a recent problem in our society that is rising at a steady pace by leveraging the vulnerabilities of the corresponding regimes that characterise most social media platforms. This phenomenon is primarily fostered by offensive comments, either during user interaction or in the form of a posted multimedia context. Nowadays, giant corporations own platforms where millions of users log in every day, and protection from exposure to similar phenomena appears to be necessary in order to comply with the corresponding legislation and maintain a high level of service quality. A robust and reliable system for detecting and preventing the uploading of relevant content will have a signiï¬cant impact on our digitally interconnected society. Several aspects of our daily lives are undeniably linked to our social proï¬les, making us vulnerable to abusive behaviours. As a result, the lack of accurate hate speech detection mechanisms would severely degrade the overall user experience, although its erroneous operation would pose many ethical concerns. In this paper, we present âETHOSâ, a textual dataset with two variants: binary and multi-label, based on YouTube and Reddit comments validated using the Figure-Eight crowdsourcing platform. Furthermore, we present the annotation protocol used to create this dataset: an active sampling procedure for balancing our data in relation to the various aspects deï¬ned. Our key assumption is that, even gaining a small amount of labelled data from such a time-consuming process, we can guarantee hate speech occurrences in the examined material.
Keywords Hate Speech · Dataset Presentation · Machine Learning · Binary/ Multi-label Classiï¬cation · Active Learning
# Introduction
Hate speech (HS) is a form of insulting public speech directed at speciï¬c individuals or groups of people on the basis of characteristics, such as race, religion, ethnic origin, national origin, sex, disability, sexual orientation, or gender identity2. This phenomenon is manifested either verbally or physically (e.g., speech, text, gestures), promoting the emergence of racism and ethnocentrism. Because of the social costs arising out of HS, several countries consider it an illegal act, particularly when violence or hatred is encouraged [9]. Although a fundamental human right, freedom of speech, it is in conï¬ict with laws that protect people from HS. Therefore, almost every country has responded by drawing up corresponding legal frameworks, while research which is related to mechanisms that try to remedy such phenomena has recently been done by the Data Mining and Machine Learning (ML) research communities [22].
â(co)winning CrowdFlowerâs AI for Everyone Challenge for Q4 of 2017: https://prn.to/2KVWubz 2https://en.wikipedia.org/wiki/Hate_speech
A PREPRINT - AUGUST 21, 2021
Another important issue is that the occurrence of HS phenomena is emerging in the social media ecosystem, distorting their initial ambition of favouring communication between their corresponding members independently of geographical restrictions and enriching similar activities [48]. The anonymity of social media is the primary explanation for the growth of such phenomena, as is the deliberate avoidance of subsequent legislation. Big companies, like Google and Facebook, are therefore obliged to remove such kind sof violent content from their platforms. As a result, Artiï¬cial Intelligence (AI) methodologies are employed to detect (semi-)automatically HS in real time, or even to prevent users from publishing similar content with appropriate warnings or bans.
The solution of quarantining in an online fashion has recently been demonstrated [53], trying to smooth the censorship and the possible harmful consequences of HS attacks, while learning from short-text segments blooms in the last years [49]. Two of the most important features accompanying the short-text segments, sparseness and the presence of noise [51], settle HS detection, a difï¬cult task for the creation of fully automated solutions. Whereas problems of scalability arise when large quantities of data are simply collected without pre-processing or ï¬ltering. These points are of primary importance to this work.
To achieve high performance in real-world tasks, AI methodologies require balanced, accurate, and unbiased datasets. This requirement, however, is rarely met without applying proper annotation stages [6, 20]. This is the direction in which our work aims to make a signiï¬cant contribution, motivated by the HS use case, providing also a generic-based protocol that could be extended to a wide variety of learning tasks. To be more precise, the relevant literature currently contains a large number of manually created HS datasets [57, 60]. However, since the majority of them were not carefully collected during the corresponding sampling stages, they are essentially large sets of annotated samples on which undesirable phenomena occur frequently. Speciï¬cally, highly imbalanced classes or redundant information prevent the subsequent implemented learning models from effectively harnessing the underlying patterns.
Moreover, by sampling the regions of feature space which express only a restricted level of uncertainty when unlabelled data are queried may settle the learning strategy myopic. All these phenomena violate the previously speciï¬ed desired requirements resulting in solutions with low variance and/or high bias [41]. Furthermore, most of them are concerned with binary or multi-class classiï¬cation tasks, while overlooking the more practical case of multi-label classiï¬cation (MLL). Label dependencies and the semantic overlap that occurs on MLL cannot be ignored when protection from hateful comments is the main task. Since an online comment can ï¬t to more than one deï¬ned label at the same time, rather than being limited to just one outcome, investigation of the latter scenario appears to be more effective (see Figure 1). This aspect is also studied here because the difï¬culties described previously are enforced under the MLL scenario.
Hate speech detection | Wish you cut your veins. Donât shout out you have mental problems. Ban system with binary | Act. Cut them; information labels: os Hate speech detection | Wish you cut your veins. Donât shout out you have mental problems. system with multilabel | Act. Cut them; information adem, SPeSEN 7 JR nctes Violence 2 It Directed 100% ft Disbity 100% |
Figure 1: A realistic example of informing a human reviewer about an investigated comment on binary (top) and multi-label (bottom) level
A simple application that uses the MLL schema provided by the proposed HS dataset could be an assistance system for human staff reviewing comments on social media platforms. This would make it easier for the reviewer(s) to decide if the message contains HS content by providing more insights. For example, if a comment is presented as targeting people with disabilities, directed at a person, and encourages violence, it will be more helpful for the reader to conclude and condemn it for containing HS rather than being presented with a single label (e.g., âmay contain HSâ:{âyesâ,ânoâ}). In terms of the ethical issues that emerge in the case of HS, it appears that a proper manipulation protocol is required for preventing possible defects. Such protocols have addressed wider or more focused research topics, such as news articles, although similar directions have recently been explored in the ï¬eld of HS detection [42].
In this paper, we present the process of creating a multi-labelled dataset with a step-by-step narrative, to avoid the consequences that typically occur in attempts with data that depend on social media platforms, and to increase the likelihood of mining more informative instances. Although the design of the proposed protocol can ï¬t with any target domain indisputably, we are currently focusing on addressing the HS scenario and provide some insightful analysis of this use case. In this attempt, an existing dataset mined from popular social media platforms has been exploited, while a
2
A PREPRINT - AUGUST 21, 2021
well-known crowdsourcing platform was used for validating the ï¬nal result. The proposed annotation protocolâs effects are discussed in detail and visualised using explanatory methods. Following that, a series of experiments are being conducted to determine the baseline performance of this particular dataset using state-of-the-art (SOTA) techniques. From traditional ML algorithms and ensemble models to neural networks (NNs) with and without embeddings (emb) information, binary and multi-label experiments have been performed, inspired primarily by other similar approaches to presenting research datasets [2, 7, 28]. Despite the limited size of the investigated dataset, its careful design during the active sampling stage and the consistency of he included samples were proven beneï¬cial based on our results.
Our ultimate ambition, by describing the total procedure and providing the corresponding dataset, is to foster any interested researchers/businesses to take into consideration an approach that attempts to transform the existing insulting environment of social media into a non-hate, inclusive online society. Adoption of the proposed annotation protocol into different scientiï¬c ï¬elds could prove quite beneï¬cial, especially when the knowledge acquired by oracles during annotation may be ambiguous. The assets also gained by examining the HS problem through a multi-label view help us clarify the harasserâs actual motivations and lead to more targeted comments when dedicated platforms try to inform the corresponding victims [9]. And, of course, the insights gained through such protocols could enhance the ability of ML learners to generalise when applied to different datasets that contain similar classiï¬cation categories, despite the limited size of the proposed dataset over which they are trained. The proposed strategy of actively creating a balanced dataset, preserving the informativeness of each class and minimising the redundancy of the included instances, constitutes the key asset of our protocol. Our in-depth experiments support our hypotheses, particularly regarding the most difï¬cult classes to detect.
The rest of this paper is structured as follows: Section 2 includes several well-documented attempts to address the HS problem using samples gathered from related sources. The proposed annotation protocol is deï¬ned next, followed by some extended single/multi-label classiï¬cation experiments in Section 4, which demonstrate the discriminating ability of several algorithms under consideration. Section 5 presents a few studies with a variation of the original dataset and two additional datasets. Finally, Section 6 discusses the more crucial assets of the proposed dataset, and the annotation protocol, also regarding the recorded experiments, reporting later some remarkable future points that could be further investigated.
# 2 Related Datasets
In this section, we present datasets related to HS, along with their formulation as well as some useful information about their structure and/or the manner under their composition took place. The last paragraph describes the Hatebustersâ data that we utilise as a seed data through the proposed protocol to produce the ï¬nal structure of data, named ETHOS (onlinE haTe speecH detectiOn dataSet).
A collection of 16.914 hate speech tweets was introduced in a study of how different features improve the identiï¬cation of users that use analogous language online [57]. Out of the total number of messages, 3.383, 1.972 and 11.559 concerned sexism, racism and did not include HS, respectively, while were sent by 613, 9 and 614 users. The corpus was generated by a manual tweet search, containing popular slurs and terms related to sexual, religious, gender and ethnic minorities in order to include samples that are not offensive regardless of the inclusion of such words. The main drawback here is the access to the text of the tweets only through the public Twitter API.
Another dataset (D1) [7] contains 24.783 tweets, manually classiï¬ed as HS (1.430), offensive but not HS (19.190), and neither hate nor offensive speech (4.163) by Figure-Eightâs3 members. The data was gathered again via the Twitter API, ï¬ltering tweets containing HS words submitted to Hatebase.org. The outcome was a sample of 33.548 instances, while 85.4 million tweets were collected from the accounts of all users. A random sample of this collection led to the ï¬nal dataset. Nevertheless, this dataset lacks diversity in terms of HS content. For example, the gender-based HS tweets are biased towards women, while the greatest number of them contain ethnicity content.
Research focusing on the identiï¬cation of misogynistic language on Twitter uses a dataset called Automatic Misogyny Identiï¬cation (AMI) [11] with 4.000 annotated comments and binary labels. Apart from this labelling mode, every comment is deï¬ned by two extra ï¬elds. The ï¬rst one concerns the type of misogynistic behaviour: stereotype, dominance, derailing, sexual harassment, discredit or none (if the tweet is not misogynous). The second one concerns the subject of the misogynistic tweet: active, when it attacks a speciï¬c target (individual), passive, when it denotes potential receivers (generic), and again none, if there is no misogyny in the tweet.
The largest online community of white nationalists, called Stormfront, was used to form another dataset [17]. The content in this forum revolves around discussions of race, with various degrees of offensiveness, included. The annotation of the samples is at the sentence level, which is a technique that keeps the smallest unit containing hate
# 3Formerly Crowdï¬ower and latterly Appen: https://appen.com/ï¬gure-eight-is-now-appen/
3
A PREPRINT - AUGUST 21, 2021
speech and reduces noise. The dataset contains 10.568 sentences that are classiï¬ed as HS (1.119 comments) or not (8.537 comments), as well as two supplementary classes, relation for sentences that express HS only when related to each other and skip for sentences which are not in English or do not contain any information as to be accordingly classiï¬ed. Furthermore, information like the post identiï¬er and the sentenceâs position in the post, a user identiï¬er and a sub-forum identiï¬er, as well as the number of previous posts the annotator had to read before making a decision over the sentenceâs category are also recorded. The samples were picked randomly from 22 sub-forums covering diverse topics and nationalities.
A dataset introduced by Fox News [15] consists of 1.528 Fox News usersâ comments (435 hateful), which were acquired from 10 discussion threads of 10 widely read Fox News articles published during August 2016. Context information is considered extremely important, so details such as the screen name of the user, all the comments in the same thread and the original article, are also included.
A recent multilingual work (D2) [33], a trilingual (English, French and Arabic) dataset with tweets, was created attempting to mine similar expressions of 15 common phrases over these languages, focused on different sources of obscene phrases (e.g. more sensitive topic-based discussions based on locality criteria). After tackling some linguistic challenges per separate language, and a strict rule set that was posed to human annotators from the Amazon Mechanical Turk platform to ensure trustworthy feedback, a pilot test set was provided. Having gathered the necessary evaluations, another one reconstruction of the label set was applied, before the ï¬nal formulation of 5.647 English, 4.014 French and 3.353 Arabic tweets was reached, annotated over 5 separate tasks. Apart from the binary directness of each tweet that was tackled better by single task language models, the rest 4 classiï¬cation tasks, which included at least 5 label gradations, were clearly boosted via multi task single/multi language or single/multi multilingual models.
The issue of cyberbullying has been recently investigated also, where the skewed distribution of positive and negative comments was tackled by tuning a cost-sensitive linear SVM learner over various combinations of joined feature spaces, and obtaining similar performance on both English and Dutch corpus [54]. An investigation of recognising also the role of each participant during such phenomena took place, while a qualitative analysis raised the difï¬culty of reducing misclassiï¬cation scores when irony exists in offensive comments.
Finally, a small collection of 454 YouTube comments annotated as HS (120) or not (334) was introduced by the creators of the Hatebusters Platform [3], which aims to establish an online inclusive community of volunteers actively reporting illegal HS content on YouTube. This dataset, through semi-supervised learning, was evolving in the Hatebusters Platform improving the predictivity of the ML models. However, this unpremeditated expansion of the dataset led to a more redundant variant of its original form. We use the initial collection of Hatebustersâ data as a seed to the protocol that we propose in the following section.
Hatebusters Data Dataset <=> => Update LL wu Dataset Platform Manual Data Selection & Data Prediction â - Annotation Data Collection Request New Unlabelled Data Stage 1: Initial Dataset Creation and Manual Annotation Stage geet eee ls Stage EE uateeee Figure-Eight Platform Configuration
Figure 2: Dataset creation stages ï¬owchart
# 3 ETHOS Dataset Creation
To overcome the key weaknesses of the existing collections of HS instances, we introduce a small, yet fairly, informative dataset, ETHOS, that does not suffer from issues such as imbalanced or biased labels (e.g., gender), produced appropriately following the proposed protocol. Considering the aforementioned popular approaches of mining similar datasets for tackling with HS problem, we assume that an appropriate pre-process of initially collected data could improve in general their overall utilisation under ML or AI products, improving the total ï¬tness of data quality, blending data mining techniques related with the ï¬eld of Active Learning [44], such as query strategy and crowdsourcing platforms. The overview of the proposed annotation protocol is visualised through a ï¬ow chart in Figure 2. The ï¬nally obtained dataset is the outcome of a 3-stage process, which we describe shortly in the current Section.
4
A PREPRINT - AUGUST 21, 2021
# Initial Dataset Creation and Manual Annotation
The ï¬rst three procedures, mentioned as âPlatform Selection & Data Collectionâ, âData Predictionâ and âManual Data Annotationâ, could be seen as the initial stage (Stage 1) which is executed until a stopping criterion is satisï¬ed regarding the cardinality of the collected instances, based on the original available HS dataset which operates as the input. This stage works like a âstreamâ, speciï¬cally for groups of comments that we have already collected, annotating their weak labelsâ predictions through a predeï¬ned ML classiï¬er, before an active selection and manually annotation takes place over some unlabelled (U ) mined examples.
# 3.1.1 Platform Selection & Data Collection
To create this dataset (D), initially D = â
, a data collection protocol has been designed. We chose the platforms of Hatebusters4 and Reddit through the Public Reddit Data Repository5 to collect our data. Hatebusters Platform collects new data daily via the YouTube Data v3 API. After these new data have been collected, the Hatebusters Platform performs the classiï¬cation process. The locally retained pre-trained ML model predicts the class of each comment, exporting a âhateâ score. Currently, this model is a Support Vector Machine (SVM) [55] model with a linear kernel embedded with the well-known vectorization technique of the term frequency-inverse document frequency (TF-IDF). Instead of transforming the output of the SVM learner to conï¬dence score, we kept its inherent property to compute the distance from the decision boundary. Through this, lower time overheads and more faithful decisions are drawn.
After granting access to Hatebustersâ SQL database, based on the input data, this ï¬rst part was to query the Hatebustersâ database for comments already annotated by the corresponding users, without spending any monetisation resources. These comments were deemed to be accurate, and they were the ï¬rst group of comments to be manually annotated. The second part concerns the enrichment of the gathered comments, by querying Hatebustersâ database with a speciï¬c frequency (e.g. daily) for a time period â in our case this was equal to two months â with various queries. Based on the data obtained each previous day, the applied query strategy had been updated concerning only them. For example, when we received a sufï¬cient amount for all categories of HS, except for one category, the queries in the Hatebustersâ database were updated to make comments speciï¬c to the residual category. Later on, we will show the categories and the amount of comments we have received.
The ï¬nal part of the data collection process was based on a public Reddit data archive, which provides batches of ï¬les regarding Reddit comments on a monthly basis. The ï¬les of this directory were processed through a JSON crawler for selecting comments from speciï¬c subreddits for particular time periods. The discovery of subreddits incorporating different HS contents has been investigated6,7, we distinguished the next entities:
⢠Incels, this subreddit became known as a place where men blamed women for their unintended celibacy, often promoting rape or other abuse. Those posts had a misogynistic and sometimes racist content.
⢠TheRedPill, which is devoted to the rights of men, containing misogynous material.
⢠The_Donald, a subreddit where the participants create discussions and memes supportive of U.S. President Donald Trump. This channel has been described as hosting conspiracy theories and racist, misogynous, Islamophobic, and antisemitic content.
⢠RoastMe, in this subreddit, reddit users can ask their followers to âroastâ (insult) them.
While some of these subreddits were suspended and shut down by Reddit at the end of 2017 due to their context, it was possible to access comments from these subreddits by selecting ï¬les from the archive for October 2017 and earlier.
# 3.1.2 Data Prediction
The next process of Stage 1 is the âData Predictionâ. For each batch of comments extracted from the ï¬rst part, the assignment of some useful labels to the available unlabelled set (U current) is triggered through an ML model trained on an expanded version (L ⪠D) of the Hatebustersâ dataset (L) and the new data annotated on Stage 3 (D). Per each iteration of the previous part, we were performing a grid search among a bunch of classiï¬cation methods in the currently expanded dataset, obtaining the best algorithm through a typical 10-fold-CV process so as to be set as the annotator of the (U current).
4https://hatebusters.org 5https://ï¬les.pushshift.io/reddit/comments/ 6https://en.wikipedia.org/wiki/R/The_Donald 7https://en.wikipedia.org/wiki/Incel
5
A PREPRINT - AUGUST 21, 2021
The selected bunch consisted of various ML models: SVMs, Random Forests (RF), Logistic Regression (LR), as well as simple or more complex architectures of Neural Networks (NNs). In addition to the classiï¬er tuning, some TF-IDF vectorization techniques â with word or char n-grams (n from 1 to 13) â were also examined in this search.
# 3.1.3 Manual Data Annotation
By the end of the âData Predictionâ phase, the âData Annotationâ process is initiated. In the sense of active learning concept, a hybrid combination of query strategy has been employed in order to pick informative comments for manual annotation. The mentioned query strategy combines appropriately both concepts of Uncertainty Sampling and Maximum Relevance with predeï¬ned ranges of accepted conï¬dence values based on the expected labels of the classiï¬er we had trained [38]. More speciï¬cally, we were annotating the comments within the [.4, .6] probability range, while we were examining few comments in the ranges [.0, .1] ⪠[.9, 1.0] to detect any major misclassiï¬cation. Eventually, only comments with speciï¬c labels and content were added to the new dataset (D), preserving both the balance of the labels and the diversity of the comments per label.
At the end of this process, if the number of comments collected is not more than a targeted threshold (T ) â in our case T = 1.000 â we update the D, and Stage 1 will be repeated to request new unlabelled comments. Otherwise, Stage 2 will be triggered. Despite the limited cardinality of the exported dataset, the adopted actively sampling process eliminates defects of redundancy, maintaining the both informativeness of each label, and reducing at the same time overï¬tting phenomena. The issue of obtaining a myopic strategy is also eliminated, since different regions of uncertainty are explored [26]. The efï¬cacy of such methods has been highly declared in the literature [29, 27]. Therefore, an in-depth evaluation stage regarding several learning models has been conducted in Section 4.
# 3.2 Data Validation via Figure-Eight Platform
The second stage will begin when T â in our case 1.000 â comments have been collected. Moreover, Hatebustersâ dataset is discarded, since it does not further contribute to our protocol. After a number of different experiments on the Figure-Eight platform, we settled on the next process. Firstly, given a speciï¬c comment, we ask the contributors to identify whether that comment contains HS or not. In a positive scenario, we raise 3 more questions: whether the comment incites violence, deï¬ning violence as âthe use of physical force to injure, abuse, damage, or destroyâ, and whether the comment includes directed or generalized HS. The case of targeting a single person or a small group of people is deï¬ned as directed HS, whereas the case of targeting a class/large group of people is described as generalised HS. Finally, we ask the contributors to pick one or more from the following HS categories, which, according to their opinion, better reï¬ect(s) the content of the comments. The categories of HS concern gender, race, national origin, disability, religion and sexual orientation.
Comment: Your eyes obviously ain't attracted to each other Does this comment contains hate speech? (required) © Yes No Does this comment incites violence? (required) © Yes No Is this comment targeting a specific individual (directed) or a group/class of people (generalized)? (required) © Directed Generalized Which category of hate speech is it? (required) Gender Race National Origin Disability Religion Sexual Orientation
Figure 3: Example of Figure-Eightâs platform design
Through Figure 3, an example of the designed query template on the crowdsourcing platform is presented, depicting the types of questions and potential responses. With respect to the proposed template, we introduced a set of 40 questions
6
A PREPRINT - AUGUST 21, 2021
for validating the trustworthiness of human annotators, as well as a test run of 100 unlabelled comments, with 5 different contributors determining the labels for each one of these. The annotators were frequently reminded that there was the possibility of explicit material in the comments through the crowdsourcing platform. The feedback we collected helped us to ï¬nalise some design and instruction issues. Most importantly, we have ï¬xed some naive and few serious mistakes on the validation questions, such as the following:
[naive mistake]: For the comment âJ**s are lower class pigs.â we had âYesâ on question âDoes this comment incite violence?â, while we gave as an explanation âIt spreads hatred, but not violenceâ.
[serious mistake]: We misclassiï¬ed the comment âQ****s are an abomination and need to be helped to go straight to H**l!â to the wrong category. By receiving the feedback we ï¬xed it.
Then, we executed the task for the whole D, collecting in total 5.360 judgements. Almost every comment was therefore annotated by ï¬ve different annotators. The level of expertise of the annotators was the 3rd, on a scale of 3 levels. âThe 3rd level annotators are the smallest group of the most experienced, most accurate, contributorsâ according to the Figure-Eight System. We also computed the Fleissâ kappa, a statistical measure for assessing the reliability of agreement of annotators, and we present the results in Table 1. A kappa value greater than 0.75 implies good agreement, while kappa values greater than 0.90 indicate perfect agreement [21].
Contains Hate Speech Violence Directed vs Generalized Gender Race National Origin Disability Sexual Orientation Religion Fleissâ Kappa 0.814 0.865 0.854 0.904 0.931 0.917 0.977 0.954 0.963
# Table 1: Reliability of annotators agreement per label
# 3.3 Dataset Conï¬guration
The ï¬nal stage regards dataset conï¬guration. Taking as input the results from the Stage 2, the dataset takes its ï¬nal form. Examining the annotated data one last time manually, we checked for any misclassiï¬cation. Few errors occurred on some of the most disambiguous examples, assuring us about the quality of the annotators that participated. Although the Figure-Eight platform provides several attributes for informing suitably the human annotators, even stricter measures should be taken into consideration when large-scale datasets are aimed to be obtained [18].
The use of representative test questions that follow a more realistic label distribution than the uniform could be useful to the overall process. This might be improved further by incorporating an interactive procedure that alerts annotators to mislabelled samples and/or allows them to provide feedback when they disagree. Despite the inherent uncertainties introduced by the human factor, crowdsourcing is the sole viable technique for gathering the required information regarding the label space. This is true not only for large-scale datasets, but also for smaller cases [32].
Furthermore, given the semantic overlap of label space encountered during HS detection, the assumption of obtaining cheap labels is violated. Given the idiomatic expressions and highly unstructured nature of the comments posted on social media platforms, this becomes especially clear when examined in a multi-label fashion. To address this, additional human supervision, as stated at this stage, is required, while the active sampling process, which aims to create a balanced dataset, is clearly justiï¬ed.
# 3.4 ETHOS Dataset Overview
Two datasets8 were the product of the above operation. âEthos_Binary.csvâ, the ï¬rst one, includes 998 comments and a label on the presence or absence of hate speech content (âisHateâ). The second ï¬le, called âEthos_Multi_Label.csvâ, includes 433 hate speech messages along with the following 8 labels: (âviolenceâ, âdirected_vs _generalizedâ, âgenderâ, âraceâ, ânational_originâ, âdisabilityâ, âsexual_orientationâ, âreligionâ).
For every comment ci, Ni annotators voted for the labels that we set. The label âisHateâ was the result of summing up the positive votes P1,i of the contributors, divided by Ni, so its values are within the range of [0, 1]. We measured the âviolenceâ label by summarising the positive votes of the contributors P2,i to the question: âDoes this comment incite vi- olence?â, which was divided by P1,i to be normalised to [0, 1]. Likewise, the value of the label âdirected_vs_generalizedâ was determined by summarising the annotators replied âdirectedâ P3,i to the question, âIs this comment targeting a speciï¬c individual (directed) or a group/class of people (generalized)?â, divided by P1,i. Finally, we accumulated the votes of the P1,i contributors for each of the 6 hate speech categories, and dividing them by P1,i, we obtained six independent labels.
# 8https://github.com/intelligence-csd-auth-gr/Ethos-Hate-Speech-Dataset.git
7
A PREPRINT - AUGUST 21, 2021
© Disabiity «© proces © Gender @ Religion @ Race © National Origin @ Sexual Orientation © Generalized © Voience © Newolence
Gender Race National Origin Disability Religion Sexual Orientation V-D nV-D V-G nV-G 14 4 5 12 11 11 57 22 13 11 15 8 15 84 13 12 18 8 24 11 86 37 47 40 18 38 36 216 86 76 74 53 81 73 443
Table 2: Correlation of HS categories with (not) violence (nV - V) and directed/generalized (D - G) labels
Figure 4: Ratio of labels
This dataset achieves to create balanced labels. In particular, it maintains balance between the two classes of âisHateâ label, almost perfect balance between the 6 labels of hate speech categories, while it has a fair ratio between the rest of the labels (Figure 4). In Table 2, the balance between hate speech categories (last column) and their correlation with violence and directed/generalized labels is further portrayed.
# 4 Dataset Baseline Evaluation
In order to evaluate ETHOS, after pre-processing the data, we used a variety of algorithms in binary/multi-label scope to present the baseline performance in this dataset. For the purpose of providing the unbiased performance of each algorithm we performed nested-CV [56] evaluation, using a variety of parameter setups, for every algorithm except NNs, where we applied 10-fold-CV [16]. In addition, we binarise the values of each label, which are initially discrete in a range of [0,1], to the {0,1} classes using the rule âIf value ⥠0.5 â 1 Else value â 0â. More in-depth details follow next.
# 4.1 Data Preparation
The pre-processing methodology used in our case begins with lowercasing transformation, contraction transformations (available into the zip ï¬le), removal of punctuation marks, and stemming and lemmatization via Snow-ball stemmer [37] and WordNet lemmatizer [31].
Before we proceed to the experiments, we transform the pre-processed textual data into word vectors using TF-IDF and Text-to-Sequences processes. Particularly, for the former, several parameter tuples of (n_gram, max _features, stopwords existence) were examined, while on the latter, the corresponding number of maximum features was set at 50k. Moreover, 3 pre-trained models that concern computation of emb were included: FastText (FT) [23], GloVe (GV) [34], Bert Language Model (BERT) [8], and the distilled version of BERT (DistilBERT) [43]. We should mention that the steps of stemming and lemmatization were skipped in the Text-to-Sequence experiments.
# 4.2 Binary Classiï¬cation
A lot of applications are investigating the problem of HS detection through a binary scope. It is therefore necessary to present the performance of SOTA algorithms on such a version of this dataset.
Thus, we used the following algorithms for our experiments in this stage: Multinomial and Bernoulli variations of Naive Bayes (MNB and BNB, respectively) [30], LR, SVMs, RF and Gradient Boosting (Grad) [12]. Moreover, we used four different NN architectures, as other similar works attempt [35]. The ï¬rst one utilises convolutional NNs (CNNs) [14] with an attention [4] layer. A single LSTM-based NN constitutes the second architecture. The third model is an NN with multiple parallel layers, which contain CNNs, LSTMs and FeedForward layers (FFs). The last architecture consists of Bidirectional LSTMs (BiLSTMs). We combined these NNs with FT and GV. Lastly, we used BERT and DistilBERT, which were ï¬ne-tuned in our classiï¬cation task. Such architectures have met great acceptance in the related ML community [36, 58].
We chose accuracy and precision, recall and F1-score with macro indication, and the confusion matrix as met- rics.Furthermore, we calculate speciï¬city T N/N and sensitivity T P/P . However, in applications like HS monitoring where human interference is essential to ensure that usersâ rights are not abused on the grounds of incorrect HS charges, we must rely on metrics such as high recall and precision of HS category that we can guarantee to not overwhelm the human effort of checking redundant content. However, in such applications as HS reporting and handling, where human
8
A PREPRINT - AUGUST 21, 2021
intervention is required to ensure that usersâ rights are not violated by false HS accusations, we should focus on metrics like high recall and F1 score of the HS category, which ensure that human personnel checking redundant content are not overburdened.
MultinomialNB BernoulliNB Logistic Regression SVM Random Forests Gradient Boosting CNN+Attention+FT+GV LSTM+FT+GV FF+LSTM+CNN+FT+GV BiLSTM+FT+GV BERT DistilBERT F1 Score F1 Hate Accuracy 59.14 44.52 64.35 63.77 60.07 59.21 71.76 72.24 72.08 75.40 77.13 77.16 63.78 47.78 66.5 66.07 64.41 63.55 75.76 75.24 75.49 77.84 79.60 79.92 64.73 48.3 66.94 66.43 65.04 64.33 76.56 75.95 76.15 78.16 79.96 80.36 Precision 64.06 48.23 66.94 66.47 64.69 64.34 76.86 76.57 76.29 78.05 79.89 80.28 Sensitivity Recall Recall Hate 63.96 48.16 67.07 66.7 64.68 64.2 75.66 75.53 75.52 78.04 79.73 79.91 58.82 47.81 68.78 68.08 60.61 59.67 68.64 72.11 70.88 77.15 77.87 76.47 59.45 41.65 60.46 59.96 59.54 58.76 75.18 72.36 73.28 73.73 76.4 77.87 Speciï¬city 69.2 48.51 65.36 65.32 68.75 68.73 82.68 78.95 80.16 78.94 81.59 83.36
Table 3: Performance of selected models on binary HS classiï¬cation
The handling of textual data is a thoroughly researched task and has a dedicated category, NLP, which stands for natural language processing. We used common and widely accepted techniques to process them, as described previously. In Table 3, we are showcasing the results of the selected evaluation processes per each classiï¬er. The best performance per metric is highlighted in bold format. The NNs seem to outperform the conventional ML techniques. It is worth mentioning that Bayesian learners had the lowest performance in terms of almost every metric, while tree-ensembles achieved similar performance between them, but lower compared to the SVMs and LR.
Between the examined NNs, those who achieved the highest performance using emb were the architectures using BiLSTMs. BiLSTMs + FT + GV achieved the highest recall on hate category, as well as high accuracy. Finally, BERT and DistilBERT outperformed every other model in any metric, using ï¬ne-tuning on the data, with DistilBERT performing slightly better than BERT, validating its superior performance on similar tasks [39].
# 4.3 Multi-Label Classiï¬cation
Providing a dataset with multi-label information about HS, we are able to uncover new insights. HS is indeed an ML task that cannot be studied thoroughly just through the binary aspect. Indeed, it is a multi-dimensional task.
The algorithms handling MLL can be either problem transformation or adaptation techniques [52]. MLkNN [61] and MLARAM [5], as well as Binary Relevance (BR) and Classiï¬er Chains (CC) [40] with base learners like LR, SVMs and RF, are utilised. We used FT emb for our NNs and designed models inspired by classic MLL systems, such as BR and CC. Speciï¬cally, NNBR is an NN containing BiLSTMs, an attention layer, two FFs and an output layer with 8 outputs in a BR fashion. NNCC is inspired by the CC technique, but during its output each label is given as input for the next label prediction.
MLkNN MLARAM BR CC NNBR NNCC F1 Example 48.01 18.47 48.59 56.51 75.05 47.66 F1 Macro 53.04 6.06 52.49 59.24 76.23 51.25 F1 Micro 53.74 18.71 56.76 58.23 74.87 55.47 P Example 55.27 21.44 57.69 62.49 81.02 57.34 P Macro 71.29 3.78 79.74 69.08 83.21 73.36 P Micro 69.95 21.44 79.37 63.44 79.27 84.27 R Example 46.28 17.69 45.30 56.54 74.33 44.06 R Macro 45.04 16.25 42 56.22 73.04 42.40 R Micro 43.98 18.27 44.37 53.99 71.29 41.70 AP Macro 46.63 20.79 47.66 49.74 67.33 50.02 AP Micro 42.79 21.55 47.04 44.07 62.64 47.36 Subset Accuracy 26.53 7.15 26.28 31.4 48.39 26.61 Hamming Loss 0.1566 0.2948 0.1395 0.1606 0.0993 0.1378
Table 4: Performance of selected models on MLL HS (P: Precision, R: Recall, AP: Average Precision)
In the evaluation of MLL systems, a very common measure is the Hamming loss (symmetric difference between the ground truth labels and the predicted ones). Furthermore, subset accuracy (symmetric similarity), as well as precision, recall and F1-score, are contained here (instance-based metrics). Moreover, some label-based metrics like B-macro and B-micro, where B â {F1, Precision, Recall} were computed. We present our results in Table 4. The superior performance of neural-based approaches compared to classical ML models is observed. Speciï¬cally, NNBR achieves the highest score in 12 out of 13 metrics.
9
A PREPRINT - AUGUST 21, 2021
# 5 Dataset Experimentation
After setting the baseline performance of ETHOS in multiple ML algorithms, in both binary and multi-label scope, this section aims at highlighting some interesting views and aspects of its usefulness over other learning tasks. First, we fulï¬l our experimental soundness by setting a fair comparison between a balanced subset and a random subset of ETHOS capturing useful insights under a 1-vs-1 evaluation stage. Secondly, we examine how the ETHOS dataset can generalise over separate HS datasets when it is applicable. Thus, we transfer its discriminative ability obtained by the proposed underlying representation through training proper ML models. These experiments have been conducted for two well-known datasets on binary (D1) [7] (2017) and multi-label (D2) [33] (2019) level, as described brieï¬y in Section 2, commenting the produced results regarding the aspects that we had initially posed and providing accurate explanations about any mismatches over this attempt.
# 5.1 Balanced vs Random Comparison
Initially, we are going to experiment with the proposed dataset using just a few variations in the binary level. More precisely, we create two versions of ETHOS, one of which collects 75% of data at random (DRa), while the other collects 75% of data preserving the class balance (DBa), from a pool of 87.5%. The remainder of the data (DRe), which is 12.5%, will be used as test data. Two SVM models are then trained on DRa and DBa using a TFIDF vectorizer and evaluated on the DRe. We are running this experiment 10 times, shufï¬ing appropriately our data. In addition, the two SVM models are evaluated on the D1 dataset as well. Under this scenario, we are further investigating the learning capacity of the constructed ETHOS dataset comparing two different variants: a strictly balanced and a random one, while our evaluation protocol is consistent with maintaining the balancing property of the generated sub-optimal subsamples. The application of the trained learners into separate datasets may conï¬rm also our assumptions about the efï¬cacy of our strategy: the active selection of multi-label samples for constructing a balanced HS dataset.
The results are shown in Table 5, verifying that the performance of the SVM on the test set is higher when the dataset maintains a balance between classes. However, in terms of accuracy, a higher score is obtained by random datasets. We cannot conclude for the F1 weighted performance of DRa and DBa on D1, as the wide standard deviation of the DBa makes it difï¬cult. This result comes of course with an explanation: a deï¬ning characteristic of D1 dataset concerns its imbalanced nature. This indicates that the SVM trained on random data is more biased towards the majority class. In order to investigate this, the weighted F1 score per label is shown in Table 6.
Train on DRa Train on DBa Train on DRa Train on DBa DRe 63.15 ± 3.93 67.99 ± 2.17 64.19 ± 4.89 69.06 ± 2.29 D1 50.62 ± 1.10 43.61 ± 12.39 36.15 ± 1.05 37.21 ± 8.25 Accuracy F1 weighted
Table 5: Comparison of SVM performance (metric±std) trained on random and balanced subsets of ETHOS and tested on unknown data from the same source (DRe) and a different one (D1)
Train on DRa Train on DBa Train on DRa Train on DBa D1 66.53 ± 1.01 54.48 ± 16.32 5.77 ± 19.94 19.94 ± 3.34 F1 Non HS F1 HS
# Table 6: Performance of SVM (metric±std) on D1 per label
As we previously assumed, the SVM model trained on DRa has a bias towards the majority class (No Hate) obtaining a better score than the SVM model trained on DBa. However, this is not the case for the minority class, which seems to be best predicted by the SVM trained on the DBa. In tasks such as hate speech identiï¬cation, it would be more valuable to more precisely identify comments of hate speech. Consequently, a balanced dataset despite its limited cardinality may play a crucial role in tackling this phenomenon, verifying the assets of the proposed protocol.
10
A PREPRINT - AUGUST 21, 2021
# 5.2 Generalising on binary level
In an attempt to prove that a small but carefully collected dataset is of higher quality and more useful than larger datasets collected under unknown conditions, we will compare ETHOS to D1, a dataset 24 times larger. In this cross-validation experiment, we train an SVM model (with default parameters) on the ETHOS dataset and predict the D1 dataset, and vice versa. We have also computed the performance of SVMs on the D1 through nested cross-validation, resulting in 66.18% balanced accuracy, 68.77% F1 weighted score, 96.97% F1 on non HS tweets and 42.09% on HS tweets, revealing thus its optimal performance which also did not manage to get improved regarding the predictiveness of HS instances.
The results of each cross validation training are shown in Table 7 and Table 8. It is visible that both SVMs perform equally in both metrics. It could be expected that the SVM trained on D1, a larger dataset, would perform better than a smaller dataset, but the more sophisticated manner of collecting and annotating data in the case of ETHOS overcomes its limited cardinality offering similar predictive ability with a quite larger collection of instances.
Balanced Accuracy F1 weighted F1 Non HS F1 HS ETHOS 58.03 56.41 74.03 33.21 D1 54.03 87.32 91.88 12.85 Balanced Accuracy F1 weighted F1 Non HS F1 HS D1 50.90 42.67 72.66 3.53 ETHOS 53.33 92.31 97.10 12.38
Table 7: SVM model trained on ETHOS and predicting D1
Table 8: SVM model trained on D1 and predicting ETHOS
It is peculiar that the two models do not predict the otherâs hate speech instances. Digging into that further, we can see that there are few problematic instances in D1. For example, the following sentence: ârealdonaldtrump he looks like reg memphis tn trash we got them everywhereâ does not contain hate speech content, rather than offensive. Moreover, the distribution of the hate instances to hate categories in D1 is non-uniform, favouring three categories: race (dark-skinned people), sexual orientation (homosexual people) and gender (women). The aforementioned conclusion was the product of applying the ETHOS Multi-labelled dataset, predicting 326 - race, 257 - sexuality and 230 - gender instances out of the 1430 hate speech tweets, as well as the product of a simple word frequency calculation, suggesting that there are 378 - race (words: ân****râ, ân***aâ, ân****hâ), 417 - sexuality (words: âf****tâ, âf*gâ, âg*yâ, âq***râ) and 352 - gender (words: âb***hâ, âc**tâ, âh*eâ) instances.
Finally, it would be interesting to investigate the overall performance of an SVM model trained on a combination dataset of those two. After a 10-fold cross validation training the combined dataset achieved 55.27% balanced accuracy, 90.88% F1 weighted score, 96.48% F1 on Non Hate Speech and 18.84% F1 on Hate Speech. The overall performance of the model increased, implying that combining datasets with different dynamics can lead to better models. To this aspect, one of the posed ambitions of our work seems to be satisï¬ed, since its integration with the D1 dataset leads to improved learning behaviour.
# 5.3 Generalising on multi-label level
The dataset of ETHOS has two variants, a binary and a multi-labelled dataset. After experimenting with the binary version of it, we use the D2 dataset in this Section to show the usefulness of ETHOS. D2 is a multilingual and multi-aspect hate speech dataset containing information for tweets such as hostility type, directness, target attribute and category, as well as annotatorâs sentiment. However, there is no one-to-one mapping between these attributes and the attributes of ETHOS. For example, the type of hostility deï¬nes the sentiment of a tweet as abusive, hateful, offensive, disrespectful, fearful and normal. We assign instances described as abusive, hateful or fearful as violent, while others are described as non-violent. The mapping of the hostility directness to the ETHOS directed_vs_generalized label is straightforward. Finally, the mapping between the hate categories and the target attributes is almost the same, while the âraceâ category is absent. However, by extracting information from the target group attribute, we assign tweets to the âraceâ category when the target group is either âAfrican descentâ or âAsianâ.
Training a neural network with BiLSTM layers using ETHOS multi-labelled dataset, we are predicting the labels of D2. In Table 9, the performance of the model on the D2 dataset per label is showcased. The model achieved to predict perfectly the âsexual_orientationâ label, decently the âdisabilityâ, ânational_originâ and âgenderâ labels, but poorly the âdirected_vs_generalizedâ, âviolenceâ, âraceâ and âreligionâ labels. Speciï¬cally, on the âreligionâ label the model can identify if a tweet does not contain hate speech towards religion by 97.82%, but its performance is downgraded on the opposite case, achieving 27.31%. About the âviolenceâ label, the model fails to predict when a sentence incites
11
A PREPRINT - AUGUST 21, 2021
Accuracy F1 weighted F1 (negative) F1 (positive) Violence 50.86 59.48 72.50 29.09 Directed vs Generalized 55.28 55.39 59.36 19.98 Gender 70.34 87.71 92.94 46.59 Race 75.97 92.78 94.61 24.06 National Origin 67.88 68.97 74.89 61.23 Disability Religion 69.64 83.81 91.06 53.44 71.65 97.65 98.51 27.31 Sexual Orientation 89.83 94.21 96.50 71.29
Table 9: The performance of the model trained on ETHOS predicting the labels of D2. F1 (negative): The label is not appearing in the instance, F1 (positive): The label is appearing in the instance
violence with 29.09%. The worst predicted label by the model is the âdirected_vs_generalizedâ. This means that the model cannot generalise well when a tweet is targeting a speciï¬c individual.
As it regards the âraceâ, due to the lack of information in the D2 about this label, it was expected to counter such a low performance. To be more convenient with this aspect, we depict some of the instances which had as groups âAsianâ or âAfrican descentâ, and our model did not categorise as race the following four:
âwell my parents like carikla ching chong guy in your collegeâ âyay kelas ching chong todayâ âokay ching chongâ âremember it was some ching chong hoe on here that was ï¬exin on him years ago found out they was fuckin smhâ
It seems the BiLSTM model has not encountered such examples. Indeed, ETHOS dataset does not contain any example with the phrase âching chongâ. However, we should investigate the reversed situation as well, namely, the instances that did not have the race label, but the BiLSTM model assigned it erroneously. This misclassiï¬cation occurred to 35 instances, while 26 of them contain hate speech targeting âraceâ. We present here the most representative of them:
âsee the type of n****r you are hmphâ âdie n****râ and 20 similar ânow yes this politically motivated terrorist is white and leftistâ and 3 similar
Such issues are quite possible to occur because of mismatching between the separate collections of data. Enrichment of the source dataset, in our case the ETHOS, by a careful selection of instances that describe such cases could help our attempt. Therefore, the adoption of metric learning mechanisms may help us alleviate the hubness phenomenon which put obstacles on recovering distinct classes [25].
# 6 Discussion
The provision of a new well-designed dataset to the public on a speciï¬c subject is always considered a signiï¬cant contribution [19, 47]. In this sense, our HS dataset, called ETHOS, collected from social media platforms, could be reused by the ML and AI communities. Alleviating redundant information through balancing the proposed dataset between ï¬ne-grained classes through a ï¬ne-tuned learner and an Active Learning scheme beneï¬ted us both from the aspect of less human-laborious effort and, of course, by scoring good learning rates despite the limited cardinality of our collected instances. Redundancy reduction has been shown to be quite beneï¬cial for a variety of learning tasks. More speciï¬cally, the proposed protocol offers us a balanced dataset with a rich quality of included instances for both binary and multi-label HS problems. At the same time, our experimental procedure revealed that a proper balance has been achieved between the discriminative ability of the learners, both traditional and neural networks, and the computational resources consumed.
The issue of imbalanced data collection has also affected the performance of similar works, where the need for proper manipulation is clearly stated [18, 32]. The solution of proactive learning has been applied in the latter approach, trying to match the expertise of each human annotator with the most appropriate unlabelled instances. Based on this, the negative effect of harmful annotations can be seriously avoided. This asset should be carefully explored and adopted by our side before enlargement of the current dataset takes place or new data collection attempts get started. We must emphasise once more that, despite the relatively small size of the ETHOS dataset, the human resources invested in adequate labelling cannot be overlooked (2 consecutive months of daily querying of the targeted databases, human annotation in 2 stages, input by a crowdsourcing process). Thus, besides the need for high-quality annotators, mining informative instances that retain the ability to discriminate between hate speech examples, both in binary and
12
A PREPRINT - AUGUST 21, 2021
multi-label classiï¬cation tasks, is of high importance. The conducted experiments verify our assumptions following our straightforward protocol, since the learning performance of various models is satisfactory, especially these based on embeddings. Simultaneously, a proof-of-concept of how to exploit the ETHOS datasetâs learning capacity was provided, serving as a seed dataset for generalising to similar hate speech detection datasets.
Some promising directions of our work are mentioned here, trying to take advantage of its assets and the baselines that were posed. The main issue, the shortage of collected data, is a fact that depends on the limitations that occur during exploiting crowdsourcing platforms (e.g. restricted budget, usersâ trafï¬c) and the further costs that are induced by the human-intensive stage of actively selecting instances that keep a balanced proï¬le of the target dataset on a daily basis. Investigating the related literature, we have mined some clever ideas that tackle this limitation. We record here the case where an annotation process has been designed using a game-based approach, motivating the human oracles to contribute to assigning sentiment labels to a variety of Twitter instances, surpassing the monetisation incentive [13]. Further enrichment of this dataset could also be carried out, integrating either multilingual resources for capturing even more hate speech occurrences, or applying data augmentation techniques [45]. From the perspective of the ML models that we used, pre-processing stages â such as feature selection mechanisms [50] or methods for creation of semantic features [46] â which are established in the realm of short-text input data, could improve the obtained results and retain interpretability properties in speciï¬c cases.
In addition, the ETHOS can be combined with various similar HS datasets â as we stated here initially with two different data collections â for evaluation reasons. The development of hybrid weakly supervised HS detection models, merging semi-supervised and active learning strategies under common frameworks, alleviating human intervention based on decisions over the gathered unlabelled instances that come solely from the side of a robust learner [24, 59], constitutes another very promising ambition. Online HS detection and prevention tools, such as Hatebusters among others, are highly favoured by such approaches. The impact of such detection tools could have been very beneï¬cial in terms of enforcing social awareness and addressing effective ethical issues [1, 9].
Finally, the fact of examining ETHOS under the spectrum of multi-labelled nature appears favouring to reviewers on social media platforms, facilitating informative suggestions for HS comments regarding the level of violence, the target of comments and the categories of HS that are present. However, this is not a multi-purpose HS detection dataset, as the mined comments are based on social media. This means that the corpus contains relatively small sentences. Thus, models trained on this dataset may fail to detect HS in documents on a larger scale without segmentation. On the other hand, the general structure of the proposed protocol could be applied to a variety of learning tasks, especially on large databases, towards better predictions and less intensive annotation [10]. Last but not least, examination of alternative query sampling strategies that support inherent MLL could have proven quite beneï¬cial regarding both the reduction of human effort and the enrichment of attempts like the proposed one [27].
# References
[1] Alharthi, D.N., Regan, A.C.: Social engineering defense mechanisms: A taxonomy and a survey of employeesâ awareness level. In: K. Arai, S. Kapoor, R. Bhatia (eds.) Intelligent Computing - Proceedings of the 2020 Computing Conference, Volume 1, SAI 2020, London, UK, 16-17 July 2020, Advances in Intelligent Systems and Computing, vol. 1228, pp. 521â541. Springer (2020). DOI 10.1007/978-3-030-52249-0\_35. URL https: //doi.org/10.1007/978-3-030-52249-0_35
[2] Almeida, T., Hidalgo, J.M.G., Silva, T.P.: Towards sms spam ï¬ltering: Results under a new dataset. International Journal of Information Security Science 2(1), 1â18 (2013)
[3] Anagnostou, A., Mollas, I., Tsoumakas, G.: Hatebusters: A web application for actively reporting youtube hate speech. In: Proceedings of the Twenty-Seventh International Joint Conference on Artiï¬cial Intelligence, IJCAI-18, pp. 5796â5798. International Joint Conferences on Artiï¬cial Intelligence Organization, Stockholm, Sweden (2018). DOI 10.24963/ijcai.2018/841. URL https://doi.org/10.24963/ijcai.2018/841
[4] Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: 3rd International Conference on Learning Representations, ICLR 2015, May 7-9, 2015, Conference Track Proceedings. San Diego, California, USA (2015). URL http://arxiv.org/abs/1409.0473
[5] Benites, F., Sapozhnikova, E.: Haram: A hierarchical aram neural network for large-scale text classiï¬cation. In: 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pp. 847â854. IEEE Computer Society, USA (2015). DOI 10.1109/ICDMW.2015.14
[6] Chen, J., Mao, J., Liu, Y., Zhang, M., Ma, S.: Tiangong-st: A new dataset with large-scale reï¬ned real- world web search sessions. In: Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, November 3-7, 2019, pp. 2485â2488. ACM, Beijing, China (2019). DOI 10.1145/3357384.3358158. URL https://doi.org/10.1145/3357384.3358158
13
A PREPRINT - AUGUST 21, 2021
[7] Davidson, T., Warmsley, D., Macy, M., Weber, I.: Automated hate speech detection and the problem of offensive language. In: Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM â17, pp. 512â515. AAAI Press, Montreal, Canada (2017)
[8] Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
[9] Dinakar, K., Picard, R.W., Lieberman, H.: Common sense reasoning for detection, prevention, and mitigation of cyberbullying (extended abstract). In: Q. Yang, M.J. Wooldridge (eds.) Proceedings of the Twenty-Fourth International Joint Conference on Artiï¬cial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pp. 4168â4172. AAAI Press (2015). URL http://ijcai.org/Abstract/15/589
[10] Dramé, K., Mougin, F., Diallo, G.: Large scale biomedical texts classiï¬cation: a knn and an esa-based ap- proaches. J. Biomedical Semantics 7, 40 (2016). DOI 10.1186/s13326-016-0073-1. URL https://doi.org/10.1186/ s13326-016-0073-1
[11] Fersini, E., Rosso, P., Anzovino, M.: Overview of the task on automatic misogyny identiï¬cation at ibereval 2018. In: IberEval@ SEPLN, pp. 214â228 (2018)
[12] Friedman, J.: Stochastic gradient boosting. department of statistics. Tech. rep., Stanford University, Technical Report, San Francisco, CA (1999)
[13] Furini, M., Montangero, M.: Sentiment analysis and twitter: a game proposal. Pers. Ubiquitous Comput. 22(4), 771â785 (2018). DOI 10.1007/s00779-018-1142-5. URL https://doi.org/10.1007/s00779-018-1142-5
In: Z. Waseem, W.H.K. Chung, D. Hovy, J.R. Tetreault (eds.) Proceedings of the First Workshop on Abusive Language Online, ALW@ACL 2017, Vancouver, BC, Canada, August 4, 2017, pp. 85â90. Association for Computational Linguistics (2017). DOI 10.18653/v1/w17-3013. URL https://doi.org/10.18653/v1/w17-3013
[15] Gao, L., Huang, R.: Detecting online hate speech using context aware models. In: RANLP (2017)
[16] Geisser, S.: Predictive inference, vol. 55. CRC press (1993)
[17] de Gibert, O., Perez, N., GarcÃa-Pablos, A., Cuadros, M.: Hate speech dataset from a white supremacy forum. Proceedings of the 2nd Workshop on Abusive Language Online (ALW2) (2018). DOI 10.18653/v1/w18-5102. URL http://dx.doi.org/10.18653/v1/w18-5102
[18] Haagsma, H., Bos, J., Nissim, M.: MAGPIE: A large corpus of potentially idiomatic expressions. In: N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, S. Goggi, H. Isahara, B. Maegaard, J. Mariani, H. Mazo, A. Moreno, J. Odijk, S. Piperidis (eds.) Proceedings of The 12th Language Resources and Evaluation Conference, LREC 2020, Marseille, France, May 11-16, 2020, pp. 279â287. European Language Resources Association (2020). URL https://www.aclweb.org/anthology/2020.lrec-1.35/
[19] Hoang, T., Vo, K.D., Nejdl, W.: W2E: A worldwide-event benchmark dataset for topic detection and tracking. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM 2018, Torino, Italy, October 22-26, 2018, pp. 1847â1850. ACM (2018). DOI 10.1145/3269206.3269309. URL https://doi.org/10.1145/3269206.3269309
In: Proceedings of the Third Workshop on Abusive Language Online, pp. 46â57. Association for Computational Linguistics, Florence, Italy (2019). DOI 10.18653/v1/W19-3506. URL https://www.aclweb.org/anthology/ W19-3506
[21] Inc., M.: Kappa statistics for attribute agreement analysis. Available at https://support.minitab.com/ en-us/minitab/18/help-and-how-to/quality-and-process-improvement/measurement-system-analysis/how-to/ attribute-agreement-analysis/attribute-agreement-analysis/interpret-the-results/all-statistics-and-graphs/ kappa-statistics/ (2021/04/17)
[22] Jirotka, M., Stahl, B.C.: The need for responsible technology. Journal of Responsible Technology 1, 100,002 (2020). DOI https://doi.org/10.1016/j.jrt.2020.100002. URL http://www.sciencedirect.com/science/article/pii/ S2666659620300020
[23] Joulin, A., Grave, E., Bojanowski, P., Douze, M., Jégou, H., Mikolov, T.: Fasttext.zip: Compressing text classiï¬cation models (2016)
[24] Karlos, S., Kanas, V.G., Aridas, C.K., Fazakis, N., Kotsiantis, S.: Combining active learning with self-train algorithm for classiï¬cation of multimodal problems. In: IISA 2019, Patras, Greece, July 15-17, 2019, pp. 1â8 (2019). DOI 10.1109/IISA.2019.8900724. URL https://doi.org/10.1109/IISA.2019.8900724
14
A PREPRINT - AUGUST 21, 2021
[25] Kim, S., Kim, D., Cho, M., Kwak, S.: Proxy anchor loss for deep metric learning. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pp. 3235â3244. IEEE (2020). DOI 10.1109/CVPR42600.2020.00330. URL https://doi.org/10.1109/CVPR42600.2020.00330 [26] Krempl, G., Kottke, D., Lemaire, V.: Optimised probabilistic active learning (OPAL) - for fast, non-myopic, cost-sensitive active classiï¬cation. Mach. Learn. 100(2-3), 449â476 (2015). DOI 10.1007/s10994-015-5504-1. URL https://doi.org/10.1007/s10994-015-5504-1
[27] Kumar, P., Gupta, A.: Active learning query strategies for classiï¬cation, regression, and clustering: A survey. J. Comput. Sci. Technol. 35(4), 913â945 (2020). DOI 10.1007/s11390-020-9487-4. URL https://doi.org/10.1007/ s11390-020-9487-4
[28] LjubeÅ¡i´c, N., Erjavec, T., FiÅ¡er, D.: Datasets of Slovene and Croatian moderated news comments. In: Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pp. 124â131. Association for Computational Linguistics, Brussels, Belgium (2018). DOI 10.18653/v1/W18-5116. URL https://www.aclweb.org/anthology/ W18-5116
[29] Malik, H., Shakshuki, E.M.: Detecting performance anomalies in large-scale software systems using entropy. Pers. Ubiquitous Comput. 21(6), 1127â1137 (2017). DOI 10.1007/s00779-017-1036-y. URL https://doi.org/10.1007/ s00779-017-1036-y
[30] McCallum, A., Nigam, K., et al.: A comparison of event models for naive bayes text classiï¬cation. In: AAAI-98 workshop on learning for text categorization, vol. 752, pp. 41â48. Citeseer (1998)
[31] Miller, G.A.: Wordnet: a lexical database for english. Communications of the ACM 38(11), 39â41 (1995) [32] Nghiem, M., Baylis, P., Ananiadou, S.: Paladin: an annotation tool based on active and proactive learning. In: D. Gkatzia, D. Seddah (eds.) Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, EACL 2021, Online, April 19-23, 2021, pp. 238â243. Association for Computational Linguistics (2021). URL https://www.aclweb.org/anthology/2021.eacl-demos.28/ [33] Ousidhoum, N., Lin, Z., Zhang, H., Song, Y., Yeung, D.: Multilingual and multi-aspect hate speech analysis. In: EMNLP-IJCNLP 2019, November 3-7, 2019, pp. 4674â4683. Association for Computational Linguistics, Hong Kong, China (2019). DOI 10.18653/v1/D19-1474. URL https://doi.org/10.18653/v1/D19-1474
[34] Pennington, J., Socher, R., Manning, C.D.: Glove: Global vectors for word representation. In: Empirical Methods in Natural Language Processing (EMNLP), pp. 1532â1543. Doha, Qatar (2014). URL http://www.aclweb.org/ anthology/D14-1162
[35] Pitenis, Z., Zampieri, M., Ranasinghe, T.: Offensive language identiï¬cation in greek. CoRR abs/2003.07459 (2020). URL https://arxiv.org/abs/2003.07459
[36] Polignano, M., Basile, P., de Gemmis, M., Semeraro, G., Basile, V.: Alberto: Italian BERT language understanding model for NLP challenging tasks based on tweets. In: R. Bernardi, R. Navigli, G. Semeraro (eds.) Proceedings of the Sixth Italian Conference on Computational Linguistics, Bari, Italy, November 13-15, 2019, CEUR Workshop Proceedings, vol. 2481. CEUR-WS.org (2019). URL http://ceur-ws.org/Vol-2481/paper57.pdf
[37] Porter, M.F.: Snowball: A language for stemming algorithms. Published online (2001). URL http://snowball. tartarus.org/texts/introduction.html. Accessed 11.03.2008, 15.00h
# Pipe bie, OGR. Allain AH. Von
[38] Pupo, O.G.R., Altalhi, A.H., Ventura, S.: Statistical comparisons of active learning strategies over multiple datasets. Knowl. Based Syst. 145, 274â288 (2018). DOI 10.1016/j.knosys.2018.01.033. URL https://doi.org/10. 1016/j.knosys.2018.01.033
[39] Ranasinghe, T., Zampieri, M., Hettiarachchi, H.: BRUMS at HASOC 2019: Deep learning models for multilingual hate speech and offensive language identiï¬cation. In: Working Notes of FIRE 2019, December 12-15, 2019, CEUR Workshop Proceedings, vol. 2517, pp. 199â207. CEUR-WS.org, Kolkata, India (2019). URL http: //ceur-ws.org/Vol-2517/T3-3.pdf
[40] Read, J., Pfahringer, B., Holmes, G., Frank, E.: Classiï¬er chains for multi-label classiï¬cation. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 254â269. Springer, Springer, Bled, Slovenia (2009)
[41] van Rosendaal, J., Caselli, T., Nissim, M.: Lower bias, higher density abusive language datasets: A recipe. In: J. Monti, V. Basile, M.P. di Buono, R. Manna, A. Pascucci, S. Tonelli (eds.) Proceedings of the Workshop on Resources and Techniques for User and Author Proï¬ling in Abusive Language, ResTUP@LREC 2020, Marseille, France, May 2020, pp. 14â19. European Language Resources Association (ELRA) (2020). URL https://www.aclweb.org/anthology/2020.restup-1.4/
[42] Rosenthal, S., Atanasova, P., Karadzhov, G., Zampieri, M., Nakov, P.: A large-scale semi-supervised dataset for offensive language identiï¬cation. arXiv preprint arXiv:2004.14454 (2020)
15
A PREPRINT - AUGUST 21, 2021
[43] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
[44] Sharma, M., Zhuang, D., Bilgic, M.: Active learning with rationales for text classiï¬cation. In: R. Mihalcea, J.Y. Chai, A. Sarkar (eds.) NAACL HLT 2015, Denver, Colorado, USA, May 31 - June 5, 2015, pp. 441â451. The Association for Computational Linguistics (2015). DOI 10.3115/v1/n15-1047. URL https://doi.org/10.3115/v1/ n15-1047
[45] Shim, H., Luca, S., Lowet, D., Vanrumste, B.: Data augmentation and semi-supervised learning for deep neural networks-based text classiï¬er. In: C. Hung, T. Cerný, D. Shin, A. Bechini (eds.) SAC â20: The 35th ACM/SIGAPP Symposium on Applied Computing, online event, [Brno, Czech Republic], March 30 - April 3, 2020, pp. 1119â 1126. ACM (2020). DOI 10.1145/3341105.3373992. URL https://doi.org/10.1145/3341105.3373992
[46] Skrlj, B., Martinc, M., Kralj, J., Lavrac, N., Pollak, S.: tax2vec: Constructing interpretable features from taxonomies for short text classiï¬cation. Comput. Speech Lang. 65, 101,104 (2021). DOI 10.1016/j.csl.2020. 101104. URL https://doi.org/10.1016/j.csl.2020.101104
[47] Sun, C., Asudeh, A., Jagadish, H.V., Howe, B., Stoyanovich, J.: Mithralabel: Flexible dataset nutritional labels for responsible data science. In: Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019, Beijing, China, November 3-7, 2019, pp. 2893â2896. ACM, Beijing, China (2019). DOI 10.1145/3357384.3357853. URL https://doi.org/10.1145/3357384.3357853
[48] Tang, M.J., Chan, E.T.: Social media: Inï¬uences and impacts on culture. In: K. Arai, S. Kapoor, R. Bhatia (eds.) Intelligent Computing - Proceedings of the 2020 Computing Conference, Volume 1, SAI 2020, London, UK, 16-17 July 2020, Advances in Intelligent Systems and Computing, vol. 1228, pp. 491â501. Springer (2020). DOI 10.1007/978-3-030-52249-0\_33. URL https://doi.org/10.1007/978-3-030-52249-0_33
[49] Tommasel, A., Godoy, D.: A social-aware online short-text feature selection technique for social media. Informa- tion Fusion 40, 1 â 17 (2018). DOI https://doi.org/10.1016/j.inffus.2017.05.003. URL http://www.sciencedirect. com/science/article/pii/S1566253516302354
[50] Tommasel, A., Godoy, D.: A social-aware online short-text feature selection technique for social media. Inf. Fusion 40, 1â17 (2018). DOI 10.1016/j.inffus.2017.05.003. URL https://doi.org/10.1016/j.inffus.2017.05.003
[51] Tommasel, A., Godoy, D.: Short-text learning in social media: a review. Knowl. Eng. Rev. 34, e7 (2019). DOI 10.1017/S0269888919000018. URL https://doi.org/10.1017/S0269888919000018
[52] Tsoumakas, G., Katakis, I.: Multi-label classiï¬cation: An overview. International Journal of Data Warehousing and Mining (IJDWM) 3(3), 1â13 (2007)
[53] Ullmann, S., Tomalin, M.: Quarantining online hate speech: technical and ethical perspectives. Ethics Inf. Technol. 22(1), 69â80 (2020). DOI 10.1007/s10676-019-09516-z. URL https://doi.org/10.1007/s10676-019-09516-z [54] Van Hee, C., Jacobs, G., Emmery, C., Desmet, B., Lefever, E., Verhoeven, B., De Pauw, G., Daelemans, W., Hoste, V.: Automatic detection of cyberbullying in social media text. PLOS ONE 13(10), e0203,794 (2018). DOI 10.1371/journal.pone.0203794. URL https://dx.plos.org/10.1371/journal.pone.0203794
[55] Vapnik, V.N.: The Nature of Statistical Learning Theory, Second Edition. Statistics for Engineering and Information Science. Springer (2000)
[56] Varma, S., Simon, R.: Bias in error estimation when using cross-validation for model selection. BMC bioinfor- matics 7(1), 91 (2006)
[57] Waseem, Z., Hovy, D.: Hateful symbols or hateful people? predictive features for hate speech detection on twitter. In: Proceedings of the NAACL Student Research Workshop, pp. 88â93. Association for Computational Linguistics, San Diego, California (2016). URL http://www.aclweb.org/anthology/N16-2013
[58] Yang, F., Peng, X., Ghosh, G., Shilon, R., Ma, H., Moore, E., Predovic, G.: Exploring deep multimodal fusion of text and photo for hate speech classiï¬cation. In: Proceedings of the Third Workshop on Abusive Language Online, pp. 11â18. Association for Computational Linguistics, Florence, Italy (2019). DOI 10.18653/v1/W19-3502. URL https://www.aclweb.org/anthology/W19-3502
[59] Yu, D., Fu, B., Xu, G., Qin, A.: Constrained nonnegative matrix factorization-based semi-supervised multilabel learning. Int. J. Mach. Learn. & Cyber. 10(5), 1093â1100 (2019). DOI 10.1007/s13042-018-0787-8. URL https://doi.org/10.1007/s13042-018-0787-8
[60] Zampieri, M., Malmasi, S., Nakov, P., Rosenthal, S., Farra, N., Kumar, R.: Predicting the type and target of offensive posts in social media. In: NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pp. 1415â1420 (2019). DOI 10.18653/v1/n19-1144. URL https://doi.org/10.18653/v1/n19-1144 [61] Zhang, M.L., Zhou, Z.H.: Ml-knn: A lazy learning approach to multi-label learning. Pattern recognition 40(7),
2038â2048 (2007)
16 | {
"id": "2004.14454"
} |
2006.06251 | Performance in the Courtroom: Automated Processing and Visualization of Appeal Court Decisions in France | Artificial Intelligence techniques are already popular and important in the
legal domain. We extract legal indicators from judicial judgment to decrease
the asymmetry of information of the legal system and the access-to-justice gap.
We use NLP methods to extract interesting entities/data from judgments to
construct networks of lawyers and judgments. We propose metrics to rank lawyers
based on their experience, wins/loss ratio and their importance in the network
of lawyers. We also perform community detection in the network of judgments and
propose metrics to represent the difficulty of cases capitalising on
communities features. | http://arxiv.org/pdf/2006.06251 | Paul Boniol, George Panagopoulos, Christos Xypolopoulos, Rajaa El Hamdani, David Restrepo Amariles, Michalis Vazirgiannis | cs.CL, cs.AI, cs.IR, cs.LG | null | null | cs.CL | 20200611 | 20200709 | 0 2 0 2
l u J 9 ] L C . s c [
3 v 1 5 2 6 0 . 6 0 0 2 : v i X r a
# Performance in the Courtroom: Automated Processing and Visualization of Appeal Court Decisions in France
Paul Boniol [email protected] LIX, Ãcole Polytechnique Paris, France
George Panagopoulos [email protected] LIX, Ãcole Polytechnique Paris, France
Christos Xypolopoulos [email protected] LIX, Ãcole Polytechnique Paris, France
Rajaa El Hamdani [email protected] HEC Paris Paris, France
David Restrepo Amariles [email protected] HEC Paris Paris, France
Michalis Vazirgiannis [email protected] LIX, Ãcole Polytechnique Paris, France
ABSTRACT Artificial Intelligence techniques are already popular and impor- tant in the legal domain. We extract legal indicators from judicial judgments to decrease the asymmetry of information of the legal system and the access-to-justice gap. We use NLP methods to ex- tract interesting entities/data from judgments to construct networks of lawyers and judgments. We propose metrics to rank lawyers based on their experience, wins/loss ratio and their importance in the network of lawyers. We also perform community detection in the network of judgments and propose metrics to represent the difficulty of cases capitalising on communities features.
CCS CONCEPTS ⢠Applied computing â Law; ⢠Computing methodologies â Information extraction.
[11] "one of six Americans is a self-represented litigant in a newly filed case each year," however, the resolutions are in favor of liti- gants represented with a lawyer. Both [1, 11] suggests "the ease of access to information" is a solution to address the gap in accessing justice. Access to free basic legal information could help the user to navigate the justice system easily, understand better the legal area his problem falls into, and choose a lawyer with experience on the subject matter of the dispute. In our work, we extract and represent information from past judgments to increase the trans- parency of judicial procedures and make them more accessible to laypersons. First, we pre-process judgments by extracting relevant legal entities, such as the lawyers of each party, by using Named Entity Recognition (NER) models. Second, we analyze the win/loss rate of lawyers by building two lawyersâ networks: an opposing network of lawyers and a collaborative network of lawyers. Third, we use network analysis of judgments to suggest a measure of case difficulty based on case types/communities with distinct win/lose rates.
KEYWORDS Natural Language Processing, Named Entity Recognition, Graph Mining, Network Analysis, Case-Law Analysis, Legal Text
1 INTRODUCTION Recent advances in Artificial Intelligence (AI) and Natural Lan- guage Processing (NLP) allow the analysis of large numbers of legal documents in aggregate in contrast to traditional methods. A long-standing application of NLP to legal documents is informa- tion extraction and retrieval from judicial decisions. The interest in mining data from judgments can be explained by the critical role they play in the administration of justice in both common and civil law systems. The objective of our work is to analyze judgments by French courts to gain insights about the operation of the French judicial system, which could in turn help developing an interface for laypersons. As explained in [23], a legal user interface could shield the user of the legal system from the complexity of the un- derlying legal system. Ordinary people perceive the legal system as too complex [2], which results in part from the asymmetry of information in the market of legal services, where ordinary people are disadvantaged in comparison with providers of legal services [1]. The asymmetry of information adds to the access-to-justice gap, such that a layperson lacks the right information and tools to choose the right lawyer at an affordable cost, and might prefer to self represent herself or refrain from filing a lawsuit. According to
2 RELATED WORK Numerous research have been carried out on case-law corpora fo- cusing on specific objectives. One of the long-time objectives is the prediction of case outcomes. One of the first approaches in this field was [15] to manually convert a case factual elements into numerical values, compute their sum and predict a decision in favor of the petitioner if the sum is above a manually selected threshold. Recent efforts [6, 14, 16, 25] have used machine learning techniques to build outcome prediction models. Judicial judgments are rich in data, which could be used to analyze the operations of the legal sys- tem. The authors of [8, 20] used empirical methods to understand and describe judicial decision-making. Other researchers extract information to empower legal decision-makers and legal practition- ers [12, 17, 18, 27]. Judicial decisions lend themselves to the use of network analysis techniques. Networks of case law have been used several times [7, 10] to measure the importance of a case. The theory of graphs provides tools well adapted to analyze the complexity of case law networks; for example, [26] employs a hybrid version for bipartite graphs to clarify procedural aspects of the International Criminal Court. Judgments are expressed in natural language, there- fore to scale their automatic processing, several researchers have been developing natural language processing techniques for the legal domain. Some adapt NLP techniques built for the general
language to the legal language. [24] build their model of sentence boundary detection (SBD) for legal documents. Researchers from the Lynx project [21] developed a set of NLP services to extract a variety of information from legal documents: term extraction, text structure recognition, and NER. NER techniques have several applications in the legal domain. [5] improved existing NER models and used the resulting models to extract, from French judgments, entities that should be anonymized before the public release of the judgments.
3 DATA 3.1 Data collection Our dataset consists of 40,000 rtf files that were crawled through Légifrance 1, a French legal publisher providing access to law codes and legal decisions. To navigate and crawl through Légifrance we used Selenium2, a python framework that simulates a real web browser. For our experiments, we used a sample of cases from the court of appeal consisting of 17,215 cases. We limit our first analysis to cases from the court of appeal due to the specificity of cases from trial courts and the Court of Cassation. For future works and analysis, a sample of cases from the Court of Cassation could also be used (more than 400,000 documents available on Légifrance). We decided to focus first on cases decided by civil courts and to exclude both criminal, administrative, and specialized courts. We also remove procedural judgments, such as court orders. Judgments analyzed here are solely final decisions called "arrÃÅt de Cour dâappel."
3.2 Data preprocessing Data preprocessing was the most challenging part of the project. The structure and wording of the legal documents, which vary between different courts and dockets, as well as the use of legal formal language, were challenging obstacles to conduct the text mining tasks. Below we analyze in detail how we approached each part, from segmenting the documents to extracting the persons taking part in each court case and their roles.
3.2.1 Segmentation.
Analysis of the macrostructure of cases. The decisions of courts of appeals in France follow an overall similar structure. First, the documents state practical information about the litigation such as dates, jurisdiction, and the different entities involved in the trial, listed in the following order:
⢠Appellant (appelant in French): The name of the party is always anonymized, for example: "Monsieur Jean X."
⢠Appellantâs counsel: can be anonymized but always start with the keyword "Me" or "Maître," for example, "Me Jean Dupont."
⢠Appellee (intimée in French): this entity has the same format as the appellant.
⢠Appelleeâs counsel: Same format as the appellantâs coun- sel.
⢠Court Entities (non-fixed order):
# 1www.legifrance.gouv.fr 2https://selenium-python.readthedocs.io/
2
â Judge (magistrats, conseillers in French ): could be anonymized but always start with the keyword "Président." â Clerk (Greffier in French): Can be anonymized but is
always expressed near the word "Greffier."
After listing the entities, decisions from French appeal courts con- tinue with the debate. The debate describes all the facts and pro- cedure leading to the appeal. It also states the different arguments brought forth by the parties, and follows with the reasoning of the court. Finally, it closes with the conclusion which states the final decision. The keywords separating the different parts vary significantly, and are sometimes absent, which makes the segmen- tation task complex. Keywords may vary from one appeal court to another. We use graphs to compare the structure of judgments of appeal courts in several territorial jurisdictions. Figure 1 represents the flow of cases in two different jurisdictions. Each graph is built by parsing judgments from the same jurisdiction into sentences and then linking consecutive sentences by an edge. The name of a node is the text of the sentence. When the sentence is more than five words, the name of the node is "Long_Text_i_j" where i is the index of the case in the whole dataset, and j is the index of the sentence within the case. The size of the node is the occurrence of its text in all the judgment of the considered jurisdiction. To account for keywords that have small variations across documents, we use the Jaro similarity to identify these variations, examples are in table 1. The Jaro similarity is a similarity measure between two strings s1 and s2 [13] defined with the following formula:
Sim J (s1, s2) = 0 3 ( m 1 |s1 | + m |s2 | if m = 0, + mât m ) otherwise
Where:
| si | is the length of string si ⢠m is the number of same characters not further than â max (|s1 |, |s2 |)
1 2
⢠t is the number of transpositions.
The Jaro similarity is used to contract nodes of similar sentences, such that if two sentences have a Jaro similarity larger than 0.8, then they are considered belonging to the same node. Therefore the big nodes are common parts from all documents, and they represent the structure of these documents.
Figure 1 points out the difference in structure and flows that documents from different jurisdictions can have. For instance, Agen will use "ENTRE" to announce the appellant, and "ET" to announce the appellee, whereas Douai will use respectively "APPELANT" and "INTIMEE."
Nevertheless, we empirically observed that all the decisions, whatever the jurisdiction was, shared the same keyword "PAR CES MOTIFS" to announce the final decision of the court (last big node in the two flows of figure 1).
Segmentation with keywords. We also sought to extract en- tities corresponding to the lawyers defending each party. As de- scribed above, legal entities are mentioned after the practical infor- mation in a fixed order. Moreover, domain experts confirmed these legal entities are mentioned in separate segments. These segments are often preceded by known keywords, as shown in figure 2. Once we have identified the beginning and end of each segment, we
ââ
Table 1: Examples of Jaro distance between pairs of similar sentences contracted to a same node
Sentence 1 Sentence 2 Jaro distance faits et procedure procedure et pretentions des parties moyens et pretentions des parties faits procedure procedure et moyens des parties pretentions et moyens des parties 0.86 0.83 0.92
Introduction Entre, Appelant, Présenté par Appellant segment Et, Intimé, Défendeur Appellee segment ââââââââââââ_| Debate Par ces motifs Conclusion
Figure 2: Segmentation of a legal case based on keywords, to facilitate entity recognition
the appellee segment as lawyers of the appellee. It should be noted that decisions without any reference to a lawyer on both sides were overlooked.
Figure 1: Most recurrent Keywords in decisions by courts in Douai, and Agen respectively. The nodes are part of the docu- ment. When the sentence is more than five words, the name of the node is attributed to Long_Text_i with i unique. There- fore, the big nodes are common parts from all documents, and therefore the structure.
use them to extract lawyersâ names as described in the following subsection.
3.3 Extraction of lawyersâ entities To detect lawyersâ names throughout the document, we discard, first, all segments except the appellant and the appellee segments. Second, we segment them further into sentences using the sentence tokenizer by Polyglot 3, which is a Python package providing mul- tilingual natural language processing tools. Third, we only keep sentences containing honorifics used for lawyers such as "Me," "Maître," or lawyersâ keywords like: "representé par." We then use a well-established Named Entity Recognition model by Polyglot [3] to recognize person entities from the remaining sentences. The model uses pretrained word embeddings from Wikipedia [4] to classify whether a word is an entity or not based on its sentence. Last, we consider the extracted named entities appearing in the appellant segment as lawyers of the appellant, and the names appearing in
3https://polyglot.readthedocs.io/
3.4 Extraction of the judge decision From the initial segmentation, the final decision of the court is to be found in the conclusion segment of the judgement. Concerning judgments from appeal courts, the court will either confirm the first lower court decision (Tribunal judiciaire) or reverse it. However, the court can also partially confirm the judgment. In other words, the court can decide to accept one of the appellantâs requests, and therefore change the first decision partially. Empirically, we noticed that certain words are present in certain types of decisions, and after validation from the domain experts, we resorted to a keyword-based solution:
⢠"Confirme", "Rejete", "Irrecevable": keep the first decision (Appellee "wins")
⢠"Infirme", "Rectifier", "Réforme"": change the first decision (Appellant "wins")
Out of a sample of 5832 cases, 570 conclusions (â¼10%) include at least a keyword representing both outcomes, in which case we keep the outcome that has most keywords. This is a temporary solution that requires refinement in the future.
4 NETWORK ANALYSIS OF LAWYERS Once the entitiesâ recognition is complete, we extract all the in- stances (and their function) in every document. Since courts tend to have a limited number of lawyers, judges and court clerks, cases
3
share the same entities. Therefore, all the cases can be considered a big graph where entities interact with each other.
4.1 Opposing network of lawyers We extracted the winning and losing lawyers in each decision. From this, we can define a directed weighted network. We draw an edge between lawyers if they have been opposed. The edge from lawyer i to lawyer j is weighted by the wins wins; ; of lawyer i to lawyer j: WiNSj5j = AWirsj,acc + OWir 4), def Where wj-4j,acc is the number of wins of lawyer i as an appellant and wj,4;,def is the number of wins of lawyer i as an appellee. Pa- rameters âaâ and âbâ are used to weigh more winning as an appellant than winning as an appellee since it is known by legal experts that the event of winning an appeal is less frequent than losing it We also confirm this intuition by counting the rate of appealsâ rejection from our dataset. We get a rejection rate of 0.9. We collapse both edges between two lawyers into one directed edge weighted by: |winsj.5; â winsj.+;|log(winsj4j; + winsj+i + 1). In this case the edge direction is determined by the sign of wins;,; â winsj4; such that the edge target is the lawyer with most wins. To visualize the most important nodes, we remove lawyers with only one case (899 out of 2146), which leaves us with a network with 1247 nodes and 2182 edges. The resulting network appears in figure 3. The edge
@ More Loses © More Wins
Figure 3: The network of lawyers that have opposed in ap- peal cases. The width of the edge is a function of the number of wins the source node has over the target. The node color defines whether the lawyer has more losses or more wins, and the size is analogous to total number of cases, large yel- low nodes mean the lawyer has won much more cases than he lost and vice versa.
goes from a "losing" to a "winning" lawyer, and the width of the
4
@ Wesing @ Losing
Figure 4: Network of lawyers that have collaborated in court cases. The edge color signifies the sign of the win-loss metric and the width of the absolute value.
edge represents the difference in the number of wins. Node size is the number of appeal cases where the lawyer appears and the color captures the win-loss difference.
4.2 Collaboration network of lawyers The collaboration network in figure 4 indicates lawyers that have been on the same side during an appeal case. The edges are weighted based on the wins minus the losses, so the network can capture which collaborations are the most successful. We have removed nodes with number of collaborations below a fixed threshold to ob- tain a decluttered visualization of the network. We obtain a network of 47 nodes and 94 edges out of 2182 nodes and 2950 edges.
4.3 Lawyers Ranking In this section, we suggest three metrics to rank and compare be- tween lawyers. First, we measure the experience of a lawyer by the number of judgments mentioning him as the appellantâs or appelleeâs lawyer. Second, we compute the win-loss rate of lawyers. Third, we calculate the centrality of a lawyer in the opposing net- work.
In figure 5a, lawyers are ranked by their experience in going in front of the court of appeal. However, this measure alone does not indicate the performance of the lawyer. Thus we need to refer to the win/loss ratio to evaluate the performance. Lawyer 353 ranks first in terms of the win/loss ratio instead of fifth in terms of the total number of cases. In this case, lawyer 353 performs better than lawyer 387, who is ranked first in figure 5a but ranks 9 in terms of win/loss ratio. A weakness of the win/loss ratio ranking is that it does not consider the experience of the opposing lawyer; while the opponentâs worth can be a measure of the winâs value. To this end, we compute the weighted directed PageRank of the opposing network 5c. As explained in section 4.1, the weights are the number of wins such that wins as an appellantâs lawyer counts more than wins as an appelleeâs lawyer. So edges directed towards lawyers who win more representing an appellant have higher weights than
wo SF OS PE PO PS SX oe So CSS gS By oF oT Fo lems
(a) Lawyers ordered by the total number of cases
2 SPSS SSS SL Pd eS son
(b) Lawyers ordered by the win/loss ratio
(c) Lawyers ordered by their im- portance using PageRank algo- rithm
Figure 5: Ranking of lawyers using three different measures
edges directed towards a lawyer who wins more representing an appellee. Therefore top lawyers in figure 5c are lawyers who won against experienced lawyers and who win most as an appellantâs lawyer. Lawyer 387 is ranked best than lawyer 350 in terms of win/loss ratio, but worst in terms of PageRank measure. We could explain this difference in the ranking by the fact that the majority of wins of lawyer 350 wins as an appellantâs lawyer, while the majority of wins of lawyer 387 wins as an appelleeâs lawyer. Thus it is recommended for an appellant to choose lawyer 350 rather than lawyer 387.
4.4 Network analysis of judgments In this section, we develop a method to assess casesâ difficulty from the perspective of the appellant. More precisely, the aim is to compute the difficulty withing a group of cases dealing with the same legal issues. First, we built a network of cases to discover communities of cases about similar legal issues. Second, we use the win/loss rate of the appeal as a proxy to its difficulty.
5
Graphs encode knowledge and patterns more efficiently [19, 22]. The crucial element is the edges representing some kind of simi- larity/affiliation among the nodes. Graphs are said to to have the property of community structure [9] when there are groups of vertices with high concentration of internal edges and low concen- tration of edges between these groups, see example in figure 6.c. These special groups are called communities, clusters or modules. In order to build a graph of cases, we needed to connect them with some property that represented similarity. Cases about the same legal issues tend to cite the sames groups of law articles, therefore we define the similarity of two cases by the number of common law articles mentioned in the text of the cases reflecting apparently the thematic similarity among them. Thus we build a network of judgments to discover the communitiesâ structure and natural di- visions among the set of studied cases. First, we prepare cases by extracting cited articles of law. We extract articles by using regular expressions. Then we create an edge between two cases if they cite at least k same articles. Figure 7 shows graphs of judgments for different values of k. It is evident that as we increase k the graph becomes smaller with the cases having higher similarity due to the higher number of common articles.
(a) Cases with at least 3 articles in common (b) Cases with at least 5 articles in common
© 2 oe) C+ Xo © ©
@ ecte2(@ 4 ) âType of case 1/ © ey
(c) A case graph com- displaying munity structure: two groups of cases with dense internal connections and sparser connections between groups
Figure 6: Examples of cases graphs and communities for a number of 7 cases
We built networks for different values of k from cases of the last three months of 2018, as shown in figure 7. The network naturally groups similar cases in communities. For example, in figure 7c cases against the same appellee and about the same issue. We also notice, figure 8 that cases with the same win/loss rate are grouped in the same communities.
5 CONCLUSION We used NLP methods to extract information from judgments of the French court of appeal. We constructed indicators about the difficulty of lawyersâ performance and cases by using network analysis techniques on lawyersâ networks and casesâ networks.
(a): iron ong term renting (b): Holidays not given by SNCF ae
(a): ong renting (b): Holidays not given by SNCF ae
(a) k=2 80,000 edges 1,000 nodes (b) k=3 20,000 edges 600 nodes (c) k=4 5,000 edges 400 nodes
Figure 7: Examples of cases graphs and communities for dif- ferent values of k, (5500 cases)
Figure 8: Examples of detected communities. Communities circled in red have a high losing rate. Communities circled in green have a high winning rate.
Our objective is to use these indicators to guide laypersons when confronted with the legal systems and contribute to the decrease of the access-to-justice gap by reducing the asymmetry of information characterizing the legal market. The lawyersâ ranking could serve to build a system that guides an appellee in choosing a lawyer. However, the lawyersâ ranking relies only on wins and losses of lawyers. In future work, we expect to produce a ranking that takes into account the legal area of the case and its difficulty, in such a way that the ranking could be more personalized to the needs of a layperson.
REFERENCES [1] 2016. Access to Justice and Market Failure. Slaw (November 2016).
http:
# //www.slaw.ca/2016/11/01/access-to-justice-and-market-failure/ Understanding Effective Access to Justice.
Retrieved April 4, 2020r from http://www.oecd.org/gov/Understanding-effective-access-justice- workshop-paper-final.pdf
[3] Rami Al-Rfou, Vivek Kulkarni, Bryan Perozzi, and Steven Skiena. 2015. Polyglot- NER: Massive multilingual named entity recognition. In Proceedings of the 2015 SIAM International Conference on Data Mining. SIAM, 586â594.
[4] Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual nlp. arXiv preprint arXiv:1307.1662 (2013). [5] Valentin Barriere and Amaury Fouret. 2019. May I Check Again?âA simple but efficient way to generate and use contextual dictionaries for Named Entity Recognition. Application to French Legal Texts. arXiv preprint arXiv:1909.03453 (2019).
6
[6] Karl Branting, B Weiss, B Brown, C Pfeifer, A Chakraborty, L Ferro, M Pfaff, and A Yeh. 2019. Semi-Supervised Methods for Explainable Legal Prediction. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law. 22â31.
[7] Mattias Derlén and Johan Lindholm. 2014. Goodbye van G end en L oos, Hello B osman? Using Network Analysis to Measure the Importance of Individual CJEU Judgments. European Law Journal 20, 5 (2014), 667â687.
[8] Lee Epstein, William M Landes, and Richard A Posner. 2013. The behavior of federal judges: a theoretical and empirical study of rational choice. Harvard University Press.
[9] Santo Fortunato. 2010. Community detection in graphs. Physics reports 486, 3-5 (2010), 75â174.
[10] James H Fowler, Timothy R Johnson, James F Spriggs, Sangick Jeon, and Paul J Wahlbeck. 2007. Network analysis and the law: Measuring the legal importance of precedents at the US Supreme Court. Political Analysis 15, 3 (2007), 324â346. [11] John M Greacen, Amy Dunn Johnson, and Vincent Morris. 2014. From market failure to 100% access: Toward a civil justice continuum. UALR L. Rev. 37 (2014), 551.
[12] Jerrold Soh Tsin Howe, Lim How Khang, and Ian Ernst Chai. 2019. Legal Area Classification: A Comparative Study of Text Classifiers on Singapore Supreme Court Judgments. arXiv preprint arXiv:1904.06470 (2019).
[13] Matthew A Jaro. 1989. Advances in record-linkage methodology as applied to matching the 1985 census of Tampa, Florida. J. Amer. Statist. Assoc. 84, 406 (1989), 414â420.
[14] Daniel Martin Katz, II Bommarito, J Michael, and Josh Blackman. 2014. Predicting the behavior of the supreme court of the united states: A general approach. arXiv preprint arXiv:1407.6333 (2014).
[15] Fred Kort. 1957. Predicting Supreme Court decisions mathematically: A quanti- tative analysis of the âÄIJright to counselâÄİ cases. American Political Science Review 51, 1 (1957), 1â12.
[16] Shangbang Long, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2019. Auto- matic judgment prediction via legal reading comprehension. In China National Conference on Chinese Computational Linguistics. Springer, 558â572.
[17] Dennis P Michalopoulos, Jessica Jacob, and Alfredo Coviello. 2019. AI-Enabled Litigation Evaluation: Data-Driven Empowerment for Legal Decision Makers. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law. 264â265.
[18] Wai Yin Mok and Jonathan R Mok. 2019. Legal Machine-Learning Analysis: First Steps towards AI Assisted Legal Research. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law. 266â267.
[19] Giannis Nikolentzos, Antoine J-P Tixier, and Michalis Vazirgiannis. 2019. Mes- sage Passing Attention Networks for Document Understanding. arXiv preprint arXiv:1908.06267 (2019).
[20] Jeffrey J Rachlinski and Andrew J Wistrich. 2017. Judging the judiciary by the numbers: Empirical research on judges. Annual Review of Law and Social Science 13 (2017), 203â229.
[21] Georg Rehm, Julian Moreno Schneider, Jorge Gracia, Artem Revenko, Victor Mireles, Maria Khvalchik, Ilan Kernerman, Andis Lagzdins, M¯arcis Pinnis, Artus Vasilevskis, et al. 2019. Developing and orchestrating a portfolio of natural legal language processing and document curation services. In Proceedings of the Natural Legal Language Processing Workshop 2019. 55â66.
[22] François Rousseau and Michalis Vazirgiannis. 2013. Graph-of-word and TW- IDF: new approach to ad hoc IR. In Proceedings of the 22nd ACM international conference on Information & Knowledge Management. 59â68.
[23] JB Ruhl and Daniel Martin Katz. 2015. Measuring, monitoring, and managing legal complexity. Iowa L. Rev. 101 (2015), 223.
[24] George Sanchez. 2019. Sentence boundary detection in legal text. In Proceedings of the Natural Legal Language Processing Workshop 2019. 31â38.
[25] Octavia-Maria Sulea, Marcos Zampieri, Mihaela Vela, and Josef Van Genabith. 2017. Predicting the law area and decisions of french supreme court cases. arXiv preprint arXiv:1708.01681 (2017).
[26] Fabien Tarissan and Raphaëlle Nollez-Goldbach. 2016. Analysing the first case of the international criminal court from a network-science perspective. Journal of Complex Networks 4, 4 (2016), 616â634.
[27] Thomas Vacek, Ronald Teo, Dezhao Song, Timothy Nugent, Conner Cowling, and Frank Schilder. 2019. Litigation Analytics: Case outcomes extracted from US federal court dockets. In Proceedings of the Natural Legal Language Processing Workshop 2019. 45â54. | {
"id": "1909.03453"
} |
2006.06217 | SECure: A Social and Environmental Certificate for AI Systems | In a world increasingly dominated by AI applications, an understudied aspect
is the carbon and social footprint of these power-hungry algorithms that
require copious computation and a trove of data for training and prediction.
While profitable in the short-term, these practices are unsustainable and
socially extractive from both a data-use and energy-use perspective. This work
proposes an ESG-inspired framework combining socio-technical measures to build
eco-socially responsible AI systems. The framework has four pillars:
compute-efficient machine learning, federated learning, data sovereignty, and a
LEEDesque certificate.
Compute-efficient machine learning is the use of compressed network
architectures that show marginal decreases in accuracy. Federated learning
augments the first pillar's impact through the use of techniques that
distribute computational loads across idle capacity on devices. This is paired
with the third pillar of data sovereignty to ensure the privacy of user data
via techniques like use-based privacy and differential privacy. The final
pillar ties all these factors together and certifies products and services in a
standardized manner on their environmental and social impacts, allowing
consumers to align their purchase with their values. | http://arxiv.org/pdf/2006.06217 | Abhishek Gupta, Camylle Lanteigne, Sara Kingsley | cs.CY, cs.AI, cs.LG, econ.GN, q-fin.EC | Accepted for presentation at the Canadian Society for Ecological
Economics 2020 Research Symposium, Tracing the Veins 2020, ICML 2020
Deploying and Monitoring Machine Learning Systems workshop | null | cs.CY | 20200611 | 20200719 | MAI
# Montreal AI Ethics Institute
An international, non-profit research institute helping humanity define its place in a world increasingly driven and characterized by algorithms
# Website: Newsletter:
# https://montrealethics.ai https://aiethics.substack.com
Newsletter: https://aiethics.substack.com
# SECure: A Social and Environmental Certificate for AI Systems
Abhishek Gupta , , Camylle Lanteigne 1 2 1,3 , and Sara Kingsley 4
1 Montreal AI Ethics Institute 2 Microsoft 3 McGill University Carnegie Mellon University
# Abstract
In a world increasingly dominated by AI applications, an understudied aspect is the carbon and social footprint of these power-hungry algorithms that require copious computation and a trove of data for training and prediction. While profitable in the short-term, these practices are unsustainable and socially extractive from both a data-use and energy-use perspective. This work proposes an ESG-inspired framework combining socio-technical measures to build eco-socially responsible AI systems. The framework has four pillars: compute-efficient machine learning, federated learning, data sovereignty, and a LEEDesque certificate.
Compute-efficient machine learning is the use of compressed network architectures that show marginal decreases in accuracy. Federated learning augments the first pillar's impact through the use of techniques that distribute computational loads across idle capacity on devices. This is paired with the third pillar of data sovereignty to ensure the privacy of user data via techniques like use-based privacy and differential privacy. The final pillar ties all these factors together and certifies products and services in a standardized manner on their environmental and social impacts, allowing consumers to align their purchase with their values.
2
# Introduction and background
This research project aims to take the most comprehensive approach to assess the environmental and social impacts of machine learning. Current machine learning practices emit excessively large amounts of carbon dioxide, while also requiring access to expensive and highly specialized hardware (Strubell et al, 2019). Additionally, issues related to data privacy abound. We employ an âenvironment, society, and governanceâ (ESG) framework to understandâand subsequently shiftâhow machine learning and the actors behind its development affect our world. Through its four pillars, our ESG framework targets researchers, industry, and consumers.
Our motivation for embarking on this work is to surface the tradeoffs that researchers and practitioners should consider when they develop more complex models which call for the use of larger datasets. These efforts can yield more predictive power, but do not come without a second-order effect, which is often abstracted away from developers. This is even more so the case for consumers of these systems who do not see the environmental and social impacts as AI is integrated into existing software systems as an additional capability. Through this work, we seek to bring these impacts to the foreground and provide the tools and necessary data for both consumers and developers to make informed decisions when considering the use of AI systems. There are already discussions, especially as it relates to the use of supervised machine learning techniques that require labelled training datasets and how that has led to the emergence of a shadow workforce that toils in the background (Brawley & Pury, 2016) enabling some of the impressive feats we have seen these systems accomplish in recent times.
This brings up issues of unjust labor practices and unfair compensation that lie behind some of the modern conveniences that are powered by AI-enabled solutions. It calls for a greater scrutiny on the entire supply chain of building and deploying these systems such that people are able to make choices that are akin to âfair-tradeâ product labels guiding consumers on some of the practices that were involved in the generation of the products and services. Ultimately, having empowered users who better understand the impacts of their purchasing decisions (Hainmueller et al., 2015) can become a powerful lever for galvanizing change in the current practices in the development and deployment of machine learning systems.
# Existing and related work
The work done at the intersection of AI and environmental impact is still very sparse. Most work that engages with both of these topics addresses ways in which AI can help counteract or adapt to the impacts of climate change. This work is unquestionably extremely valuable, and we commend researchers for attempting to use AI in such a way. We believe it is, as a result, all the to engage with how AI is unnecessarily and excessively environmentally more important harmful.
Crucially, many of the elements that make the field of machine learning carbon intensive can also make it inaccessible. For instance, the need for onerous hardware and extremely large compute resources make it nearly impossible for anyone who is not affiliated with an already
3
well-established academic institution or business to contribute (Strubell et al., 2019; Schwartz et al., 2019). The appalling lack of diversity in the field (Snow, 2018) means the necessary resources to take part in the machine learning community are overwhelmingly available to those who are wealthy, white, male, or able-bodied at the expense of those who do not match most of these criteria. Thus, the elements that drive AIâs large carbon footprint also play into social effects.
Research on the environmental impacts of AI offers both short-term and long-term suggestions to make AI less carbon intensive. Methodologies and frameworks offer immediate ways such that researchers can assess, understand, and mitigate their environmental impacts, while slower, cultural changes to how AI is developed are meant to take shape gradually. Prior research acknowledges the need to balance environmental goals with the importance of ground-breaking research and innovation. We believe this is indeed important, considering the non-negligible (environmental and otherwise) benefits we can get from AI. However, two issues arise:
First, it is unclear who gets to decide what potentially innovative research is seemingly valuable enough to be pursued, and according to what criteria. Considering that limiting energy use and carbon footprint can place non-negligible constraints on a project (even though this may be reasonable in light of a cost-benefit analysis), in-depth discussions will undoubtedly be necessary to determine when research should or should not be done at the detriment of the environment. We believe that embodying a participatory design (Gupta and De Gasperis, 2020) approach in making these decisions will ultimately help develop technology that is not only aligned with the norms and values of the community that the technology seeks to serve but also to create greater accountability and transparency in the process.
Second, if a significant portion of AI research continues to strive primarily for accuracy over energy efficiency and curbing environmental harm, will AI research that focuses on efficiency be seen as âsecond classâ AI? Energy-efficient AI may be less prestigious because it may not attain the same levels of accuracy and performance as AI that is unrestricted in how much energy it uses. As a result, AI focused on efficiency may be seen by many at the top of the field as inferior to AI research centered on performance and accuracy. Nothing short of an overhaul of the AI community culture seems necessary to avoid this. This idea of a focus on single metrics (Thomas to the & Uminsky, 2020) to make design tradeoffs has been shown to be detrimental development of technology and adopting a more comprehensive approach that can internalize some of the externalities presents a great starting point to address this problem.
How this project is different from existing work
The few studies done at the intersection of AI and environmental issues have made important contributions to achieving a comprehensive and standardized method for calculating carbon impacts. Of course, arriving at such a result is an iterative and collaborative process. And disagreements in terms of what elements to include are often productive and foster innovation. It is in this spirit that we undertake this research work.
4
To begin with, let us consider how each group of researchers attempts to measure energy use and carbon impact. Strubell et al. (2019), Lacoste et al. (2019), and Henderson et al. (2020) agree on calculating carbon intensity (CO2eq per kWh) as a way to measure the environmental impact of AI models. Schwartz et al. (2019) argue that measuring floating point operations (FPOs) is a more accurate way of assessing energy use and subsequent environmental impact. However, Henderson et al. (2020) claim that FPOs are not as reliable as some claim to measure energy use. This, among other issues, leaves us in a confusing situation as to how energy use and environmental impact should be measured for AI. We plan to investigate these discrepancies in order to propose a sound methodology that considers each paperâs position.
It is important to note that Henderson et al. have taken notice of the lack of standardization in the work being done on the environmental impacts of AI, and attempt to remediate it by offering a standardized approach that takes into account the work by Strubell et al., Lacoste et al., and Schwartz et al. However, we believe some important elements are left out of Henderson et al.âs framework. Hence, we have a similar aim of offering a standardized framework for understanding and mitigating the environmental harm caused by AI, but hope to offer a more comprehensive and applied approach which is ultimately necessary for widespread adoption and use of the measure.
Interestingly, many of the elements we had outlined to accomplish prior to diving in the existing research (the standardized approach, the social badge for green AI, the technical innovations) are mentioned by Henderson et al. as important goals and promising avenues to make AI less carbon intensive. We dare hope that these similarities in our thinking highlight how useful an approach centred around these elements could be.
One area we aim to explore further is the effectiveness of carbon offsets as well as big companiesâ claims surrounding the use of renewable energy. Lacoste et al. (2019) and Henderson et al. (2020) engage critically with cloud providersâ claims about carbon neutrality. Schwartz et al. (2019) briefly show skepticism towards claims made about the efficiency of carbon offsets, while Strubell et al. (2019) highlight two caveats to carbon offsets and renewable energy use. Critical engagement with these claims is essential for a few reasons. First, big tech companies have much to gain from customers perceiving them as environmentally friendly and as at the edge of innovation in terms of renewable energy use. This leads to perverse incentives and potential for âgreen-washingâ. Presenting themselves as such is attractive to the growing number of individuals who care about (or at least, want to be perceived as caring about) environmental issues and climate change. Second, in comparing carbon offsets to other means of curbing carbon emissions (like reducing overall energy use), it may be the case that carbon offsets are the most effective. This could significantly affect the best way forward for the development of environmentally sensible AI systems.
Strubell et al. (2019), Lacoste et al. (2019), and Henderson et al. (2020) address the location of the power grid on which oneâs AI model is trained in terms of carbon emissions, there seems to be agreement concerning how central this feature can be in curbing oneâs AI research carbon emissions. Henderson et al. (2020) include the most parameters because, according to them,
5
overly simple estimates of carbon emissions are imprecise and can significantly under- or overestimate the amount of carbon emitted.
# ESG framework
Our SECure Environmental, Social, and Governance (ESG) framework targets different audiences in varying ways. To begin, the first pillar of compute-efficient machine learning, primarily targets AI researchers and practitioners. To a certain extent, compute-efficient machine learning can potentially have a large social impact in terms of access to the means necessary to do AI research. Indeed, greater efficiency in terms of compute needed could drastically lower the barrier to entry for individuals like undergraduate researchers (Schwartz et al., 2019) and/or those who are not affiliated with wealthy universities and organizations (Strubell et al. 2019). This is because, for one, the hardware needed to train an AI model is currently very expensive, and while cloud-based servers are cheaper, they still do not allow âany inspired undergraduate with a laptop has the opportunity to write high-quality papers that could be accepted at premier research conferencesâ (Schwartz et al., 2019). If AI is more compute-efficient to the point where it requires only a laptop or other relatively obtainable hardware, the field of AI may become much more accessible. Compute-efficient machine learning could thereby have a sizable social impact.
The second and third pillars, federated learning and data sovereignty, directly target AI researchers and practitioners. Both are primarily aimed at AI practitioners and researchers because, once again, these are techniques to be implemented by individuals in these positions. However, in a secondary manner, these are also addressed to customers, as they tend to value privacy, and welcome more secure data analysis for AI. Presented with two options, one less secure and one more secure, it is reasonable to expect, all else being equal, that consumers will choose the most secure option. In this case, this is represented by the pillars of federated learning and data sovereignty.
The fourth and last pillar, the LEEDesque certificate, targets consumers as well as the AI industry. The certificate is an opportunity for consumers to choose environmentally sensible AI. This means the industry may now have some added economic incentive to limit unnecessary environmental impact. Change in customer behaviour (and subsequently, change in industry behaviour) may happen through, for instance, social pressure (Mani et al., 2013) related to making a choice (i.e. purchasing environmentally sensible AI systems) that is associated with a more virtuous and positive outcome (e.g. helping curb carbon emissions, which can help slow climate change). A public display of using solutions that carry such a certification of mark is a signal in oneâs social circles of being well-intentioned and taking oneâs civic duties seriously. Prior success with these virtue signals has shown to make a shift in industry norms as seen in food products consumption that follows environmental best practices and the electric-vehicle industry.
6
# Components of the SECure framework
1. Compute-efficient ML
Using compute-efficient machine learning methods has the potential to lower the computation burdens that typically make access inequitable for researchers and practitioners who are not associated with large organizations that have access to heavy compute and data processing infrastructures. As an example, recent advances in the quantization of computations in neural networks (Jacob et al., 2018) have led to significant decreases in computational requirements. This also allows for the use of lower-cost resources like CPUs compared to more expensive hardware for the training of complex architectures which typically require GPUs or TPUs.
Studies such as the one by Jouppi et al. (2017) highlight the performance tradeoffs and give an indication on a pathway to incorporating hardware improvements such as the use of specialized for machine learning related chips, ASICs computations. Though, we see the access and limited availability of such hardware as a potential barrier, the possibility of cost-efficiency makes this approach promising. Documenting the use of specific underlying hardware for the training of systems within the framework paired with benchmarking of performance metrics will provide one piece of essential information in the computation of the final metric.
Another area of ML research that has bearing for compute-efficient machine learning is that of machine learning models for resource-constrained devices like edge-computing on IoT devices. For example, with devices that have RAM sizes in KB, model size can be minimized along with prediction costs using approaches like Bonsai (Kumar et al., 2017) that proposes a shallow, sparse tree-based algorithm. Another approach is called ProtoNN that is inspired by kNN but uses minimal computation and memory to make real-time predictions on resource-constrained devices (Gupta et al., 2017). Novel domain-specific languages like SeeDot (Gopinath et al., 2019), which expresses ML-inference algorithms and then compiles that into fixed points, makes these systems amenable to run on edge-computing devices. Other distilled versions of large-scale networks like MobileNets (Howard et al., 2017) and the growing prevalence of TinyML will also bring about cost- and compute-efficiency gains.
This part of is parametrized by the above components as a way of making quantified comparisons across different hardware and software configurations allowing people to make informed decisions in picking one solution over another. We are currently in the experimental phases to assess the right formulation capturing these statistics into a mathematical equation that enables a comprehensive comparison from the hardware and software configuration standpoint.
2. Federated learning
As a part of this framework, we propose the utilization of federated learning (Bonawitz et al., 2019) approaches as a mechanism to do on-device training and inference of ML models. The purpose of utilizing this technique is to mitigate risks and harm that arises from centralization of
7
data, including data breaches and privacy intrusions. These are known to fundamentally harm the trust levels that people have in technology and are typically socially-extractive given that they may use data for more than the purposes specified when the data is sourced into a single, the second-order benefit of enabling centralized source. Federated learning also has computations to run locally thus potentially decreasing carbon impacts if the computations are done in a place where electricity is generated using clean sources. We acknowledge that there may be gains to be had from an âeconomies of scaleâ perspective when it comes to energy consumption in a central placeâlike for a data center that relies on government-provided access to clean energy. This is something that still needs to be validated, but the benefits in terms of reducing social harm are definite, and such mechanisms provide for secure and private methods for working on data that constitutes personally identifiable information (PII).
Our goal with this research work is to empirically assess these methods and provide information to the developers such that they can adopt these mechanisms. We also aim to empower users to demand such solutions rather than resign to technology-fatalism.
3. Data sovereignty
Data sovereignty refers to the idea of strong data ownership and giving individuals control over how their data is used, for what purposes, and for how long. It also allows users to withdraw consent for use if they see fit. In the domain of machine learning, especially when large datasets the withdrawal of consent presents a major challenge. are pooled from numerous users, Specifically, there are no clear mechanisms today that allow for the removal of data traces or of the impacts of data related to a user in a meaningful manner from a machine learning system without requiring a retraining of the system. Preliminary work (Bourtoule et al., 2019) in this domain showcases some techniques for doing soâyet, there is a lot more work needed in this domain before this can be applied across the board for the various models that are used.
Thus, having data sovereignty at the heart of system design which necessitates the use of techniques like federated learning is a great way to combat socially-extractive practices in machine learning today.
Data sovereignty also has the second-order effect of respecting differing norms around data ownership which are typically ignored in discussions around diversity and inclusion as it relates to the development of AI systems. For example, indigenous perspectives on data (Kukutai & Taylor, 2016) are quite different and ask for data to be maintained on indigenous land, used and processed in ways that are consistent with their values. This is something that can be captured more holistically which is why we include it as a part of the SECure framework. The precise incorporation of this into the framework will depend on the research that is carried out as a part of this work.
4. LEEDesque certification
The certification model today relies on some sort of a trusted, third-party, independent authority that has the requisite technical expertise to certify the system meets the needs as set out in
8
standards, if there are any that are widely accepted. Certificates typically consist of having a reviewer who assesses the system to see if it meets the needs as set out by the certifying agency. The organization is then issued a certificate if they meet all the requirements. An important, but seldom discussed component of certification is something called the Statement of Applicability (SoA).
Certificates are limited in terms of what they assess. What the certifying agency chooses to evaluate, and the inherent limitation that these choices are representative of the system at a particular moment in time with a particular configuration. This is typically addressedâwhat gets left out of the conversation is the SoA and how much of the system was covered under the scope of evaluation. The SoA is also not publicly or easily available, while the certification mark is shared widely to signal to consumers that the system meets the requirements as set out by the certification authority. Without the SoA, one cannot really be sure of what parts of the system were covered. This might be quite limiting in a system that uses AI, as there are many points of integration as well as pervasive use of data and inferences made from the data in various downstream tasks.
What are some best practices to make certificates more effective?
Recognizing some of the pitfalls in the current mechanisms for certification, our proposal is for the certification body to bake in the SoA into the certificate itself such that there is not a part of the certification that is opaque to the public. Secondly, given the fast-evolving nature of the system, especially in an online-learning environment for machine learning applications, we see the certificate having a very short lifespan. An organization would have to be recertified so that the certificate reflects as accurately as possible the state of the system in its current form.
Certification tends to be an expensive operation and can thus create barriers to competitiveness in the market where only large organizations are able to afford the expenses of having their systems certified. To that end, we require that the certification process be automated as much as possible to reduce administrative costsâas an example, having mechanisms like Deon ( About â Deon , n.d.) might help. Also, tools that would enable an organization to become compliant for a certification should be developed and made available in an open-source manner.
Standardized measurement technique
The proposed standardization will also serve to allow for multiple certification authorities to offer their services, thus further lowering the cost barriers and improving market competitiveness while still maintaining an ability to compare across certificates provided by different organizations. An additional measure that we have deemed to be of utmost importance is to have the certificate itself be intelligible to a wide group of people. It should not be arcane and prevent people from understanding the true meaning and impact of the certification. It will also empower users to make choices that are well-informed.
Survey Component
9
To build on the point made above, the goal of the certification process is to empower users to be able to make well-informed choices As a part of this research work, we will be embarking on extensive user survey to identify what information users are seeking from certification marks and how that information can be communicated in the most effective manner.
Additionally, triggering behaviour change on the part of the users through better-informed decisions on which products/services to use needs to be supplemented with behaviour change on the part of the organizations building these systems. We believe that clear comparison metrics that allow organizations to assess the state of their systems in comparison with actors in the ecosystem will also be important. Keeping that in mind, a survey of the needs of practitioners will help ensure the certification is built in a manner that meets their needs head-on, thereby encouraging widespread adoption.
Data storage and Usability
Software developers and Machine Learning (ML) engineers work with data files that are not easy to use among general audiences that lack programming experience. For example, JSON is a common file format that is used by developers when working with and analyzing web data. JSON is efficient for storing massive amounts of nested data. In this format, data takes up less machine storage space than more user-friendly file formats, such as Excel or CSV. While JSON is more efficient in terms of compute storage and perhaps memory, therefore environmentally efficient; ML engineers do not often work in isolation. In corporate settings, developers work in or collaborate with data science departments. This means that it is often necessary to convert files to formats that are usable to those without computer programming skills, such as Excel or CSV file formats. This conversion is very costly. For example, approximately 250 MB of JSON data or less, as a CSV file, converts to a file size that is over 500 MB. Converting JSON to CSV, in this instance, at least doubles the file size and need for machine storage. This example does not account for the memory or compute power required to make the conversion. In isolation, or in individual workflows, file conversion tasks may seem less computationally demanding than running an image classification model on the cloud, for example; but, added up, these tasks across developers and organizations significantly increase the environmental cost of computation. Importantly, file conversion tasks are avoidable, if we design user-interfaces for data and data science work that are usable by a non-programming audiences, and make it so that user-interfaces for data and data science do not require efficiently stored data to be converted to inefficient formats.
In our work, we propose to measure the cost of file conversion work, both in terms of storage and compute power (memory). We will integrate our cost estimate models into the software packages we are developing. These software tools would allow developers to estimate the environmental cost of each development or engineering task, in real time. These real-time estimates would allow developers to observe the efficiency or cost of each data task and whether their workflows could be designed in a more environmentally friendly way.
# Future research directions
10
The potential future research that may follow from this project could contribute significantly to making AI research more accessible as well as more environmentally sensible. From a broad perspective, this project lends itself well to future recommendations in terms of public policy. One could devise a framework to create public compute facilities that make it easier for people who are not affiliated with large organizations to work on AI applications. In addition, inquiring into making this as cost- and energy-efficient as possible while ensuring it remains accessible and powerful enough to foster quality research appears crucial to us. To accompany public compute facilities, a data commons (Miller et al., 2008) could also be useful, and has the possibility of making large amounts of quality data more accessible to researchers while upholding individualsâ privacy. Particularly in a supervised machine learning setting, it is important to have high-quality data to do a meaningful analysis. Data co-operatives (Hafen et al., 2014) are another solution in this domain that if implemented in a practical fashion and adopted widely will lead to more equitable outcomes and bring about inclusion for people who are currently marginalized. Another avenue for exploration is to investigate the use of small data approaches and meta-learning that have the likelihood of being more inclusive by minimizing the need for extensive data collection for making predictions and doing classification.
Given the strong influence that market forces have on which solutions are developed and deployed, we see the SECure certificate as a mechanism creating the impetus for consumers and investors to demand more transparency on the social and environmental impacts of these technologies and then use their purchasing power to steer the progress of development in this field that accounts for these impacts. Responsible AI investment, akin to impact investing, will be easier with a mechanism that allows for standardized comparisons across various solutions, which is what SECure is perfectly geared towards.
# Conclusion and Final Remarks
In this paper, we have presented a novel framework titled SECure that combines elements of social and environmental impacts into a single instrument to enhance decision-making when it comes to the development and deployment of AI-enabled solutions in a responsible fashion. We laid out the groundwork for the importance of considering both the environmental and social impacts, and how this has the potential to democratize access to AI development and use. We also explored how these considerations can lead to solutions that are more inclusive. Expanding on the details of our framework, we review the most pertinent approaches that will be required to make SECure a comprehensive evaluation including the approaches of compute-efficient machine learning, federated learning, data sovereignty, and a LEEDesque certification. We also expand on the essential features of this certification to enable developers, consumers, and investors to make informed decisions. Finally, we conclude with how this research work will lay the groundwork for future efforts in helping us build responsible AI systems in a more concrete manner.
11
# References
https://deon.drivendata.org/
# . (n.d.). Deon. Retrieved 10 June 2020, from AboutâDeon
Bonawitz, K., Eichner, H., Grieskamp, W., Huba, D., Ingerman, A., Ivanov, V., Kiddon, C., KoneÄný, J., Mazzocchi, S., McMahan, H. B., Van Overveldt, T., Petrou, D., Ramage, D., & Roselander, J. (2019). Towards Federated Learning at Scale: System Design. . ArXiv:1902.01046 [Cs, Stat]
Bourtoule, L., Chandrasekaran, V., Choquette-Choo, C., Jia, H., Travers, A., Zhang, B., Lie, . ArXiv:1912.03817 [Cs]
Brawley, A. M., & Pury, C. L. S. (2016). Work experiences on MTurk: Job satisfaction, , 531â546. 54
Gopinath, S., Ghanathe, N., Seshadri, V., & Sharma, R. (2019). Compiling KB-sized machine learning models to tiny IoT devices. Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, 79â95. https://doi.org/10.1145/3314221.3314597
Gupta, A., & De Gasperis, T. (2020). Participatory Design to build better contact- and proximity-tracing apps. http://arxiv.org/abs/2006.00432
. ArXiv:2006.00432 [Cs]
Gupta, C., Suggala, A. S., Goyal, A., Simhadri, H. V., Paranjape, B., Kumar, A., Goyal, S., Udupa, R., Varma, M., & Jain, P. (2017). ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices. , 1331â1340. International Conference on Machine Learning http://proceedings.mlr.press/v70/gupta17a.html
Hafen, E., Kossmann, D., & Brand, A. (2014). Health Data CooperativesâCitizen 82â86. , in Medicine (2), 53 of Methods Empowerment. https://doi.org/10.3414/ME13-02-0051 Information
Hainmueller, J., Hiscox, M. J., & Sequeira, S. (2015). Consumer Demand for Fair Trade: , Review of Economics and Statistics
Henderson, P., Hu, J., Romoff, J., Brunskill, E., Jurafsky, D., & Pineau, J. (2020). Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning. . ArXiv:2002.05651 [Cs]
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., & Adam, H. (2017). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.
. ArXiv:1704.04861 [Cs]
12
Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., Adam, H., & Kalenichenko, for Efficient 2018 IEEE/CVF Conference on Computer Vision of Neural Networks and Training (2018). Quantization D. Integer-Arithmetic-Only Inference. , 2704â2713. and Pattern Recognition https://doi.org/10.1109/CVPR.2018.00286
Jouppi, N. P., Young, C., Patil, N., Patterson, D., Agrawal, G., Bajwa, R., Bates, S., Bhatia, S., Boden, N., Borchers, A., Boyle, R., Cantin, P., Chao, C., Clark, C., Coriell, J., Daley, M., Dau, M., Dean, J., Gelb, B., ⦠Yoon, D. H. (2017). In-Datacenter Performance Analysis of a Tensor Processing Unit. ACM SIGARCH Computer , Architecture News
Kukutai, T., & Taylor, J. (Eds.). (2016). . Indigenous Data Sovereignty: Toward an agenda (PDF). eBook Australian http://press-files.anu.edu.au/downloads/press/n2140/pdf/book.pdf?referer=2140 Press. University National (ANU)
Lacoste, A., Luccioni, A., Schmidt, V., & Dandres, T. (2019). Quantifying the Carbon . [Cs]
Mani, A., Rahwan, Cooperation. (2013). Inducing Peer Pressure to Promote I., & Pentland, A. https://doi.org/10.1038/srep01735
# , Scientific Reports
(1), 1735. 3
Miller, P., Styles, R., & Heath, T. (2008). Open Data Commons, A License for Open Data.
# , 5. LDOW
Schwartz, R., Dodge, J., Smith, N. A., & Etzioni, O. (2019). Green AI. ArXiv:1907.10597 http://arxiv.org/abs/1907.10597
# . [Cs, Stat]
âWeâre in a diversity crisisâ: Cofounder of Black in AI on whatâs Technology Review. . MIT lives our https://www.technologyreview.com/2018/02/14/145462/were-in-a-diversity-crisis-blac k-in-ais-founder-on-whats-poisoning-the-algorithms-in-our/
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. http://arxiv.org/abs/1906.02243
. ArXiv:1906.02243 [Cs]
Thomas, R., & Uminsky, D. (2020). The Problem with Metrics is a Fundamental Problem for AI. http://arxiv.org/abs/2002.08512
. ArXiv:2002.08512 [Cs]
13 | {
"id": "2002.08512"
} |
2006.06195 | Large-Scale Adversarial Training for Vision-and-Language Representation Learning | We present VILLA, the first known effort on large-scale adversarial training
for vision-and-language (V+L) representation learning. VILLA consists of two
training stages: (i) task-agnostic adversarial pre-training; followed by (ii)
task-specific adversarial finetuning. Instead of adding adversarial
perturbations on image pixels and textual tokens, we propose to perform
adversarial training in the embedding space of each modality. To enable
large-scale training, we adopt the "free" adversarial training strategy, and
combine it with KL-divergence-based regularization to promote higher invariance
in the embedding space. We apply VILLA to current best-performing V+L models,
and achieve new state of the art on a wide range of tasks, including Visual
Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval,
Referring Expression Comprehension, Visual Entailment, and NLVR2. | http://arxiv.org/pdf/2006.06195 | Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, Jingjing Liu | cs.CV, cs.CL, cs.LG | NeurIPS 2020 Spotlight paper | null | cs.CV | 20200611 | 20201022 | 0 2 0 2
t c O 2 2 ] V C . s c [
2 v 5 9 1 6 0 . 6 0 0 2 : v i X r a
# Large-Scale Adversarial Training for Vision-and-Language Representation Learning
Zhe Gan1, Yen-Chun Chen1, Linjie Li1, Chen Zhu2, Yu Cheng1, Jingjing Liu1 1Microsoft Dynamics 365 AI Research, 2University of Maryland, College Park {zhe.gan,yen-chun.chen,lindsey.li,yu.cheng,jingjl}@microsoft.com [email protected]
# Abstract
We present VILLA, the ï¬rst known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-speciï¬c adversarial ï¬netuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the âfreeâ adver- sarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2.1
# Introduction
Inspired by the success of BERT [13] on natural language understanding, there has been a surging re- search interest in developing multimodal pre-training methods for vision-and-language representation learning (e.g., ViLBERT [38], LXMERT [65], and UNITER [12]). When ï¬netuned on downstream tasks, these pre-trained models have achieved state-of-the-art performance across diverse V+L tasks, such as Visual Question Answering (VQA) [4, 17], Visual Commonsense Reasoning (VCR) [81], and Referring Expression Comprehension [78]. However, due to the immense capacity of large-scale pre-trained models yet limited amount of labeled data in downstream tasks, aggressive ï¬netuning often falls into the overï¬tting trap [24]. Adversarial training, a method to combat adversarial attacks in order to create robust neural networks [64, 16], has recently shown great potential in improving the generalization ability of pre-trained language models [86, 24] and image classiï¬ers [72]. A natural question that came to our mind: can we apply similar adversarial training techniques to V+L problems to improve model performance?
We propose VILLA (Vision-and-Language Large-scale Adversarial training), which advocates the use of adversarial training for V+L representation learning. As illustrated in Figure 1, VILLA consists of two training stages: (i) task-agnostic adversarial pre-training (APT); followed by (ii) task-speciï¬c adversarial ï¬ne-tuning (AFT). Intuitively, if well-designed, multimodal pre-training tasks such as image-conditioned masked language modeling and image-text matching can resonate well with many downstream tasks that require visual grounding and reasoning abilities. This leads to our hypothesis that the improved generalization ability of pre-trained models learned during APT stage can be readily transferred to the AFT stage for diverse tasks. In other words, APT is able to uniformly lift model performance for all downstream tasks in a task-agnostic way, while AFT can further enhance the ï¬netuned models by leveraging task-speciï¬c supervision signals.
1Code is available at https://github.com/zhegan27/VILLA.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Adversarial Pre-training: © Masked Language Modeling © Image-Text Matching (ITM) Adversarial Finetuning: VOA oVCR ONLVR2 Visual Entailment Referring Expression Comprehension âolmage-Text Retrieval 0... sautsoysuety 19427-1310 Word Embedding [Vj Regional Feature L â ) ER Ad versarial Perturbation [CLS] A dog lying on the grass next to a frisbee [SEP]
Figure 1: Overview of the proposed VILLA framework for vision-and-language representation learning.
To bring in more ï¬exibility in generating adversarial examples for robust training, we propose to perform adversarial training on the embedding level for multi-modalities, instead of operating on image pixel and sub-word token level in conventional practice. For text modality, we add adversarial perturbations to word embeddings [45, 86, 24]. For image modality, most previous work observes that robustness is at odds with generalization, i.e., trained models are able to resist adversarial attacks on clean images at the expense of performance [42, 73, 84]. Distinctive from these studies, we directly add adversarial perturbations to extracted image-region features [2], as our end goal is the ï¬nal V+L model performance rather than crafting adversarial image examples. Experiments show that this strategy leads to large performance gain on clean inputs.
Adversarial training procedure is time-consuming and computationally expensive. To power efï¬cient large-scale training, we adopt the recently proposed âfreeâ adversarial training strategy [56, 82, 86], which obtains the gradients of parameters with almost no extra cost when computing the gradients of inputs. In addition to requiring adversarial perturbations to be label-preserving, we also introduce KL-divergence-based regularization to enforce the conï¬dence level of the prediction to be close, characterized by the âdarkâ knowledge hidden in the probability vectors. This promotes higher smoothness of the training objective and has empirically proven as important regularization effective for further performance boost.
For evaluation, we mostly focus on UNITER [12], the current best-performing V+L model with state-of-the-art performance across many popular V+L benchmarks, and enhance UNITER with VILLA through comprehensive experiments on six V+L tasks: VQA [17], VCR [81], NLVR2 [61], Visual Entailment [74], Referring Expression Comprehension [78], and Image-Text Retrieval [29]. VILLA is a generic framework that can be applied to any multimodal pre-training method. To demonstrate its versatility, we further apply it to LXMERT on VQA, GQA [23], and NLVR2 tasks for generalizability test.
The main contributions are summarized as follows. (i) We present VILLA, the ï¬rst known effort on adversarial pre-training and adversarial ï¬netuning for V+L representation learning. (ii) Instead of operating on pixel and word token level, we propose to add adversarial perturbations in the embedding space of multi-modalities, and introduce a smoothness-inducing adversarial regularization term on top of the âfreeâ adversarial training strategy. (iii) VILLA achieves new state of the art across six popular V+L tasks. In particular, by relying on standard bottom-up image features only [2], VILLA improves the single-model performance of UNITER-large from 74.02 to 74.87 on VQA, and from 62.8 to 65.7 on VCR. With ensemble, VQA performance is further boosted to 75.85.
# 2 Related Work
Multimodal Pre-training ViLBERT [38] and LXMERT [65] are the pioneering works in vi- sion+language pre-training, where two Transformers are used to encode image and text modalities, respectively, then a third Transformer is built on top for multimodal fusion. Compared to this two-stream architecture, recent work such as VL-BERT [60], VisualBERT [33], B2T2 [1], Unicoder- VL [30] and UNITER [12] advocate a single-stream model design, where two modalities are directly fused in early stage. More recent studies leverage multi-task learning [39] to enhance ï¬netuning and use detected image tags [35] to further enhance pre-training. Pixel-BERT [21] proposes to align text with image pixels instead of conventional bottom-up features. Multimodal pre-training has brought leaping advances in vision+language understanding tasks such as VQA and VCR, with great potential in extending to visual captioning [85, 71], visual dialog [47, 69], vision-language naviga-
2
tion [18, 43], as well as video-and-language representation learning [63, 62, 41, 31]. Recent work [7] also investigates the design of probing tasks to understand the knowledge learned in pre-training.
V+L Representation Learning Before multimodal pre-training dominated the scene, there had been a long line of studies on how to learn better V+L representations. Prominent work includes: (i) advanced attention mechanisms [40, 79, 51]; (ii) better multimodal fusion methods [14, 80, 27, 26]; (iii) multi-step reasoning [77, 22, 15]; (iv) incorporation of object relations [55, 48, 6, 32]; and (v) neural module networks for compositional reasoning [3, 25, 20, 11]. In principle, our proposed VILLA framework can be plugged into these âshallowerâ models. In this paper, we mainly focus on enhancing Transformer-based state-of-the-art models.
Adversarial Training Adversarial machine learning is an active research area [64, 16, 5]. Algorithms are developed to either attack existing models by constructing adversarial examples, or train robust models to defend against adversarial attacks. Among existing defense approaches, adversarial training (AT) is a general strategy to empower models with state-of-the-art robustness in different settings [66, 42, 73, 84, 50]. Existing research mostly focuses on AT for image classiï¬cation, and the general notion is that robustness is often at odds with accuracy. Most recently, [72] shows that model accuracy on clean images can be improved if a separate auxiliary batch norm is used for adversarial examples. There are also some parallel studies on applying AT to language modeling [68] and natural language understanding [45, 86, 24]. Due to growing dominance of large-scale pre-training, very recent work has started to explore adversarial training in the pre-training stage [19, 10, 37]. VILLA is the ï¬rst known effort that studies AT for V+L tasks and adds adversarial perturbations to both image and word embedding space. We also prove that AT can be effectively incorporated in both pre-training and ï¬ne-tuning stages. A more detailed discussion on related work is provided in Appendix.
# 3 Vision-and-Language Large-scale Adversarial Training
There are three key designs that encapsulate VILLAâs unique strengths in improving performance and generalization of pre-trained V+L models : (i) Adversarial pre-training and ï¬ne-tuning; (ii) Adding perturbations in the embedding space; and (iii) Enhanced adversarial training algorithm.
# 3.1 Adversarial Pre-training and Finetuning
We ï¬rst brieï¬y review the pretrain-then-ï¬netune paradigm that has become prevalent in V+L repre- sentation learning, then describe our proposed two-stage adversarial training framework.
Pre-training Let Dp denote a pre-training dataset, which consists of image-text pairs (ximg, xtxt). The goal in the pre-training stage is to learn universal image-text representations that are generalizable to different downstream tasks. Take one-stream models [12, 60] as an example. Image and text inputs are ï¬rst represented as low-dimensional feature vectors zimg = gbu(ximg) and ztxt = gemb(xtxt), where gbu(·) represents a ï¬xed bottom-up image feature extractor [2], and gemb(·) represents a learnable word embedding function. Then, a multi-layer Transformer [67] is applied on top to learn multimodal fusion. The above process can be abbreviated as Ëzimg, Ëztxt, Ëzcls = fθ(ximg, xtxt), where Ëzimg and Ëztxt represent the contextualized representations of each image region and each textual token, respectively. Typically, V+L models employ a special [CLS] token whose embedding Ëzcls is considered as the joint V+L representation to be used for downstream tasks. θ denotes all the learnable parameters including the word embedding matrix.
Let y denote the output supervision signal, which is different across different pre-training tasks. There are three typical pre-training tasks used in most V+L models: (i) Masked Language Modeling (MLM): some tokens in xtxt are replaced by special [MASK] tokens, and the goal is to predict the masked tokens y based on surrounding multimodal context; (ii) Masked Region Modeling (MRM): the features of some image regions in ximg are replaced by zero vectors, and the goal is to predict the masked image regions y given the remaining multimodal information (via cross-entropy loss, KL-divergence loss [38], or contrastive learning [62]); (iii) Image-Text Matching (ITM): both ximg and xtxt are kept intact, and the goal is to predict a binary label y to judge whether the input image and text are paired or not.
Finetuning Given a downstream task Tf and a supervised dataset Df consisting of (ximg, xtxt, y), the pre-trained model can be ï¬netuned by introducing a small neural network h(·) on top of Ëzcls and minimizing the cross-entropy loss. θ is initialized with pre-trained weights, and y now becomes
3
a label. For example, in VQA, y corresponds to the ground-truth answer from a candidate pool, represented as a one-hot vector. In VCR [81], it is a four-way classiï¬cation label.
In both pre-training and ï¬netuning, by instantiating different y, the training process can be uniformly abstracted as an empirical risk minimization problem:
min θ E(ximg,xtxt,y)â¼D[L(fθ(ximg, xtxt), y)] . (1)
Two-stage Adversarial Training Pre-training and ï¬netuning are inherently connected. Independent of the tasks (e.g., MLM, ITM for pre-training, or VQA for ï¬netuning), model training requires the acquisition of essential reasoning skills that can catalyze multimodal fusion for cross-modality joint understanding. For example, in MLM, a masked token âdogâ can be predicted by looking at the image region that contains a dog; and in VQA, when asked whether there is a dog in an image, such visual grounding skills learned through pre-training can be readily applied. We hypothesize that: (i) by performing adversarial training in the pre-training stage, the improved generalization ability of a learned model can be beneï¬cial to the ï¬netuning stage; and (ii) in the subsequent ï¬netuning stage, where task-speciï¬c training signals become available, adversarial ï¬netuning can be applied again to further boost performance. Since pre-training and ï¬netuning share the same mathematical formulation (Eqn. (1)), the same AT algorithm can be adopted in both stages.
# 3.2 Perturbations in the Embedding Space
For the image modality, since state-of-the-art V+L models typically use image features from pre- trained object detectors as input, we add adversarial perturbations in the feature space directly. Note that even though the main difference is simply the noise injecting space, our approach is distinctive from most previous work where perturbations are applied to the pixel space, which is more rigid than ï¬ne-grained embedding perturbation. On the other hand, unlike image pixels that are continuous-valued, discrete tokens in the text modality are more difï¬cult to manipulate. It remains unclear how to craft label-preserving adversarial examples without changing the original semantic meaning of the sentence. But since we only care about the ultimate effects of adversarial training on downstream tasks, not intepretability of adversarial examples, we choose to add perturbations to the word embeddings following [86].
In pre-trained V+L models, positional embeddings are used to encode the location of image regions and sub-word tokens. Our adversaries only modify image and word embeddings, leaving other components of the multimodal features unchanged. Furthermore, due to the distinct characteristics of image and text modalities, we propose to add perturbations to one modality at a time. Speciï¬cally, we add adversarial perturbations δimg and δtxt such that the prediction becomes Ëy = fθ(ximg + δimg, xtxt) and Ëy = fθ(ximg, xtxt + δtxt). To preserve original semantics, the norm of δimg and δtxt is controlled to be small. Also assumed is that model prediction should not change after the perturbation.
# âFreeâ Multimodal Adversarial Training
Training Objective In VILLA, we use adversarial training as an effective regularization to improve model generalization, i.e., to minimize the following objective:
min Eeiqtrariy)~D [Lot(8) + Rat() +0 Rui) , @)
# min θ
where Lstd(θ) = L(fθ(ximg, xtxt), y) is the cross-entropy loss on clean data, Rat(θ) is the label- preserving AT loss, and Rkl(θ) is a ï¬ner-grained adversarial regularization term. Speciï¬cally,
Rat(9) = ane L(fo(®img + Simg, Lit), Y) + ae L(fo(@img, Ciet + Stet), y), (3) âimg ||S⬠txt || Se
where L is the cross-entropy loss on adversarial embeddings. Frobenius norm is used to constrain δimg and δtxt. For optimization, [42] demonstrated that the outer minimization in Eqn. (2) can be solved by SGD, while the inner maximization in Eqn. (3) can be solved reliably by PGD, a standard method for large-scale constrained optimization. Take δimg for example: PGD takes the following step (with step-size α) in each iteration:
Simgt+1 = Ws ,,.4||<e(Simg,t + A9(Simgt)/||9(Simg,t) ||P) » (4)
4
# Algorithm 1 âFreeâ Multi-modal Adversarial Training used in VILLA.
Require: Training samples D = {(Ximg, Lixt,y)}, perturbation bound e, learning rate rT, ascent steps Ix, ascent step size a 1: Initialize 0 2: for epoch = 1... Nep do 3: for minibatch B C X do 4: do â yr (-6 9); 9) 0 5: fort=1...K do 6: Accumulate gradient of parameters @ given dimg4â1 and Oye44â1 7: 91 = 91-1 + KE wing.ereryenlVo(Lsta(8) + Rar(O) + Rii(4))] 8: Update the perturbation djm, and 642; via gradient ascend 9: y= fo(®img, Ltxt) 10: Gimg â Véimg LE(fo(Limg+5img: Leet), Y) +L (fo(Limg+Simg: Liet),Â¥)] Il: Simgt â Ujdimglie<e(Simg,tâ1 +O * Gimg/||Gimglle) 12: Gtxt â Vober (L(fo(®img, Liat + Stet), Y) + Ler(fo(Limg, Lixt + Stat), Â¥)] 13: Stxt,t â Wy é,c4\) ese (Otwe,tâ1 +.0- Grot/Gterlle) 14: end for 15: 06+ 0-T9K 16: end for 17: end for
where 9(Jimg,t) = VoingL(fo(®img + img, tet), y) is the gradient of the loss w.r.t. dimg, and 11 \6,,,||<< Performs a projection onto the e-ball.
To further enhance the above AT algorithm, Rkl(θ) is deï¬ned as
Rei(9) = max Lyi(fo(Ximg + Simg, Liat), fo(Limg, Ltat)) img|| Se ence Lyi(fo(Limgs Lit + Stet); fo(®img, Lret)) 5 (5)
where Lkl(p, q) = KL(p||q) + KL(q||p), p, q denote the two probability distributions, and KL(·) denotes the Kullback-Leibler Divergence. Compared to Eqn. (3) that promotes label-preserving adversarial attack, Eqn. (5) further advocates that the conï¬dence level of the prediction, characterized by the probability vector over the simplex ân (n is the number of classes), should also be close. Similar techniques are used in Virtual AT [46], TRADES [84], and UDA [75]. However, previous work mostly focuses on semi-supervised learning or trade-off between accuracy and robustness; in our work, we found that it is highly effective for boosting model generalization ability.
âFreeâ AT Strategy K-step PGD requires K forward-backward passes through the network, which is computationally heavy. Another limitation is that after K steps, only perturbations at the ï¬nal step are used for model training. To enable AT for large-scale training and promote diverse adversaries, we follow FreeLB [86] to perform multiple PGD iterations to craft adversarial embeddings, and simultaneously accumulate the âfreeâ parameter gradients âθL in each iteration. After that, the model parameters θ are updated all at once with the accumulated gradients, effectively creating a K-times-larger âvirtualâ mini-batch. The full procedure is provided in Algorithm 1.
# 4 Experiments
# 4.1 Experimental Setting
Downstream Tasks To validate the effectiveness of VILLA, we apply it to existing V+L pre-trained models and conduct a comprehensive evaluation over a wide range of downstream tasks, including Visual Question Answering (VQA), Visual Commonsense Reasoning (VCR), Referring Expression (RE) Compression, Visual Entailment, Image-Text Retrieval, and NLVR2. To validate the strength of VILLA in model pre-training and ï¬netuning, we ï¬rst incorporate it into state-of-the-art UNITER model in both stages for downstream evaluation and ablation analysis. And to demonstrate the versatility of VILLA, we further apply it to another V+L model LXMERT [65] with a different architecture design from UNITER (two-stream vs. one-stream) for generalizability test.
5
Method VQA VCR NLVR2 SNLI-VE test-dev test-std QâA QAâR QâAR dev test-P val test ViLBERT VisualBERT LXMERT Unicoder-VL 12-in-1 VL-BERTBASE OscarBASE UNITERBASE VILLABASE VL-BERTLARGE OscarLARGE UNITERLARGE VILLALARGE 70.55 70.80 72.42 - 73.15 71.16 73.16 72.70 73.59 71.79 73.61 73.82 74.69 70.92 71.00 72.54 - - - 73.44 72.91 73.67 72.22 73.82 74.02 74.87 72.42 (73.3) 70.8 (71.6) - 72.6 (73.4) - 73.8 (-) - 74.56 (75.0) 75.54 (76.4) 75.5 (75.8) - 77.22 (77.3) 78.45 (78.9) 74.47 (74.6) 73.2 (73.2) - 74.5 (74.4) - 74.4 (-) - 77.03 (77.2) 78.78 (79.1) 77.9 (78.4) - 80.49 (80.8) 82.57 (82.8) 54.04 (54.8) 52.2 (52.4) - 54.4 (54.9) - 55.2 (-) - 57.76 (58.2) 59.75 (60.6) 58.9 (59.7) - 62.59 (62.8) 65.18 (65.7) - 67.4 74.90 - - - 78.07 77.18 78.39 - 79.12 79.12 79.76 - 67.0 74.50 - 78.87 - 78.36 77.85 79.30 - 80.37 79.98 81.47 - - - - - - - 78.59 79.47 - - 79.39 80.18 - - - - 76.95 - - 78.28 79.03 - - 79.38 80.02
# (a) Results on VQA, VCR, NLVR2, and SNLI-VE.
Method val RefCOCO+ vald testA testB testAd testBd val RefCOCO vald testA testB testAd testBd ViLBERT VL-BERTBASE UNITERBASE VILLABASE VL-BERTLARGE UNITERLARGE VILLALARGE - 79.88 83.66 84.26 80.31 84.25 84.40 - 82.40 86.19 86.95 83.62 86.34 86.22 - 75.01 78.89 79.22 75.45 79.75 80.00 72.34 71.60 75.31 76.05 72.59 75.90 76.17 78.52 77.72 81.30 81.65 78.57 81.45 81.54 62.61 60.99 65.58 65.70 62.30 66.70 66.84 - - 91.64 91.93 - 91.84 92.58 - - 92.26 92.79 - 92.65 92.96 - - 90.46 91.38 - 91.19 91.62 - - 81.24 81.65 - 81.41 82.39 - - 86.48 87.40 - 87.04 87.48 - - 73.94 74.48 - 74.17 74.84
(b) Results on RefCOCO+ and RefCOCO. The superscript d denotes evaluation using detected proposals.
Method val RefCOCOg vald test testd R@1 Flickr30k IR R@5 R@10 R@1 Flickr30k TR R@5 R@10 ViLBERT Unicoder-VL UNITERBASE VILLABASE UNITERLARGE VILLALARGE - - 86.52 88.13 87.85 88.42 - - 86.52 88.03 87.73 88.97 - - 74.31 75.90 74.86 76.18 - - 74.51 75.93 75.77 76.71 58.20 71.50 72.52 74.74 75.56 76.26 84.90 90.90 92.36 92.86 94.08 94.24 91.52 94.90 96.08 95.82 96.76 96.84 - 86.20 85.90 86.60 87.30 87.90 - 96.30 97.10 97.90 98.00 97.50 - 99.00 98.80 99.20 99.20 98.80
(c) Results on RefCOCOg and Flickr30k Image Retrieval (IR) and Text Retrieval (TR).
Table 1: Comparison with state-of-the-art pre-trained models on all the downstream tasks.
UNITER and LXMERT UNITER-base is a single-stream model, which has 12 layers, with 768 hidden units per layer and 12 attention heads; UNITER-large has 24 layers, with 1024 hidden units per layer and 16 attention heads. UNITER shares the same structure as BERT, except that the input now becomes a mixed sequence of two modalities. LXMERT is a two-stream model, which ï¬rst performs self-attention through several layers on each modality independently (9 layers for text modality, and 5 layers for image modality), then fuses the outputs of both streams through another 5 layers (ï¬rst cross-attention, then self-attention).
Implementation Details For UNITER experiments, we pre-train with the same four large-scale datasets used in the original model: COCO [36], Visual Genome (VG) [28], Conceptual Captions [58] and SBU Captions [49]. VILLA is applied to both MLM and ITM pre-training tasks. The original UNITER-base (12 layers) and UNITER-large (24 layers) models take 200k and 500k steps for pre-training, respectively. For fair comparison, when applying VILLA to UNITER-base, we run 100k steps of standard training, followed by 100k steps of adversarial training. When applying VILLA to UNITER-large, to save pre-training time,2 we run 425k steps of standard training, followed by 75k steps of adversarial training.
2VILLA is K times computationally heavier than UNITER, where K is the number of adversarial training steps. We typically select adversarial learning rate from {1e-2, 1e-3}, adversarial training steps to 3, and α (Eqn. 2) from 1.0, 1.5, 2.0. More implementation details are provided in Appendix.
6
(a) VQA (b) VCR (c) NLVR2
Figure 2: The training curves of VILLA and UNITER on different tasks. For VQA, an internal val set is used.
Method VQA VCR (val) NLVR2 VE Flickr30k IR RefCOCO Ave. test-dev QâA QAâR QâAR test-P test R@1 R@5 R@10 testAd testBd UNITER (reimp.) VILLA-pre VILLA-ï¬ne VILLA 72.70 73.03 73.29 73.59 74.24 74.76 75.18 75.54 76.93 77.04 78.29 78.78 57.31 57.82 59.08 59.75 77.85 78.44 78.84 79.30 78.28 78.43 78.86 79.03 72.52 73.76 73.46 74.74 92.36 93.02 92.98 92.86 96.08 96.28 96.26 95.82 86.48 87.34 87.17 87.40 73.94 74.35 74.31 74.48 78.06 78.57 78.88 79.21
Table 2: Ablation study on VILLA-pre (pre-training) and VILLA-ï¬ne (ï¬netuning) with base model size.
Method VQA VCR (val) Method VQA VCR (val) test-dev QâA QAâR QâAR test-dev QâA QAâR QâAR VILLABASE (txt) VILLABASE (img) VILLABASE (both) VILLALARGE (txt) VILLALARGE (img) VILLALARGE (both) 73.50 73.50 73.59 74.55 74.46 74.69 75.60 75.81 75.54 78.08 78.08 78.45 78.70 78.43 78.78 82.31 82.28 82.57 59.67 59.68 59.75 64.63 64.51 65.18 UNITERBASE (reimp.) UNITERBASE+FreeLB VILLABASE-ï¬ne UNITERLARGE (reimp.) UNITERLARGE+FreeLB VILLALARGE-ï¬ne 72.70 72.82 73.29 73.82 73.87 74.32 74.24 75.13 75.49 76.70 77.19 77.75 76.93 77.90 78.34 80.61 81.44 82.10 57.31 58.73 59.30 62.15 63.24 63.99
(a) Image vs. Text Modality.
(b) FreeLB vs. VILLA.
Table 3: Ablation study on adding perturbations to different modalities and on the VILLA algorithm.
# 4.2 Results and Ablation Analysis
Downstream Task Evaluation Table 1 summarizes the results of VILLA applied to UNITER on all evaluation tasks. Compared with existing pre-trained V+L models, our VILLA method achieves new state of the art across all the benchmarks. Speciï¬cally, VILLA-base model outperforms UNITER-base by +0.76 on VQA, +2.4 on VCR for QâAR, +1.45 on NLVR2, +0.75 on SNLI-VE, +2.22/+0.70 on Flickr30k for Image/Text Retrieval (R@1), and +0.99 on average for the three RE datasets.
Similar universal performance lift is also observed in VILLA-large. It is highly encouraging to see that VILLA-large brings an absolute +2.9 points performance gain over UNITER-large for VCR on the QâAR metric. Compared to the others, VCR is a relatively more challenging task, which requires commonsense reasoning and understanding complex social dynamics that is implicitly encoded in the image. Another signiï¬cant boost is over the well-studied VQA benchmark, from 74.02 to 74.87. With ensemble, the performance of VILLA-large is further lifted to 75.85.
Pre-training vs. Finetuning To understand the effects of adversarial training on pre-training and ï¬netuning, we conduct an ablation study with UNITER-base and summarize the results in Table 2. UNITER (reimp.) denotes our re-implementation of the UNITER-base model with standard training. VILLA-pre and VILLA-ï¬ne apply adversarial training to only the pre-training or ï¬netuning stage, respectively. Averaged over the six evaluation tasks, VILLA-pre and VILLA-ï¬ne brings +0.51 and +0.82 points performance gain. By combining the two, +1.15 points gain is achieved. Figure 2 further provides the training curves of each task, which illustrate growing performance gaps between AT- enhanced models and the original UNITER, as the number of training steps increases. Interestingly, on VQA, though in early epochs UNITER achieves better performance than VILLA, VILLA catches up quickly after a few hundred of steps, which demonstrates the beneï¬cial regularization effect of adversarial training. More training curves on other tasks can be found in Appendix.
To further understand the importance of adversarial pre-training, we use VQA as the probing task, and compare the performance of standard and adversarial pre-training at each intermediate model
7
(a) Standard vs. adversarial pre-training. (b) Adversarial training as a regularizer.
Figure 3: For (a), we use VQA as probing, and compare the performance of standard and adversarial pre-training. For (b), we plot the training curves of standard and adversarial ï¬netuning using LXMERT as backbone.
f dirt phone one phone he : . - i ' a ' one one air ' one one air A group of people are in a dirt ' 4 4 ra 2 4 a mountain, one person is talking on ! Py ; the phone, one is taking a picture a and one is jumping in the air. : : UNITER , VILLA
Figure 4: Visualization of text-to-image attention, comparing VILLA against UNITER.
Model Visual Coreference (Flickr30k) Visual Relation (Visual Genome) scene clothing animals instruments vehicles on standing in wearing holding covering UNITERBASE VILLABASE 0.151 0.169 0.157 0.185 0.285 0.299 0.244 0.263 0.154 0.201 0.107 0.120 0.311 0.353 0.200 0.241 Ave. 0.195 0.223
0.194 0.151 0.202 0.192 Table 4: Probing analysis of the attention heads in pre-trained UNITER and VILLA models.
checkpoint (using standard ï¬netuning to both pre-trained models). Results are presented in Figure 3a. As shown, once adversarial training is activated, VILLA-pre starts outperforming UNITER, and the performance gap increases as the number of pre-training steps grows.
Image vs. Text Modality To gain insights on the effects of adversarial examples in different modalities, we conduct experiments by adding perturbations to either image or text modality, and use VQA and VCR for ablation tests. Results are summarized in Table 3a. Conventionally, adversarial training in the image domain hurts model accuracy on clean images. However, in our setting, we observe that adding perturbations to image features alone can boost ï¬nal model performance signiï¬cantly. Our initial intuition was that adding perturbations to both modalities might increase the diversity of adversarial examples, hence bringing more beneï¬ts. However, ablation results show that adding perturbations on one modality is already gaining signiï¬cant improvement.3 The boost on VCR is larger than VQA, which we hypothesize is due to the higher complexity in VCR task, which adding more adversaries to model training can effectively help.
FreeLB vs. VILLA To compare with prior work FreeLB, we conduct an additional ablation study also on VQA and VCR, two representative and challenging V+L tasks. Table 3b shows that VILLA achieves consistently better performance than FreeLB over both benchmarks, thanks to the additional ï¬ne-grained adversarial regularization term. For example, FreeLB brings little performance boost on VQA, while VILLA achieves considerable improvement over the baseline.
Probing Analysis Pre-trained models are expected to learn intricate knowledge about multimodality correlations, such as visual coreference (i.e., region-phrase alignment) and visual relation (i.e., region-
3We also tried adding adversarial perturbations to both modalities simultaneously instead of alternatively. Empirically, we observe that they obtained similar performance.
8
Method VQA GQA NLVR2 Meta-Ave. test-dev test-std test-dev test-std dev test-P LXMERT LXMERT (reimp.) VILLA-ï¬ne 72.42 72.50 73.02 72.54 72.52 73.18 60.00 59.92 60.98 60.33 60.28 61.12 74.95 74.72 75.98 74.45 74.75 75.73 69.12 69.12 70.00
Table 5: Results on LXMERT with VILLA-ï¬ne (ï¬netuning).
Data split MUTAN BUTD BUTD+CC Pythia Pythia+CC BAN BAN+CC UNITER VILLA Original Rephrasing 59.08 46.87 61.51 51.22 62.44 52.58 64.08 54.20 64.52 55.65 64.97 55.87 65.87 56.59 70.35 64.56 71.27 65.35
Table 6: Results on VQA-Rephrasings. Both UNITER and VILLA use the base model size. Baseline results are copied from [57].
region interaction). To provide a more direct measurement on how well our adversarial pre-trained model captures such multimodal signals, we conduct a probing analysis following [7]. We consider ï¬ve most common visual coreference types in Flickr30k Entities [52] and top ï¬ve visual relations in Visual Genome [28] (listed in Table 4), and calculate the attention weights between region and phrase (or between regions) learned by pre-trained models. Results show that VILLA presents higher attention weights across all the ten categories (0.223 vs. 0.195 on average), indicating a higher probability of identifying those relations. Figure 4 further provides a visualization of text-to-image attention, where VILLA exhibits more accurate and sharper multimodal alignment.
Results on LXMERT VILLA is a generic framework that can be readily applied to any V+L models. To demonstrate its generalization ability, we conduct additional experiments using LXMERT as the backbone. Since adversarial pre-training is highly time-consuming, we only focus on adversarial ï¬netuning for LXMERT.4 We use VQA, GQA and NLVR2 as the evaluation tasks, the same as LXMERT. Results in Table 5 show that VILLA-ï¬ne instantly provides +0.88 average performance boost across the three tasks. The training curves are provided in Figure 3b. Compared to LXMERT, VILLA-ï¬ne achieves higher accuracy on validation set and lower accuracy on training set for both VQA and GQA, clearly demonstrating its regularization effect in preventing overï¬tting of large-scale pre-trained models.
Robustness In order to test adversarial robustness, we need to perform adversarial attacks to existing V+L models. This V+L attack problem is largely unexplored in the literature. For example, how to reliably back-propagate the gradients from the multimodal Transformer to the CNN backbone to generate image adversaries is non-trivial. How to craft textual adversaries that align with the visual context is also challenging. In this work, we mainly focus on improving modelâs generalization performance on clean data, leaving a more thorough investigation of adversarial attack and robustness as important future work.
As a proxy for robustness evaluation, we conduct additional experiments on the VQA-Rephrasings dataset [57] to test the robustness of existing V+L models to paraphrases. For fair comparison, we have re-trained both UNITER and VILLA on the VQA training set only. Results are summarized in Table 6, where âOriginalâ and âRephrasingâ denote the test set with original questions and their rephrasings, respectively. UNITER has already lifted up the performance by a large margin, and VILLA facilitates further performance boost.
We provide additional experimental results, more details about the probing analysis, and additional visualization examples in Appendix.
# 5 Conclusion
In this paper, we present VILLA, an advanced adversarial training (AT) framework for better vision- and-language representation learning. By performing AT in both pre-training and ï¬netuning stages, and by adding adversarial perturbations to the embedding space, VILLA achieves consistent perfor- mance boost on all the benchmarks evaluated. As AT is time-consuming, for future work, we plan to study how to accelerate AT so that it can be more feasible for large-scale pre-training in practice.
4Code is available at https://github.com/zhegan27/LXMERT-AdvTrain.
9
# Broader Impact
Our research advances vision-and-language representation learning by incorporating adversarial training in both pre-training and ï¬netuning stages. By utilizing the enormous amount of image-text data available on the web for pre-training, VILLA can absorb multimodal clues to capture multi- channel signals from the world, towards a smarter AI system. Furthermore, VILLA can provide instant performance boost in ï¬netuning stage, which will help accelerate future studies in this ï¬eld. However, in order to train models to learn such capabilities, our method also calls for a high demand on computational resources due to large-scale training, which could be costly both ï¬nancially and environmentally. As part of our research effort, we will release our pre-trained models to facilitate future research, to empower othersâ scientiï¬c exploration and save environmental cost.
# References
[1] Chris Alberti, Jeffrey Ling, Michael Collins, and David Reitter. Fusion of detected objects in text for visual question answering. arXiv preprint arXiv:1908.05054, 2019.
[2] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, 2018.
[3] Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In CVPR, 2016.
[4] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. Vqa: Visual question answering. In ICCV, 2015.
[5] Anish Athalye, Nicholas Carlini, and David Wagner. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420, 2018.
[6] Remi Cadene, Hedi Ben-Younes, Matthieu Cord, and Nicolas Thome. Murel: Multimodal relational reasoning for visual question answering. In CVPR, 2019.
[7] Jize Cao, Zhe Gan, Yu Cheng, Licheng Yu, Yen-Chun Chen, and Jingjing Liu. Behind the scene: Revealing the secrets of pre-trained vision-and-language models. arXiv preprint arXiv:2005.07310, 2020.
[8] Yair Carmon, Aditi Raghunathan, Ludwig Schmidt, John C Duchi, and Percy S Liang. Unlabeled data improves adversarial robustness. In NeurIPS, 2019.
[9] Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh. Attacking visual language ground- ing with adversarial examples: A case study on neural image captioning. arXiv preprint arXiv:1712.02051, 2017.
[10] Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, and Zhangyang Wang. Adversarial robustness: From self-supervised pre-training to ï¬ne-tuning. arXiv preprint arXiv:2003.12862, 2020.
[11] Wenhu Chen, Zhe Gan, Linjie Li, Yu Cheng, William Wang, and Jingjing Liu. Meta module network for compositional visual reasoning. arXiv preprint arXiv:1910.03230, 2019.
[12] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Learning universal image-text representations. arXiv preprint arXiv:1909.11740, 2019.
[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL, 2019.
[14] Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv preprint arXiv:1606.01847, 2016.
[15] Zhe Gan, Yu Cheng, Ahmed EI Kholy, Linjie Li, Jingjing Liu, and Jianfeng Gao. Multi-step reasoning via recurrent dual attention for visual dialog. arXiv preprint arXiv:1902.00579, 2019.
[16] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[17] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In CVPR, 2017.
[18] Weituo Hao, Chunyuan Li, Xiujun Li, Lawrence Carin, and Jianfeng Gao. Towards learning a generic agent for vision-and-language navigation via pre-training. arXiv preprint arXiv:2002.10638, 2020.
[19] Dan Hendrycks, Kimin Lee, and Mantas Mazeika. Using pre-training can improve model robustness and uncertainty. arXiv preprint arXiv:1901.09960, 2019.
10
[20] Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. Learning to reason: End-to-end module networks for visual question answering. In ICCV, 2017.
[21] Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. Pixel-bert: Aligning image pixels with text by deep multi-modal transformers. arXiv preprint arXiv:2004.00849, 2020.
[22] Drew A Hudson and Christopher D Manning. Compositional attention networks for machine reasoning. arXiv preprint arXiv:1803.03067, 2018.
[23] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019.
[24] Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. Smart: Robust and efï¬cient ï¬ne-tuning for pre-trained natural language models through principled regularized optimization. arXiv preprint arXiv:1911.03437, 2019.
[25] Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Judy Hoffman, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. Inferring and executing programs for visual reasoning. In ICCV, 2017.
[26] Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. Bilinear attention networks. In NeurIPS, 2018.
[27] Jin-Hwa Kim, Kyoung-Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, and Byoung-Tak Zhang. Hadamard product for low-rank bilinear pooling. arXiv preprint arXiv:1610.04325, 2016.
[28] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017.
[29] Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. Stacked cross attention for image-text matching. In ECCV, 2018.
[30] Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. arXiv preprint arXiv:1908.06066, 2019.
[31] Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. Hero: Hierarchical encoder for video+ language omni-representation pre-training. arXiv preprint arXiv:2005.00200, 2020.
[32] Linjie Li, Zhe Gan, Yu Cheng, and Jingjing Liu. Relation-aware graph attention network for visual question answering. arXiv preprint arXiv:1903.12314, 2019.
[33] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019.
[34] Pengcheng Li, Jinfeng Yi, Bowen Zhou, and Lijun Zhang. Improving the robustness of deep neural networks via adversarial training with triplet loss. arXiv preprint arXiv:1905.11713, 2019.
[35] Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. arXiv preprint arXiv:2004.06165, 2020.
[36] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
[37] Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994, 2020.
[38] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, 2019.
[39] Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi-task vision and language representation learning. arXiv preprint arXiv:1912.02315, 2019.
[40] Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. Hierarchical question-image co-attention for visual question answering. In NeurIPS, 2016.
[41] Huaishao Luo, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Xilin Chen, and Ming Zhou. Univilm: A uniï¬ed video and language pre-training model for multimodal understanding and generation. arXiv preprint arXiv:2002.06353, 2020.
[42] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.
[43] Arjun Majumdar, Ayush Shrivastava, Stefan Lee, Peter Anderson, Devi Parikh, and Dhruv Batra. Improving vision-and-language navigation with image-text pairs from the web. arXiv preprint arXiv:2004.14973, 2020.
[44] Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, and Baishakhi Ray. Metric learning for adversarial robustness. In NeurIPS, 2019.
11
[45] Takeru Miyato, Andrew M Dai, and Ian Goodfellow. Adversarial training methods for semi-supervised text classiï¬cation. arXiv preprint arXiv:1605.07725, 2016.
[46] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. PAMI, 2018.
[47] Vishvak Murahari, Dhruv Batra, Devi Parikh, and Abhishek Das. Large-scale pretraining for visual dialog: A simple state-of-the-art baseline. arXiv preprint arXiv:1912.02379, 2019.
[48] Will Norcliffe-Brown, Stathis Vafeias, and Sarah Parisot. Learning conditioned graph structures for interpretable visual question answering. In NeurIPS, 2018.
[49] Vicente Ordonez, Girish Kulkarni, and Tamara L Berg. Im2text: Describing images using 1 million captioned photographs. In NeurIPS, 2011.
[50] Tianyu Pang, Xiao Yang, Yinpeng Dong, Kun Xu, Hang Su, and Jun Zhu. Boosting adversarial training with hypersphere embedding. arXiv preprint arXiv:2002.08619, 2020.
[51] Gao Peng, Zhengkai Jiang, Haoxuan You, Pan Lu, Steven Hoi, Xiaogang Wang, and Hongsheng Li. Dynamic fusion with intra-and inter-modality attention ï¬ow for visual question answering. arXiv preprint arXiv:1812.05252, 2018.
[52] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In ICCV, 2015.
[53] Sainandan Ramakrishnan, Aishwarya Agrawal, and Stefan Lee. Overcoming language priors in visual question answering with adversarial regularization. In NeurIPS, 2018.
[54] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Semantically equivalent adversarial rules for debugging nlp models. In ACL, 2018.
[55] Adam Santoro, David Raposo, David G Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, and Timothy Lillicrap. A simple neural network module for relational reasoning. In NeurIPS, 2017.
[56] Ali Shafahi, Mahyar Najibi, Mohammad Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. Adversarial training for free! In NeurIPS, 2019.
[57] Meet Shah, Xinlei Chen, Marcus Rohrbach, and Devi Parikh. Cycle-consistency for robust visual question answering. In CVPR, 2019.
[58] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, 2018.
[59] Robert Stanforth, Alhussein Fawzi, Pushmeet Kohli, et al. Are labels required for improving adversarial robustness? arXiv preprint arXiv:1905.13725, 2019.
[60] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. Vl-bert: Pre-training of generic visual-linguistic representations. arXiv preprint arXiv:1908.08530, 2019.
[61] Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. arXiv preprint arXiv:1811.00491, 2018.
[62] Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743, 2019.
[63] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In ICCV, 2019.
[64] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[65] Hao Tan and Mohit Bansal. Lxmert: Learning cross-modality encoder representations from transformers. In EMNLP, 2019.
[66] Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
[67] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
[68] Dilin Wang, Chengyue Gong, and Qiang Liu. Improving neural language modeling via adversarial training. arXiv preprint arXiv:1906.03805, 2019.
[69] Yue Wang, Shaï¬q Joty, Michael R Lyu, Irwin King, Caiming Xiong, and Steven CH Hoi. Vd-bert: A uniï¬ed vision and dialog transformer with bert. arXiv preprint arXiv:2004.13278, 2020.
[70] Eric Wong, Leslie Rice, and J Zico Kolter. Fast is better than free: Revisiting adversarial training. arXiv preprint arXiv:2001.03994, 2020.
12
[71] Qiaolin Xia, Haoyang Huang, Nan Duan, Dongdong Zhang, Lei Ji, Zhifang Sui, Edward Cui, Taroon Bharti, and Ming Zhou. Xgpt: Cross-modal generative pre-training for image captioning. arXiv preprint arXiv:2003.01473, 2020.
[72] Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan Yuille, and Quoc V Le. Adversarial examples improve image recognition. arXiv preprint arXiv:1911.09665, 2019.
[73] Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan L Yuille, and Kaiming He. Feature denoising for improving adversarial robustness. In CVPR, 2019.
[74] Ning Xie, Farley Lai, Derek Doran, and Asim Kadav. Visual entailment: A novel task for ï¬ne-grained image understanding. arXiv preprint arXiv:1901.06706, 2019.
[75] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmenta- tion for consistency training. 2019.
[76] Yan Xu, Baoyuan Wu, Fumin Shen, Yanbo Fan, Yong Zhang, Heng Tao Shen, and Wei Liu. Exact adversarial attack to image captioning via structured output learning with latent variables. In CVPR, 2019.
[77] Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. Stacked attention networks for image question answering. In CVPR, 2016.
[78] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C Berg, and Tamara L Berg. Modeling context in referring expressions. In ECCV, 2016.
[79] Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, and Qi Tian. Deep modular co-attention networks for visual question answering. In CVPR, 2019.
[80] Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. In ICCV, 2017.
[81] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In CVPR, 2019.
[82] Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, and Bin Dong. You only propagate once: Accelerating adversarial training via maximal principle. In NeurIPS, 2019.
[83] Haichao Zhang and Jianyu Wang. Defense against adversarial attacks using feature scattering-based adversarial training. In NeurIPS, 2019.
[84] Hongyang Zhang, Yaodong Yu, Jiantao Jiao, Eric P Xing, Laurent El Ghaoui, and Michael I Jordan. Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573, 2019.
[85] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J Corso, and Jianfeng Gao. Uniï¬ed vision-language pre-training for image captioning and vqa. arXiv preprint arXiv:1909.11059, 2019.
[86] Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. Freelb: Enhanced adversarial training for language understanding. arXiv preprint arXiv:1909.11764, 2019.
13
# A Appendix
This supplementary material contains three sections. Section A.1 reviews additional related work. Section A.2 provides additional experimental results. Section A.3 describes downstream tasks and implementation details.
# A.1 Additional Related Work
Adversarial Training Many efforts have been devoted to improving AT from different angles: (i) use triplet- wise metric learning [44, 34] and optimal transport [83] to leverage inter-sample interactions; (ii) exploit extra unlabeled training data [59, 8]; and (iii) accelerate the training procedure [56, 82, 70]. Speciï¬cally, adversarial examples have been explored primarily in the image domain, and only recently started to gain attention in vision-and-language research. [9, 76] studied how to craft adversarial examples for image captioning, and [54] investigated how to derive adversarial rules to attack VQA systems. Different from these studies, we are not interested in crafting actual adversarial examples, but aim to apply AT to improve the ï¬nal model performance over V+L tasks. Note that âadversarial regularizationâ was proposed in [53]; however, it is mainly used to overcome the language priors in VQA, which is entirely different from the AT used here.
# A.2 Additional Results
Results on VQA In Table 1a, we have reported the experimental results on the test-dev and test-std splits of VQA. More detailed results on each question type are provided in Table 7. As shown, VILLA improves over UNITER on all the question types.
Method test-dev test-std yes/no number other overall yes/no number other overall UNITERBASE (reimp.) VILLABASE UNITERLARGE (reimp.) VILLALARGE VILLALARGE (Ensemble) 88.97 89.37 90.13 90.76 91.24 55.67 56.86 57.24 58.26 59.73 62.81 63.90 63.70 64.67 65.98 72.77 73.59 73.86 74.69 75.68 - 89.41 - 90.85 91.30 - 56.78 - 57.3 59.23 - 63.84 - 64.98 66.20 - 73.67 - 74.87 75.85
# Table 7: More detailed results on VQA.
Training Curves In Figure 3a, we have provided the training curves on three datasets. The training curves for the remaining three datasets are shown in Figure 5 with similar trend observed.
(a) SNLI-VE (b) Flickr30k IR (c) RefCOCO
# Figure 5: Additional training curves of VILLA and UNITER on different tasks.
Pre-training vs. Finetuning with Large Model Size In Table 2, we provided ablation study on adversarial pre-training and ï¬netuning with UNITER-base model size (12 layers). In Table 8, we provide additional ablation study with large model size (24 layers) on a selective set of tasks (VQA and VCR). On average, adversarial pre-training and ï¬netuning bring +1.48 and +2.21 performance gain, respectively. Combining the two AT stages provides further improvement.
Results on GQA In Table 5, we have reported LXMERT results on GQA enhanced by VILLA-ï¬ne. The complete results are provided in Table 10 for reference.
Adversarial pre-training from scratch Instead of performing adversarial pre-training from 100k steps, we also conducted experiments on adversarial pre-training from scratch with base model size. Preliminary results on VQA are shown in Table 9. Adversarial pre-training from scratch brings further performance improvement. We leave a thorough investigation of this as future work.
Additional Visualization We provide additional text-to-image attention visualization results in Figure 6.
14
Method VQA VCR (val) Ave. test-dev QâA QAâR QâAR UNITER (reimp.) VILLA-pre VILLA-ï¬ne VILLA 73.82 74.05 74.48 74.69 76.70 77.16 77.74 78.45 80.61 81.02 81.91 82.57 62.15 62.99 64.00 65.18 72.32 73.80 74.53 75.22
Table 8: Ablation study on VILLA-pre (pre-training) and VILLA-ï¬ne (ï¬netuning) with large model size.
Method VQA (test-dev) 100k 200k (from scratch) UNITER (reimp.) VILLA-pre VILLA 72.70 73.03 73.59 - 73.18 73.69
# Table 9: Adversarial pre-training from scratch with base model size.
Method test-dev Accuracy Binary Open Validity Plausibility Consistency Distribution LXMERT (reimp.) VILLA-ï¬ne 59.92 60.98 77.32 78.17 44.61 45.86 97.10 97.07 85.26 85.44 89.55 91.09 1.15 1.20 Method test-std Accuracy Binary Open Validity Plausibility Consistency Distribution LXMERT (reimp.) VILLA-ï¬ne 60.28 61.12 77.14 78.07 45.40 46.16 96.33 96.36 84.46 84.80 89.45 91.13 5.38 5.55
Table 10: More detailed results on GQA.
man sits colorful oman sits colorful . 1 H fe Saal \) <a > ew = : p a) man stands i {man stands it . â \ Lf F} : 5 . : . Ni Ni â i Aman sits onacolorful : é b| â man-drawn carriage, while | "= : t b | another man stands beside : : : . it. UNITER VILLA
(a)
woman four children ! woman four children » A A Ot ote - he fhe crossing busy street | crossing busy street â : nl pe) oe Ei Awoman and four children; â Py > | _ PY of] é i ; aS A a PA Saas are crossing a busy street. : a = = : 3 See UNITER â VILLA
(b)
Figure 6: Additional visualization on text-to-image attention, comparing VILLA and UNITER.
15
Task Model Batch Size Grad. Accu. Lr. Training Steps Warm-up Steps Adv. Lr. Adv. Weight VQA VCR NLVR2 SNLI-VE RefCOCO+ RefCOCO RefCOCOg Flickr30k ITR VILLABASE VILLALARGE VILLABASE VILLALARGE VILLABASE VILLALARGE VILLABASE VILLALARGE VILLABASE VILLALARGE VILLABASE VILLALARGE VILLABASE VILLALARGE VILLABASE VILLALARGE 5120 3072 2000 1000 2560 1280 4096 4096 128 96 128 96 128 96 32 32 5 8 10 20 4 8 4 2 1 1 1 1 1 1 32 32 8e-5 5e-5 6e-5 6e-5 6e-5 2e-5 8e-5 3e-5 5e-5 4e-5 4e-5 4e-5 7e-5 4e-5 5e-5 5e-5 6000 5000 8000 10000 3000 5000 5000 4000 8000 8000 8000 10000 12000 8000 5000 5000 600 500 800 1000 300 500 500 400 800 800 800 1000 1200 800 500 500 1e-3 1e-3 1e-2 1e-1 5e-4 1e-2 3e-3 1e-3 2e-3 1e-3 5e-3 1e-3 2e-3 1e-3 1e-2 1e-2 1.5 1.5 1.5 1.0 1.5 1.5 2.0 1.5 1.0 1.5 2.0 1.5 1.0 1.0 1.0 1.0
Table 11: Hyper-parameter values used in our experiments.
# A.3 Downstream Tasks and Implementation Details
Downstream Tasks In VQA [17], GQA [23] and VCR [81], given an image and an input question, the model predicts an answer (or selects from a candidate pool). For NLVR2 [61], given a pair of images and a natural language description, the model judges the correctness of the description based on the visual clues in the image pair. For Visual Entailment, we evaluate on SNLI-VE [74], where the model predicts whether a given image semantically entails a given sentence. For Referring Expression (RE) Comprehension, we evaluate on RefCOCO, RefCOCO+, and RefCOCOg datasets [78], where given a text description, the model selects the described region from a set of image region proposals. Models are evaluated on ground-truth objects and detected proposals. For Image-Text Retrieval (ITR), we consider both image retrieval and text retrieval on Flickr30k dataset.
For all the tasks except RE Comprehension, we extract the joint V+L embedding from the [CLS] token, and apply a multi-layer perceptron (MLP) for prediction. For RE Comprehension, we use MLP to compute the region-wise alignment scores. During the ï¬netuning stage, ITR is formulated as a ranking problem, with triplet loss used for modeling training and hard negatives applied to boost performance [30]. All the other tasks can be formulated as a classiï¬cation problem, using cross-entropy loss for model training. For VCR [81], second-stage pre-training with VCR training data was proven useful in [12]. Therefore, for VCR downstream experiments, we further apply 60k steps of second-stage adversarial pre-training.
Probing Analysis The visual coreference task aims to predict whether there is a link between an image region and a noun phrase in the sentence that describes the image. In addition, each coreference link in the dataset is annotated with a label. Through this task, we can ï¬nd out whether the coreference knowledge can be captured by the attention trace. To achieve this goal, for each data sample in the Flickr30k Entity dataset, we extract the encoderâs attention weights for all the 144 heads. Note that noun phrases typically consist of two or more tokens in the sequence. Thus, we extract the maximum attention weight between the image region and each word of the noun phrase for each head. The maximum weight is then used to evaluate which head identiï¬es visual coreference.
Similarly, the visual relation task aims to identify and classify the relation between two image regions. The Visual Genome dataset is used for this task, which contains 1,531,448 relations. To reduce the imbalance in the number of relations per relation type, we randomly select at most 15,000 relation pairs per type. Then, we perform similar probing analysis of the attention heads by examining the attention weights on ground-truth links.
Implementation Details Our models are implemented based on PyTorch.To speed up training, we use Nvidia Apex5 for mixed precision training. All pre-training experiments are run on Nvidia V100 GPUs (16GB VRAM; PCIe connection). Finetuning experiments are implemented on the same hardware or Titan RTX GPUs (48GB VRAM). For large pre-training experiments, we use Horovod6 and NCCL7 for multi-node communication. All the hyper-parameter values used in experiments are listed in Table 11. And for all the experiments, we set the number of adversarial training steps to 3. We mostly follow the experimental settings in UNITER [12]. For more details on each downstream task ï¬netuning, please refer to their Appendix. Since we mostly adopt their default hyper-parameters, and the only additional hyper-parameters we introduce are adversarial learning rate, number of adversarial steps, and the adversarial weight α in Eqn. 2, the experimental results are fairly easy to reproduce.
# 5https://github.com/NVIDIA/apex 6https://github.com/horovod/horovod 7https://github.com/NVIDIA/nccl
16 | {
"id": "1909.11059"
} |
2006.05987 | Revisiting Few-sample BERT Fine-tuning | This paper is a study of fine-tuning of BERT contextual representations, with
focus on commonly observed instabilities in few-sample scenarios. We identify
several factors that cause this instability: the common use of a non-standard
optimization method with biased gradient estimation; the limited applicability
of significant parts of the BERT network for down-stream tasks; and the
prevalent practice of using a pre-determined, and small number of training
iterations. We empirically test the impact of these factors, and identify
alternative practices that resolve the commonly observed instability of the
process. In light of these observations, we re-visit recently proposed methods
to improve few-sample fine-tuning with BERT and re-evaluate their
effectiveness. Generally, we observe the impact of these methods diminishes
significantly with our modified process. | http://arxiv.org/pdf/2006.05987 | Tianyi Zhang, Felix Wu, Arzoo Katiyar, Kilian Q. Weinberger, Yoav Artzi | cs.CL, cs.LG | Code available at
https://github.com/asappresearch/revisit-bert-finetuning | null | cs.CL | 20200610 | 20210311 | 1 2 0 2
r a M 1 1 ] L C . s c [
3 v 7 8 9 5 0 . 6 0 0 2 : v i X r a
Published as a conference paper at ICLR 2021
# REVISITING FEW-SAMPLE BERT FINE-TUNING
Tianyi Zhang**! Felix Wu*t = Arzoo Katiyar*° â_ Kilian Q. Weinbergerâ! Yoav Artzi'* + ASAPP Inc. 5Stanford University °Penn State University âCornell University [email protected] {fwu, kweinberger, yoav}@asapp.com [email protected]
# ABSTRACT
This paper is a study of ï¬ne-tuning of BERT contextual representations, with focus on commonly observed instabilities in few-sample scenarios. We identify several factors that cause this instability: the common use of a non-standard optimization method with biased gradient estimation; the limited applicability of signiï¬cant parts of the BERT network for down-stream tasks; and the prevalent practice of using a pre-determined, and small number of training iterations. We empirically test the impact of these factors, and identify alternative practices that resolve the commonly observed instability of the process. In light of these observations, we re-visit recently proposed methods to improve few-sample ï¬ne-tuning with BERT and re-evaluate their effectiveness. Generally, we observe the impact of these methods diminishes signiï¬cantly with our modiï¬ed process.
# INTRODUCTION
Fine-tuning self-supervised pre-trained models has signiï¬cantly boosted state-of-the-art performance on natural language processing (NLP) tasks (Liu, 2019; Yang et al., 2019a; Wadden et al., 2019; Zhu et al., 2020; Guu et al., 2020). One of the most effective models for this process is BERT (Devlin et al., 2019). However, despite signiï¬cant success, ï¬ne-tuning remains unstable, especially when using the large variant of BERT (BERTLarge) on small datasets, where pre-training stands to provide the most signiï¬cant beneï¬t. Identical learning processes with different random seeds often result in signiï¬cantly different and sometimes degenerate models following ï¬ne-tuning, even though only a few, seemingly insigniï¬cant aspects of the learning process are impacted by the random seed (Phang et al., 2018; Lee et al., 2020; Dodge et al., 2020).1 As a result, practitioners resort to multiple random trials for model selection. This increases model deployment costs and time, and makes scientiï¬c comparison challenging (Dodge et al., 2020).
This paper is a study of different aspects of the few-sample ï¬ne-tuning optimization process. Our goal is to better understand the impact of common choices with regard to the optimization algorithm, model initialization, and the number of ï¬ne-tuning training iterations. We identify suboptimalities in common community practices: the use of a non-standard optimizer introduces bias in the gradient estimation; the top layers of the pre-trained BERT model provide a bad initialization point for ï¬ne- tuning; and the use of a pre-determined , but commonly adopted number of training iterations hurts convergence. We study these issues and their remedies through experiments on multiple common benchmarks, focusing on few-sample ï¬ne-tuning scenarios.
Once these suboptimal practices are addressed, we observe that degenerate runs are eliminated and performance becomes much more stable. This makes it unnecessary to execute numerous random restarts as proposed in Dodge et al. (2020). Our experiments show the remedies we experiment with for each issue have overlapping effect. For example, allocating more training iterations can eventually compensate for using the non-standard biased optimizer, even though the combination of a bias-corrected optimizer and re-initializing some of the pre-trained model parameters can reduce ï¬ne-tuning computational costs. This empirically highlights how different aspects of ï¬ne-tuning inï¬uence the stability of the process, at times in a similar manner. In the light of our observations, we re-evaluate several techniques (Phang et al., 2018; Lee et al., 2020; Howard & Ruder, 2018) that
Equal contribution, 4 Work done at ASAPP. 'Fine-tuning instability is also receiving
also receiving signiï¬cant practitioner attention. For example: https://github.com/zihangdai/xlnet/issues/96 and https://github.com/huggingface/transformers/issues/265.
1
Published as a conference paper at ICLR 2021
were recently proposed to increase few-sample ï¬ne-tuning stability and show a signiï¬cant decrease in their impact. Our work furthers the empirical understanding of the ï¬ne-tuning process, and the optimization practices we outline identify impactful avenues for the development of future methods.
# 2 BACKGROUND AND RELATED WORK
BERT The Bidirectional Encoder Representations from Transformers (BERT; Devlin et al., 2019) model is a Transformer encoder (Vaswani et al., 2017) trained on raw text using masked language modeling and next-sentence prediction objectives. It generates an embedding vector contextualized through a stack of Transformer blocks for each input token. BERT prepends a special [CLS]token to the input sentence or sentence pairs. The embedding of this token is used as a summary token for the input for classiï¬cation tasks. This embedding is computed with an additional fully-connected layer with a tanh non-linearity, commonly referred to as the pooler, to aggregate the information for the [CLS]embedding.
Fine-tuning The common approach for using the pre-trained BERT model is to replace the original output layer with a new task-speciï¬c layer and ï¬ne-tune the complete model. This includes learning the new output layer parameters and modifying all the original weights, including the weights of word embeddings, Transformer blocks, and the pooler. For example, for sentence-level classiï¬cation, an added linear classiï¬er projects the [CLS]embedding to an unnormalized probability vector over the output classes. This process introduces two sources of randomness: the weight initialization of the new output layer and the data order in the stochastic ï¬ne-tuning optimization. Existing work (Phang et al., 2018; Lee et al., 2020; Dodge et al., 2020) shows that these seemingly benign factors can inï¬uence the results signiï¬cantly, especially on small datasets (i.e., < 10K examples). Consequently, practitioners often conduct many random trials of ï¬ne-tuning and pick the best model based on validation performance (Devlin et al., 2019).
Fine-tuning Instability The instability of the BERT ï¬ne-tuning process has been known since its introduction (Devlin et al., 2019), and various methods have been proposed to address it. Phang et al. (2018) show that ï¬ne-tuning the pre-trained model on a large intermediate task stabilizes later ï¬ne-tuning on small datasets. Lee et al. (2020) introduce a new regularization method to constrain the ï¬ne-tuned model to stay close to the pre-trained weights and show that it stabilizes ï¬ne-tuning. Dodge et al. (2020) propose an early stopping method to efï¬ciently ï¬lter out random seeds likely to lead to bad performance. Concurrently to our work, Mosbach et al. (2020) also show that BERTADAM leads to instability during ï¬ne-tuning. Our experiments studying the effect of training longer are related to previous work studying this question in the context of training models from scratch (Popel & Bojar, 2018; Nakkiran et al., 2019).
BERT Representation Transferability BERT pre-trained representations have been widely stud- ied using probing methods showing that the pre-trained features from intermediate layers are more transferable (Tenney et al., 2019b;a; Liu et al., 2019a; Hewitt & Manning, 2019; Hewitt & Liang, 2019) or applicable (Zhang et al., 2020) to new tasks than features from later layers, which change more after ï¬ne-tuning (Peters et al., 2019; Merchant et al., 2020). Our work is inspired by these ï¬ndings, but focuses on studying how the pre-trained weights inï¬uence the ï¬ne-tuning process. Li et al. (2020) propose to re-initialize the ï¬nal fully-connected layer of a ConvNet and show perfor- mance gain for image classiï¬cation.2 Concurrent to our work, Tamkin et al. (2020) adopt a similar methodology of weight re-initialization (Section 5) to study the transferability of BERT. In contrast to our study, their work emphasizes pinpointing the layers that contribute the most in transfer learning, and the relation between probing performance and transferability.
# 3 EXPERIMENTAL METHODOLOGY
Data We follow the data setup of previous studies (Lee et al., 2020; Phang et al., 2018; Dodge et al., 2020) to study few-sample ï¬ne-tuning using eight datasets from the GLUE benchmark (Wang et al., 2019b). The datasets cover four tasks: natural language inference (RTE, QNLI, MNLI), paraphrase detection (MRPC, QQP), sentiment classiï¬cation (SST-2), and linguistic acceptability (CoLA). Appendix A provides dataset statistics and a description of each dataset. We primarily
2This concurrent work was published shortly after our study was posted.
2
Published as a conference paper at ICLR 2021
Algorithm 1: the ADAM pseudocode adapted from[Kingma & Bal 4 [Kingma & Bal (2014), and provided for reference. g? denotes the elementwise square g, © g;. 3, and J to the power t are denoted as 34 8%. All operations on vectors are element-wise. The suggested hyperparameter values according to | 2014) are: a = 0.001, 6, = 0.9, B2 = 0.999, and e = 1078. BERTADAM (Devlin et al.|/2019) omits the bias correction (lines[9}{10), and treats m, and v; as Require: a: _ rate; 81, 82 ⬠[0, 1): exponential decay rates for the moment estimates; f(@): stochastic objective function with parameters 0; 9p: initial parameter vector; A ⬠[0, 1): decoupled weight decay. my, and 0; in line
Require: a: rate; 81, 82 ⬠[0, 1): exponential decay rates for the moment estimates; f(@): objective function with parameters 0; 9p: initial parameter vector; A ⬠[0, 1): decoupled weight decay. mo + 0 (Initialize first moment vector) vo + 0 (Initialize second moment vector) t + 0 (Initialize timestep) while 4; not converged do tcett+l1 ge â Vofe(O.â-1) (Get gradients w.r.t. stochastic objective at timestep t) me + $1 -met-1 + (1 â 81) - gt (Update biased first moment estimate) ve + B2- 4-1 + (1 â Bo) - G (Update biased second raw moment estimate) Mz <â mz/(1 â 64) (Compute bias-corrected first moment estimate) 10: & < v:/(1 â 85) (Compute bias-corrected second raw moment estimate) 11: Oâ O_1 -âa- ire / (Vor + â¬) (Update parameters) 12: end while 13: return 6; (Resulting parameters) CrPIAAMRYNYS
focus on four datasets (RTE, MRPC, STS-B, CoLA) that have fewer than 10k training samples, because BERT ï¬ne-tuning on these datasets is known to be unstable (Devlin et al., 2019). We also complement our study by downsampling all eight datasets to 1k training examples following Phang et al. (2018). While previous studies (Lee et al., 2020; Phang et al., 2018; Dodge et al., 2020) focus on the validation performance, we split held-out test sets for our study.3 For RTE, MRPC, STS-B, and CoLA, we divide the original validation set in half, using one half for validation and the other for test. For the other four larger datasets, we only study the downsampled versions, and split additional 1k samples from the training set as our validation data and test on the original validation set.
Experimental Setup Unless noted otherwise, we follow the hyperparameter setup of Lee et al. (2020). We ï¬ne-tune the uncased, 24-layer BERTLarge model with batch size 32, dropout 0.1, and peak learning rate 2 à 10â5 for three epochs. We clip the gradients to have a maximum norm of 1. We apply linear learning rate warm-up during the ï¬rst 10% of the updates followed by a linear decay. We use mixed precision training using Apex4 to speed up experiments. We show that mixed precision training does not affect ï¬ne-tuning performance in Appendix C. We evaluate ten times on the validation set during training and perform early stopping. We ï¬ne-tune with 20 random seeds to compare different settings.
# 4 OPTIMIZATION ALGORITHM: DEBIASING OMISSION IN BERTADAM
The most commonly used optimizer for ï¬ne-tuning BERT is BERTADAM, a modiï¬ed version of the ADAM ï¬rst-order stochastic optimization method. It differs from the original ADAM algo- rithm (Kingma & Ba, 2014) in omitting a bias correction step. This change was introduced by Devlin et al. (2019), and subsequently made its way into common open source libraries, including the ofï¬cial implementation,5 huggingfaceâs Transformers (Wolf et al., 2019),6 AllenNLP (Gardner et al., 2018), GluonNLP (Guo et al., 2019), jiant (Wang et al., 2019c), MT-DNN (Liu et al., 2020), and FARM.7 As a result, this non-standard implementation is widely used in both industry and research (Wang et al., 2019a; Phang et al., 2018; Lee et al., 2020; Dodge et al., 2020; Sun et al., 2019; Clark et al., 2020; Lan et al., 2020; Houlsby et al., 2019; Stickland & Murray, 2019; Liu et al., 2019b). We observe that the bias correction omission inï¬uences the learning rate, especially early in the ï¬ne-tuning process, and is one of the primary reasons for instability in ï¬ne-tuning BERT (Devlin et al., 2019; Phang et al., 2018; Lee et al., 2020; Dodge et al., 2020).
Algorithm 1 shows the ADAM algorithm, and highlights the omitted line in the non-standard BERTADAM implementation. At each optimization step (lines 4â11), ADAM computes the exponen-
3The original test sets are not publicly available. 4https://github.com/NVIDIA/apex 5https://github.com/google-research/bert/blob/f39e881/optimization.py#L108-L157 6The default was changed from BERTADAM to debiased ADAM in commit ec07cf5a on July 11, 2019. 3 7https://github.com/deepset-ai/FARM
Published as a conference paper at ICLR 2021
10° 10° scale) Performance Distribution Fr 6° © Correction No Correction â Median © Outlier 8 Test Performance ese ee ee: b RE BD &® © Ss So RTE MRPC CoLA STS-B 2 2 4 5 "5 £ &
Bias in Update Magnitude wR UD N 10* 10° 10° itil 10? 108 Training Iterations (log scale) 1 10° 10!
RTE 1.00 2 2 0.75 4 5 "5 0.50 £ & â Correction 0.25 â No Correction 23 92 161 230 Steps
Figure 1: Bias in the ADAM up- date as a function of training iter- ations. Vertical lines indicate the typical number of iterations used to ï¬ne-tune BERT on four small datasets and one large dataset (MNLI). Small datasets use fewer iterations and are most affected.
Figure 2: Performance dis- tribution box plot across 50 random trials and the four datasets with and without ADAM bias correction. Bias correction reduces the vari- ance of ï¬ne-tuning results by a large margin.
Figure 3: Mean (solid lines) and range (shaded region) of training loss during ï¬ne- tuning BERT, across 50 ran- dom trials. Bias correction speeds up convergence and shrinks the range of training loss.
tial moving average of the gradients (mt) and the squared gradients (vt), where β1, β2 parameterize the averaging (lines 7â8). Because ADAM initializes mt and vt to 0 and sets exponential decay rates β1 and β2 close to 1, the estimates of mt and vt are heavily biased towards 0 early during learning when t is small. Kingma & Ba (2014) computes the ratio between the biased and the unbiased estimates of mt and vt as (1 â βt 2). This ratio is independent of the training data. The model parameters θ are updated in the direction of the averaged gradient mt divided by the square vt (line 11). BERTADAM omits the debiasing (lines 9â10), and directly root of the second moment uses the biased estimates in the parameters update. Figure 1 shows the ratio Ëmtâ between the update using the biased and the unbiased estimation Ëvt as a function of training iterations. The bias is relatively high early during learning, indicating overestimation. It eventually converges to one, suggesting that when training for sufï¬cient iterations, the estimation bias will have negligible effect.8 Therefore, the bias ratio term is most important early during learning to counteract the overestimation of mt and vt during early iterations. In practice, â
1âβt 2
ADAM adaptively re-scales the learning rate by . This correction is crucial for BERT ï¬ne- tuning on small datasets with fewer than 10k training samples because they are typically ï¬ne-tuned with less than 1k iterations (Devlin et al., 2019). The ï¬gure shows the number of training iterations for RTE, MRPC, STS-B, CoLA, and MNLI. MNLI is the only one of this set with a large number of supervised training examples. For small datasets, the bias ratio is signiï¬cantly higher than one for the entire ï¬ne-tuning process, implying that these datasets suffer heavily from overestimation in the update magnitude. In comparison, for MNLI, the majority of ï¬ne-tuning occurs in the region where the bias ratio has converged to one. This explains why ï¬ne-tuning on MNLI is known to be relatively stable (Devlin et al., 2019).
We evaluate the importance of the debiasing step empirically by ï¬ne-tuning BERT with both BERTADAM and the debiased ADAM9 for 50 random seeds on RTE, MRPC, STS-B, and CoLA. Figure 2 summarizes the performance distribution. The bias correction signiï¬cantly reduces the performance variance across different random trials and the four datasets. Without the bias correction we observe many degenerate runs, where ï¬ne-tuned models fail to outperform the random baseline. For example, on RTE, 48% of ï¬ne-tuning runs have an accuracy less than 55%, which is close to random guessing. Figure 3 further illustrates this difference by plotting the mean and the range of training loss during ï¬ne-tuning across different random trials on RTE. Figure 11 in Appendix F shows similar plots for MRPC, STS-B, and CoLA. The biased BERTADAM consistently leads to worse averaged training loss, and on all datasets to higher maximum training loss. This indicates models trained with BERTAdam are underï¬tting and the root of instability lies in optimization.
8Our experiments on the completely MNLI dataset conï¬rm using the unbiased estimation does not improve nor degrade performance for large datasets (Appendix D).
# 9We use the PyTorch ADAM implementation https://pytorch.org/docs/1.4.0/_modules/torch/optim/adamw.html.
4
Published as a conference paper at ICLR 2021
RTE MRPC STS-B CoLA 0.75 0.95 0.90 0.70 3g 0.70 = 092 3 0.89 9 0.65 2 065 % 0.89 = 0.88 Z 9.60 % & é % 0.55 & 0.60 5.0.86 = 087 © 050 | â Correction F055 BD 0.83 (H 0.86 045| â No Correction 9507 10 20 30 40 50 98?) 19 20 30 40 50 281 10 20 30 40 50 1 10 20 30 40 50 # of Random Trials # of Random Trials # of Random Trials # of Random Trials
Figure 4: Expected test performance (solid lines) with standard deviation (shaded region) over the number of random trials allocated for ï¬ne-tuning BERT. With bias correction, we reliably achieve good results with few (i.e., 5 or 10) random trials.
Dataset RTE MRPC STS-B CoLA Longer Standard 69.5 ± 2.5 72.3 ± 1.9 90.8 ± 1.3 90.5 ± 1.5 89.0 ± 0.6 89.6 ± 0.3 63.0 ± 1.5 62.4 ± 1.7 72.6 ± 1.6 73.1 ± 1.3 91.4 ± 0.8 91.0 ± 0.4 89.4 ± 0.2 89.9 ± 0.1 63.9 ± 1.9 61.9 ± 2.3 Re-init 3 Epochs Longer 3 Epochs 3 Epochs Longer 3 Epochs Longer Dataset RTE (1k) MRPC (1k) STS-B (1k) CoLA (1k) Longer Standard 62.5 ± 2.8 65.2 ± 2.1 80.5 ± 3.3 83.8 ± 2.1 84.7 ± 1.4 88.0 ± 0.4 45.9 ± 1.6 48.8 ± 1.4 65.6 ± 2.0 65.8 ± 1.7 84.6 ± 1.6 86.0 ± 1.2 87.2 ± 0.4 88.4 ± 0.2 47.6 ± 1.8 48.4 ± 2.1 Re-init 3 Epochs Longer 3 Epochs 3 Epochs Longer 3 Epochs Longer Dataset SST (1k) QNLI (1k) QQP (1k) MNLI (1k) Longer Standard 89.7 ± 1.5 90.9 ± 0.5 78.6 ± 2.0 81.4 ± 0.9 74.0 ± 2.7 77.4 ± 0.8 52.2 ± 4.2 67.5 ± 1.1 90.8 ± 0.4 91.2 ± 0.5 81.9 ± 0.5 82.1 ± 0.3 77.2 ± 0.7 77.6 ± 0.6 66.4 ± 0.6 68.8 ± 0.5 Re-init 3 Epochs Longer 3 Epochs 3 Epochs Longer 3 Epochs Longer
Table 1: Mean test performance and standard deviation. We compare ï¬ne-tuning with the complete BERT model (Standard) and ï¬ne-tuning with the partially re-initialized BERT (Re-init). We show results of ï¬ne-tuning for 3 epochs and for longer training (Sec 6). We underline and highlight in blue the best and number statistically equivalent to it among each group of 4 numbers. We use a one-tailed Studentâs t-test and reject the null hypothesis when p < 0.05.
We simulate a realistic setting of multiple random trials following Dodge et al. (2020). We use bootstrapping for the simulation: given the 50 ï¬ne-tuned models we trained, we sample models with replacement, perform model selection on the validation set, and record the test results; we repeat this process 1k times to estimate mean and variance. Figure 4 shows the simulated test results as a function of the number of random trials. Appendix E provides the same plots for validation performance. Using the debiased ADAM we can reliably achieve good results using fewer random trials; the difference in expected performance is especially pronounced when we perform less than 10 trials. Whereas the expected validation performance monotonically improves with more random trials (Dodge et al., 2020), the expected test performance deteriorates when we perform too many random trials because the model selection process potentially overï¬ts the validation set. Based on these observations, we recommend performing a moderate number of random trials (i.e., 5 or 10).
# INITIALIZATION: RE-INITIALIZING BERT PRE-TRAINED LAYERS
The initial values of network parameters have signiï¬cant impact on the process of training deep neural networks, and various methods exist for careful initialization (Glorot & Bengio, 2010; He et al., 2015; Zhang et al., 2019; Radford et al., 2019; Dauphin & Schoenholz, 2019). During ï¬ne-tuning, the BERT parameters take the role of the initialization point for the ï¬ne-tuning optimization process, while also capturing the information transferred from pre-training. The common approach for BERT ï¬ne-tuning is to initialize all layers except one specialized output layer with the pre-trained weights. We study the value of transferring all the layers in contrast to simply ignoring the information learned in some layers. This is motivated by object recognition transfer learning results showing that lower pre-trained layers learn more general features while higher layers closer to the output specialize more to the pre-training tasks (Yosinski et al., 2014). Existing methods using BERT show that using the complete network is not always the most effective choice, as we discuss in Section 2. Our empirical results further conï¬rm this: we observe that transferring the top pre-trained layers slows down learning and hurts performance.
5
Published as a conference paper at ICLR 2021
RTE MRPC os â7ââ oma ; 30.750 a Ee QO08 co» a z T og = 0.700 = = s f@ Standard lll Re-init 0.88 ° ° oss Median © Outer d aeh Veg Vag Deg Hy Deg doh Veg Vag Deg eg Deg © SASH SG or AHH GHGs SNH WWE WW SMH EW W YE ew ee
RTE MRPC â Standard ons â Re-init Ss a a ° a 3 Train Loss oo bo a S 23 92 161 230 34 136 238 340 Steps Steps
# 2 Train Loss
Figure 5: Validation performance distribution of re-initializing different number of layers of the BERT model.
Figure 6: Mean (solid lines) and Range (shaded region) of training loss during ï¬ne-tuning BERT, across 20 random trials. Re-init leads to faster convergence and shrinks the range.
We test the transferability of the top layers using a simple ablation study. Instead of using the pre- trained weights for all layers, we re-initialize the pooler layers and the top L â N BERT Transformer blocks using the original BERT initialization, N (0, 0.022). We compare two settings: (a) standard ï¬ne-tuning with BERT, and (b) Re-init ï¬ne-tuning of BERT. We evaluate Re-init by selecting L â {1, . . . , 6} based on mean validation performance. All experiments use the debiased ADAM (Section 4) with 20 random seeds.
Re-init Impact on Performance Table 1 shows our results on all the datasets from Section 3. We show results for the common setting of using 3 epochs, and also for longer training, which we discuss and study in Section 6. Re-init consistently improves mean performance on all the datasets, showing that not all layers are beneï¬cial for transferring. It usually also decreases the variance across all datasets. Appendix F shows similar beneï¬ts for pre-trained models other than BERT.
Sensitivity to Number of Layers Re-initialized Figure 5 shows the effect of the choice of L, the number of blocks we re-initialize, on RTE and MRPC. Figure 13 in Appendix F shows similar plots for the rest of the datasets. We observe more signiï¬cant improvement in the worst-case performance than the best performance, suggesting that Re-init is more robust to unfavorable random seed. We already see improvements when only the pooler layer is re-initialized. Re-initializing further layers helps more. For larger L though, the performance plateaus and even decreases as re-initialize pre-trained layers with general important features. The best L varies across datasets.
Effect on Convergence and Parameter Change Figure 6 shows the training loss for both the standard ï¬ne-tuning and Re-init on RTE and MRPC. Figure 13, Appendix F shows the training loss for all other datasets. Re-init leads to faster convergence. We study the weights of different Transformer blocks. For each block, we concatenate all parameters and record the L2 distance between these parameters and their initialized values during ï¬ne-tuning. Figure 7 plots the L2 distance for four different transformer blocks as a function of training steps on RTE, and Figures 15â 18 in Appendix F show all transformer blocks on four datasets. In general, Re-init decreases the L2 distance to initialization for top Transformer blocks (i.e., 18â24). Re-initializing more layers leads to a larger reduction, indicating that Re-init decreases the ï¬ne-tuning workload. The effect of Re-init is not local; even re-initializing only the topmost Transformer block can affect the whole network. While setting L = 1 or L = 3 continues to beneï¬t the bottom Transformer blocks, re-initializing too many layers (e.g., L = 10) can increase the L2 distance in the bottom Transformer blocks, suggesting a tradeoff between the bottom and the top Transformer blocks. Collectively, these results suggest that Re-init ï¬nds a better initialization for ï¬ne-tuning and the top L layers of BERT are potentially overspecialized to the pre-training objective.
# 6 TRAINING ITERATIONS: FINE-TUNING BERT FOR LONGER
BERT is typically ï¬ne-tuned with a slanted triangular learning rate, which applies linear warm-up to the learning rate followed by a linear decay. This learning schedule warrants deciding the number of training iterations upfront. Devlin et al. (2019) recommend ï¬ne-tuning GLUE datasets for three epochs. This recommendation has been adopted broadly for ï¬ne-tuning (Phang et al., 2018; Lee et al., 2020; Dodge et al., 2020). We study the impact of this choice, and observe that this one-size-ï¬ts-all
6
Published as a conference paper at ICLR 2021
s Transformer Block 6 Transformer Block 12 Transformer Block 18 Transformer Block 24 2 0.90 0.90 0.90 0.90 S 30.72 0.72 0.72 0.72 a 2 0.54 0.54 0.54 0.54 fo} 0.36 â Standard â Re-init 6 0.36 0.36 0.36 @ 0.18| 77 Rerinit | âRe-init 101 9 1g 0.18 0.18 a â Re-init 3 a 0.00 / 0.00 0.00, 0.00 4 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 Steps Steps Steps Steps
Figure 7: L2 distance to the initial parameters during ï¬ne-tuning BERT on RTE. Re-init reduces the amount of change in the weights of top Transformer blocks. However, re-initializing too many layers causes a larger change in the bottom Transformer blocks.
RTE (1k) MRPC (1k) STS-B (1k) CoLA (1k) 3 0.75 2 2 2 5 g 0875 Em 0.88 5 0.60 â 0.70 i & Ee g 0.850 > ar 0.86 ⬠& 0.65 âRe-init6 & âReinits £ âReinitd & 5 â Re-init 1 = 0.60 â Standard $= 0.895 â Standard âS * âStandad 3S â Standard 96 200 400 800 1600 3200 96 200 400 800 1600 3200 "8 96 200 400 800 1600 3200 96 200 400 800 1600 3200 Training Iterations Training Iterations Training Iterations Training Iterations
Figure 8: Mean (solid lines) and range (shaded region) of validation performance trained with different number of iterations, across eight random trials.
three-epochs practice for BERT ï¬ne-tuning is sub-optimal. Fine-tuning BERT longer can improve both training stability and model performance.
Experimental setup We study the effect of increasing the number of ï¬ne-tuning iterations for the datasets in Section 3. For the 1k downsampled datasets, where three epochs correspond to 96 steps, we tune the number of iterations in {200, 400, 800, 1600, 3200}. For the four small datasets, we tune the number of iterations in the same range but skip values smaller than the number of iterations used in three epochs. We evaluate our models ten times on the validation set during ï¬ne-tuning. This number is identical to the experiments in Sections 4â5, and controls for the set of models to choose from. We tune with eight different random seeds and select the best set of hyperparameters based on the mean validation performance to save experimental costs. After the hyperparameter search, we ï¬ne-tune with the best hyperparameters for 20 seeds and report the test performance.
Results Table 1 shows the result under the Longer column. Training longer can improve over the three-epochs setup most of the time, in terms of both performance and stability. This is more pronounced on the 1k downsampled datasets. We also ï¬nd that training longer reduces the gap between standard ï¬ne-tuning and Re-init, indicating that training for more iterations can help these models recover from bad initializations. However, on datasets such as MRPC and MNLI, Re-init still improves the ï¬nal performance even with training longer. We show the validation results on the four downsampled datasets with different number of training iterations in Figure 8. We provide a similar plot in Figure 14, Appendix G for the other downsampled datasets. We observe that different tasks generally require different number of training iterations and it is difï¬cult to identify a one-size-ï¬ts-all solution. Therefore, we recommend practitioners to tune the number of training iterations on their datasets when they discover instability in ï¬ne-tuning. We also observe that on most of the datasets, Re-init requires fewer iterations to achieve the best performance, corroborating that Re-init provides a better initialization for ï¬ne-tuning.
# 7 REVISITING EXISTING METHODS FOR FEW-SAMPLE BERT FINE-TUNING
Instability in BERT ï¬ne-tuning, especially in few-sample settings, is receiving increasing attention recently (Devlin et al., 2019; Phang et al., 2018; Lee et al., 2020; Dodge et al., 2020). We revisit these methods given our analysis of the ï¬ne-tuning process, focusing on the impact of using the debiased ADAM instead of BERTADAM (Section 4). Generally, we ï¬nd that when these methods are re-evaluated with the unbiased ADAM they are less effective with respect to the improvement in ï¬ne-tuning stability and performance.
7
Published as a conference paper at ICLR 2021
Standard Int. Task LLRD Mixout Pre-trained WD WD Re-init Longer 69.5 ± 2.5 81.8 ± 1.7 69.7 ± 3.2 71.3 ± 1.4 RTE MRPC 90.8 ± 1.3 91.8 ± 1.0 91.3 ± 1.1 90.4 ± 1.4 STS-B 89.0 ± 0.6 89.2 ± 0.3 89.2 ± 0.4 89.2 ± 0.4 CoLA 63.0 ± 1.5 63.9 ± 1.8 63.0 ± 2.5 61.6 ± 1.7 69.6 ± 2.1 90.8 ± 1.3 89.0 ± 0.5 63.4 ± 1.5 69.5 ± 2.5 72.6 ± 1.6 72.3 ± 1.9 90.8 ± 1.3 91.4 ± 0.8 91.0 ± 1.3 89.0 ± 0.6 89.4 ± 0.2 89.6 ± 0.3 63.0 ± 1.5 64.2 ± 1.6 62.4 ± 1.7
Table 2: Mean test performance and standard deviation on four datasets. Numbers that are statistically signiï¬cantly better than the standard setting (left column) are in blue and underlined. The results of Re-init and Longer are copied from Table 1. All experiments use ADAM with debiasing (Section 4). Except Longer, all methods are trained with three epochs. âInt. Taskâ stands for transfering via an intermediate task (MNLI).
7.1 OVERVIEW
Pre-trained Weight Decay Weight decay (WD) is a common regularization technique (Krogh & Hertz, 1992). At each optimization iteration, λw is subtracted from the model parameters, where λ is a hyperparameter for the regularization strength and w is the model parameters. Pre-trained weight decay adapts this method for ï¬ne-tuning pre-trained models (Chelba & Acero, 2004; Daumé III, 2007) by subtracting λ(w â Ëw) from the objective, where Ëw is the pre-trained parameters. Lee et al. (2020) empirically show that pre-trained weight decay works better than conventional weight decay in BERT ï¬ne-tuning and can stabilize ï¬ne-tuning.
Mixout Mixout (Lee et al., 2020) is a stochastic regularization technique motivated by Dropout (Sri- vastava et al., 2014) and DropConnect (Wan et al., 2013). At each training iteration, each model parameter is replaced with its pre-trained value with probability p. The goal is to prevent catastrophic forgetting, and (Lee et al., 2020) proves it constrains the ï¬ne-tuned model from deviating too much from the pre-trained initialization.
Layer-wise Learning Rate Decay (LLRD) LLRD (Howard & Ruder, 2018) is a method that applies higher learning rates for top layers and lower learning rates for bottom layers. This is accomplished by setting the learning rate of the top layer and using a multiplicative decay rate to decrease the learning rate layer-by-layer from top to bottom. The goal is to modify the lower layers that encode more general information less than the top layers that are more speciï¬c to the pre-training task. This method is adopted in ï¬ne-tuning several recent pre-trained models, including XLNet (Yang et al., 2019b) and ELECTRA (Clark et al., 2020).
Transferring via an Intermediate Task Phang et al. (2018) propose to conduct supplementary ï¬ne-tuning on a larger, intermediate task before ï¬ne-tuning on few-sample datasets. They show that this approach can reduce variance across different random trials and improve model performance. Their results show that transferring models ï¬ne-tuned on MNLI (Williams et al., 2018) can lead to signiï¬cant improvement on several downstream tasks including RTE, MRPC, and STS-B. In contrast to the other methods, this approach requires large amount of additional annotated data.
7.2 EXPERIMENTS
We evaluate all methods on RTE, MRPC, STS-B, and CoLA. We ï¬ne-tune a BERTLarge model using the ADAM optimizer with debiasing for three epochs, the default number of epochs used with each of the methods. For intermediate task ï¬ne-tuning, we ï¬ne-tune a BERTLarge model on MNLI and then ï¬ne-tune for our evaluation. For other methods, we perform hyperparameter search with a similar size search space for each method, as described in Appendix H. We do model selection using the average validation performance across 20 random seeds. We additionally report results for standard ï¬ne-tuning with longer training time (Section 6), weight decay, and Re-init (Section 5).
Table 2 provides our results. Compared to published results (Phang et al., 2018; Lee et al., 2020), our test performance for Int. Task (transferring via an intermediate task), Mixout, Pre-trained WD, and WD are generally higher when using the ADAM with debiasing.10 However, we observe less pronounced beneï¬ts for all surveyed methods compared to results originally reported. At
10The numbers in Table 2 are not directly comparable with previously published validation results (Phang et al., 2018; Lee et al., 2020) because we are reporting test performance. However, the relatively large margin between our results and previously published results indicates an improvement. More important, our focus is the relative improvement, or lack of improvement compared to simply training longer.
8
Published as a conference paper at ICLR 2021
times, these methods do not outperform the standard baselines or simply training longer. Using additional annotated data for intermediate task training continues to be effective, leading to consistent improvement over the average performance across all datasets. LLRD and Mixout show less consistent performance impact. We observe no noticeable improvement using pre-trained weight decay and conventional weight decay in improving or stabilizing BERT ï¬ne-tuning in our experiments, contrary to existing work (Lee et al., 2020). This indicates that these methods potentially ease the optimization difï¬culty brought by the debiasing omission in BERTADAM, and when we add the debiasing, the positive effects are reduced.
# 8 CONCLUSION
We have demonstrated that optimization plays a vital role in the few-sample BERT ï¬ne-tuning. First, we show that the debiasing omission in BERTADAM is the main cause of degenerate models on small datasets commonly observed in previous work (Phang et al., 2018; Lee et al., 2020; Dodge et al., 2020). Second, we observe the top layers of the pre-trained BERT provide a detrimental initialization for ï¬ne-tuning and delay learning. Simply re-initializing these layers not only speeds up learning but also leads to better model performance. Third, we demonstrate that the common one-size-ï¬ts-all three-epochs practice for BERT ï¬ne-tuning is sub-optimal and allocating more training time can stabilize ï¬ne-tuning. Finally, we revisit several methods proposed for stabilizing BERT ï¬ne-tuning and observe that their positive effects are reduced with the debiased ADAM. In the future, we plan to extend our study to different pre-training objectives and model architectures, and study how model parameters evolve during ï¬ne-tuning.
ACKNOWLEDGMENTS
We thank Cheolhyoung Lee for his help in reproducing previous work. We thank Lili Yu, Ethan R. Elenberg, Varsha Kishore, and Rishi Bommasani for their insightful comments, and Hugging Face for the Transformers project, which enabled our work.
# REFERENCES
Luisa Bentivogli, Ido Kalman Dagan, Dang Hoa, Danilo Giampiccolo, and Bernardo Magnini. The ï¬fth pascal recognizing textual entailment challenge. In TAC 2009 Workshop, 2009.
Daniel Cer, Mona Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In SemEval-2017, 2017.
Ciprian Chelba and Alex Acero. Adaptation of maximum entropy capitalizer: Little data can help a
# lot. In EMNLP, 2004.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. In ICLR, 2020.
Hal Daumé III. Frustratingly easy domain adaptation. In ACL, 2007.
Yann N Dauphin and Samuel Schoenholz. Metainit: Initializing learning by learning to initialize. In NeurIPS, 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305, 2020.
William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases.
In IWP, 2005.
9
Published as a conference paper at ICLR 2021
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. Allennlp: A deep semantic natural language processing platform. In NLP-OSS, 2018.
Xavier Glorot and Yoshua Bengio. Understanding the difï¬culty of training deep feedforward neural networks. In AISTATS, 2010.
Jian Guo, He He, Tong He, Leonard Lausen, Mu Li, Haibin Lin, Xingjian Shi, Chenguang Wang, Junyuan Xie, Sheng Zha, Aston Zhang, Hang Zhang, Zhi Zhang, Zhongyue Zhang, and Shuai Zheng. Gluoncv and gluonnlp: Deep learning in computer vision and natural language processing. arXiv preprint arXiv:1907.04433, 2019.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. Realm: Retrieval- augmented language model pre-training. arXiv preprint arXiv:2002.08909, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectiï¬ers: Surpassing human-level performance on imagenet classiï¬cation. In ICCV, 2015.
J. Hewitt and P. Liang. Designing and interpreting probes with control tasks. In EMNLP, 2019.
John Hewitt and Christopher D. Manning. A structural probe for ï¬nding syntax in word representa- tions. In NAACL, 2019.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efï¬cient transfer learning for NLP. In ICML, 2019.
Jeremy Howard and Sebastian Ruder. Universal language model ï¬ne-tuning for text classiï¬cation. In ACL, 2018.
Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. First quora dataset release: Question pairs. https://tinyurl.com/y2y8u5ed, 2017.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Anders Krogh and John A. Hertz. A simple weight decay can improve generalization. In NeurIPS, 1992.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. In ICLR, 2020.
Cheolhyoung Lee, Kyunghyun Cho, and Wanmo Kang. Mixout: Effective regularization to ï¬netune large-scale pretrained language models. In ICLR, 2020.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
Xingjian Li, Haoyi Xiong, Haozhe An, Chengzhong Xu, and Dejing Dou. Riï¬e: Backpropagation in depth for deep transfer learning through re-initializing the fully-connected layer. In ICML, 2020.
Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. Linguistic knowledge and transferability of contextual representations. arXiv preprint arXiv:1903.08855, 2019a.
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. In ACL, 2019b.
Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, and Jianfeng Gao. The microsoft toolkit of multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:2002.07972, 2020.
Yang Liu. Fine-tune BERT for extractive summarization. arXiv preprint arXiv:1903.10318, 2019.
10
Published as a conference paper at ICLR 2021
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv preprint arXiv:1907.11692, 2019c.
Brian W Matthews. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta (BBA)-Protein Structure, 1975.
Amil Merchant, Elahe Rahimtoroghi, Ellie Pavlick, and Ian Tenney. What happens to bert embeddings during ï¬ne-tuning? arXiv preprint arXiv:2004.14448, 2020.
Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. On the stability of ï¬ne-tuning bert: Misconceptions, explanations, and strong baselines. arXiv preprint arXiv:2006.04884, 2020.
Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. In ICLR, 2019.
Matthew E. Peters, Sebastian Ruder, and Noah A. Smith. To tune or not to tune? adapting pretrained representations to diverse tasks. arXiv preprint arXiv:1903.05987, 2019.
Jason Phang, Thibault Févry, and Samuel R Bowman. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088, 2018.
Martin Popel and OndËrej Bojar. Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics, 2018.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overï¬tting. JMLR, 2014.
Asa Cooper Stickland and Iain Murray. Bert and pals: Projected attention layers for efï¬cient adaptation in multi-task learning. In ICML, 2019.
Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. How to ï¬ne-tune bert for text classiï¬cation? In CCL, 2019.
Alex Tamkin, Trisha Singh, Davide Giovanardi, and Noah Goodman. Investigating transferability in pretrained language models. In EMNLP, 2020.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. Bert rediscovers the classical nlp pipeline. In ACL, 2019a.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. What do you learn from context? probing for sentence structure in contextualized word representations. In ICLR, 2019b.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. Entity, relation, and event extraction with contextualized span representations. In EMNLP-IJCNLP, 2019.
Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using dropconnect. In ICML, 2013.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. In NeurIPS, 2019a.
11
Published as a conference paper at ICLR 2021
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. In ICLR, 2019b.
Alex Wang, Ian F. Tenney, Yada Pruksachatkun, Phil Yeres, Jason Phang, Haokun Liu, Phu Mon Htut, Katherin Yu, Jan Hula, Patrick Xia, Raghu Pappagari, Shuning Jin, R. Thomas McCoy, Roma Patel, Yinghui Huang, Edouard Grave, Najoung Kim, Thibault Févry, Berlin Chen, Nikita Nangia, Anhad Mohananey, Katharina Kann, Shikha Bordia, Nicolas Patry, David Benton, Ellie Pavlick, and Samuel R. Bowman. jiant 1.3: A software toolkit for research on general-purpose text understanding models. http://jiant.info/, 2019c.
Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability judgments. TACL, 2019.
Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In ACL, 2018.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Râemi Louf, Morgan Funtowicz, and Jamie Brew. Huggingfaceâs transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019.
Wei Yang, Haotian Zhang, and Jimmy Lin. Simple applications of BERT for ad hoc document retrieval. arXiv preprint arXiv:1903.10972, 2019a.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS, 2019b.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In NeurIPS, 2014.
Hongyi Zhang, Yann N. Dauphin, and Tengyu Ma. Residual learning without normalization via better initialization. In ICLR, 2019.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. BERTScore: Evaluating Text Generation with BERT. In ICLR, 2020.
Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. Incorporating bert into neural machine translation. In ICLR, 2020.
12
Published as a conference paper at ICLR 2021
Task RTE QNLI NLI Paraphrase Similarity Acceptibility Sentiment NLI MRPC STS-B CoLA SST-2 QQP Paraphrase # of training samples # of validation samples # of test samples 2.5k 139 139 3.7k 204 205 5.8k 690 690 8.6k 521 521 61.3k 1k 1.8k 104k 1k 5.5k 363k 1k 40k Evaluation metric Majority baseline (val) Majority baseline (test) Acc. 52.9 52.5 F1 81.3 81.2 SCC 0 0 MCC 0 0 Acc. 50.0 49.1 Acc. 50.0 50.5 Acc. 50.0 63.2 MNLI NLI 392k 1k 9.8k Acc. 33.3 31.8
Table 3: The datasets used in this work. We apply non-standard data splits to create test sets. SCC stands for Spearman Correlation Coefï¬cient and MCC stands for Matthews Correlation Coefï¬cient.
# A DATASETS
Table 3 summarizes dataset statistics and describes our validation/test splits. We also provide a brief introduction for each datasets:
RTE Recognizing Textual Entailment (Bentivogli et al., 2009) is a binary entailment classiï¬cation task. We use the GLUE version.
MRPC Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005) is binary classiï¬cation task. Given a pair of sentences, a model has to predict whether they are paraphrases of each other. We use the GLUE version.
STS-B Semantic Textual Similarity Benchmark (Cer et al., 2017) is a regression tasks for estimating sentence similarity between a pair of sentences. We use the GLUE version.
CoLA Corpus of Linguistic Acceptability (Warstadt et al., 2019) is a binary classiï¬cation task for verifying whether a sequence of words is a grammatically correct English sentence. Matthews correlation coefï¬cient (Matthews, 1975) is used to evaluate the performance. We use the GLUE version.
MNLI Multi-Genre Natural Language Inference Corpus (Williams et al., 2018) is a textual entail- ment dataset, where a model is asked to predict whether the premise entails the hypothesis, predicts the hypothesis, or neither. We use the GLUE version.
QQP Quora Question Pairs (Iyer et al., 2017) is a binary classiï¬cation task to determine whether two questions are semantically equivalent (i.e., paraphrase each other). We use the GLUE version.
SST-2 The binary version of the Stanford Sentiment Treebank (Socher et al., 2013) is a binary classiï¬cation task for whether a sentence has positive or negative sentiment. We use the GLUE version.
# B ISOLATING THE IMPACT OF DIFFERENT SOURCES OF RANDOMNESS
The randomness in BERT ï¬ne-tuning comes from three sources: (a) weight initialization, (b) data order, and (c) Dropout regularization (Srivastava et al., 2014). We control the randomness using two separate random number generators: one for weight initialization and the other for both data order and Dropout (both of them affect the stochastic loss at each iteration). We ï¬ne-tune BERT on RTE for three epochs using ADAM with 10 seeds for both random number generators. We compare the standard setup with Re-init 5, where L = 5. This experiment is similar to Dodge et al. (2020), but we use ADAM with debiasing instead of BERTADAM and control for the randomness in Dropout as well. When ï¬xing a random seed for weight initialization, Re-init 5 shares the same initialized classiï¬er weights with the standard baseline. Figure 9 shows the validation accuracy of each individual run as well as the minimum, average, and maximum scores when ï¬xing one of the random seeds. Figure 10 summarizes the standard deviations when one of the random seeds is controlled. We observe several trends. Re-init 5 usually improves the performance regardless of the weight initialization or data order and Dropout. Second, Re-init 5 still reduces the instability when one of the sources of randomness is controlled. Third, the standard deviation of ï¬xing the weight initialization roughly matches the one of controlled data order and Dropout, which aligns with the observation of Dodge et al. (2020).
13
Published as a conference paper at ICLR 2021
Re-init 5 Random Seed for Data Order and Dropout 2.03 4 5 6 7 8 9 Min Mean Max Standard Random Seed for Data Order and Dropout 23 4 5 6 7 8 9 Min Mean Max Random Seed for Initialization Random Seed for Initialization 66 Min 69.87 69.57 66 Mean Max Max
Figure 9: Validation accuracy on RTE with controlled random seeds. The min, mean, and max values of controlling one of the random seeds are also included. Re-init 5 usually improves the validation accuracy.
Standard Re-init 5 Random Seed for Data Order and Dropout Random Seed for Data Order and Dropout Avg. Avg. 2°93 4 5 6 7 8 9 Std Std 2.3 4 5 6 7 8 9 Std std Random Seed for Initialization Random Seed for Initialization std std Avg. Std Avg. Std 0.0 00
Figure 10: The standard deviation of the validation accuracy on RTE with controlled random seeds. We show the standard deviation of ï¬xing either the initialization or data order and Dropout. Re-init 5 consistently reduces the instability regardless of the sources of the randomness.
# C MIXED PRECISION TRAINING
Mixed precision training can accelerate model training while preserving performance by replacing some 32-bit ï¬oating-point computation with 16-bit ï¬oating-point computation. We use mixed precision training in all our experiments using huggingfaceâs Transformers (Wolf et al., 2019). Transformers uses O1-level optimized mixed precision training implemented with the Apex library.11 We evaluate if this mixed precision implementation inï¬uences our results. We ï¬ne-tune BERT with 20 random trials on RTE, MRPC, STS-B, and CoLA. We use two-tailed t-test to test if the distributions of the two methods are statistically different. Table 4 shows the mean and standard deviation of the
# 11https://github.com/NVIDIA/apex
14
Published as a conference paper at ICLR 2021
CoLA MRPC RTE STS-B Mixed precision Full precision 60.3 ± 1.5 59.9 ± 1.5 89.2 ± 1.2 88.7 ± 1.4 71.8 ± 2.1 71.4 ± 2.2 90.1 ± 0.7 90.1 ± 0.7
Table 4: Comparing BERT ï¬ne-tuning with mixed precision and full precision. The difference between the two numbers on any dataset is not statistically signiï¬cant.
Dev Acc. (%) Test Acc. (%) No bias correction Bias correction 86.0 ± 0.3 85.9 ± 0.3 87.0 ± 0.4 86.9 ± 0.3
Table 5: Comparing BERT ï¬ne-tuning with and without bias correction on the MNLI dataset. When we have a large dataset, there is no signiï¬cant difference in using bias correction or not.
test performance. The performance of mixed precision matches the single precision counterpart, and there is no statistically signiï¬cant difference.
# D BIAS-CORRECTION ON MNLI
The focus of this paper is few-sample learning. However, we also experiment with the full MNLI dataset. Table 5 shows that average accuracy over three random runs. The results conï¬rm that there is no signiï¬cant difference in using bias correction or not on such a large dataset. While our recommended practices do not improve training on large datasets, this result shows there is no disadvantage to ï¬ne-tune such models with the same procedure as we propose for few-sample training.
E SUPPLEMENTARY MATERIAL FOR SECTION 4
Effect of ADAM with Debiasing on Convergence. Figure 11 shows the training loss as a function of the number of training iterations. Using bias correction effectively speeds up convergence and reduces the range of the training loss, which is consistent with our observation in Figure 3.
Effect of ADAM with Debiasing on the Expected Validation Performance. Figure 12 shows the expected validation performance as a function of the number of random trials. Comparing to Figure 4, we observe several trends. First, using ADAM with debiasing consistently leads to faster convergence and improved validation performance, which is similar to our observation about the test performance. Second, we observe that the expected validation performance monotonically increases with the number of random trials, contrary to our observation about the test performance. This suggests that using too many random trials may overï¬t to the validation set and hurt generalization performance.
# F SUPPLEMENTARY MATERIAL FOR SECTION 5
Effect of L on Re-init Figure 13 shows the effect of Re-init in ï¬ne-tuning on the eight downsampled datasets. We observe similar trends in Figure 13 and Figure 5. Re-initâs improvement is more pronounced in the wort-case performance across different random trials. Second, the best value of L is different for each dataset.
Effect of Re-init on Model Parameters We use the same setup as in Figure 7 to plot the change in the weights of different Transformer blocks during ï¬ne-tuning on RTE, MRPC, STS-B, and CoLA in Figures 15â18.
Effect of Re-init on Other Models We study more recent pre-trained contexual embedding models beyond BERTLarge. We investigate whether Re-init provides better ï¬ne-tuning initialization in XLNetLarge Yang et al. (2019b), RoBERTaLarge Liu et al. (2019c), BARTLarge Lewis et al. (2019), and ELECTRALarge Clark et al. (2020). XLNet is an autoregressive language model trained by learning
15
Published as a conference paper at ICLR 2021
RTE MRPC STS-B CoLA 1.00 10 â Correction 8 0.75 . 0.6 6 0.75 â No Correction a & 0.50 5 04 3 0.50 & 0.25 0.2 0.25 0 23 92 161 230 34 136 238 340 54 216 378 540° 80, 320 560 800 Steps Steps Steps Steps
Figure 11: Mean (solid lines) and range (shaded region) of training loss during ï¬ne-tuning BERT, across 50 random trials. Bias correction speeds up convergence and reduces the range of the training loss.
RTE MRPC STS-B CoLA _o: 80 0.95 0.92 0.70 9 0.14 aE âââF 3091 Y 0.65 = = H J 0.60 0.68 B 0.89 = 0.90 =. 3 = z S 0.55 ~ 0.62 50:86 é 0.89 =.0.50/ â Correction B 056 20.83 Bj 0.88 0.45 â No Correction o 50; 10 20 30 40 50 0.80 | 10 20 30 40 50 0.87 1 10 20 30 40 50 0.40 10 20 30 40 50 # of Random Trials # of Random Trials # of Random Trials # of Random Trials
Figure 12: Expected validation performance (solid lines) with standard deviation (shaded region) over the number of random trials allocated for ï¬ne-tuning BERT. With bias correction, we can reliably achieve good results with few (i.e., 5 or 10) random trials.
RTE MRPC STS-B CoLA ° 092° 0.910 ° T 066 ° 0.775 T 2? 8 3 8 ; 3 0.905 a a 3 0.04 1 Boo b 2 oso : by : 2 ova 7 Sons j | é | em é 2 f o 8 2 0.895 2 0.00 0.700 : * a a = 2 B 088 E 0.800 fe Standard am Re-init. 0.58 | â Median © Outlier 0.675 0.885 " " 056 wes CMR hensâ ss shoe satis? S50" aes SO fy SES Faepaensnnes® os BEES SN EE sy BEES ay EEE es es eS RTE (1k) MRPC (1k) STS-B (1k) ow CoLA (1k) ° 0.88 Ty ° 0.88 ~ 8 2 oe 0.700 I 7 ° ° aa r ° ° 2 [baa g g eer? 8 0.00 . ; 8 o61s 8 oss a 3 3 E | E | I £0.86 E 0.58 ⬠0.650 " t ⬠5 2 ⬠| | 4 2 oss I a ° 2056 ri 0.625 i PA 0.84 a ~ = ° = > > 054 ° m6 0.600 082 VgPghgd gO VegP pr gg OR Sad gdahyrah gg RS Dr gre MO Wweâ ss Shes Se SOR eo SCA oy EEE SS EELS a PEE EES oo EELS ae ae eo SST (1k) QNLI (1k) QQP (1k) MNLI (1k) = 0.70 . 8 0.90 a 4 - bea 0.82 _poinee 8 # LD Be er dy ane Foe 3 oss are 3 ] 3 a 2081 a E 0.88 E 0.80 ° â E 0.60 é é S030 & Loge ° £0.78 2 2055 ad ad 0.79 a = ° F076 s S050 0.84 ° ° 0.78 0.45 ee SOS Sse SO ee SO ee SORA Se PRES se Sew wee oo PRES oy EES ew we
Figure 13: Validation performance distribution of re-initializing different number of layers of BERT on the downsampled datasets.
all permutations of natural language sentences. RoBERTa is similar to BERT in terms of model architecture but is only pre-trained on the mask language modeling task only, but for longer and on
16
Published as a conference paper at ICLR 2021
Model BERT Dataset all Learning Rate 2 Ã 10â5 Training Epochs / Steps 3 epochs Batch Size Warmup Ratio / Steps 32 10% XLNet RTE MRPC STS-B CoLA 3 Ã 10â5 5 Ã 10â5 5 Ã 10â5 3 Ã 10â5 800 steps 800 steps 3000 steps 1200 steps 32 32 32 128 200 steps 200 steps 500 steps 120 steps RoBERTa RTE MRPC STS-B CoLA 2 Ã 10â5 1 Ã 10â5 2 Ã 10â5 1 Ã 10â5 2036 steps 2296 steps 3598 steps 5336 steps 16 16 16 16 122 steps 137 steps 214 steps 320 steps ELECTRA RTE MRPC STS-B CoLA 5 Ã 10â5 5 Ã 10â5 5 Ã 10â5 5 Ã 10â5 10 epochs 3 epochs 10 epochs 3 epochs 32 32 32 32 10% 10% 10% 10% BART RTE MRPC STS-B CoLA 1 Ã 10â5 2 Ã 10â5 2 Ã 10â5 2 Ã 10â5 1018 steps 1148 steps 1799 steps 1334 steps 32 64 32 64 61 steps 68 steps 107 steps 80 steps LLRD - - - - - - - - - 0.9 0.9 0.9 0.9 - - - -
Table 6: Fine-tuning hyper-parameters of BERT and its variants as reported in the ofï¬cial repository of each model.
RTE MRPC STS-B CoLA Standard Re-init Standard Re-init Standard Re-init Standard XLNet BART Re-init
Table 7: Average Test performance with standard deviation on four small datasets with four different pre-trained models. For each setting, the better numbers are bolded and are in blue if the improvement is statistically signiï¬cant.
more data. BART is a sequence-to-sequence model trained as a denoising autoencoder. ELECTRA is a BERT-like model trained to distinguish tokens generated by masked language model from tokens drawn from the natural distribution. Together, they represent a diverse range of modeling choices in pre-training, including different model architectures, objectives, data, and training strategies. We use ADAM with debiasing to ï¬ne-tune these models on RTE, MRPC, STS-B, and COLA, using the hyperparameters that are either described in the paper or in the ofï¬cial repository of each model. Table 6 summarizes the hyper-parameters of each model for each dataset. We use the huggingfaceâs Transformers library (Wolf et al., 2019). The experimental setup is kept the same as our other experiments. Table 7 displays the average test performance on these datasets. We observe that several models suffer from high instability on these datasets and in most cases, Re-init can reduce the performance variance. We observe that for some models, like XLNetLarge or RoBERTaLarge, Re-init can improve the average performance and reduce variance. However, the behavior of Re-init varies signiï¬cantly across different models and Re-init have less signiï¬cant improvement for ELECTRALarge and BARTLarge. Further study of the entire model family requires signiï¬cant computational resources, and we leave it as an important direction for future work.
# G SUPPLEMENTARY MATERIAL FOR SECTION 6
Figure 14 plots the validation performance as a function of the number of training iteration, using the same setting as in Figure 8. Similar to our observations in Figure 8, we ï¬nd that training longer generally improves ï¬ne-tuning performance and reduces the gap between standard ï¬ne-tuning and Re-init. On MNLI, Re-init still outperforms standard ï¬ne-tuning.
17
Published as a conference paper at ICLR 2021
SST (1k) QNLI (1k) QQP (1k) MNLI (1k) 3 3 8 ey el E 0.90 5 ] a 0.82 5 5 & 0.80 =I & 5 5 = 0.80, = â° & 0.88 âRe-init2 & âReinit4 2° âReinit3 & â Re-init 5 S â Standard & â Standard & â Standard S$ 0.5 â Standard 0.75 0.78 96 200 400 800 1600 3200 96 200 400 800 1600 3200 96 200 400 800 1600 3200 96 200 400 800 1600 3200 Training Iterations Training Iterations Training Iterations Training Iterations
Figure 14: Mean (solid lines) and range (shaded region) of validation performance trained with different number of iterations, across eight random trials.
# H EXPERIMENTAL DETAILS IN SECTION 7
The hyperparameter search space allocated for each method in our experiments is: Layerwise Learning Rate Decay (LLRD) We grid search the initial learning rate in {2Ã10â5, 5Ã 10â5, 1 Ã 10â4} and the layerwise decay rate in {0.9, 0.95}.
Mixout We tune the mixout probability p â {0.1, 0.3, 0.5, 0.7, 0.9}.
Weight decay toward the pre-trained weight We tune the regularization strength λ â {10â3, 10â2, 10â1, 100}. Weight decay We tune the regularization strength λ â {10â4, 10â3, 10â2, 10â1}.
18
Published as a conference paper at ICLR 2021
Transformer Block 1 Transformer Block 2 Transformer Block 3 Transformer Block 4 0.8 0.8 0.8 0.6 0.6 0.6 0.4 0.4 04 â Standard â Re-init 6 0.2 = Re-init | â Re-init 10 â Re-init 3 B02 A 0.2 Y 0.0 4 0.0 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 Steps Steps Steps Steps 0.0 L2 Dist. to Initialization esceces: So hb RD & L2 Dist. to Initialization L2 Dist. to Initialization L2 Dist. to Initialization Transformer Block 5 Transformer Block 6 Transformer Block 7 Transformer Block 8 0.8 0.8 0.8 0.6 0.6 0.6 0.4 0.4 04 0.2 202 A 0.2 + 0.0 0.0 0.0 L2 Dist. to Initialization esceces: So hb RD & L2 Dist. to Initialization L2 Dist. to Initialization L2 Dist. to Initialization 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 Steps Steps Steps Steps Transformer Block 9 Transformer Block 10 Transformer Block 11 Transformer Block 12 08 08 08 0.6 0.6 0.6 04 0.4 0.4 02 30.2 50.2 L2 Dist. to Initialization eeseesee: So Rh RA & L2 Dist. to Initialization L2 Dist. to Initialization L2 Dist. to Initialization 0.0 0.0 0.0 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 Steps Steps Steps Steps 8 Transformer Block 13 08 Transformer Block 14 08 Transformer Block 15 08 Transformer Block 16 0.6 0.6 0.6 04 0.4 04 02 3 0.2 5 0.2 L2 Dist. to Initialization L2 Dist. to Initialization Se cs ¢ S s Ss 8b FE 6 L2 Dist. to Initialization L2 Dist. to Initialization 0.0 0.0 0.0 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 Steps Steps Steps Steps 8 Transformer Block 17 Transformer Block 18 Transformer Block 19 08 Transformer Block 20 0.8 0.8 . 0.6 0.6 0.6 04 0.4 04 02 50.2 50.2 y 0.0 0.0 0.0 L2 Dist. to Initialization Se es ¢- fs So Bb BE 8 L2 Dist. to Initialization L2 Dist. to Initialization L2 Dist. to Initialization 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 0 50 100 150 200 250 Steps Steps Steps Steps 8 Transformer Block 21 Transformer Block 22 Transformer Block 23 Transformer Block 24 0.8 0.8 &
0.6 0.4 0.2 L2 Dist. to Initialization 0 0.0
0.6 0.4 30.2 L2 Dist. to Initialization 250 0 0.0
L2 Dist. to Initialization ee:seesee: S bh R QR 200 250 0 50
0.6 04 302 2 Dist. to Initialization 0.0 2507 0
50
100 150 Steps
200
50
100 150 Steps
100 150 Steps
200
50
100 150 Steps
# Figure 15: L2 distance to the initialization during ï¬ne-tuning BERT on RTE.
19
200
250
Published as a conference paper at ICLR 2021
Transformer Block 1 Transformer Block 2 Transformer Block 3 Transformer Block 4 : 1.00 Z 0.25 0.01 1.00 5 0.75 ° a a 0.75 0.50 0.50 â Standard â Re-init 6 0.25 âRe-init 1 â Re-init 10 â Re-init 3 ° i a 0.25 *Y L2 Dist. to Initialization & 5 8 Sa s L2 Dist. to Initialization L2 Dist. to Initialization & 3 L2 Dist. to Initialization 0.00 0 On 0. 004 0.00 0 74 148 222 296 370 74 148 222 296 370 74 148 222 296 370 0 74 148 222 296 370 Steps Steps Steps Steps c Transformer Block5 Transformer Block6 Transformer Block7 Transformer Block 8 J 1.00 -⬠1.00 -E 1.00 S 1.00 3 3 3 3 3 3 3 3 a 0.75 s 0.75 "a 0.75 ici 0.75 0.50 4 0.50 £ 050 = 0.50 g g g g F 0.25 3 025 3 025 B05 a a a a ~ 0.00 Q 0.001 Q 0-00 ~ 0.00 0 74 148 222 296 3704 74 148 222 296 3704 74 148 222 296 370-2 0 74._«148 222 296 370 Steps Steps Steps Steps Transformer Block 9 Transformer Block 10 Transformer Block 11 Transformer Block 12 1.00 5 1.00 0.75 ° a a 0.75 0.50 0.50 0.25 ° i a 0.25 (dai ae ) L2 Dist. to Initialization L2 Dist. to Initialization & 3 L2 Dist. to Initialization L2 Dist. to Initialization £8886 oe 2 Sas 0.00 0. ool 0.00 0 74 148 222 296 370 74 148 222 296 370 74 148 222 296 370 0 74 148 222 296 370 Steps Steps Steps Steps 0 Transformer Block 13 0 Transformer Block 14 o Transformer Block 15 Transformer Block 16 1.01 1 1.00 0.75 ° a a 0.75 0.50 0.50 0.25 ° i a 0.25 Ef. âFT L2 Dist. to Initialization L2 Dist. to Initialization & 3 L2 Dist. to Initialization L2 Dist. to Initialization gs 8 8 & Ss a 0.00 0. oor 0.00 0 74 148 222 296 370 74 148 222 296 370 74 148 222 296 370 0 74 148 222 296 370 Steps Steps Steps Steps c Transformer Block 17. Transformer Block 18 ~ Transformer Block 19 _ Transformer Block 20 & 1.00 & 1.00 & § E E E Eons B | iS 0.75 | ic] 0.75 ic] 0.75 a 3 0.50 2 0.50 E 0.50 2 0.50 2 £ £ 2 z 0.25 Zz 0.25 z 0.25 a 0.25 a a a a 1 0.00 fo 0.001 a 0.00 1 0.00 0 74 148 222 296 3704 74 148 222 296 3704 74 148 222 296 370-2 0 74._:148 222 296 370 Steps Steps Steps Steps c Transformer Block 21. Transformer Block 22. _ Transformer Block 23, Transformer Block 24 8 g g S 1.00 g 208 Fe 3
iN 0.75 3 2 0.50 a
2 Zo a cy 0.00 0
74
148
222
296
A 206 = = 2% oa a ee 370
74
148
222
296
iB 0.75 3 0.50 I
g goss a 00 370
74
148
222
296
3 3075 = = 0.50 2 3025 a cy 0.00 370 0
74
148
222
# Steps
# Steps
# Steps
# Steps
# Figure 16: L2 distance to the initialization during ï¬ne-tuning BERT on MRPC.
20
296
370
Published as a conference paper at ICLR 2021
Transformer Block 1 Ss g S10 3 & 205 â Standard â Re-init 6 zB ~ Re-init 1 â Re-init 10 a âRe-init 3 0.0 0 110 220 330 440 550 Steps c Transformer Block 5 g S10 3 = 2 05 a a 0.0 0 110 220 330 440 550 Steps c Transformer Block 9 g S10 3 & 2 05 a 10.0 4 110 220 330 440 550 Steps 0 Transformer Block 13 o S a -) L2 Dist. to Initialization S 6 110 220 330 440 550 Steps 0 Transformer Block 17 S g a a Ss Bi eece:e i a L2 Dist. to Initialization 8 110 220 330 440 550 Steps 0 Transformer Block 21 S ) g a Initialization Transformer Block 3 laa Transformer Block 2 al o o o a 5 L2 Dist. to Initialization L2 Dist. to Initialization 0.0 0.0 0 110 220 330 440 550 0 110 220 330 440 550 Steps Steps « Transformer Block6 Transformer Block 7 5 5 S10 S10 3 3 a a 205 205 Z Zz a a XQ 0.0 a 0.0 4 0 110 220 330 440 550 110 220 330 440 550 Steps 0 Steps Transformer Block 11 ft 0 110 220 330 440 550 Steps Transformer Block 10 aint 0 110 220 330 440 550 Steps o o S a 5 L2 Dist. to Initialization S S L2 Dist. to Initialization S Transformer Block 14 Transformer Block 15 s s g g glo § 1.00 3 3 0.75 5 & ey 0.5 2 0.50 Zz B 0.25 a a 20.0 © 0.00 Jo 110 220 330 440 550-47 0-110 220 330 440 550 Steps Steps Transformer Block 18 Transformer Block 19 Ss s g 3 @ 1.00 = 1.00 8 8 2 0.75 0.75 & & ig 0250 e050 Z 0.25 B 0.25 a a © 0.00 1 0.00 to 110 220 330 440 50-0110 220 330 440 550 Steps Steps c Transformer Block 22 ~ Transformer Block 23 é § = 1.00 = 1.00 8 8 0.75 â3 0.75 % z Transformer Block 4 la 0 110 220 330 440 550 Steps S = Bs & S L2 Dist. to Initialization Transformer Block 8 | S a L2 Dist. to Initialization S 110 220 330 440 550 Steps Transformer Block 12 ~) L2 Dist. to Initialization S a S 110 220 330 440 550 Steps Transformer Block 16 a Of sas ie a â7 cosc:se S L2 Dist. to Initialization 0 110 220 330 440 550 Steps Transformer Block 20 Ss L2 Dist. to Initialization 110 220 330 440 550 Steps Transformer Block 24 y Initialization
o a Ss » o i L2 Dist. to a e 110 220 330 440 550 Steps oso 05 0.25 [a] 4 0.0 0
110
220 330 Steps
050 3B 0.25 a 1 0.00 440 550 0-110
220 330 440 Steps
550
# S a S L2 Dist. to
0
110
220 330 Steps
# Figure 17: L2 distance to the initialization during ï¬ne-tuning BERT on STS-B.
21
440
550
Published as a conference paper at ICLR 2021
Transformer Block 1 Transformer Block 2 Transformer Block 3 Transformer Block 4 in a L2 Dist. to Initialization ° ° = a S L2 Dist. to Initialization B 8 6 L2 Dist. to Initialization S S rc > S a Ss in â Standard â Re-init 6 â Re-init 1 â Re-init 10 â Re-init 3 L2 Dist. to Initializati © ° S a .0 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 Steps Steps Steps Steps Transformer Block 5 Transformer Block 6 Transformer Block 7 Transformer Block 8 a L2 Dist. to Initialization 2 ° = = S Ba S in L2 Dist. to Initialization 2g 5 L2 Dist. to Initialization S S rc > S a So in L2 Dist. to Initialization 2 ° = = S Bs S in 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 Steps Steps Steps Steps Transformer Block 9 Transformer Block 10 Transformer Block 11 Transformer Block 12 o L2 Dist. to Initialization ° ° = = a S in L2 Dist. to Initialization fe 8 5 5 L2 Dist. to Initialization S S rc > S a Ss in L2 Dist. to Initialization ° ° = = S Bs S in 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 Steps Steps Steps Steps Transformer Block 13 Transformer Block 14 Transformer Block 15 Transformer Block 16 L2 Dist. to Initialization 2 ° = S an S L2 Dist. to Initialization fg 5 L2 Dist. to Initialization S S rc S a S L2 Dist. to Initialization 2 ° = S Bs S 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 Steps Steps Steps Steps Transformer Block 17 Transformer Block 18 Transformer Block 19 Transformer Block 20 L2 Dist. to Initialization ° ° = a S L2 Dist. to Initialization f 8 & L2 Dist. to Initialization eeesees:r She Sas L2 Dist. to Initialization esses = Sb & 3 8S Sh ⬠as .0 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 0 164 328 492 656 820 Steps Steps Steps Steps Transformer Block 21 Transformer Block 22 Transformer Block 23 Transformer Block 24 1.00 s = 8 =
L2 Dist. to Initialization sss be 3 & S$ a e
L2 Dist. to Initialization 28886 R 2 a 00, 656 820 0 164
# L2 Dist. to Initialization esses 8 Sk 6a 820 0
L2 Dist. to Initialization eescsee b & a6 RB ⬠as 00 820 0
°
164
328 492 Steps
328 492 Steps
656
164
328 492 Steps
656
164
328 492 Steps
# Figure 18: L2 distance to the initialization during ï¬ne-tuning BERT on CoLA.
22
656
820 | {
"id": "2004.14448"
} |
2006.05929 | Dataset Condensation with Gradient Matching | As the state-of-the-art machine learning methods in many fields rely on
larger datasets, storing datasets and training models on them become
significantly more expensive. This paper proposes a training set synthesis
technique for data-efficient learning, called Dataset Condensation, that learns
to condense large dataset into a small set of informative synthetic samples for
training deep neural networks from scratch. We formulate this goal as a
gradient matching problem between the gradients of deep neural network weights
that are trained on the original and our synthetic data. We rigorously evaluate
its performance in several computer vision benchmarks and demonstrate that it
significantly outperforms the state-of-the-art methods. Finally we explore the
use of our method in continual learning and neural architecture search and
report promising gains when limited memory and computations are available. | http://arxiv.org/pdf/2006.05929 | Bo Zhao, Konda Reddy Mopuri, Hakan Bilen | cs.CV, cs.LG | null | International Conference on Learning Representations 2021 | cs.CV | 20200610 | 20210308 | 1 2 0 2
r a M 8 ] V C . s c [
3 v 9 2 9 5 0 . 6 0 0 2 : v i X r a
Published as a conference paper at ICLR 2021
# DATASET CONDENSATION WITH GRADIENT MATCHING
Bo Zhao, Konda Reddy Mopuri, Hakan Bilen School of Informatics, The University of Edinburgh {bo.zhao, kmopuri, hbilen}@ed.ac.uk
# ABSTRACT
As the state-of-the-art machine learning methods in many ï¬elds rely on larger datasets, storing datasets and training models on them become signiï¬cantly more expensive. This paper proposes a training set synthesis technique for data-efï¬cient learning, called Dataset Condensation, that learns to condense large dataset into a small set of informative synthetic samples for training deep neural networks from scratch. We formulate this goal as a gradient matching problem between the gradients of deep neural network weights that are trained on the original and our synthetic data. We rigorously evaluate its performance in several computer vision benchmarks and demonstrate that it signiï¬cantly outperforms the state-of-the-art methods1. Finally we explore the use of our method in continual learning and neural architecture search and report promising gains when limited memory and computations are available.
# INTRODUCTION
Large-scale datasets, comprising millions of samples, are becoming the norm to obtain state-of- the-art machine learning models in multiple ï¬elds including computer vision, natural language pro- cessing and speech recognition. At such scales, even storing and preprocessing the data becomes burdensome, and training machine learning models on them demands for specialized equipment and infrastructure. An effective way to deal with large data is data selection â identifying the most repre- sentative training samples â that aims at improving data efï¬ciency of machine learning techniques. While classical data selection methods, also known as coreset construction (Agarwal et al., 2004; Har-Peled & Mazumdar, 2004; Feldman et al., 2013), focus on clustering problems, recent work can be found in continual learning (Rebufï¬ et al., 2017; Toneva et al., 2019; Castro et al., 2018; Aljundi et al., 2019) and active learning (Sener & Savarese, 2018) where there is typically a ï¬xed budget in storing and labeling training samples respectively. These methods commonly ï¬rst deï¬ne a criterion for representativeness (e.g. in terms of compactness (Rebufï¬ et al., 2017; Castro et al., 2018), diversity (Sener & Savarese, 2018; Aljundi et al., 2019), forgetfulness (Toneva et al., 2019)), then select the representative samples based on the criterion, ï¬nally use the selected small set to train their model for a downstream task.
Unfortunately, these methods have two shortcomings: they typically rely on i) heuristics (e.g. pick- ing cluster centers) that does not guarantee any optimal solution for the downstream task (e.g. image classiï¬cation), ii) presence of representative samples, which is neither guaranteed. A recent method, Dataset Distillation (DD) (Wang et al., 2018) goes beyond these limitations by learning a small set of informative images from large training data. In particular, the authors model the network param- eters as a function of the synthetic training data and learn them by minimizing the training loss over the original training data w.r.t. synthetic data. Unlike in the coreset methods, the synthesized data are directly optimized for the downstream task and thus the success of the method does not rely on the presence of representative samples.
Inspired from DD (Wang et al., 2018), we focus on learning to synthesize informative samples that are optimized to train neural networks for downstream tasks and not limited to individual samples in original dataset. Like DD, our goal is to obtain the highest generalization performance with a model trained on a small set of synthetic images, ideally comparable performance to that of a model trained on the original images (see Figure 1(a)). In particular, we investigate the following
1The implementation is available at https://github.com/VICO-UoE/DatasetCondensation.
1
Published as a conference paper at ICLR 2021
a: test (a) \ (b) i Large training set Large training set ! 1 Vv 1 > Comparable ! a Forward pass ' Update 1 Pa â ' Backpropagation 1 1 Ly Ly i 1 synthetic set | [] train test Small synthetic set Small synthetic set
Figure 1: Dataset Condensation (left) aims to generate a small set of synthetic images that can match the performance of a network trained on a large image dataset. Our method (right) realizes this goal by learning a synthetic set such that a deep network trained on it and the large set produces similar gradients w.r.t. its weights. The synthetic data can later be used to train a network from scratch in a small fraction of the original computational load. CE denotes Cross-Entropy.
questions. Is it possible to i) compress a large image classiï¬cation dataset into a small synthetic set, ii) train an image classiï¬cation model on the synthetic set that can be further used to classify real images, iii) learn a single set of synthetic images that can be used to train different neural network architectures? To this end, we propose a Dataset Condensation method to learn a small set of âcondensedâ synthetic samples such that a deep neural network trained on them obtains not only similar performance but also a close solution to a network trained on the large training data in the network parameter space. We formulate this goal as a minimization problem between two sets of gradients of the network parameters that are computed for a training loss over a large ï¬xed training set and a learnable condensed set (see Figure 1(b)). We show that our method enables effective learning of synthetic images and neural networks trained on them, outperforms (Wang et al., 2018) and coreset methods with a wide margin in multiple computer vision benchmarks. In addition, learning a compact set of synthetic samples also beneï¬ts other learning problems when there is a ï¬xed budget on training images. We show that our method outperforms popular data selection methods by providing more informative training samples in continual learning. Finally, we explore a promising use case of our method in neural architecture search, and show that â once our condensed images are learned â they can be used to train numerous network architectures extremely efï¬ciently.
Our method is related to knowledge distillation (KD) techniques (Hinton et al., 2015; BuciluËa et al., 2006; Ba & Caruana, 2014; Romero et al., 2014) that transfer the knowledge in an ensemble of models to a single one. Unlike KD, we distill knowledge of a large training set into a small synthetic set. Our method is also related to Generative Adversarial Networks (Goodfellow et al., 2014a; Mirza & Osindero, 2014; Radford et al., 2015) and Variational AutoEncoders (Kingma & Welling, 2013) that synthesize high-ï¬delity samples by capturing the data distribution. In contrast, our goal is to generate informative samples for training deep neural networks rather than to produce âreal-lookingâ samples. Finally our method is related to the methods that produce image patches by projecting the feature activations back to the input pixel space (Zeiler & Fergus, 2014), reconstruct the input image by matching the feature activations (Mahendran & Vedaldi, 2015), recover private training images for given training gradients (Zhu et al., 2019; Zhao et al., 2020), synthesize features from semantic embeddings for zero-shot learning (Sariyildiz & Cinbis, 2019). Our goal is however to synthesize a set of condensed training images not to recover the original or missing training images.
In the remainder of this paper, we ï¬rst review the problem of dataset condensation and introduce our method in section 2, present and analyze our results in several image recognition benchmarks in sec- tion 3.1, showcase applications in continual learning and network architecture search in section 3.2, and conclude the paper with remarks for future directions in section 4.
# 2 METHOD
# 2.1 DATASET CONDENSATION
Suppose we are given a large dataset consisting of |T | pairs of a training image and its class label T = {(xi, yi)}||T | i=1 where x â X â Rd, y â {0, . . . , C â 1}, X is a d-dimensional input space and C is the number of classes. We wish to learn a differentiable function Ï (i.e. deep neural network)
2
Published as a conference paper at ICLR 2021
with parameters θ that correctly predicts labels of previously unseen images, i.e. y = Ïθ(x). One can learn the parameters of this function by minimizing an empirical loss term over the training set:
θT = arg min LT (θ) θ (1)
where £7(0) = al Vewer (¢0(@),y) » £(-,+) is a task specific loss (i.e. cross-entropy) and 07 is the minimizer of £7. The generalization performance of the obtained model ¢gr can be written as Ex~ pp [¢(der (@), y)] where Pp is the data distribution. Our goal is to generate a small set of condensed synthetic samples with their labels, S = {(s;,y HS where s ⬠Rand y ⬠J, |S| < |T]. Similar to eq. (ip. once the condensed set is learned, one can train ¢ on them as follows
θS = arg min LS (θ) θ (2)
where £5(0) = roi XY veyes (¢o(s),y) and 9° is the minimizer of C5. As the synthetic set S is significantly smaller (2-3 orders of magnitude), we expect the optimization in eq. (2) to be significantly faster than that in eq. (I). We also wish the generalization performance of ¢gs to be close to dg, i.e. Exx Pp [C(dgr (@), y)] ~ Ex~ pp [â¬(G9s (x), y)] over the real data distribution Pp. Discussion. The goal of obtaining comparable generalization performance by training on the con- densed data can be formulated in different ways. One approach, which is proposed in (Wang et al. 2018) and extended in (Sucholutsky & Schonlau| 2019} |Bohdal et al. 2020} Such et al. 2020), is to pose the parameters @° as a function of the synthetic data S:
S â = arg min LT (θS (S)) subject to θS (S) = arg min LS (θ). S θ (3)
The method aims to ï¬nd the optimum set of synthetic images S â such that the model ÏθS trained on them minimizes the training loss over the original data. Optimizing eq. (3) involves a nested loop optimization and solving the inner loop for θS (S) at each iteration to recover the gradients for S which requires a computationally expensive procedure â unrolling the recursive computation graph for S over multiple optimization steps for θ (see (Samuel & Tappen, 2009; Domke, 2012)). Hence, it does not scale to large models and/or accurate inner-loop optimizers with many steps. Next we propose an alternative formulation for dataset condensation.
# 2.2 DATASET CONDENSATION WITH PARAMETER MATCHING
Here we aim to learn S such that the model ¢gs trained on them achieves not only comparable generalization performance to ¢g7 but also converges to a similar solution in the parameter space (i.e. 0S ~ OT). Let ¢@ be a locally smooth functior}| similar weights (0° ~ 07) imply similar mappings in a local neighborhood and thus generalization performance, i.e. Ey. pp [¢(¢97 (@), y)| & Eg Pp [C(d9s (a), y)]. Now we can formulate this goal as
min S D(θS , θT ) subject to θS (S) = arg min θ LS (θ) (4)
where θT = arg minθ LT (θ) and D(·, ·) is a distance function. In a deep neural network, θT typically depends on its initial values θ0. However, the optimization in eq. (4) aims to obtain an optimum set of synthetic images only for one model ÏθT with the initialization θ0, while our actual goal is to generate samples that can work with a distribution of random initializations Pθ0. Thus we modify eq. (4) as follows:
min S Eθ0â¼Pθ0 [D(θS (θ0), θT (θ0))] subject to θS (S) = arg min θ LS (θ(θ0)) (5)
where θT = arg minθ LT (θ(θ0)). For brevity, we use only θS and θT to indicate θS (θ0) and θT (θ0) respectively in the next sections. The standard approach to solving eq. (5) employs implicit differentiation (see (Domke, 2012) for details), which involves solving an inner loop optimization for θS . As the inner loop optimization θS (S) = arg minθ LS (θ) can be computationally expensive in
2Local smoothness is frequently used to obtain explicit ï¬rst-order local approximations in deep networks (e.g. see (Rifai et al., 2012; Goodfellow et al., 2014b; Koh & Liang, 2017)).
3
Published as a conference paper at ICLR 2021
case of large-scale models, one can adopt the back-optimization approach in (Domke, 2012) which re-deï¬nes θS as the output of an incomplete optimization:
θS (S) = opt-algθ(LS (θ), Ï) (6)
where opt-alg is a speciï¬c optimization procedure with a ï¬xed number of steps (Ï). In practice, θT for different initializations can be trained ï¬rst in an ofï¬ine stage and then used as the target parameter vector in eq. (5). However, there are two potential issues by learning to regress θT as the target vector. First the distance between θT and intermediate values of θS can be too big in the parameter space with multiple local minima traps along the path and thus it can be too challenging to reach. Second opt-alg involves a limited number of optimization steps as a trade- off between speed and accuracy which may not be sufï¬cient to take enough steps for reaching the optimal solution. These problems are similar to those of (Wang et al., 2018), as they both involve parameterizing θS with S and θ0.
# 2.3 DATASET CONDENSATION WITH CURRICULUM GRADIENT MATCHING
Here we propose a curriculum based approach to address the above mentioned challenges. The key idea is that we wish θS to be close to not only the ï¬nal θT but also to follow a similar path to θT throughout the optimization. While this can restrict the optimization dynamics for θ, we argue that it also enables a more guided optimization and effective use of the incomplete optimizer. We can now decompose eq. (5) into multiple subproblems:
T-1 min E,~ Po, [D> D(97,07)] subject to «) t=0 O7,1(S) = opt-alge(L*(67),s°) and Of, = opt-alge(L7 (67 ),<7)
where T is the number of iterations, Ï S and Ï T are the numbers of optimization steps for θS and θT respectively. In words, we wish to generate a set of condensed samples S such that the network parameters trained on them (θS t ) at each iteration t. In our preliminary experiments, we observe that θS t+1, which is parameterized with t , θT t+1 by updating S and minimizing D(θS S, can successfully track θT
t ) close to zero. In the case of one step gradient descent optimization for opt-alg, the update rule is:
t+1 â θS θS t â ηθâθLS (θS t ) and θT t+1 â θT t â ηθâθLT (θT t ), (8)
where ηθ is the learning rate. Based on our observation (D(θS tion in eq. (7) by replacing θT t , θT t and use θ to denote θS in the rest of the paper: t ) â 0), we simplify the formula- t with θS
T-1 min Eo,~P9, [D> D(VoL5 (6), VoLT (81))]- (9) t=0
We now have a single deep network with parameters θ trained on the synthetic set S which is optimized such that the distance between the gradients for the loss over the training samples LT w.r.t. θ and the gradients for the loss over the condensed samples LS w.r.t. θ is minimized. In words, our goal reduces to matching the gradients for the real and synthetic training loss w.r.t. θ via updating the condensed samples. This approximation has the key advantage over (Wang et al., 2018) and eq. (5) that it does not require the expensive unrolling of the recursive computation graph over the previous parameters {θ0, . . . , θtâ1}. The important consequence is that the optimization is signiï¬cantly faster, memory efï¬cient and thus scales up to the state-of-the-art deep neural networks (e.g. ResNet (He et al., 2016)).
Discussion. The synthetic data contains not only samples but also their labels (s, y) that can be jointly learned by optimizing eq. (9) in theory. However, their joint optimization is challenging, as the content of the samples depend on their label and vice-versa. Thus in our experiments we learn to synthesize images for ï¬xed labels, e.g. one synthetic image per class.
4
Published as a conference paper at ICLR 2021
Algorithm. We depict the optimization details in Alg. 1. At the outer level, it contains a loop over random weight initializations, as we want to obtain condensed images that can later be used to train previously unseen models. Once θ is randomly initialized, we use Ïθ to ï¬rst compute the loss over both the training samples (LT ), synthetic samples (LS ) and their gradients w.r.t. θ, then optimize the synthetic samples S to match these gradients âθLS to âθLT by applying ÏS gradient descent steps with learning rate ηS . We use the stochastic gradient descent optimization for both opt-algθ and opt-algS . Next we train θ on the updated synthetic images by minimizing the loss LS with learning rate ηθ for Ïθ steps. Note that we sample each real and synthetic batch pair from T and S containing samples from a single class and the synthetic data for each class are separately (or parallelly) updated at each iteration (t) for the following reasons: i) this reduces memory use at train time, ii) imitating the mean gradients w.r.t. the data from single class is easier compared to those of multiple classes. This does not bring any extra computational cost.
Algorithm 1: Dataset condensation with gradient matching Input: Training set T
1 Required: Randomly initialized set of synthetic samples S for C classes, probability distribution over randomly initialized weights Pθ0 , deep neural network Ïθ, number of outer-loop steps K, number of inner-loop steps T , number of steps for updating weights Ïθ and synthetic samples ÏS in each inner-loop step respectively, learning rates for updating weights ηθ and synthetic samples ηS .
kwon yaw fork =0,---,K â1do Initialize 99 ~ Pay fort =0,---,Tâ1do forc =0,---,Câ1do Sample a minibatch pair BI ~ T and BS ~S > BI and Be are of the same class c. Compute LT = Ter; Dee wenr (Go. (@),y) and LE = ys] Dye wens â(eo (8)>Â¥) Update S. + opt-algs(D(VoLl3 (01), Val (0:)), ss, ns) Update 0,41 <â opt-alge(L5(4), se. eo) > Use the whole S Output: Ss
Gradient matching loss. The matching loss D(-,-) in eq. (9) measures the distance between the gradients for £5 and £7 w.r.t. 8. When ¢o is a multi-layered neural network, the gradients corre- spond to a set of learnable 2D (out x in) and 4D (out x inxhxw) weights for each fully connected (FC) and convolutional layer resp where out, in, h, w are number of output and input channels, kernel height and width resp. The matching loss can be decomposed into a sum of layerwise losses as D(VeLlS, VoL) = iy d(VeawL>, Vou LT) where I is the layer index, L is the number of layers with weights and
â Aj. - Bi. ) d(A,B) = 1- ââ_â. 10 (A.B) > ( [Av IIB] 0)
i=1
where Ai· and Bi· are ï¬attened vectors of gradients corresponding to each output node i, which is in dimensional for FC weights and inÃhÃw dimensional for convolutional weights. In contrast to (Lopez-Paz et al., 2017; Aljundi et al., 2019; Zhu et al., 2019) that ignore the layer-wise structure by ï¬attening tensors over all layers to one vector and then computing the distance between two vectors, we group them for each output node. We found that this is a better distance for gradient matching (see the supplementary) and enables using a single learning rate across all layers.
3 EXPERIMENTS
# 3.1 DATASET CONDENSATION
First we evaluate classiï¬cation performance with the condensed images on four standard benchmark datasets: digit recognition on MNIST (LeCun et al., 1998), SVHN (Netzer et al., 2011) and object classiï¬cation on FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009). We test our method using six standard deep network architectures: MLP, ConvNet (Gidaris & Komodakis, 2018), LeNet (LeCun et al., 1998), AlexNet (Krizhevsky et al., 2012), VGG-11 (Simonyan & Zis- serman, 2014) and ResNet-18 (He et al., 2016). MLP is a multilayer perceptron with two nonlinear hidden layers, each has 128 units. ConvNet is a commonly used modular architecture in few-shot
5
Published as a conference paper at ICLR 2021
Img/Cls Ratio % Random Coreset Selection Herding K-Center Forgetting Ours Whole Dataset MNIST 1 10 50 0.017 0.17 0.83 64.9±3.5 95.1±0.9 97.9±0.2 89.2±1.6 93.7±0.3 94.9±0.2 89.3±1.5 84.4±1.7 97.4±0.3 35.5±5.6 68.1±3.3 88.2±1.2 91.7±0.5 97.4±0.2 98.8±0.2 99.6±0.0 FashionMNIST 1 10 50 0.017 0.17 0.83 51.4±3.8 73.8±0.7 82.5±0.7 67.0±1.9 71.1±0.7 71.9±0.8 66.9±1.8 54.7±1.5 68.3±0.8 42.0±5.5 53.9±2.0 55.0±1.1 70.5±0.6 82.3±0.4 83.6±0.4 93.5±0.1 SVHN 1 10 50 0.014 0.14 0.7 14.6±1.6 35.1±4.1 70.9±0.9 20.9±1.3 50.5±3.3 72.6±0.8 21.0±1.5 14.0±1.3 20.1±1.4 12.1±1.7 16.8±1.2 27.2±1.5 31.2±1.4 76.1±0.6 82.3±0.3 95.4±0.1 CIFAR10 1 10 50 0.02 0.2 1 14.4±2.0 26.0±1.2 43.4±1.0 21.5±1.2 31.6±0.7 40.4±0.6 21.5±1.3 14.7±0.9 27.0±1.4 13.5±1.2 23.3±1.0 23.3±1.1 28.3±0.5 44.9±0.5 53.9±0.5 84.8±0.1
Table 1: The performance comparison to coreset methods. This table shows the testing accuracies (%) of different methods on four datasets. ConvNet is used for training and testing. Img/Cls: image(s) per class, Ratio (%): the ratio of condensed images to whole training set.
learning (Snell et al., 2017; Vinyals et al., 2016; Gidaris & Komodakis, 2018) with D duplicate blocks, and each block has a convolutional layer with W (3 à 3) ï¬lters, a normalization layer N , an activation layer A and a pooling layer P , denoted as [W, N, A, P ] à D. The default ConvNet (unless speciï¬ed otherwise) includes 3 blocks, each with 128 ï¬lters, followed by InstanceNorm (Ulyanov et al., 2016), ReLU and AvgPooling modules. The ï¬nal block is followed by a linear classiï¬er. We use Kaiming initialization (He et al., 2015) for network weights. The synthetic images can be initialized from Gaussian noise or randomly selected real training images. More details about the datasets, networks and hyper-parameters can be found in the supplementary.
The pipeline for dataset condensation has two stages: learning the condensed images (denoted as C) and training classiï¬ers from scratch on them (denoted as T). Note that the model architectures used in two stages might be different. For the coreset baselines, the coreset is selected in the ï¬rst stage. We investigate three settings: 1, 10 and 50 image/class learning, which means that the condensed set or coreset contains 1, 10 and 50 images per class respectively. Each method is run for 5 times, and 5 synthetic sets are generated in the ï¬rst stage; each generated synthetic set is used to train 20 randomly initialized models in the second stage and evaluated on the test set, which amounts to evaluating 100 models in the second stage. In all experiments, we report the mean and standard deviation of these 100 testing results.
Baselines. We compare our method to four coreset baselines (Random, Herding, K-Center and Forgetting) and also to DD (Wang et al., 2018). In Random, the training samples are randomly selected as the coreset. Herding baseline, which selects closest samples to the cluster center, is based on (Welling, 2009) and used in (Rebufï¬ et al., 2017; Castro et al., 2018; Wu et al., 2019; Belouadah & Popescu, 2020). K-Center (Wolf, 2011; Sener & Savarese, 2018) picks multiple center points such that the largest distance between a data point and its nearest center is minimized. For Herding and K-Center, we use models trained on the whole dataset to extract features, compute l2 distance to centers. Forgetting method (Toneva et al., 2019) selects the training samples which are easy to forget during training. We do not compare to GSS-Greedy (Aljundi et al., 2019), because it is also a similarity based greedy algorithm like K-Center, but GSS-Greedy trains an online learning model to measure the similarity of samples, which is different from general image classiï¬cation problem. More detailed comparisons can be found in the supplementary.
Comparison to coreset methods. We ï¬rst compare our method to the coreset baselines on MNIST, FashionMNIST, SVHN and CIFAR10 in Table 1 using the default ConvNet in classiï¬- cation accuracy. Whole dataset indicates training on the whole original set which serves as an approximate upper-bound performance. First we observe that our method outperforms all the base- lines signiï¬cantly and achieves a comparable result (98.8%) in case of 50 images per class to the upper bound (99.6%) in MNIST which uses two orders of magnitude more training images per class (6000). We also obtain promising results in FashionMNIST, however, the gap between our method and upper bound is bigger in SVHN and CIFAR10 which contain more diverse images with varying foregrounds and backgrounds. We also observe that, (i) the random selection baseline is competitive to other coreset methods in 10 and 50 images per class and (ii) herding method is on average the best coreset technique. We visualize the condensed images produced by our method under 1 image/class setting in Figure 2. Interestingly they are interpretable and look like âprototypesâ of each class.
6
Published as a conference paper at ICLR 2021
ol 12/3/45] él7/8] 9) MLP ConvNet = Ea LeNet AlexNet VGG ResNet mer ale
C\T MLP ConvNet LeNet AlexNet VGG ResNet 70.5±1.2 63.9±6.5 77.3±5.8 70.9±11.6 53.2±7.0 80.9±3.6 ConvNet 69.6±1.6 91.7±0.5 85.3±1.8 85.1±3.0 83.4±1.8 90.0±0.8 71.0±1.6 90.3±1.2 85.0±1.7 84.7±2.4 80.3±2.7 89.0±0.8 LeNet AlexNet 72.1±1.7 87.5±1.6 84.0±2.8 82.7±2.9 81.2±3.0 88.9±1.1 VGG 70.3±1.6 90.1±0.7 83.9±2.7 83.4±3.7 81.7±2.6 89.1±0.9 ResNet 73.6±1.2 91.6±0.5 86.4±1.5 85.4±1.9 83.4±2.4 89.4±0.9 MLP
Figure 2: Visualization of condensed 1 im- age/class with ConvNet for MNIST, Fashion- MNIST, SVHN and CIFAR10.
Table 2: Cross-architecture performance in testing accu- racy (%) for condensed 1 image/class in MNIST.
Dataset Img/Cls DD Ours Whole Dataset MNIST CIFAR10 1 10 1 10 85.0±1.6 79.5±8.1 93.9±0.6 - 24.2±0.9 36.8±1.2 39.1±1.2 - 99.5±0.0 83.1±0.2 Performance (%) Correlation Time cost (min) Storage (imgs) 76.2 -0.21 18.8 102 76.2 -0.20 18.8 102 84.5 0.79 18.8 102 84.5 0.42 18.8 104 85.9 1.00 8604.3 5 à 104
Table 3: Comparison to DD (Wang et al., 2018) in terms of testing accuracy (%).
Table 4: Neural Architecture Search. Methods are compared in performance, ranking correlation, time and memory cost.
Comparison to DD (Wang et al., 2018). Unlike the setting in Table 1, DD (Wang et al., 2018) reports results only for 10 images per class on MNIST and CIFAR10 over LeNet and AlexCifarNet (a customized AlexNet). We strictly follow the experimental setting in (Wang et al., 2018), use the same architectures and report our and their original results in Table 3 for a fair comparison. Our method achieves signiï¬cantly better performance than DD on both benchmarks; obtains 5% higher accuracy with only 1 synthetic sample per class than DD with 10 samples per class. In addition, our method obtains consistent results over multiple runs with a standard deviation of only 0.6% on MNIST, while DDâs performance signiï¬cantly vary over different runs (8.1%). Finally our method trains 2 times faster than DD and requires 50% less memory on CIFAR10 experiments. More detailed runtime and qualitative comparison can be found in the supplementary.
Cross-architecture generalization. Another key advantage of our method is that the condensed images learned using one architecture can be used to train another unseen one. Here we learn 1 con- densed image per class for MNIST over a diverse set of networks including MLP, ConvNet (Gidaris & Komodakis, 2018), LeNet (LeCun et al., 1998), AlexNet (Krizhevsky et al., 2012), VGG-11 (Si- monyan & Zisserman, 2014) and ResNet-18 (He et al., 2016) (see Table 2). Once the condensed sets are synthesized, we train every network on all the sets separately from scratch and evaluate their cross architecture performance in terms of classiï¬cation accuracy on the MNIST test set. Table 2 shows that the condensed images, especially the ones that are trained with convolutional networks, perform well and are thus architecture generic. MLP generated images do not work well for training convolutional architectures which is possibly due to the mismatch between translation invariance Interestingly, MLP achieves better performance properties of MLP and convolutional networks. with convolutional network generated images than the MLP generated ones. The best results are obtained in most cases with ResNet generated images and ConvNet or ResNet as classiï¬ers which is inline with their performances when trained on the original dataset.
Number of condensed images. We also study the test performance of a ConvNet trained on them for MNIST, FashionMNIST, SVHN and CIFAR10 for various number of condensed images per class in Figure 3 in absolute and relative terms â normalized by its upper-bound. Increasing the number of condensed images improves the accuracies in all benchmarks and further closes the gap with the upper-bound performance especially in MNIST and FashionMNIST, while the gap remains larger in SVHN and CIFAR10. In addition, our method outperforms the coreset method - Herding by a large margin in all cases.
Activation, normalization & pooling. We also study the effect of various activation (sigmoid, ReLU (Nair & Hinton, 2010; Zeiler et al., 2013), leaky ReLU (Maas et al., 2013)), pooling (max, average) and normalization functions (batch (Ioffe & Szegedy, 2015), group (Wu & He, 2018), layer (Ba et al., 2016), instance norm (Ulyanov et al., 2016)) and have the following observations: i) leaky ReLU over ReLU and average pooling over max pooling enable learning better condensed images, as they allow for denser gradient ï¬ow; ii) instance normalization obtains better classiï¬ca-
7
Published as a conference paper at ICLR 2021
MNIST FashionMNIST 98 99 90 80 â ours 90 â ours © a © & Relative accuracy (%) © a Absolute accuracy (%) Testing accuracy (%) SVHN CIFARIO 80 80 50 60 92) ââ Herding w. KD 60 60 40 â urs w. KD 5 40 91 - Herding w.o. KD 20:8. es 202 40 â Herding 40 30 â Herding ---- Ours w.o. KD 20 = os | 20 â ours 90 i 10 20 30 40 50 i 10 20 30 40 50 Tl T2 73 images/class images/class Training stage
Figure 3: Absolute and relative testing accuracies for varying the number of condensed images/class for MNIST, FashionMNIST, SVHN and CIFAR10. The relative accuracy means the ratio compared to its upper- bound, i.e. training with the whole dataset.
Figure 4: Continual learning performance in accuracy (%). Herding denotes the original E2E (Castro et al., 2018). T1, T2, T3 are three learning stages. The performance at each stage is the mean testing accuracy on all learned tasks.
tion performance than its alternatives when used in the networks that are trained on a small set of condensed images. We refer to the supplementary for detailed results and discussion.
3.2 APPLICATIONS
Continual Learning First we apply our method to a continual-learning scenario (Rebufï¬ et al., 2017; Castro et al., 2018) where new tasks are learned incrementally and the goal is to preserve the performance on the old tasks while learning the new ones. We build our model on E2E method in (Castro et al., 2018) that uses a limited budget rehearsal memory (we consider 10 images/class here) to keep representative samples from the old tasks and knowledge distillation (KD) to regularize the networkâs output w.r.t. to previous predictions. We replace its sample selection mechanism (herding) with ours such that a set of condensed images are generated and stored in the memory, keep the rest of the model same and evaluate this model on the task-incremental learning problem on the digit recognition datasets, SVHN (Netzer et al., 2011), MNIST (LeCun et al., 1998) and USPS (Hull, 1994) in the same order. MNIST and USPS images are reshaped to 32 à 32 RGB images.
We compare our method to E2E (Castro et al., 2018), depicted as herding in Figure 4, with and without KD regularization. The experiment contains 3 incremental training stages (SVHNâMNISTâUSPS) and testing accuracies are computed by averaging over the test sets of the previous and current tasks after each stage. The desired outcome is to obtain high mean clas- siï¬cation accuracy at T3. The results indicate that the condensed images are more data-efï¬cient than the ones sampled by herding and thus our method outperforms E2E in both settings, while by a larger margin (2.3% at T3) when KD is not employed.
Neural Architecture Search. Here we explore the use of our method in a simple neural architec- ture search (NAS) experiment on CIFAR10 which typically requires expensive training of numerous architectures multiple times on the whole training set and picking the best performing ones on a val- idation set. Our goal is to verify that our condensed images can be used to efï¬ciently train multiple networks to identify the best network. To this end, we construct the search space of 720 ConvNets as described in Section 3.1 by varying hyper-parameters W , N , A, P , D over an uniform grid (see supplementary for more details), train them for 100 epochs on three small proxy datasets (10 im- ages/class) that are obtained with Random sampling, Herding and our method. Note that we train the condensed images for once only with the default ConvNet architecture and use them to train all kinds of architectures. We also compare to early-stopping (Li & Talwalkar, 2020) in which the model is trained on whole training set but with the same number of training iterations as the one required for the small proxy datasets, in other words, for the same amount of computations.
Table 4 depicts i) the average test performance of the best selected model over 5 runs when trained on the whole dataset, ii) Spearmanâs rank correlation coefï¬cient between the validation accuracies obtained by training the selected top 10 models on the proxy dataset and whole dataset, iii) time for training 720 architectures on a NVIDIA GTX1080-Ti GPU, and iv) memory print of the training images. Our method achieves the highest testing performance (84.5%) and performance correlation (0.79), meanwhile signiï¬cantly decreases the the searching time (from 8604.3 to 18.8 minutes) and storage space (from 5 à 104 to 1 à 102 images) compared to whole-dataset training. The competitive early-stopping baseline achieves on par performance for the best performing model with
8
Published as a conference paper at ICLR 2021
ours, however, the rank correlation (0.42) of top 10 models is signiï¬cantly lower than ours (0.79) which indicates unreliable correlation of performances between early-stopping and whole-dataset training. Furthermore, early-stopping needs 100 times as many training images as ours needs. Note that the training time for synthetic images is around 50 minutes (for K = 500) which is one time off and negligible cost when training thousands even millions of candidate architectures in NAS.
# 4 CONCLUSION
In this paper, we propose a dataset condensation method that learns to synthesize a small set of informative images. We show that these images are signiï¬cantly more data-efï¬cient than the same number of original images and the ones produced by the previous method, and they are not archi- tecture dependent, can be used to train different deep networks. Once trained, they can be used to lower the memory print of datasets and efï¬ciently train numerous networks which are crucial in continual learning and neural architecture search respectively. For future work, we plan to explore the use of condensed images in more diverse and thus challenging datasets like ImageNet (Deng et al., 2009) that contain higher resolution images with larger variations in appearance and pose of objects, background.
Acknowledgment. This work is funded by China Scholarship Council 201806010331 and the EPSRC programme grant Visual AI EP/T028572/1. We thank Iain Murray and Oisin Mac Aodha for their valuable feedback.
# REFERENCES
Pankaj K Agarwal, Sariel Har-Peled, and Kasturi R Varadarajan. Approximating extent measures of points. Journal of the ACM (JACM), 51(4):606â635, 2004.
Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, pp. 11816â 11825, 2019.
Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In Advances in neural informa- tion processing systems, pp. 2654â2662, 2014.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Eden Belouadah and Adrian Popescu. Scail: Classiï¬er weights scaling for class incremental learn- ing. In The IEEE Winter Conference on Applications of Computer Vision, 2020.
Ondrej Bohdal, Yongxin Yang, and Timothy Hospedales. Flexible dataset distillation: Learn labels instead of images. Neural Information Processing Systems Workshop, 2020.
In Pro- ceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 535â541, 2006.
Francisco M Castro, Manuel J Mar´ın-Jim´enez, Nicol´as Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 233â248, 2018.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pp. 248â255. Ieee, 2009.
Justin Domke. Generic methods for optimization-based modeling. In Artiï¬cial Intelligence and Statistics, pp. 318â326, 2012.
Dan Feldman, Melanie Schmidt, and Christian Sohler. Turning big data into tiny data: Constant-size coresets for k-means, pca and projective clustering. In Proceedings of the twenty-fourth annual ACM-SIAM symposium on Discrete algorithms, pp. 1434â1453. SIAM, 2013.
9
Published as a conference paper at ICLR 2021
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367â 4375, 2018.
Jack Goetz and Ambuj Tewari. Federated learning via synthetic data. arXiv preprint arXiv:2008.04489, 2020.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural infor- mation processing systems, pp. 2672â2680, 2014a.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014b.
Sariel Har-Peled and Soham Mazumdar. On coresets for k-means and k-median clustering. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, 2004.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectiï¬ers: Surpassing human-level performance on imagenet classiï¬cation. In Proceedings of the IEEE international conference on computer vision, pp. 1026â1034, 2015.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence, 16(5):550â554, 1994.
Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ArXiv, abs/1502.03167, 2015.
Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
Pang Wei Koh and Percy Liang. Understanding black-box predictions via inï¬uence functions. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1885â 1894. JMLR. org, 2017.
Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convo- lutional neural networks. In Advances in neural information processing systems, pp. 1097â1105, 2012.
Yann LeCun, L´eon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â2324, 1998.
Guang Li, Ren Togo, Takahiro Ogawa, and Miki Haseyama. Soft-label anonymous gastric x-ray image distillation. In 2020 IEEE International Conference on Image Processing (ICIP), pp. 305â 309. IEEE, 2020.
Liam Li and Ameet Talwalkar. Random search and reproducibility for neural architecture search. In Uncertainty in Artiï¬cial Intelligence, pp. 367â377. PMLR, 2020.
Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner. Data-free knowledge distillation for deep neural networks. In LLD Workshop at Neural Information Processing Systems (NIPS ), 2017.
David Lopez-Paz et al. Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems, pp. 6467â6476, 2017.
10
Published as a conference paper at ICLR 2021
Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectiï¬er nonlinearities improve neural net- work acoustic models. In International conference on machine learning (ICML), 2013.
Aravindh Mahendran and Andrea Vedaldi. Understanding deep image representations by inverting them. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5188â5196, 2015.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
Vinod Nair and Geoffrey E Hinton. Rectiï¬ed linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pp. 807â814, 2010.
Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, Venkatesh Babu Radhakrishnan, and Anirban Chakraborty. Zero-shot knowledge distillation in deep networks. In Proceedings of the 36th International Conference on Machine Learning, 2019.
Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Sylvestre-Alvise Rebufï¬, Alexander Kolesnikov, Georg Sperl, and Christoph H Lampert. icarl: In Proceedings of the IEEE Conference on Incremental classiï¬er and representation learning. Computer Vision and Pattern Recognition, pp. 2001â2010, 2017.
Salah Rifai, Yoshua Bengio, Yann Dauphin, and Pascal Vincent. A generative process for sampling contractive auto-encoders. arXiv preprint arXiv:1206.6434, 2012.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550, 2014.
Kegan GG Samuel and Marshall F Tappen. Learning optimized map estimates in continuously- valued mrf models. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 477â484. IEEE, 2009.
Mert Bulent Sariyildiz and Ramazan Gokberk Cinbis. Gradient matching generative networks for In Proceedings of the IEEE Conference on Computer Vision and Pattern zero-shot learning. Recognition, pp. 2168â2178, 2019.
Ozan Sener and Silvio Savarese. Active learning for convolutional neural networks: A core-set approach. ICLR, 2018.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in neural information processing systems, pp. 4077â4087, 2017.
Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth O Stanley, and Jeff Clune. Genera- tive teaching networks: Accelerating neural architecture search by learning to generate synthetic training data. International Conference on Machine Learning, 2020.
Ilia Sucholutsky and Matthias Schonlau. Soft-label dataset distillation and text dataset distillation. arXiv preprint arXiv:1910.02551, 2019.
Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. ICLR, 2019.
Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Instance normalization: The missing in- gredient for fast stylization. arXiv preprint arXiv:1607.08022, 2016.
11
Published as a conference paper at ICLR 2021
Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, 2016.
Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, and Alexei A Efros. Dataset distillation. arXiv preprint arXiv:1811.10959, 2018.
Max Welling. Herding dynamical weights to learn. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1121â1128. ACM, 2009.
G W Wolf. Facility location: concepts, models, algorithms and case studies. 2011.
Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. Large scale incremental learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 374â382, 2019.
Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 3â19, 2018.
Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmark- ing machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. European conference on computer vision, pp. 818â833. Springer, 2014. In
Matthew D. Zeiler, MarcâAurelio Ranzato, Rajat Monga, Mark Z. Mao, Kyeongcheol Yang, Quoc V. Le, Patrick Nguyen, Andrew W. Senior, Vincent Vanhoucke, Jeffrey Dean, and Geoffrey E. Hin- ton. On rectiï¬ed linear units for speech processing. 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3517â3521, 2013.
Bo Zhao, Konda Reddy Mopuri, and Hakan Bilen. idlg: Improved deep leakage from gradients. arXiv preprint arXiv:2001.02610, 2020.
Yanlin Zhou, George Pu, Xiyao Ma, Xiaolin Li, and Dapeng Wu. Distilled one-shot federated learning. arXiv preprint arXiv:2009.07999, 2020.
Ligeng Zhu, Zhijian Liu, and Song Han. Deep leakage from gradients. In Advances in Neural Information Processing Systems, pp. 14747â14756, 2019.
12
Published as a conference paper at ICLR 2021
# A IMPLEMENTATION DETAILS
In this part, we explain the implementation details for the dataset condensation, continual learning and neural architecture search experiments.
Dataset condensation. The presented experiments involve tuning of six hyperparameters â the number of outer-loop K and inner-loop steps T , learning rates ηS and number of optimization steps ÏS for the condensed samples, learning rates ηθ and number of optimization steps Ïθ for the model weights. In all experiments, we set K = 1000, ηS = 0.1, ηθ = 0.01, ÏS = 1 and employ Stochastic Gradient Descent (SGD) as the optimizer. The only exception is that we set ηS to 0.01 for synthesizing data with MLP in cross-architecture experiments (Table 2), as MLP requires a slightly different treatment. Note that while K is the maximum number of outer-loop steps, the optimization can early-stop automatically if it converges before K steps. For the remaining hyperparameters, we use different sets for 1, 10 and 50 image(s)/class learning. We set T = 1, Ïθ = 1 for 1 image/class, T = 10, Ïθ = 50 for 10 images/class, T = 50, Ïθ = 10 for 50 images/class learning. Note that when T = 1, it is not required to update the model parameters (Step 9 in Algorithm 1), as this model is not further used. For those experiments where more than 10 images/class are synthesized, we set T to be the same number as the synthetic images per class and Ïθ = 500/T , e.g. T = 20, Ïθ = 25 for 20 images/class learning. The ablation study on hyper-parameters are given in Appendix B which shows that our method is not sensitive to varying hyper-parameters.
We do separate-class mini-batch sampling for Step 6 in Algorithm 1. Speciï¬cally, we sample a mini-batch pair BT c that contain real and synthetic images from the same class c at each inner iteration. Then, the matching loss for each class is computed with the sampled mini-batch pair and used to update corresponding synthetic images Sc by back-propogation (Step 7 and 8). This is repeated separately (or parallelly given enough computational resources) for every class. Training as such is not slower than using mixed-class batches. Although our method still works well when we randomly sample the real and synthetic mini-batches with mixed labels, we found that separate- class strategy is faster to train as matching gradients w.r.t. data from single class is easier compared to those of multiple classes. In experiments, we randomly sample 256 real images of a class as a mini-batch to calculate the mean gradient and match it with the mean gradient that is averaged over all synthetic samples with the same class label. The performance is not sensitive to the size of real-image mini-batch if it is greater than 64.
In all experiments, we use the standard train/test splits of the datasets â the train/test statistics are shown in Table T5. We apply data augmentation (crop, scale and rotate) only for experiments (coreset methods and ours) on MNIST. The only exception is that we also use data augmentation when compared to DD (Wang et al., 2018) on CIFAR10 with AlexCifarNet, and data augmentation is also used in (Wang et al., 2018). For initialization of condensed images, we tried both Gaussian noise and randomly selected real training images, and obtained overall comparable performances in different settings and datasets. Then, we used Gaussian noise for initialization in experiments.
USPS MNIST FashionMNIST SVHN CIFAR10 CIFAR100 Train 7,291 60,000 Test 2,007 10,000 60,000 10,000 73,257 26,032 50,000 10,000 50,000 10,000
Table T5: Train/test statistics for USPS, MNIST, FashionMNIST, SVHN, CIFAR10 and CIFAR100 datasets.
In the ï¬rst stage â while training the condensed images â, we use Batch Normalization in the VGG and ResNet networks. For reliable estimation of the running mean and variance, we sample many real training data to estimate the running mean and variance and then freeze them ahead of Step 7. In the second stage â while training a deep network on the condensed set â, we replace Batch Normalization layers with Instance Normalization in VGG and ResNet, due to the fact that the batch statistics are not reliable when training networks with few condensed images. Another minor modiï¬cation that we apply to the standard network ResNet architecture in the ï¬rst stage is replacing the strided convolutions where stride = 2 with convolutional layers where stride = 1 coupled with an average pooling layer. We observe that this change enables more detailed (per pixel) gradients w.r.t. the condensed images and leads to better condensed images.
Continual learning. In this experiment, we focus on a task-incremental learning on SVHN, MNIST and USPS with the given order. The three tasks share the same label space, however have
13
Published as a conference paper at ICLR 2021
° Random: Correlation = -0.21 Herding: Correlation = -0.20 Ours: Correlation = 0.79 Early-stopping: Correlation = 0.42 5 . 0.86 0.86 £ 0.85 . . 0.85 4 . . . S . ] . 0.84 . 0.84 . 50.83 a 0.83 ° . oe 3 0.804 . 0.82 . 0.82 . $ 0.80 a . Pa . 8 e 0.784 . . 0.80 0.80 : 30.78 .. . 2 . 0 0.75} 0.78 voy 0.78 . S 075 0.734 0.31 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.42 0.44 0.46 0.52 0.54 0.56 Proxy Dataset Performance Proxy Dataset Performance Proxy Dataset Performance Proxy Dataset Performance
Figure F5: The performance correlation between the training on proxy dataset and whole-dataset. For each proxy dataset, the best 10 models are selected based on validation set performance. In the ï¬gure, each point represents an architecture.
C\T Sigmoid ReLu LeakyReLu C\T None MaxPooling AvgPooling Sigmoid ReLu LeakyReLu 86.7±0.7 86.1±0.9 86.3±0.9 91.2±0.6 91.7±0.5 91.7±0.5 91.2±0.6 91.7±0.5 91.7±0.4 None MaxPooling Avgpooing 78.7±3.0 81.2±2.8 81.8±2.9 80.8±3.5 89.5±1.1 90.2±0.8 88.3±1.0 91.1±0.6 91.7±0.5
Table T6: Cross-activation experiments in accu- racy (%) for 1 condensed image/class in MNIST.
__
Table T7: Cross-pooling experiments in accu- racy (%) for 1 condensed image/class in MNIST.
signiï¬cantly different image statistics. The images of the three datasets are reshaped to 32Ã32 RGB size for standardization. We use the standard splits for training sets and randomly sample 2,000 test images for each datasets to obtain a balanced evaluation over three datasets. Thus each model is tested on a growing test set with 2,000, 4,000 and 6,000 images at the three stages respectively. We use the default ConvNet in this experiment and set the weight of distillation loss to 1.0 and the temperature to 2. We run 5,000 and 500 iterations for training and balanced ï¬netuning as in (Castro et al., 2018) with the learning rates 0.01 and 0.001 respectively. We run 5 experiments and report the mean and standard variance in Figure 4.
Neural Architecture Search. To construct the searching space of 720 ConvNets, we vary hyper- parameters W â {32, 64, 128, 256}, D â {1, 2, 3, 4}, N â {None, BatchNorm, LayerNorm, InstanceNorm, GroupNorm}, A â {Sigmoid, ReLu, LeakyReLu}, P â {None, MaxPooling, AvgPooling}. We randomly sample 5,000 images from the 50,000 training images in CIFAR10 as the validation set. Every candidate ConvNet is trained with the proxy dataset, and then evalu- ated on the validation set. These candidate ConvNets are ranked by the validation performance. 10 architectures with top validation accuracies are selected to calculate Spearmanâs rank correlation coefï¬cient, because the best model that we want will come from the top 10 architectures. We train each ConvNet for 5 times to get averaged validation and testing accuracies.
We visualize the performance correlation for different proxy datasets in Figure F5. Obviously, the condensed proxy dataset produced by our method achieves the highest performance correlation (0.79) which signiï¬cantly higher than early-stopping (0.42). It means our method can produce more reliable results for NAS.
# B FURTHER ANALYSIS
Next we provide additional results on ablative studies over various deep network layers including activation, pooling and normalization functions and also over depth and width of deep network architecture. We also study the selection of hyper-parameters and the gradient distance metric. An additional qualitative analysis on the learned condensed images is also given.
Ablation study on activation functions. Here we study the use of three activation functions â Sigmoid, ReLU, LeakyReLu (negative slope is set to 0.01) â in two stages, when training condensed images (denoted as C) and when training a ConvNet from scratch on the learned condensed im- ages (denoted as T). The experiments are conducted in MNIST dataset for 1 condensed image/class setting. Table T6 shows that all three activation functions are good for the ï¬rst stage while gener- ating good condensed images, however, Sigmoid performs poor in the second stage while learning a classiï¬er on the condensed images â its testing accuracies are lower than ReLu and LeakyReLu by around 5%. This suggests that ReLU can provide sufï¬ciently informative gradients for learning condensed images, though the gradient of ReLU w.r.t. to its input is typically sparse.
14
Published as a conference paper at ICLR 2021
C\T None BatchNorm LayerNorm InstanceNorm GroupNorm 79.0±2.2 None 78.6±2.1 BatchNorm 81.2±1.8 LayerNorm InstanceNorm 72.9±7.1 GroupNorm 79.5±2.1 80.8±2.0 80.7±1.8 78.6±3.0 56.7±6.5 81.8±2.3 85.8±1.7 85.7±1.6 87.4±1.3 82.7±5.3 87.3±1.2 90.7±0.7 90.9±0.6 90.7±0.7 91.7±0.5 91.6±0.5 85.9±1.7 85.9±1.5 87.3±1.4 84.3±4.2 87.2±1.2
Table T8: Cross-normalization experiments in accuracy (%) for 1 condensed image/class in MNIST.
C\T 1 2 3 4 C\T 32 64 128 256 1 2 3 4 61.3±3.5 78.2±3.0 77.1±4.0 76.4±3.5 78.3±2.3 89.0±0.8 91.0±0.6 89.4±0.8 81.6±1.5 89.8±0.8 91.7±0.5 90.4±0.6 82.5±1.3 89.9±0.8 91.9±0.5 90.6±0.4 32 90.6±0.8 91.4±0.5 91.5±0.5 91.3±0.6 64 91.0±0.8 91.6±0.6 91.8±0.5 91.4±0.6 128 90.8±0.7 91.5±0.6 91.7±0.5 91.2±0.7 256 91.0±0.7 91.6±0.6 91.7±0.5 91.4±0.5
Table T9: Cross-depth performance in accuracy (%) for 1 condensed image/class in MNIST.
Table T10: Cross-width performance in accuracy (%) for 1 condensed image/class in MNIST.
Ablation study on pooling functions. Next we investigate the performance of two pooling func- tions â average pooling and max pooling â also no pooling for 1 image/class dataset condensation with ConvNet in MNIST in terms of classiï¬cation accuracy. Table T7 shows that max and average pooling both perform signiï¬cantly better than no pooling (None) when they are used in the second stage. When the condensed samples are trained and tested on models with average pooling, the best testing accuracy (91.7 ± 0.5%) is obtained, possibly, because average pooling provides more informative and smooth gradients for the whole image rather than only for its discriminative parts.
Ablation study on normalization functions. Next we study the performance of four normal- ization options â No normalization, Batch (Ioffe & Szegedy, 2015), Layer (Ba et al., 2016), In- stance (Ulyanov et al., 2016) and Group Normalization (Wu & He, 2018) (number of groups is set to be four) â for 1 image/class dataset condensation with ConvNet architecture in MNIST classi- ï¬cation accuracy. Table T8 shows that the normalization layer has little inï¬uence for learning the condensed set, while the choice of normalization layer is important for training networks on the con- densed set. LayerNorm and GroupNorm have similar performance, and InstanceNorm is the best choice for training a model on condensed images. BatchNorm obtains lower performance which is similar to None (no normalization), as it is known to perform poorly when training models on few condensed samples as also observed in (Wu & He, 2018). Note that Batch Normalization does not allow for a stable training in the ï¬rst stage (C); thus we replace its running mean and variance for each batch with those of randomly sampled real training images.
Ablation study on network depth and width. Here we study the effect of network depth and width for 1 image/class dataset condensation with ConvNet architecture in MNIST in terms of clas- siï¬cation accuracy. To this end we conduct multiple experiments by varying the depth and width of the networks that are used to train condensed synthetic images and that are trained to classify testing data in ConvNet architecture and report the results in Table T9 and Table T10. In Table T9, we observe that deeper ConvNets with more blocks generate better condensed images that results in better classiï¬cation performance when a network is trained on them, while ConvNet with 3 blocks performs best as classiï¬er. Interestingly, Table T10 shows that the best results are obtained with the classiï¬er that has 128 ï¬lters at each block, while network width (number of ï¬lters at each block) in generation has little overall impact on the ï¬nal classiï¬cation performance.
Ablation study on hyper-parameters. Our performance is not sensitive to hyper-parameter se- lection. The testing accuracy for various K and T , when learning 10 images/class condensed sets, is depicted in Figure F6. The results show that the optimum K and T are around similar values across all datasets. Thus we simply set K to 1000 and T to 10 for all datasets. Similarly, for the remaining ones including learning rate, weight decay, we use a single set of hyperparameters that are observed to work well for all datasets and architectures in our preliminary experiments.
Ablation study on gradient distance metric. To prove the effectiveness and robustness of the proposed distance metric for gradients (or weights), we compare to the traditional ones ( (2019) which vectorize and concatenate the wh gradient, Gâ , ⬠RYâ, and compute the squared Euclidean distance ||G7 â G*||? and the Cosine distance 1 â cos(G7,G*), where D is the number of all network parameters. We do 1
15
Published as a conference paper at ICLR 2021
100 100 = 90 H ! z 80 H ee £ i i 2 70 H ' 8 : 60 ! 60 i ' 2 â MNist H â nist ' % 507 â FashionMNisT i â FashionMNIST i : ee fi â SVHN es 40 | â SVHN i 407 ââ cirario H â CIFAR1O H 50 100 200 500 1000 1500 2000 1 2 5 10 20 50 Hyper-parameter: K Hyper-parameter: T
Figure F6: Ablation study on the hyper-parameters K and T when learning 10 images/class condensed sets.
image/class learning experiment on MNIST with different architectures. For simplicity, the synthetic images are learned and tested on the same architecture in this experiment. Table T11 shows that the proposed gradient distance metric remarkably outperforms others on complex architectures (e.g. LeNet, AlexNet, VGG and ResNet) and achieves the best performances in most settings, which means it is more effective and robust than the traditional ones. Note that we set ηS = 0.1 for MLP-Euclidean and MLP-Cosine because it works better than ηS = 0.01.
MLP ConvNet LeNet AlexNet VGG ResNet Euclidean 69.3±0.9 92.7±0.3 65.0±5.1 66.2±5.6 57.1±7.0 68.0±5.2 45.2±3.6 69.2±2.7 61.1±8.2 58.3±4.1 55.0±5.0 68.8±7.8 70.5±1.2 91.7±0.5 85.0±1.7 82.7±2.9 81.7±2.6 89.4±0.9 Cosine Ours
Table T11: Ablation study on different gradient distance metrics. Obviously, the proposed distance metric is more effective and robust. Euclidean: squared Euclidean distance, Cosine: Cosine distance.
Further qualitative analysis We ï¬rst depict the condensed images that are learned on MNIST, FashionMNIST, SVHN and CIFAR10 datasets in one experiment using the default ConvNet in 10 images/class setting in Figure F7. It is interesting that the 10 images/class results in Figure F7 are diverse which cover the main variations, while the condensed images for 1 image/class setting (see Figure 2) look like the âprototypeâ of each class. For example, in Figure F7 (a), the ten images of âfourâ indicate ten different styles. The ten âbagâ images in Figure F7 (b) are signiï¬cantly different from each other, similarly âwalletâ (1st row), âshopping bagâ (3rd row), âhandbagâ (8th row) and âschoolbagâ (10th row). Figure F7 (c) also shows the diverse house numbers with different shapes, colors and shadows. Besides, different poses of a âhorseâ have been learned in Figure F7 (d).
# C COMPARISON TO MORE BASELINES
Optimal random selection. One interesting and strong baseline is Optimal Random Selection (ORS) that we implement random selection experiments for 1,000 times and pick the best ones. Table T12 presents the performance comparison to the selected Top 1000 (all), Top 100 and Top 10 coresets. These optimal coresets are selected by ranking their performance. Obviously, the condensed set generated by our method surpasses the selected Top 10 of 1000 coresets with a large margin on all four datasets.
Generative model. We also compare to the popular generative model, namely, Conditional Gener- ative Adversarial Networks (cGAN) (Mirza & Osindero, 2014). The generator has two blocks which consists of the Up-sampling (scale factor=2), Convolution (stride=1), BatchNorm and LeakyReLu layers. The discriminator has three blocks which consists of Convolution (stride=2), BatchNorm and LeakyReLu layers. In additional to the random noise, we also input the class label as the condition. We generate 1 and 10 images per class for each dataset with random noise. Table T12 shows that the images produced by cGAN have similar performances to those randomly selected coresets (i.e. Top 1000). It is reasonable, because the aim of cGAN is to generate real-look images. In contrast, our method aims to generate images that can train deep neural networks efï¬ciently.
Analysis of coreset performances We ï¬nd that K-Center (Wolf, 2011; Sener & Savarese, 2018) and Forgetting (Toneva et al., 2019) donât work as well as other general coreset methods, namely
16
Published as a conference paper at ICLR 2021
4 3 $s Eas BIEN GRICE -
4 3 $s 2 antsPulloverDress Coat Sandal ShirtSneaker Bag Boot nk |
(a) MNIST
(b) FashionMNIST
(c) SVHN
(d) CIFAR10
Figure F7: The synthetic images for MNIST, FashionMNIST, SVHN and CIFAR10 produced by our method with ConvNet under 10 images/class setting.
Img/Cls Ratio % Optimal Random Selection Top 100 Top 1000 Top 10 cGAN Ours Whole Dataset MNIST 1 10 0.017 0.17 64.3±6.1 94.8±0.7 74.4±1.8 96.0±0.2 78.2±1.7 96.4±0.1 64.0±3.2 94.9±0.6 91.7±0.5 97.4±0.2 99.6±0.0 FashionMNIST 1 10 0.017 0.17 51.3±5.4 73.8±1.6 59.6±1.3 76.4±0.6 62.4±0.9 77.6±0.2 51.1±0.8 73.9±0.7 70.5±0.6 82.3±0.4 93.5±0.1 SVHN 1 10 0.014 0.14 14.3±2.1 34.6±3.2 18.1±0.9 40.3±1.3 19.9±0.2 42.9±0.9 16.1±0.9 33.9±1.1 31.2±1.4 76.1±0.6 95.4±0.1 CIFAR10 1 10 0.02 0.2 15.0±2.0 27.1±1.6 18.5±0.8 29.8±0.7 20.1±0.5 31.4±0.2 16.3±1.4 27.9±1.1 28.3±0.5 44.9±0.5 84.8±0.1
Table T12: The performance comparison to optimal random selection (ORS) and conditional generative ad- versarial networks (cGAN) baselines. This table shows the testing accuracies (%) of different methods on four datasets. ConvNet is used for training and testing. Img/Cls: image(s) per class, Ratio (%): the ratio of con- densed images to whole training set. Top 1000, Top 100 and Top 10 means the selected 1000, 100 and 10 optimal coresets by ranking their performances.
17
Published as a conference paper at ICLR 2021
Img/Cls Ratio % Random Core-set Selection Herding K-Center Forgetting LDâ Ours Whole Dataset CIFAR100 1 10 0.2 2 4.2±0.3 14.6±0.5 8.4±0.3 17.3±0.3 8.3±0.3 7.1±0.3 3.5±0.3 9.8±0.2 11.5±0.4 - 12.8±0.3 25.2±0.3 56.2±0.3
Table T13: The performance comparison on CIFAR100. This table shows the testing accuracies (%) of different methods. ConvNet is used for training and testing except that LDâ uses AlexNet. Img/Cls: image(s) per class, Ratio (%): the ratio of condensed images to whole training set.
Method MLP ConvNet LeNet AlexNet VGG ResNet DD 72.7±2.8 77.6±2.9 79.5±8.1 51.3±19.9 11.4±2.6 63.6±12.7 83.0±2.5 92.9±0.5 93.9±0.6 90.6±1.9 92.9±0.5 94.5±0.4 Ours
Table T14: Generalization ability comparison to DD. The 10 condensed images per class are trained with LeNet, and tested on various architectures. It shows that condensed images generated by our method have better generalization ability.
Random and Herding (Rebufï¬ et al., 2017), in this experimental setting. After analyzing the algo- rithms and coresets, we ï¬nd two main reasons. 1) K-Center and Forgetting are not designed for training deep networks from scratch, instead they are for active learning and continual learning re- spectively. 2) The two algorithms both tend to select âhardâ samples which are often outliers when only a small number of images are selected. These outliers confuse the training, which results in worse performance. Speciï¬cally, the ï¬rst sample per class in K-Center coreset is initialized by se- lecting the one closest to each class center. The later ones selected by the greedy criterion that pursues maximum coverage are often outliers which confuse the training.
Performance on CIFAR100. We supplement the performance comparison on CIFAR100 dataset which includes 10 times as many classes as other benchmarks. More classes while fewer images per class makes CIFAR100 signiï¬cantly more challenging than other datasets. We use the same set of hyper-parameters for CIFAR100 as other datasets. Table T13 depicts the performances of coreset selection methods, Label Distillation (LD) Bohdal et al. (2020) and ours. Our method achieves 12.8% and 25.2% testing accuracies on CIFAR100 when learning 1 and 10 images per class, which are the best compared with others.
# D FURTHER COMPARISON TO DD (WANG ET AL., 2018)
Next we compare our method to DD (Wang et al., 2018) ï¬rst quantitatively in terms of cross- architecture generalization, then qualitatively in terms of synthetic image quality, and ï¬nally in terms of computational load for training synthetic images. Note that we use the original source code to obtain the results for DD that is provided by the authors of DD in the experiments.
Generalization ability comparison. Here we compare the generalization ability across different deep network architectures to DD. To this end, we use the synthesized 10 images/class data learned with LeNet on MNIST to train MLP, ConvNet, LeNet, AlexNet, VGG11 and ResNet18 and report the results in Table T14. We see that that the condensed set produced by our method achieves good classiï¬cation performances with all architectures, while the synthetic set produced by DD perform poorly when used to trained some architectures, e.g. AlexNet, VGG and ResNet. Note that DD generates learning rates to be used in every training step in addition to the synthetic data. This is in contrast to our method which does not learn learning rates for speciï¬c training steps. Although the tied learning rates improve the performance of DD while training and testing on the same architecture, they will hinder the generalization to unseen architectures.
Method Dataset Architecture Memory (MB) Time (min) Test Acc. DD MNIST Ours MNIST LeNet LeNet 785 653 160 46 79.5±8.1 93.9±0.6 DD CIFAR10 AlexCifarNet Ours CIFAR10 AlexCifarNet 3211 1445 214 105 36.8±1.2 39.1±1.2
Table T15: Time and memory use for training DD and our method in 10 images/class setting.
18
Published as a conference paper at ICLR 2021
w g 3 4 2 x Ey 8
(a) MNIST of DD
(b) CIFAR10 of DD
(c) MNIST of ours (d) CIFAR10 of ours
Figure F8: Qualitative comparison between the condensed images produced by DD and ours under 10 im- ages/class setting. LeNet and AlexCifarNet are utilized for MNIST and CIFAR10 respectively.
Qualitative comparison. We also provide a qualitative comparison to to DD in terms of image quality in Figure F8. Note that both of the synthetic sets are trained with LeNet on MNIST and AlexCifarNet on CIFAR10. Our method produces more interpretable and realistic images than DD, although it is not our goal. The MNIST images produced by DD are noisy, and the CIFAR10 images produced by DD do not show any clear structure of the corresponding class. In contrast, the MNIST and CIFAR10 images produced by our method are both visually meaningful and diverse.
Training memory and time. One advantage of our method is that we decouple the model weights from its previous states in training, while DD requires to maintain the recursive computation graph which is not scalable to large models and inner-loop optimizers with many steps. Hence, our method requires less training time and memory cost. We compare the training time and memory cost re- quired by DD and our method with one NVIDIA GTX1080-Ti GPU. Table T15 shows that our method requires signiï¬cantly less memory and training time than DD and provides an approxima- tion reduction of 17% and 55% in memory and 71% and 51% in train time to learn MNIST and CIFAR10 datasets respectively. Furthermore, our training time and memory cost can be signiï¬- cantly decreased by using smaller hyper-parameters, e.g. K, T and the batch size of sampled real images, with a slight performance decline (refer to Figure F6).
19
Published as a conference paper at ICLR 2021
# E EXTENDED RELATED WORK
Variations of Dataset Distillation. There exists recent work that extends Dataset Distillation (Wang et al., 2018). For example, (Sucholutsky & Schonlau, 2019; Bohdal et al., 2020) aim to improve DD by learning soft labels with/without synthetic images. (Such et al., 2020) utilizes a generator to synthesize images instead of directly updating image pixels. However, the reported quantitative and qualitative improvements over DD are minor compared to our improvements. In addition, none of these methods have thoroughly veriï¬ed the cross-architecture generalization abil- ity of the synthetic images.
Zero-shot Knowledge Distillation. Recent zero-shot KD methods (Lopes et al., 2017; Nayak et al., 2019) aim to perform KD from a trained model in the absence of training data by generating synthetic data as the intermediate production to further use. Unlike them, our method does not require pretrained teacher models to provide the knowledge, i.e. to obtain the features and labels.
Data Privacy & Federated Learning. Synthetic dataset is also a promising solution to protecting data privacy and enabling safe federated learning. There exists some work that uses synthetic dataset to protect the privacy of medical dataset (Li et al., 2020) and reduce the communication rounds in federated learning (Zhou et al., 2020). Although transmitting model weights or gradients (Zhu et al., 2019; Zhao et al., 2020) may increase the transmission security, the huge parameters of modern deep neural networks are prohibitive to transmit frequently. In contrast, transmitting small-scale synthetic dataset between clients and server is low-cost (Goetz & Tewari, 2020).
20 | {
"id": "1910.02551"
} |
2006.05576 | Self-supervised Learning from a Multi-view Perspective | As a subset of unsupervised representation learning, self-supervised
representation learning adopts self-defined signals as supervision and uses the
learned representation for downstream tasks, such as object detection and image
captioning. Many proposed approaches for self-supervised learning follow
naturally a multi-view perspective, where the input (e.g., original images) and
the self-supervised signals (e.g., augmented images) can be seen as two
redundant views of the data. Building from this multi-view perspective, this
paper provides an information-theoretical framework to better understand the
properties that encourage successful self-supervised learning. Specifically, we
demonstrate that self-supervised learned representations can extract
task-relevant information and discard task-irrelevant information. Our
theoretical framework paves the way to a larger space of self-supervised
learning objective design. In particular, we propose a composite objective that
bridges the gap between prior contrastive and predictive learning objectives,
and introduce an additional objective term to discard task-irrelevant
information. To verify our analysis, we conduct controlled experiments to
evaluate the impact of the composite objectives. We also explore our
framework's empirical generalization beyond the multi-view perspective, where
the cross-view redundancy may not be clearly observed. | http://arxiv.org/pdf/2006.05576 | Yao-Hung Hubert Tsai, Yue Wu, Ruslan Salakhutdinov, Louis-Philippe Morency | cs.LG, stat.ML | null | null | cs.LG | 20200610 | 20210322 | 1 2 0 2
r a M 2 2 ] G L . s c [
4 v 6 7 5 5 0 . 6 0 0 2 : v i X r a
Published as a conference paper at ICLR 2021
# SELF-SUPERVISED LEARNING FROM A MULTI-VIEW PERSPECTIVE
Yao-Hung Hubert Tsai, Yue Wu, Ruslan Salakhutdinov, Louis-Philippe Morency Machine Learning Department, Carnegie Mellon University
# ABSTRACT
As a subset of unsupervised representation learning, self-supervised representation learning adopts self-deï¬ned signals as supervision and uses the learned representa- tion for downstream tasks, such as object detection and image captioning. Many proposed approaches for self-supervised learning follow naturally a multi-view per- spective, where the input (e.g., original images) and the self-supervised signals (e.g., augmented images) can be seen as two redundant views of the data. Building from this multi-view perspective, this paper provides an information-theoretical frame- work to better understand the properties that encourage successful self-supervised learning. Speciï¬cally, we demonstrate that self-supervised learned representations can extract task-relevant information and discard task-irrelevant information. Our theoretical framework paves the way to a larger space of self-supervised learning objective design. In particular, we propose a composite objective that bridges the gap between prior contrastive and predictive learning objectives, and introduce an additional objective term to discard task-irrelevant information. To verify our analysis, we conduct controlled experiments to evaluate the impact of the compos- ite objectives. We also explore our frameworkâs empirical generalization beyond the multi-view perspective, where the cross-view redundancy may not be clearly observed.
# INTRODUCTION
Self-supervised learning (SSL) (Zhang et al., 2016; Devlin et al., 2018; Oord et al., 2018; Tian et al., 2019) learns representations using a proxy objective (i.e., SSL objective) between inputs and self-deï¬ned signals. Empirical evidence suggests that the learned representations can generalize well to a wide range of downstream tasks, even when the SSL objective has not utilize any downstream supervision during training. For example, SimCLR (Chen et al., 2020) deï¬nes a contrastive loss (i.e., an SSL objective) between images with different augmentations (i.e., one as the input and the other as the self-supervised signal). Then, one can take SimCLR as features extractor and adopt the features to various computer vision applications, spanning image classiï¬cation, object detection, instance segmentation, and pose estimation (He et al., 2019). Despite success in practice, only a few work (Arora et al., 2019; Lee et al., 2020; Tosh et al., 2020) provide theoretical insights into the learning efï¬cacy of SSL. Our work shares a similar goal to explain the success of SSL, from the perspectives of Information Theory (Cover & Thomas, 2012) and multi-view representation1.
To understand (a subset2 of) SSL, we start by the following multi-view assumption. First, we regard the input and the self-supervised signals as two corresponding views of the data. Using our running example, in SimCLR (Chen et al., 2020), the augmented images (i.e., the input and the self-supervised signal) are an image with different views. Second, we adopt a common assumption in multi-view learning: either view alone is (approximately) sufï¬cient for the downstream tasks (see Assumption 1 in prior work (Sridharan & Kakade, 2008)). The assumption suggests that the image augmentations (e.g., changing the style of an image) should not affect the labels of images, or analogously, the self- supervised signal contains most (if not all) of the information that the input has about the downstream tasks. With this assumption, our ï¬rst contribution is to formally show that the self-supervised learned
1The work (Lee et al., 2020; Tosh et al., 2020) are done concurrent and in parallel, and part of their assumptions/ conclusions are similar to ours. We will elaborate the differences more in the related work section. 2We discuss the limitations of the multi-view assumption in Section 2.1.
1
Published as a conference paper at ICLR 2021
ard task-irrelevant info Discard task-irrelevant info min H(Zx|S) with a fixed gap 1(X; S|7) Zy=FO) Xx 5 x S x Ss 7 optimal Zy 4 _ _ felationswith | x downstream task T Vv max I(Zy;8) HZyi:S) = (XS) Redundancy Zy=F AX) (X;T|S Extract task-relevant info from (Goal: Extract task-relevant info] [Minimal and Sufficient Self-supervision| X with a potential âoes â pote! info T
Figure 1: High-level takeaways for our main results using information diagrams. (a) We present to learn minimal and sufficient self-supervision: minimize H(Zx|Sâ)) for discarding task-irrelevant information and maximize I(Zx; S) for extracting task-relevant information. (b) The resulting learned representation Zx * contains all task relevant information from the input with a potential loss â¬ingo and discards task-irrelevant information with a fixed gap I(X; S|T). (c) Our core assumption: the self-supervised signal is approximately redundant to the input for the task-relevant information.
representations can 1) extract all the task-relevant information (from the input) with a potential loss; and 2) discard all the task-irrelevant information (from the input) with a ï¬xed gap. Then, using classiï¬cation task as an example, we are able the quantify the smallest generalization error (Bayes error rate) given the discussed task-relevant and task-irrelevant information.
As the second contribution, our analysis 1) connects prior arts for SSL on contrastive (Oord et al., 2018; Bachman et al., 2019; Chen et al., 2020; Tian et al., 2019) and predictive learning (Zhang et al., 2016; Vondrick et al., 2016; Tulyakov et al., 2018; Devlin et al., 2018) approaches; and 2) paves the way to a larger space of composing SSL objectives to extract task-relevant and discard task-irrelevant information simultaneously. For instance, the combination between the contrastive and predictive learning approaches achieves better performance than contrastive- or predictive-alone objective and enjoys less over-ï¬tting problem. We also present a new objective to discard task-irrelevant information. The objective can be easily incorporated with prior self-supervised learning objectives.
We conduct controlled experiments on visual (the ï¬rst set) and visual-textual (the second set) self- supervised representation learning. The ï¬rst set of experiments are performed when the multi-view assumption is likely to hold. The goal is to compare different compositions of SSL objectives on extracting task-relevant and discarding task-irrelevant information. The second set of experiments are performed when the input and the self-supervised signal lie in very different modalities. Under this cross-modality setting, the task-relevant information may not mostly lie in the shared information be- tween the input and the self-supervised signal. The goal is to examine SSL objectivesâ generalization, where the multi-view assumption is likely to fail.
# 2 A MULTI-VIEW INFORMATION-THEORETICAL FRAMEWORK
Notations. For the input, we denote its random variable as X, sample space as 4â, and outcome as x. We learn a representation (Zx/ Z/ z,) from the input through a deterministic mapping Fy: Zx = Fx(X). For the self-supervised signal, we denote its random variable/ sample space/ outcome as S/ S/ s. Two sample spaces can be different between the input and the self-supervised signal: X # S. The information required for downstream tasks is referred to as âtask-relevant informationâ: T/ T/ t. Note that SSL has no access to the task-relevant information. Lastly, we use [(A; B) to represent mutual information, J(A; B|C) to represent conditional mutual information, H(A) to represent the entropy, and H(A|B) to represent conditional entropy for random variables A/B/C. We provide high-level takeaways for our main results in Figure 1. We defer all proofs to Supplementary.
2.1 MULTI-VIEW ASSUMPTION
In our paper, we regard the input (X) and the self-supervised signals (S) as two views of the data. Here, we provide a table showing different X/S in various SSL frameworks:
Framework BERT (Devlin et al., 2018) Look & Listen (Arandjelovic & Zisserman, 2017) SimCLR (Chen et al., 2020) Colorization (Zhang et al., 2016) Inputs (X) Self-supervised Signals (S) Non-masked Words Masked Words Image Audio Stream Image Same Image with Augmentation Image Lightness Image Color
We note that not all SSL frameworks realize the inputs and the self-supervised signals as corresponding views. For instance, Jigsaw puzzle (Noroozi & Favaro, 2016) considers (shufï¬ed) image patches as the input and the positions of the patches as the self-supervised signals. Another example is Learning by Predicting Rotations (Gidaris et al., 2018), which considers an image (rotating with a speciï¬c
2
Published as a conference paper at ICLR 2021
angle) as the input and the rotation angle of the image as the self-supervised signal. We point out that the frameworks that regard X/S as two corresponding views (Chen et al.; 2020; He et al., 2019) have a much better empirical downstream performance than the frameworks that do not (Noroozi & Favaro, 2016; Gidaris et al., 2018). Our paper hence focuses on the multi-view setting between X/S.
Next, we adopt the common assumption (i.e., multi-view assumption (Sridharan & Kakade, 2008; Xu et al., 2013)) in the multi-view learning between the input and the self-supervised signal: Assumption 1 (Multi-view, restating Assumption | in prior work (Sridharan & Kakade, 2008)). The self-supervised signal is approximately redundant to the input for the task-relevant information. In other words, there exist an â¬nfo > 0 such that I(X;T|S) < into.
Assumption | states that, when â¬jnfo is small, the task-relevant information lies mostly in the shared information between the input and the self-supervised signals. We argue this assumption is mild with the following example. For self-supervised visual contrastive learning (Hjelm et al., 2018; Chen et al., 2020), the input and the self-supervised signal are the same image with different augmentations. Using image augmentations can be seen as changing the style of an image while not affecting the content. And we argue that the information required for downstream tasks should only be retained in the content but not the style. Next, we point out the failure cases of the assumption (or have large â¬jnf.): the input and the self-supervised signal contain very different task-relevant information. For instance, a drastic image augmentation (e.g., adding large noise) may change the content of the image (e.g., the noise completely occludes the objects). Another example is BERT (Devlin et al., 2018), with too much masking, downstream information may exist differently in the masked (i.e., the self-supervised signals) and the non-masked (i.e., the input) words. Analogously, too much masking makes the non-masked words have insufficient context to predict the masked words.
2.2 LEARNING MINIMAL AND SUFFICIENT REPRESENTATIONS FOR SELF-SUPERVISION
We start by discussing the supervised representation learning. The Information Bottleneck (IB) method (Tishby et al., 2000; Achille & Soatto, 2018) generalizes minimal sufï¬cient statistics to the representations that are minimal (i.e., less complexity) and sufï¬cient (i.e., better ï¬delity). To learn such representations for downstream supervision, we consider the following objectives: Deï¬nition 1 (Minimal and Sufï¬cient Representations for Downstream Supervision). Let Z sup the sufï¬cient supervised representation and Z supmin be the minimal and sufï¬cient representation: H(ZX |T ) s.t. I(ZX ; T ) is maximized.
X = arg min ZX
# Z sup
# I(ZX ; T ) and Z supmin
X = arg max X ZX
To reduce the complexity of the representation ZX , the prior methods (Tishby et al., 2000; Achille & Soatto, 2018) presented to minimize I(ZX ; X) while ours presents to minimize H(ZX |T ). We provide a justiï¬cation: minimizing H(ZX |T ) reduces the randomness from T to ZX , and the randomness is regarded as a form of incompressibility (Calude, 2013). Hence, minimizing H(ZX |T ) leads to a more compressed representation (discarding redundant information)3. Note that we do not constrain the downstream task T as classiï¬cation, regression, or clustering.
Then, we present SSL objectives to learn sufï¬cient (and minimal) representations for self-supervision: Deï¬nition 2 (Minimal and Sufï¬cient Representations for Self-supervision). Let Z ssl X be the sufï¬cient self-supervised representation and Z sslmin I(ZX ; S) and Z sslmin
Definition 2 defines our self-supervised representation learning strategy. Now, we are ready to associate the supervised and self-supervised learned representations: Theorem 1 (Task-relevant information with a potential loss â¬jn4.). The supervised learned represen- tations (i.e., ZX? and ZX" ) contain all the task-relevant information in the input (i.e., I(X;T)). The self-supervised learned representations (i.e., Z%$! and Zevon ) contain all the task-relevant information in the input with a potential loss â¬j4.. Formally, 1(X;T) = 1(ZE";T) = (Ze? T) > (ZEST) > (ZS; T) > 1(X;T) â ento-
# X ; T ) ⥠I(Z sslmin
3We do not claim H(ZX |T ) minimization is better than I(ZX ; X) minimization for reducing the complexity In Supplementary, we will show that H(ZX |T ) minimization and I(ZX ; X) in the representations ZX . minimization are interchangeable under our frameworkâs setting.
3
Published as a conference paper at ICLR 2021
Z Fy (Forward) Predictive Learning ; wee |X |S | max Ep, flog PS|Z)] Extracting ay ZyaFyay P84 R x Fy Task-Relevant Info F, Contrastive Learning feconstructior ne ive Learning inwe trast wee. x |S max I(Z,;5) mapping ston obieptve. 3 prediction | objeptive feaiure projection ZF) _ vere Predive Lo ome rapping a) head oscar ane (ae Paa) [5] Sinan wa-(s Pees Task-lrrelevant Info x â(8 imax, Ep,,, (log PZx15)] (nz | Fs G
Figure 2: Remarks on contrastive and predictive learning objectives for self-supervised learning. Between the representation ZX and the self-supervised signal S, contrastive objective performs mutual information maximization and predictive objectives perform log conditional likelihood maximization. We show that the SSL objectives aim at extracting task-relevant and discarding task-irrelevant information. Last, we summarize the computational blocks for practical deployments for these objectives.
When â¬;nf. is small, Theorem | indicates that the self-supervised learned representations can extract almost as much task-relevant information as the supervised one. While when â¬jy¢, is non-trivial, the learned representations may not always lead to good downstream performance. This result has also been observed in prior work (Tschannen et al., 2019) and InfoMin (Tian et al., 2020), which claim the representations with maximal mutual information may not have the best performance. Theorem 2 (Task-irrelevant information with a fixed compression gap I(X;S|T)). The sufficient self-supervised representation (i.e., I(Z%$!;T)) contains more task-irrelevant information in the input than the sufficient and minimal self-supervised representation (i.e., T(ZSmm; T)). The latter contains an amount of the information, I(X;S\T), that cannot be discarded from the input. Formally, 1(Z3; X|T) = 1(X; S|\T) + (ZS; X|S,T) > (ZS; X(T) = 1(X; S|T) > 1(ZEPâ¢â¢; X|T) = 0.
Theorem 2 indicates that a compression gap (i.e., I(X; S|T )) exists when we discard the task- irrelevant information from the input. To be speciï¬c, I(X; S|T ) is the amount of the shared informa- tion between the input and the self-supervised signal excluding the task-relevant information. Hence, I(X; S|T ) would be large if the downstream tasks requires only a portion of the shared information.
2.3 CONNECTIONS WITH CONTRASTIVE AND PREDICTIVE LEARNING OBJECTIVES
Theorem 1 and 2 state that our self-supervised learning strategies (i.e., min H(ZX |S) and max I(ZX ; S) deï¬ned in Deï¬nition 2) can extract task-relevant and discard task-irrelevant informa- tion. A question emerges:
âWhat are the practical aspects of the presented self-supervised learning strategies?â
To answer this question, we present 1) the connections with prior SSL objectives, especially for contrastive (Oord et al., 2018; Bachman et al., 2019; Chen et al., 2020; Tian et al., 2019; Hjelm et al., 2018; He et al., 2019) and predictive (Zhang et al., 2016; Pathak et al., 2016; Vondrick et al., 2016; Tulyakov et al., 2018; Peters et al., 2018; Devlin et al., 2018) learning objectives, showing that these objectives are extracting task-relevant information; and 2) a new inverse predictive learning objective to discard task-irrelevant information. We illustrate important remarks in Figure 2.
Contrastive Learning (is extracting task-relevant information). Contrastive learning objec- tive (Oord et al., 2018) maximizes the dependency/contrastiveness between the learned representa- tion Zx and the self-supervised signal S', which suggests maximizing the the mutual information I(Zx;S). Theorem | suggests that maximizing I(Zx; S}) results in Zx containing (approximately) all the information required for the downstream tasks from the input X. To deploy the contrastive learning objective, we suggest contrastive predictive coding (CPC) (Oord et al., 2018)*, which is a mutual information lower bound with low variance (Poole et al., 2019; Song & Ermon, 2019): @§G(2xi),G(2si))
@§G(2xi),G(2si)) 12 Eee y 201) (2sns%en)~P"(Z3,2x)| > D108 TSR âGaa eee |r weed 82x mo ST GEM Lei :=
where FS : S â Z is a deterministic mapping and G is a project head that projects a representation in Z into a lower-dimensional vector. If the input and self-supervised signals share the same
4Other contrastive learning objectives can be other mutual information lower bounds such as DV-bound or NWJ-bound (Belghazi et al., 2018) or its JS-divergence (Poole et al., 2019; Hjelm et al., 2018) variants. Among different objectives, Tschannen et al. (2019) have suggested that the objectives with large variance (e.g., DV-/NWJ-bound (Belghazi et al., 2018)) may lead to worsen performance compared to the low variance counterparts (e.g., CPC (Oord et al., 2018) and JS-div. (Poole et al., 2019)).
4
Published as a conference paper at ICLR 2021
sample space, i.e., X = S, we can impose FX = FS (e.g., self-supervised visual representation learning (Chen et al., 2020)). The projection head, G, can be an identity, a linear, or a non-linear mapping. Last, we note that modeling equation 1 often requires a large batch size (e.g., large n in equation 1) to ensure a good downstream performance (He et al., 2019; Chen et al., 2020).
Forward Predictive Learning (is extracting task-relevant information). Forward predictive learning encourages the learned representation ZX to reconstruct the self-supervised signal S, which suggests maximizing the log conditional likelihood EPS,ZX [log P (S|ZX )]. By the chain rule, I(ZX ; S) = H(S) â H(S|ZX ), where H(S) is irrelevant to ZX . Hence, maximizing I(ZX ; S) is equivalent to maximizing âH(S|ZX ) = EPS,ZX [log P (S|ZX )], which is the predictive learning objective. Together with Theorem 1, if zx can perfectly reconstruct s for any (s, zx) â¼ PS,ZX , then ZX contains (approximately) all the information required for the downstream tasks from the input X. A common approach to avoid intractability in computing EPS,ZX [log P (S|ZX )] is assuming a variational distribution QÏ(S|ZX ) with Ï representing the parameters in QÏ(·|·). Speciï¬cally, we present to maximize EPS,ZX [log P (S|ZX )]5. QÏ(·|·) can be any distribution such as Gaussian or Laplacian and Ï can be a linear model, a kernel method, or a neural network. Note that the choice of the reconstruction type of loss depends on the distribution type of QÏ(·|·), and is not ï¬xed. For instance, if we let QÏ(S|ZX ) be Gaussian N
Lrep:= max Zx=Fx(X),R Es,2n~Ps,z5 - \|s â R(ze) q , (2)
where R : Z â S is a deterministic mapping to reconstruct S from Z and we ignore the constants derived from the Gaussian distribution. Last, in most real-world applications, the self-supervised signal S has a much higher dimension (e.g., a 224 Ã 224 Ã 3 image) than the representation ZX (e.g., a 64-dim. vector). Hence, modeling a conditional generative model QÏ(S|ZX ) will be challenging.
Inverse Predictive Learning (is discarding task-irrelevant information). Inverse predictive learning encourages the self-supervised signal S to reconstruct the learned representation ZX , which suggests maximizing the log conditional likelihood EPS,ZX [log P (ZX |S)]. Given Theorem 2 together with âH(ZX |S) = EPS,ZX [log P (ZX |S)], we know if s can perfectly reconstruct zx for any (s, zx) â¼ PS,ZX under the constraint that I(ZX ; S) is maximized, then ZX discards the task-irrelevant information, excluding I(X; S|T ). Similar to the forward predictive learning, we use EPS,ZX [log P (ZX |S)]. In our deployment, we take the advantage of the design in equation 1 and let QÏ(ZX |S) be Gaussian N
Lrp := z Ez,,20~ = ||ze â 2s[[3] - 3 zone atk Hy (xx) Beste P25.2x |e â 2s|l2 (3)
Note that optimizing equation 3 alone results in a degenerated solution, e.g., learning ZX and ZS to be the same constant.
Composing SSL Objectives (to extract task-relevant and discard task-irrelevant information simultaneously). So far, we discussed how prior self-supervised learning approaches extract task- relevant information via the contrastive or the forward predictive learning objectives. Our analysis also inspires a new loss, the inverse predictive learning objective, to discard task-irrelevant information. Now, We present a composite loss to combine them together:
LSSL = λCLLCL + λF P LF P + λIP LIP , (4)
where λCL, λF P , and λIP are hyper-parameters. This composite loss enables us to extract task- relevant and discard task-irrelevant information simultaneously.
Eps 7, blog P(SIZx)] = BexEPs,zy los Qo(S|Zx)] + Pee (P(SIZx) I Qg(SIZx)) > BaxERs, gy bes Qo(S1Zx)) The assumption of identity covariance in the Gaussian is only a particular parameterization of the distri- bution Q(-|-). Other examples are MocoGAN (Tulyakov et al., 2018), which assumes Q is Laplacian (i.e., ¢1 reconstruction loss) and ¢ is a deconvolutional network (Long et al., 2015). Transformer-XL (Dai et al., 2019) assumes Q is a categorical distribution (i.e., cross entropy loss) and ¢ is a Transformer network (Vaswani et al., 2017). Although Gaussian with diagonal covariance is not the best assumption, it is perhaps the simplest one.
5
Published as a conference paper at ICLR 2021
2.4 THEORETICAL ANALYSIS - BAYES ERROR RATE FOR DOWNSTREAM CLASSIFICATION
In last subsection, we see the practical aspects of our designed SSL strategies. Now, we provide an theoretical analysis on the representationsâ generalization error when T is a categorical variable . We use Bayes error rate as an example, which stands for the irreducible error (smallest generalization error (Feder & Merhav, 1994)) when learning an arbitrary classiï¬er from the representation to infer the labels. In speciï¬c, let Pe be the Bayes error rate of arbitrary learned representations ZX and ËT as the estimation for T from our classiï¬er, Pe := Ezxâ¼PZX
To begin with, we present a general form of sample complexity with mutual information (I(Zx; S)) estimation using empirical samples from distribution Pz, 5. Let PY g denote the (uniformly sampled) empirical distribution of Pz, 5 and if) (Zx; S) = E,m [fo(zx,s)] with fy being the ZxiS estimated log density ratio (i.e., log p(s|z.)/p(s)). Proposition 1 (Mutual Information Neural Estimation, restating Theorem 1| by Tsai et al. (2020)). Let 0 <6 <1. There exists d ⬠N and a family of neural networks F := {fg : 9 ⬠© C Râ} where 0 is compact, so that 36* ⬠©, with probability at least 1 â 6 over the draw of {22:;, 8i}f) ~ PB , 7) (Zx:8) âI(Zx:8)| <0 (see).
This proposition shows that there exists a neural network 6*, with high probability, 7 (Zx;S) can approximate I(Zx; S) with n samples at rate O(1/\/n). Under this network 6* and the same parameters d and 6, we are ready to present our main results on the Bayes error rate. Formally, let |T'| be Tâs cardinalitiy and Th(x) = min {max {x,0}, 1 â 1/|T|} as a thresholding function: Theorem 3 (Bayes Error Rates for Arbitrary Learned Representations). For an arbitrary learned representations Zx, P. = Th(Pe) with
P.<1-exp ( - (H(r) + I(X;S|T) + 1(Z;X|S8,7) â 5&2 (Zx;8) + O( ct eat/)))). n
Given arbitrary learned representations (Z ), Theorem 3 suggests the corresponding Bayes error rate (P.) is small when: 1) the estimated mutual information (TP (Zx; S)) is large; 2) a larger number of samples n are used for estimating the mutual information; and 3) the task-irrelevant information (the compression gap I(X; S|Tâ) and the superfluous information I(Z; X|S,T), defined in Theorem 2) is small. The first and the second results supports the claim that maximizing [(Zx;S) may learn the representations that are beneficial to downstream tasks. The third result implies the learned representations may perform better on the downstream task when the compression gap is small. Additionally, 7°" is preferable than Z**! since I(Z**'"; X|S,T) = 0 and 1(Z*!; X|S,T) > 0. Theorem 4 (Bayes Error Rates for Self-supervised Learned Representations). Let P3"P/P3*!/psslmin be the Bayes error rate of the supervised or the self-supervised learned representations ZY ZZ", Then, Ps! = Th(P*) and Ps!» = Th(P*â¢Â») with
# X /Z sslmin log (1 â P sup
log (1 â Pe"P) + log 2 < {Pel Pslmin} < 1 â exp ( â (log 2 + P2"? - log |T| + into ): log (|T|) <{ ys â !
â
Given our self-supervised learned representations (Z%$! and Zien), Theorem 4 suggests a smaller upper bound of Ps*! (or Ps*!â¢=) when the redundancy between the input and the self-supervised signal (â¬jnfo, defined in Assumption 1) is small. This result implies the self-supervised learned representations may perform better on the downstream task when the multi-view redundancy is small.
# 3 CONTROLLED EXPERIMENTS
This section aims at providing empirical supports for Theorems 1 and 2 and comparing different SSL objectives. In particular, we present information inequalities in Theorems 1 and 2 regarding the amount of the task-relevant and the task-irrelevant information that will be extracted and discarded when learning self-supervised representations. Nonetheless, quantifying the information is notoriously hard and often leads to inaccurate quantiï¬cations in practice (McAllester & Stratos, 2020; Song &
6
.
Published as a conference paper at ICLR 2021
(a) Omniglot (Contrastive with Inverse Predictive) (b) CIFAR10 (Contrastive with Inverse Predictive) -(c) Omniglot (Contrastive & Forward Predictive) a3 59 oe a Baa 5 8 8 Zao tx 40.0059 (Contrastive +Forward Predictive) Test Accuracy Test Accuracy ss B 209) Lex only â ta (Contrastive) Lex, only sos â Lp (Forward Predictive) Appin Ley + Ayplap Appin Ley + Aplip Training Epoch
Figure 3: Comparisons for different compositions of SSL objectives on Omniglot and CIFAR10.
Ermon, 2019). Not to mention the information we aim to quantify is the conditional information, which is believed to be even more challenging than quantifying the unconditional one (Póczos & Schneider, 2012). To address this concern, we instead study the generalization error of the self- supervised learned representations, theoretically (Bayes error rate discussed in Section 2.4) and empirically (test performance discussed in this section).
Another important aspect of the experimental design is examining equation 4, which can be viewed as a Lagrangian relaxation to learn representations that contain minimal and sufï¬cient self-supervision (see Deï¬nition 2): a weighted combination between I(ZX ; S) and âH(ZX |S). In particular, the contrastive loss LCL and the forward-predictive loss LF P represent different realizations of modeling I(ZX ; S) and the inverse-predictive loss LF P represents a realization of modeling âH(ZX |S).
We design two sets of experiments: The ï¬rst one is when the input and self-supervised signals lie in the same modality (visual) and are likely to satisfy the multi-view redundancy assumption (Assumption 1). The second one is when the input and self-supervised signals lie in very different modalities (visual and textual), thus challenging the SSL objectiveâs generalization ability. Experiment I - Visual Representation Learning. We use Omniglot dataset (Lake et al., 2015) 7 in this experiment. The training set contains images from 964 characters, and the test set contains 659 characters. There are no characters overlap between the training and test set. Each character contains twenty examples drawn from twenty different people. We regard image as input (X) and generate self-supervised signal (S) by ï¬rst sampling an image from the same character as the input image and then applying translation/ rotation to it. Furthermore, we represent task-relevant information (T ) by the labels of the image. Under this self-supervised signal construction, the exclusive information in X or S are drawing styles (i.e., by different people) and image augmentations, and only their shared information contribute to T . To formally show the later, if T representing the label for X/S, then P (T |X) and P (T |S) are Dirac. Hence, T â¥â¥ S|X and T â¥â¥ X|S, suggesting Assumption 1 holds.
We train the feature mapping F'x (-) with SSL objectives (see eq. equation 4), set F's(-) = Fx(-), let R(-) be symmetrical to F'y(-), and G(-) be an identity mapping. On the test set, we fix the mapping and randomly select 5 examples per character as the labeled examples. Then, we classify the rest of the examples using the 1-nearest neighbor classifier based on feature (i.e. Zx = Fx (X)) cosine similarity. The random performance on this task stands at a = 0.15% . One may refer to Supplementary for more details. >_ Results & Discussions. In Figure 3, we evaluate the generalization ability on the test set for different SSL objectives. First, we examine how the introduced inverse predictive learning objective L;p can help improve the performance along with the contrastive learning objective Lc,. We present the results in Figure 3 (a) and also provide experiments with SimCLR (Chen et al., 2020) on CIFAR 10 (Krizhevsky et al., 2009) in Figure 3 (b), where \; p = 0 refers to the exact same setup as in SimCLR (which considers only Loy). We find that adding L; p in the objective can boost model performance, although being sensitive to the hyper-parameter A; p. According to Theorem 2, the improved performance suggests a more compressed representation results in better performance for the downstream tasks. Second, we add the discussions with the forward predictive learning objective Lyp. We present the results in Figure 3 (c). Comparing to Ly p, Loy, 1) reaches better test accuracy; 2) requires shorter training epochs to reach the best performance; and 3) suffers from overfitting with long-epoch training. Combining both of them (Lc, + 0.005Lp) brings their advantages together. Experiment II - Visual-Textual Representation Learning. We provide experiments using MS COCO dataset (Lin et al., 2014) that contains 328k multi-labeled images with 2.5 million labeled
7More complex datasets such as CIFAR10 (Krizhevsky et al., 2009) or ImageNet (Deng et al., 2009), to achieve similar performance, require a much larger training scale from contrastive to forward predictive objective. For example, on ImageNet, MoCo (He et al., 2019) uses 8 GPUs for its contrastive objective and ImageGPT (Chen et al.) uses 2048 TPUs for its forward predictive objective. We choose the Omniglot to ensure fair comparisons among different self-supervised learning objectives under reasonable computation constraint.
7
Published as a conference paper at ICLR 2021
(b) Raw BERT + Pre-trained ResNet (Contrastive with Inverse Predictive) _ 3 = ons. fan only Micro ROO-AUG âSubset Accuracy TP Apogee phic, + Aplp
(a) MS COCO (Using LCL as SSL objective) Setting Micro ROC-AUC Cross-modality Self-supervised Learning Raw BERT + Raw ResNet Pre-trained BERT + Raw ResNet Raw BERT + Pre-trained ResNet Pre-trained BERT + Pre-trained ResNet 0.5963 ± 0.0034 0.5915 ± 0.0035 0.7049 ± 0.0040 0.7065 ± 0.0026 Non Self-supervised Learning Only Pre-trained ResNet 0.6761 ± 0.0045 Subset Acc. 0.0166 ± 0.0017 0.0163 ± 0.0011 0.2081 ± 0.0063 0.2123 ± 0.0040 0.1719 ± 0.0015
Figure 4: Comparisons for different settings on self-supervised visual-textual representation training. We report metrics on MS COCO validation set with mean and standard deviation from 5 random trials.
instances from 91 objects. Each image has 5 annotated captions describing the relationships between objects in the scenes. We regard image as input (X) and its textual descriptions as self-supervised signal (S). Since vision and text are two very different modalities, the multi-view redundancy may not be satisfied, which means én. may be large in Assumption 1.
We adopt Ler (+A;pLip) as our SSL objective. We use ResNet18 (He et al., 2016) image encoder for Fx (-) (trained from scratch or fine-tuned on ImageNet (Deng et al., 2009) pre-trained weights), BERT- uncased (Devlin et al., 2018) text encoder for Fâs(-) (trained from scratch or BookCorpus (Zhu et al., 2015)/Wikipedia pre-trained weights), and a linear layer for G(-). After performing self-supervised visual-textual representation learning, we consider the downstream multi-label classification over 91 categories. We evaluate learned visual representation (Zx ) using downstream linear evaluation protocol (Oord et al., 2018; Hénaff et al., 2019; Tian et al., 2019; Hjelm et al., 2018; Bachman et al., 2019; Tschannen et al., 2019). Specifically, a linear classifier is trained from the self-supervised learned (fixed) representation to the labels on the training set. Commonly used metrics for multi-label classification are reported on MS COCO validation set: Micro ROC-AUC and Subset Accuracy. One may refer to Supplementary for more details on these metrics. > Results & Discussions. First, Figure 4 (a) suggests that the SSL strategy can still work when the input and self-supervised signals lie in different modalities. For example, pre-trained ResNet with BERT (either raw or the pre-trained one) outperforms pre-trained ResNet alone. We also see that the self-supervised learned representations benefit more if the ResNet is pre-trained but not the BERT. This result is in accord with the fact that object recognition requires more understanding in vision, and hence the pre-trained ResNet is preferrable than the pre-trained BERT. Next, Figure 4 (b) suggests that the self-supervised learned representations can be further improved by combining Lo, and Lp, suggesting L;p may be a useful objective to discard task-irrelevant information.
Remarks on λIP and LIP . As observed in the experimental results, λIP is a sensitive hyper- parameter to the performance. We provide an optimization perspective to address this concern. Note that one of the our goals is to examine the setting when learning the minimal and sufï¬cient representations for self-supervision (see Deï¬nition 2): minimize H(ZX |S) under the constraint that I(ZX ; S) is maximized. However, this constrained optimization is not feasible when considering gradients methods in neural networks. Hence, our approach can be seen as its Lagrangian Relaxation by a weighted combination between LCL (or LF P , representing I(ZX ; S)) and LIP (representing H(ZX |S)) with the λIP being the Lagrangian coefï¬cient.
The optimal λIP can be obtained by solving the Lagrangian dual, which depends on the parametriza- tion of LCL (or LF P ) and LIP . Different parameterizations lead to different loss and gradient landscapes, and hence the optimal λIP differs across experiments. This conclusion is veriï¬ed by the results presented in Figure 3 (a) and (b) and Figure 4 (b). Lastly, we point out that even not solving the Lagrangian dual, an empirical observation across experiments is that λIP which leads to the best performance is when the scale of LIP is one-tenth to the scale of LCL (or LF P ).
# 4 RELATED WORK
Prior work by Arora et al. (2019) and the recent concurrent work (Lee et al., 2020; Tosh et al., 2020) are landmarks for theoretically understanding the success of SSL. In particular, Arora et al. (2019); Lee et al. (2020) showed a decreased sample complexity for downstream supervised tasks when adopting contrastive learning objectives (Arora et al., 2019) or predicting the known information in the data (Lee et al., 2020). Tosh et al. (2020) showed that the linear functions of the learned representations are nearly optimal on downstream prediction tasks. By viewing the input and the self-supervised signal as two corresponding views of the data, we discuss the differences among these works and ours. On the one hand, the work by Arora et al. (2019); Lee et al. (2020) assume strong independence between the views conditioning on the downstream tasks , i.e., I(X; S|T ) â 0.
8
Published as a conference paper at ICLR 2021
On the other hand, the work by Tosh et al. (2020) and ours assume strong independence between the downstream task and one view conditioning on the other view, i.e., I(T ; X|S) â 0. Prior work (Balcan et al., 2005; Du et al., 2010) have compared these two assumptions and pointed out the former one (I(X; S|T ) â 0) is too strong and not likely to hold in practice. We note that all these related work and ours have shown that the self-supervised learning methods are learning to extract task-relevant information. Our work additionally presents to discard task-irrelevant information and quantiï¬es the amount of information that cannot be discarded.
Our method also resembles the InfoMax principle (Linsker, 1988; Hjelm et al., 2018) and the Multi- view Information Bottleneck method (Federici et al., 2020). The InfoMax principle aims at preserving the information of itself, while ours aims at extracting the information in the self-supervised signal. On the other hand, to reduce the redundant information across views, the Multi-view Information Bottleneck method proposed to minimize the conditional mutual information I(ZX ; X|S) , while ours propose to minimize the conditional entropy H(ZX |S). The conditional entropy minimization problem can be easily optimized via our proposed inversed predictive learning objective.
Another related work is InfoMin (Tian et al., 2020), where both InfoMin and our method suggest to learn the representations that contain ânotâ too much information. In particular, InfoMin presents to augment the data (i.e., by constructing learnable data augmentations) such that the shared information between augmented variants is as minimal as possible, followed by the mutual information maxi- mization between the learned features from the augmented variants. Our method instead considers standard augmentations (e.g., rotations and translations), followed by learning representations that contain no more than the shared information between the augmented variants of the data.
On the empirical side, we explain why contrastive (Oord et al., 2018; Bachman et al., 2019; Chen et al., 2020) and predictive learning (Zhang et al., 2016; Pathak et al., 2016; Vondrick et al., 2016; Chen et al.) approaches can unsupervised extract task-relevant information. Different from these work, we present an objective to discard task-irrelevant information and show its combination with existing contrastive or predictive objectives beneï¬ts the performance.
# 5 CONCLUSION
This work studies both theoretical and empirical perspectives on self-supervised learning. We show that the self-supervised learned representations could extract task-relevant information (with a potential loss) and discard task-irrelevant information (with a ï¬xed gap), along with their practical deployments such as contrastive and predictive learning objectives. We believe this work sheds light on the advantages of self-supervised learning and may help better understand when and why self-supervised learning is likely to work. In the future, we plan to connect our framework and recent SSL methods that cannot be easily ï¬t into our analysis: e.g., BYOL (Grill et al., 2020), SWAV (Caron et al., 2020), and Unifromality-Alignment (Wang & Isola, 2020).
# ACKNOWLEDGEMENT
This work was supported in part by the NSF IIS1763562, NSF Awards #1750439 #1722822, Na- tional Institutes of Health, IARPA D17PC00340, ONR Grant N000141812861, and Facebook PhD Fellowship. We would also like to acknowledge NVIDIAâs GPU support.
# REFERENCES
Alessandro Achille and Stefano Soatto. Emergence of invariance and disentanglement in deep representations. The Journal of Machine Learning Research, 19(1):1947â1980, 2018.
Relja Arandjelovic and Andrew Zisserman. Look, listen and learn. In Proceedings of the IEEE International Conference on Computer Vision, pp. 609â617, 2017.
Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, and Nikunj Saunshi. A theoretical analysis of contrastive unsupervised representation learning. arXiv preprint arXiv:1902.09229, 2019.
Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, pp. 15509â15519, 2019.
9
Published as a conference paper at ICLR 2021
Maria-Florina Balcan, Avrim Blum, and Ke Yang. Co-training and expansion: Towards bridging theory and practice. In Advances in neural information processing systems, pp. 89â96, 2005.
Peter L Bartlett. The sample complexity of pattern classiï¬cation with neural networks: the size of the weights is more important than the size of the network. IEEE transactions on Information Theory, 44(2):525â536, 1998.
Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and R Devon Hjelm. Mine: mutual information neural estimation. arXiv preprint arXiv:1801.04062, 2018.
Cristian S Calude. Information and randomness: an algorithmic perspective. Springer Science & Business Media, 2013.
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised learning of visual features by contrasting cluster assignments. arXiv preprint arXiv:2006.09882, 2020.
Mark Chen, Alec Radford, Rewon Child, Jeff Wu, and Heewoo Jun. Generative pretraining from pixels.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.
Thomas M Cover and Joy A Thomas. Elements of information theory. John Wiley & Sons, 2012.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a ï¬xed-length context. arXiv preprint arXiv:1901.02860, 2019.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248â255. Ieee, 2009.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Jun Du, Charles X Ling, and Zhi-Hua Zhou. When does cotraining work in real data? IEEE Transactions on Knowledge and Data Engineering, 23(5):788â799, 2010.
Tom Fawcett. An introduction to roc analysis. Pattern recognition letters, 27(8):861â874, 2006.
Meir Feder and Neri Merhav. Relations between entropy and error probability. IEEE Transactions on Information Theory, 40(1):259â266, 1994.
M Federici, A Dutta, P Forré, N Kushmann, and Z Akata. Learning robust representations via multi-view information bottleneck. International Conference on Learning Representation, 2020.
Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722, 2019.
Olivier J Hénaff, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-efï¬cient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019.
R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.
Kurt Hornik, Maxwell Stinchcombe, Halbert White, et al. Multilayer feedforward networks are universal approximators.
Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009.
Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332â1338, 2015.
10
Published as a conference paper at ICLR 2021
Jason D Lee, Qi Lei, Nikunj Saunshi, and Jiacheng Zhuo. Predicting what you already know helps: Provable self-supervised learning. arXiv preprint arXiv:2008.01064, 2020.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision, pp. 740â755. Springer, 2014.
Ralph Linsker. Self-organization in a perceptual network. Computer, 21(3):105â117, 1988.
Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431â3440, 2015.
David McAllester and Karl Stratos. Formal limitations on the measurement of mutual information. In Interna- tional Conference on Artiï¬cial Intelligence and Statistics, pp. 875â884, 2020.
Sudipto Mukherjee, Himanshu Asnani, and Sreeram Kannan. Ccmi: Classiï¬er based conditional mutual information estimation. In Uncertainty in Artiï¬cial Intelligence, pp. 1083â1093. PMLR, 2020.
Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vision, pp. 69â84. Springer, 2016.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2536â2544, 2016.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
Barnabás Póczos and Jeff Schneider. Nonparametric estimation of conditional information and divergences. In Artiï¬cial Intelligence and Statistics, pp. 914â923. PMLR, 2012.
Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander A Alemi, and George Tucker. On variational bounds of mutual information. arXiv preprint arXiv:1905.06922, 2019.
Shai Shalev-Shwartz and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge university press, 2014.
Jiaming Song and Stefano Ermon. Understanding the limitations of variational mutual information estimators. arXiv preprint arXiv:1910.06222, 2019.
Mohammad S Sorower. A literature survey on algorithms for multi-label learning.
Karthik Sridharan and Sham M Kakade. An information theoretic framework for multi-view learning. 2008.
Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for good views for contrastive learning. arXiv preprint arXiv:2005.10243, 2020.
Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000.
Christopher Tosh, Akshay Krishnamurthy, and Daniel Hsu. Contrastive learning, multi-view redundancy, and linear models. arXiv preprint arXiv:2008.10150, 2020.
Yao-Hung Hubert Tsai, Han Zhao, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhutdinov. Neural methods for point-wise dependency estimation. arXiv preprint arXiv:2006.05553, 2020.
Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. arXiv preprint arXiv:1907.13625, 2019.
Sergey Tulyakov, Ming-Yu Liu, Xiaodong Yang, and Jan Kautz. Mocogan: Decomposing motion and content for video generation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1526â1535, 2018.
11
Published as a conference paper at ICLR 2021
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998â6008, 2017.
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Generating videos with scene dynamics. In Advances in neural information processing systems, pp. 613â621, 2016.
Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. arXiv preprint arXiv:2005.10242, 2020.
Chang Xu, Dacheng Tao, and Chao Xu. A survey on multi-view learning. arXiv preprint arXiv:1304.5634, 2013.
Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European conference on computer vision, pp. 649â666. Springer, 2016.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pp. 19â27, 2015.
A REMARKS ON LEARNING MINIMAL AND SUFFICIENT REPRESENTATIONS
In the main text, we discussed the objectives to learn minimal and sufï¬cient representations (Deï¬nition 1). Here, we discuss the similarities and differences between the prior methods (Tishby et al., 2000; Achille & Soatto, 2018) and ours. First, to obtain sufï¬cient representations (for the downstream task T ), all the methods presented to maximize I(ZX ; T ). Then, to maintain minimal amount of information in the representations, the prior methods (Tishby et al., 2000; Achille & Soatto, 2018) presented to minimize I(ZX ; X) and the ours presents to minimize H(ZX |T ). Our goal is to relate I(ZX ; X) minimization and H(ZX |T ) minimization in our framework.
To begin with, under the constraint I(ZX ; T ) is maximized, we see that minimizing I(ZX ; X) is equivalent to minimizing I(ZX ; X|T ). The reason is that I(ZX ; X) = I(ZX ; X|T ) + I(ZX ; X; T ), where I(ZX ; X; T ) = I(ZX ; T ) due to the determinism from X to ZX (our framework learns a deterministic function from X to ZX ) and I(ZX ; T ) is maximized in our constraint. Then, I(ZX ; X|T ) = H(ZX |T ) â H(ZX |X, T ), where H(ZX |T ) contains no randomness (no infor- mation) as ZX being deterministic from X. Hence, I(ZX ; X|T ) minimization and H(ZX |T ) minimization are interchangeable.
The same claim can be made from the downstream task T to the self-supervised signal S. In speciï¬c, when X to ZX is deterministic, I(ZX ; X|S) minimization and H(ZX |S) minimization are interchangeable. As discussed in the related work section, for reducing the amount of the redundant information, Federici et al. (2020) presented to use I(ZX ; X|S) minimization and ours presented to use H(ZX |T ) minimization. We also note that directly minimizing the conditional mutual information (i.e., I(ZX ; X|S)) requires a min-max optimization (Mukherjee et al., 2020), which may cause instability in practice. To overcome the issue, Federici et al. (2020) assumes a Gaussian encoder for X â ZX and presents an upper bound of the original objective.
# B PROOFS FOR THEOREM 1 AND 2
We start by presenting a useful lemma from the fact that FX (·) is a deterministic function: Lemma 1 (Determinism). If P (ZX |X) is Dirac, then the following conditional independence holds: T â¥â¥ ZX |X and S â¥â¥ ZX |X, inducing a Markov chain S â T â X â ZX .
Proof. When ZX is a deterministic function of X, for any A in the sigma-algebra induced by ZX we have E[1[ZX âA]|X, {T, S}] = E[1[ZX âA]|X, S] = E[1[ZX âA]|X], which implies T â¥â¥ ZX |X and S â¥â¥ ZX |X.
Theorem 1 and 2 in the main text restated:
12
Published as a conference paper at ICLR 2021
Theorem 5 (Task-relevant information with a potential loss â¬j,4., restating Theorem | in the main text). The supervised learned representations (i.e., I(ZÂ¥?;T) and I(Z?"""; T)) contain all the task-relevant information in the input (i.e., I(X;T)). The self-supervised learned representations (i.e., I(Z$!;T) and T(Zgi; T)) contain all the task-relevant information in the input with a potential loss info. Formally,
# I(X; T ) = I(Z sup
# X ; T ) = I(Z supmin
I(X;T) = 1(Z¥°;T) = (ZY; T) > U(ZE;T) > UZ" T) > 1(X;T) = einto-
Proof. The proofs contain two parts. The ï¬rst one is showing the results for the supervised learned representations and the second one is for the self-supervised learned representations.
Supervised Learned Representations: Adopting Data Processing Inequality (DPI by Cover & Thomas (2012)) in the Markov chain S â T â X â ZX (Lemma 1), I(ZX ; T ) is maximized at I(X; T ). Since both supervised learned representations (Z sup ) maximize I(ZX ; T ), we conclude I(Z sup
# Self-supervised Learned Representations: First, we have
I(ZX ; S) = I(ZX ; T ) â I(ZX ; T |S) + I(ZX ; S|T ) = I(ZX ; T ; S) + I(ZX ; S|T ) and I(X; S) = I(X; T ) â I(X; T |S) + I(X; S|T ) = I(X; T ; S) + I(X; S|T ). By DPI in the Markov chain S â T â X â ZX (Lemma 1), we know
I(ZX ; S) is maximized at I(X; S)
I(ZX ; S; T ) is maximized at I(X; S; T )
I(ZX ; S|T ) is maximized at I(X; S|T )
X and Z sslmin X ; S; T ) = I(Z sslmin Since both self-supervised learned representations (Z ssl I(Z ssl I(Z ssl I(X; S; T ), we get ) maximize I(ZX ; S), we have ; S; T ) = I(X; S; T ) and ; S; T ) = X X ; S) = I(Z sslmin X ; S|T ) = I(Z sslmin ; S) = I(X; S). Hence, I(Z ssl X X ; S|T ) = I(X; S|T ). Using the result I(Z ssl X ; S; T ) = I(Z sslmin X X
I(Z ssl X ; T ) = I(X; T ) â I(X; T |S) + I(Z ssl X ; T |S) I(Z sslmin X ; T ) = I(X; T ) â I(X; T |S) + I(Z sslmin X ; T |S).
and
Now, we are ready to present the inequalities:
1. I(X; T ) ⥠I(Z ssl X ; T ) due to I(X; T |S) ⥠I(Z ssl X ; T |S) by DPI.
X ; T ) ⥠I(Z sslmin X ; T |S) ⥠I(Z sslmin 2. I(Z ssl ; T ) due to I(Z ssl X minimized at Z sslmin X , I(Z sslmin X ; T |S) = 0. X ; T |S) = 0. Since H(ZX |S) is
3. I(Z sslmin
1(Z3'â¢";T)
> I(X;T) â einfo due to
# X
# I(X; T ) â I(X; T |S) + I(Z sslmin
# X
T|S) > 1(X;T) â 1(X;T S) 2 U(X;T) â eintos
where I(X;T|S) < éinto by the redundancy assumption.
Theorem 6 (Task-irrelevant information with a ï¬xed compression gap I(X; S|T ), restating Theorem 2 in the main text). The sufï¬cient self-supervised representation (i.e., I(Z ssl X ; T )) contains more task- irrelevant information in the input than then the sufï¬cient and minimal self-supervised representation (i.e., I(Z sslmin ; T )). The later contains an amount of the information, I(X; S|T ), that cannot be discarded from the input. Formally, X ; X|T ) = I(X; S|T ) + I(Z ssl I(Z ssl
13
= 0.
Published as a conference paper at ICLR 2021
Proof. First, we see that
I(ZX ; X|T ) = I(ZX ; X; S|T ) + I(ZX ; X|S, T ) = I(ZX ; S|T ) + I(ZX ; X|S, T ), where I(ZX ; X; S|T ) = I(ZX ; S|T ) by DPI in the Markov chain S â T â X â ZX . We conclude the proof by combining the following:
⢠From the proof in Theorem 5, we showed I(Z ssl X ; S|T ) = I(Z sslmin X ; S|T ) = I(X; S|T ).
Since H(ZX |S) is minimized at Z sslmin ⢠Since H(ZX |T ) is minimized at Z supmin
# , I(Z sslmin X , I(Z supmin X
; X|S, T ) = 0.
# X
; X|T ) = 0.
# X
# C PROOF FOR PROPOSITION 1
Proposition 2 (Mutual Information Neural Estimation, restating Proposition 1 in the main text). Let 0 <6 <1. There exists d ⬠N and a family of neural networks F := {fg : 9 ⬠© C R*} where © is compact, so that 36* ⬠©, with probability at least 1 â 6 over the draw of {2xi, 8i}P1 ~ PZâ gy < o( [eee 7) (Zx;8) â I(Zx;8)
Sketch of Proof. The proof is a standard instance of uniform convergence bound. First, we assume the boundness and the Lipschitzness of Ëfθ. Then, we use the universal approximation lemma of neural networks (Hornik et al.). Last, combing all these two along with the uniform convergence in terms of the covering number (Bartlett, 1998), we complete the proof.
We note that the complete proof can be found in the prior work (Tsai et al., 2020). An alterna- tive but similar proof can be found in another prior work (Belghazi et al., 2018), which gives
us lis? (2x S) âI(Zx;5)| < O (ees), The subtle difference between them is that, given a neural network function space © C R® and its covering number (0,7), Tsai et al. (2020) has N(0,n) = o((n)-*) by Bartlett (1998) and Belghazi et al. (2018) has N(®,n) = O((n/va)~*) by Shalev-Shwartz & Ben-David (2014). Both are valid and the one used by Tsai et al. (2020) is tighter.
# D PROOFS FOR THEOREM 3 AND 4
To begin with, we see that
I(ZX ; T ) = I(ZX ; X) â I(ZX ; X|T ) + I(ZX ; T |X) = I(ZX ; X) â I(ZX ; X|T ) = I(ZX ; S) â I(ZX ; S|X) + I(ZX ; X|S) â I(ZX ; X|T ) = I(ZX ; S) + I(ZX ; X|S) â I(ZX ; X|T ) ⥠I(ZX ; S) â I(ZX ; X|T ),
where I(ZX ; T |X) = I(ZX ; S|X) = 0 due to the determinism from X to ZX . Then, in the proof of Theorem 6, we have shown I(ZX ; X|T ) = I(ZX ; S|T ) + I(ZX ; X|S, T ). Hence,
I(ZX ; T ) ⥠I(ZX ; S) â I(ZX ; S|T ) â I(ZX ; X|S, T ) ⥠I(ZX ; S) â I(X; S|T ) â I(ZX ; X|S, T ),
where I(ZX ; S|T ) ⤠I(X; S|T ) by DPI.
Theorem 3 and 4 in the main text restated:
14
Published as a conference paper at ICLR 2021
Theorem 7 (Bayes Error Rates for Arbitrary Learned Representations, restating Theorem 3 in the main text). For an arbitrary learned representations ZX , Pe = Th( ¯Pe) with
)
_ ~ (1) +1068(0)+12:X]8.0)-1? Z:8)40 (JA ) Pe <1âexp
Proof. We use the inequality between Pe and H(T |ZX ) indicated by Feder & Merhav (1994):
âlog(1 â Pe) ⤠H(T |ZX ).
Combining with I(ZX ; T ) = H(T ) â H(T |ZX ) and I(ZX ; T ) ⥠I(ZX ; S) â I(X; S|T ) â I(ZX ; X|S, T ), we have
log(1 â Pe) ⥠âH(T ) + I(ZX ; S) â I(X; S|T ) â I(ZX ; X|S, T ).
Hence,
~ (ery +1068(7)+12:X|8.0)-112x:8)) Pe <1 expâ (MM HESIMATENIST) 1:8)
Next, by deï¬nition of the Bayes error rate, we know 0 ⤠Pe ⤠1 â 1
ae |i?) (2x38) â (2x38)
We conclude the proof by combining Proposition 2, |i?) (2x38) â (2x38) < O (24).
Theorem 8 (Bayes Error Rates for Self-supervised Learned Representations, restating Theorem 4 in /P ssl the main text). Let P sup e /P sslmin be the Bayes error rate of the supervised or the self-supervised e learned representations Z sup X /Z ssl ) with X ) + log 2
_ log (1 = P2"P) + log 2 < {PS!, Pham} <1 â exp lost Pe log P+ eto), log ((Z)
Proof. We use the two inequalities between Pe and H(T |ZX ) by Feder & Merhav (1994) and Cover & Thomas (2012):
# âlog(1 â Pe) ⤠H(T |ZX )
and
H(T |ZX ) ⤠log 2 + Pelog|T |.
Combining the results from Theorem 5: I(Z sup X ; T ) ⥠I(Z ssl
T(ZÂ¥;T) =U ZEIT) > UZ"; T) > (ZY: T) = einto,
we have
⢠the upper bound of the self-supervised learned representationsâ Bayes error rate:
{-log(1 = PS"), -log(1 = PS)} < (H(L|Z8), H(T|Z%"")} < A(T|ZX") + einto < log 2+ PS"Plog|T| + into,
# 2+ log |T|+eint0)
} ⤠1 â expâ(log 2+P sup which suggests {P ssl e , P sslmin e e
the lower bound of the self-supervised learned representationsâ Bayes error rate: ) ⤠H(T |Z sup X ) X ), H(T |Z sslmin ⤠{H(T |Z ssl ⤠{log 2 + P ssl
âlog(1 â P sup e )} X e log|T |, ⤠{log 2 + P sslmin e , P sslmin e log|T |}, e which suggests â log (1âP sup )+log 2 ⤠{P ssl }. e log (|T |)
We conclude the proof by having Pe lie in the feasible range: 0 ⤠Pe ⤠1 â 1
|T | .
15
<
Published as a conference paper at ICLR 2021
# E TIGHTER BOUNDS FOR THE BAYES ERROR RATES
We note that the bound used in Theorems 7 and 8: âlog(1 â Pe) ⤠H(T |ZX ) ⤠log 2 + Pelog|T | is not tight. A tighter bound is H â(Pe) ⤠H(T |ZX ) ⤠H +(Pe) with
-1 k k H-(P.) = H(k(1 =P.) + &(1 = Pa)log & when SPS pp ISkS|T-1, and H* (Pe) := H(Pe) + Pelog (|T| â 1), where H(x) = âazlog(x) â (1 â x)log(1 â x). It is clear that âlog(1 â P.) < H~(P.) and H*(P.) < log2 + P-log(|T]).
Hence, Theorem 7 and 8 can be improved as follows: Theorem 9 (Tighter Bayes Error Rates for Arbitrary Learned Representations). For an arbitrary learned representations ZX , Pe = Th( ¯Pe) with ¯Pe ⤠Peupper. Peupper is derived from the program
5,7) +0( 4+ B8/>)) arg max H~(P.) < H(T) â ££ (Z@!; 8) + I(X; S|T) + I(Zx;X
Theorem 10 (Tighter Bayes Error Rates for Self-supervised Learned Representations). Let P sup e /P sslmin /P ssl be the Bayes error rate of the supervised or the self-supervised learned rep- e e resentations Z sup X /Z sslmin e ) and P sslmin ssl upper. } ⤠Pe
# e = Th( ¯P ssl e , ¯P sslmin
ssl lower is derived from the following program
PS!
# Pe
arg min P ssl e H â(P sup e ) ⤠H +(P ssl e )
ssl upper is derived from the following program
# and P|...
arg max H~ (PS!) < H+ (PS?) + â¬into- Ps
# F MORE ON VISUAL REPRESENTATION LEARNING EXPERIMENTS
In the main text, we design controlled experiments on self-supervised visual representation learning to empirically support our theorem and examine different compositions of SSL objectives. In this section, we will discuss 1) the architecture design; 2) different deployments of contrastive/ forward predictive learning; and 3) different self-supervised signal construction strategy. We argue that these three additional set of experiments may be interesting future work.
# F.1 ARCHITECTURE DESIGN
The input image has size 105 à 105. For image augmentations, we adopt 1) rotation with degrees from â10⦠to +10â¦; 2) translation from â15 pixels to +15 pixels; 3) scaling both width and height from 0.85 to 1.0; 4) scaling width from 0.85 to 1.25 while ï¬xing the height; and 5) resizing the image to 28 à 28. Then, a deep network takes a 28 à 28 image and outputs a 1024âdim. The deep network has the structure: Conv â BN â ReLU â Conv â BN â ReLU â MaxPool â Conv â BN â ReLU â MaxPool â Conv âBN â ReLU â MaxPool â Flatten â Linear â L2Norm. 3x3 ker- and Linear nel to FX (·), which has is Linear â BN â ReLU â UnFlatten â DeConv â BN â ReLU â DeConv â BN â ReLU â DeConv âBN â ReLU â DeConv. R(·) has the exact same number of parameters as FX (·). Note that we use the same network designs in I(·, ·) and H(·|·) estimations. To reproduce the results in our experimental section, please refer to our released code8.
8https://github.com/yaohungt/Self_Supervised_Learning_Multiview
16
Published as a conference paper at ICLR 2021
(a) Omniglot (Composing SSL Objectives with LF P as MSE) Objective Trained for Test Accuracy LCL LCL + LIP LF P LF P + 10LIP LCL + 10LF P LCL + 10LF P + LIP 500 epochs 500 epochs 20000 epochs 20000 epochs 9000 epochs 9000 epochs 85.59 ± 0.05% 85.90 ± 0.09% 84.83 ± 0.07% 84.96 ± 0.04% 86.13 ± 0.21% 86.17 ± 0.13%
Figure 5: Comparisons for different objectives/compositions of SSL objectives on self-supervised visual representation training. We report mean and its standard error from 5 random trials.
F.2 DIFFERENT DEPLOYMENTS FOR CONTRASTIVE AND PREDICTIVE LEARNING OBJECTIVES
In the main text, for practical deployments, we suggest Contrastive Predictive Coding (CPC) Oord et al. (2018) for LCL and assume Gaussian distribution for the variational distributions in LF P / LIP . The practical deployments can be abundant by using different mutual information approximations for LCL and having different distribution assumptions for LF P / LIP . In the following, we discuss a few examples.
Contrastive Learning. Other than CPC Oord et al. (2018), another popular contrastive learning objective is JS Bachman et al. (2019), which is the lower bound of Jensen-Shannon divergence between P (ZS, ZX ) and P (ZS)P (ZX ) (a variational bound of mutual information). Its objective can be written as
Ep(zs,zx) [-softplus(â(G(z.), G(zs)))] âEp(z;)P(zx) [softplus((G(z2), G(zs)))]; max Zs=Fs(S8),Zx =Fx(X),G
where we use softplus to denote softplus (x) = log (1 + exp (x)).
Predictive Learning. Gaussian distribution may be the simplest distribution form that we can imagine, which leads to Mean Square Error (MSE) reconstruction loss. Here, we use forward predictive learning as an example, and we discuss the case when S lies in discrete {0, 1} sample space. Speciï¬cally, we let QÏ(S|ZX ) be factorized multivariate Bernoulli:
# Pp
Pp max Ep, 7, Ss 8; - log [R(zz)]i + (1 â 54) - log [1 â R(z2)]i| - (5) Zx=Fx(X),R Ss
This objective leads to Binary Cross Entropy (BCE) reconstruction loss.
If we assume each reconstruction loss corresponds to a particular distribution form, then by ignoring which variatioinal distribution we choose, we are free to choose arbitrary reconstruction loss. For instance, by switching s and z in eq. equation 5, the objective can be regarded as Reverse Binary Cross Entropy Loss (RevBCE) reconstruction loss. In our experiments, we ï¬nd RevBCE works the best among {MSE, BCE, and RevBCE}. Therefore, in the main text, we choose RevBCE as the example reconstruction loss as LF P .
More Experiments. We provide an additional set of experiments by having {CPC, JS} for LCL and {MSE, BCE, RevBCE} reconstruction loss for LF P in Figure 5. From the results, we ï¬nd different formulation of objectives bring very different test generalization performance. We argue that, given a particular task, it is challenging but important to ï¬nd the best deployments for contrastive and predictive learning objectives.
F.3 DIFFERENT SELF-SUPERVISED SIGNAL CONSTRUCTION STRATEGY
In the main text, we design a self-supervised signal construction strategy that the input (X) and the self-supervised signal (S) differ in {drawing styles, image augmentations}. This self-supervised signal construction strategy is different from the one that is commonly adopted in most self-supervised visual representation learning work Tian et al. (2019); Bachman et al. (2019); Chen et al. (2020). Speciï¬cally, prior work consider the difference between input and the self-supervised signal only in image augmentations. We provide additional experiments in Fig. 6 to compare these two different self-supervised signal construction strategies.
We see that, comparing to the common self-supervised signal construction strategy Tian et al. (2019); Bachman et al. (2019); Chen et al. (2020), the strategy introduced in our controlled experiments has much better generalization ability to test set. It is worth noting that, although our construction
17
Published as a conference paper at ICLR 2021
Omniglot (Different Self-supervised Signal Construction) Test Accuracy 2 » @ 8 6 6 w o FS 6 ââ Strategy | - ours (controlled exps) ââ Strategy II - similar to SimCLR 30 0 100 200 300 400 500 600 700 800 9001000 Training Epoch
Figure 6: Comparisons for different self-supervised signal construction strategies. The differences between the input and the self-supervised signals are {drawing styles, image augmentations} for our construction strategy and only {image augmentations} for SimCLR Chen et al. (2020)âs strategy. We choose LCL as our objective, reporting mean and its standard error from 5 random trials.
strategy has access to the label information (i.e., we sample the self-supervised signal image from the same character with the input image), our SSL objectives do not train with the labels. Nonetheless, since we implicitly utilize the label information in our self-supervised construction strategy, it will be unfair to directly compare our strategy and prior one. An interesting future research direction is examining different self-supervised signal construction strategy and even combine full/part of label information into self-supervised learning.
G METRICS IN VISUAL-TEXTUAL REPRESENTATION LEARNING
⢠Subset Accuracy (A) Sorower, also know as the Exact Match Ratio (MR), ignores all partially correct (consider them incorrect) outputs and extend accuracy from the single label case to the multi-label setting.
Le MR=â)lly.=n) i=l
⢠Micro AUC ROC score Fawcett (2006) computes the AUC (Area under the curve) of a receiver operating characteristic (ROC) curve.
18 | {
"id": "2002.05709"
} |
2006.05259 | Wavelet Networks: Scale-Translation Equivariant Learning From Raw Time-Series | Leveraging the symmetries inherent to specific data domains for the
construction of equivariant neural networks has lead to remarkable improvements
in terms of data efficiency and generalization. However, most existing research
focuses on symmetries arising from planar and volumetric data, leaving a
crucial data source largely underexplored: time-series. In this work, we fill
this gap by leveraging the symmetries inherent to time-series for the
construction of equivariant neural network. We identify two core symmetries:
*scale and translation*, and construct scale-translation equivariant neural
networks for time-series learning. Intriguingly, we find that scale-translation
equivariant mappings share strong resemblance with the wavelet transform.
Inspired by this resemblance, we term our networks Wavelet Networks, and show
that they perform nested non-linear wavelet-like time-frequency transforms.
Empirical results show that Wavelet Networks outperform conventional CNNs on
raw waveforms, and match strongly engineered spectrogram techniques across
several tasks and time-series types, including audio, environmental sounds, and
electrical signals. Our code is publicly available at
https://github.com/dwromero/wavelet_networks. | http://arxiv.org/pdf/2006.05259 | David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn | cs.LG, stat.ML | null | null | cs.LG | 20200609 | 20240121 | 4 2 0 2 n a J 1 2 ]
# G L . s c [
2 v 9 5 2 5 0 . 6 0 0 2 : v i X r a
# Published in Transactions on Machine Learning Research (12/2023)
# Wavelet Networks: Scale-Translation Equivariant Learning From Raw Time-Series
David W. Romeroâ NVIDIA Research
[email protected]
# Erik J. Bekkers Universiteit van Amsterdam
[email protected]
# Jakub M. Tomczakâ Technische Universiteit Eindhoven
[email protected]
# Mark Hoogendoorn Vrije Universiteit Amsterdam
[email protected]
Reviewed on OpenReview: https: // openreview. net/ forum? id= ga5SNulYet
# Abstract
Leveraging the symmetries inherent to specific data domains for the construction of equiv- ariant neural networks has lead to remarkable improvements in terms of data efficiency and generalization. However, most existing research focuses on symmetries arising from planar and volumetric data, leaving a crucial data source largely underexplored: time-series. In this work, we fill this gap by leveraging the symmetries inherent to time-series for the con- struction of equivariant neural network. We identify two core symmetries: scale and trans- lation, and construct scale-translation equivariant neural networks for time-series learning. Intriguingly, we find that scale-translation equivariant mappings share strong resemblance with the wavelet transform. Inspired by this resemblance, we term our networks Wavelet Networks, and show that they perform nested non-linear wavelet-like time-frequency trans- forms. Empirical results show that Wavelet Networks outperform conventional CNNs on raw waveforms, and match strongly engineered spectrogram techniques across several tasks and time-series types, including audio, environmental sounds, and electrical signals. Our code is publicly available at https://github.com/dwromero/wavelet_networks.
1 Introduction Leveraging the symmetries inherent to specific data domains for the construction of statistical models, such as neural networks, has proven highly advantageous, by restricting the model to the family of functions that accurately describes the data. A prime example or this principle is Convolutional Neural Networks (CNNs) (LeCun et al., 1989). CNNs embrace the translation symmetries in visual data by restricting their mappings to a convolutional structure. Convolutions possess a distinctive property called translation equivariance: if the input is translated, the output undergoes an equal translation. This property endows CNNs with better data efficiency and generalization than unconstrained models like multi-layered perceptrons.
Group equivariant convolutional neural networks (G-CNNs) (Cohen & Welling, 2016) extend equivariance to more general symmetry groups through the use of group convolutions. Group convolutions are group equivariant: if the input is transformed by the symmetries described by the group, e.g., scaling, the output undergoes an equal transformation. Equivariance to larger symmetry groups endows G-CNNs with increased data efficiency and generalization on data exhibiting these symmetries. Existing group equivariance research primarily focuses on symmetries found in visual data, e.g., planar rotations, planar scaling (Weiler et al., 2018; Worrall & Welling, 2019; Sosnovik et al., 2020), and more recently, on 3D symmetries, e.g., for spherical
âWork done while at the Vrije Universiteit Amsterdam
1
# Published in Transactions on Machine Learning Research (12/2023)
(a) (b) (c)
Equivariant Map i Paine | AFA vi âTranslation + Scaling âTranslation + Scaling | . [transition + Equivarant Map : Sataron er, Nite
ln, Sy âe A ranslation + Sealing ai
lation + Seal © ~\ [/- oo wr we
Figure 1: Equivariance, invariance and their impact on the hierarchical representations. In a group equivariant map- ping, when the input is transformed by a group transformation, its output undergoes an equivalent transformation (Fig. 1a). In contrast, in group invariant maps, the output remains unchanged for all group transformations of the input (Fig. 1b). This distinction holds significant implications in the construction of hierarchical feature representa- tions. For example, a face recognition system built upon invariant eye, nose and mouth detectors would be unable to set the portraits in Fig. 1c apart. However, by leveraging equivariant mappings, information about the input transformations can be used to distinguish these portraits effectively. In essence, in contrast to equivariant maps, invariant maps permit senseless pattern combinations resulting for overly restraining constraints in their design.
and molecular data (Thomas et al., 2018; Fuchs et al., 2020; Satorras et al., 2021). Yet, an important category remains underexplored, which also exhibits symmetries: time-series. Notably, their translation symmetry is a cornerstone in signal processing and system analysis, e.g., Linear Time-Invariant (LTI) systems.
In this work, we bridge this gap by constructing neural networks that embrace the symmetries inherent to time-series. We begin by asking: âWhat symmetries are inherently present in time-series?â We identify two fundamental symmetries âscale and translationâ, whose combination elucidate several phenomena observed in time-series, e.g., temporal translations, phase shifts, temporal scaling, resolution changes, pitch shifts, seasonal occurrences, etc. By leveraging group convolutions equivariant to the scale-translation group, we construct neural architectures such that when the input undergoes translation, scaling or a combination of the two, all intermediate layers will undergo an equal transformation in a hierarchical manner, akin to the methods proposed by Sosnovik et al. (2020); Zhu et al. (2022) for visual data. Interestingly, we observe that constructing convolutional layers equivariant to scale and translation results in layers that closely resemble the wavelet transform. However, we find that in order to preserve these symmetries consistently across the whole network, the output of each layer must be processed by a layer that also behaves like the wavelet transform. This approach substantially deviates from common approaches that rely on spectro-temporal representations, e.g., the wavelet transform, which compute spectro-temporal representations once and pass their response to a 2D CNN for further processing.
Inspired by the resemblance of scale-translation group equivariant convolutions with the wavelet transform, we term our scale-translation equivariant networks for time-series processing Wavelet Networks. Extensive empirical results show that Wavelet Networks consistently outperform conventional CNNs operating on raw waveforms, and match strongly engineered spectogram-based approaches, e.g., on Mel-spectrograms, across several tasks and time-series types, e.g., audio, environmental sounds, electrical signals. To our best knowledge, we are first to propose scale-translation equivariant neural networks for time-series processing.
# 2 Related Work
Learning from raw time-series. Several end-to-end learning approaches for time-series exist (Dieleman & Schrauwen, 2014; Dieleman et al., 2016; Dai et al., 2017; Rethage et al., 2018; Stoller et al., 2018). Given the considerable high-dimensionality of time-series, existing works focus on devising techniques with parameter- and compute-efficient large memory horizons (Romero et al., 2021; Goel et al., 2022). Due to small effective memory horizons and long training times, Recurrent Neural Networks (RNNs) (Rumelhart et al., 1985) have gradually been overshadowed by CNN backbones (Bai et al., 2018).
While CNNs are equivariant to translations, they do not inherently incorporate a distinct notion of scale. Although methods involving layer-wise multi-scale representations have been proposed, e.g., Zhu et al. (2016); Lu et al. (2019); von Platen et al. (2019); Guizzo et al. (2020), these layers are not scale equivariant. As a result, networks incorporating them struggle to maintain consistent scale information across layers.
2
# Published in Transactions on Machine Learning Research (12/2023)
Group-invariant time-series learning. Learning invariant representations from raw speech and sound has been extensively studied in past. Scattering operators (Mallat, 2012; Bruna & Mallat, 2013) construct group invariant feature representations that can be used to construct neural architectures invariant to scale and translation (Andén & Mallat, 2014; Peddinti et al., 2014; Salamon & Bello, 2015). In contrast to the invariant feature representations developed by these works, Wavelet networks construct equivariant feature representations. Since group equivariance is a generalization of group invariance (Fig. 1b, Sec. 3.1), Wavelet Networks accommodate a broader functional family than previous works, while still upholding scale and translation preservation. Notably, equivariant methods shown superior performance compared to invariant methods across several tasks, even for intrinsically invariant tasks like classification (Cohen & Welling, 2016; Zaheer et al., 2017; Maron et al., 2018). This phenomenon stems from the hierarchical form in which neural networks extract features. Enforcing invariance early in the feature extraction process imposes an overly restrictive constraint in the resulting models (Fig. 1c).
Group-equivariant time-series learning. To our best knowledge, Zhang et al. (2015) is the only approach that proposes equivariant learning for time-series data. They propose to learn feature representations equiv- ariant to vocal tract length changes âan inherent symmetry of speech. However, vocal tract length changes do not conform to the mathematical definition of a group, making this equivariance only an approximate es- timation. Interestingly, vocal tract length changes can be characterized by specific (scale, translation) tuples. Consequently, considering equivariance to the scale-translation group implicitly describes vocal tract length changes as well as many other symmetries encountered in audio, speech and other time-series modalities.
3 Background This work assumes a basic familiarity with the concepts of a group, a subgroup and a group action. For those who may not be acquainted with these terms, we introduce these terms in Appx. A.
3.1 Group equivariance, group invariance and symmetry preservation Group equivariance. Group equivariance is the property of a mapping to respect the transformations in a group. We say that a map is equivariant to a group if a transformation of the input by elements of the group leads to an equivalent transformation of the output (Fig. 1a). Formally, for a group G with elements g â G acting on a set X, and a mapping Ï : X â X, we say that Ï is equivariant to G if: Ï(gx) = gÏ(x),
For example, the convolution of a signal f : R â R and a kernel Ï : R â R is equivariant to the group of translations âor translation equivariantâ because when the input is translated, its convolutional descriptors are equivalently translated, i.e., (Ï â Ltf )=Lt(Ï â f ), with Lt a translation operator by t: Ltf (x)=f (xât). Group invariance. Group invariance is a special case of group equivariance in which the output of the map is equal for all transformations of the input (Fig. 1b). Formally, for a group G with elements g â G acting on a set X, and a mapping Ï : X â X, we say that Ï is invariant to G if:
Ï(gx) = Ï(x), âx â X, âg â G.
Relation to symmetry preservation. A symmetry-preserving mapping preserves the symmetries of the input. That is, if the input has certain symmetries, e.g., translation, rotation, scale, these symmetries will also be present in the output. Since symmetries are mathematically described as groups, it follows that group equivariant mappings preserve the symmetries of the group to which the mapping is equivariant. In contrast, invariant mappings do not preserve symmetry, as they remove all symmetric information from the input.
3.2 Symmetry-preserving mappings: The group and the lifting convolution When talking about (linear) symmetry-preserving mappings, we are obliged to talk about the group convolu- tion. Previous work has shown that group convolutions are the most general class of group equivariant linear maps (Cohen et al., 2019). Hence, it holds that any linear equivariant map is in fact a group convolution.
Group convolution. Let f : G â R and Ï : G â R be a scalar-valued signal and convolutional kernel defined on a group G. The group convolution (âG) between f and Ï is given by:
(f *¢ ¥)(9) -[ F(y)Lo(7) dug (7) =[ sow (9-*y) dug (>). (3)
3
(2)
# Published in Transactions on Machine Learning Research (12/2023)
Figure 2: Locality of visual and auditory objects. Whereas visual objects are local (left), auditory objects are not. The latter often cover large parts of the frequency axis in a sparse manner (right).
where g,y ⬠G, Lyv(y)=v (g-+y) is the action of the group G on the kernel 7, and pig(7) is the (invariant) Haar measure of the group @ for 7. Notably, the group convolution generalizes the translation equivariance of convolutions to general groups. The group convolution is equivariant in the sense that for all y,g ⬠G, Lol ¥e VC) = (Lof #6 V9), with Ly f()=f (9 "7)- (4)
The lifting convolution. In practice, the input signals f might not be readily defined on the group of interest G, but on a sub-domain thereof X, i.e., f : X â R. For example, time-series are defined on R although we might want to consider larger groups such as the scale-translation group. Hence, we require a symmetry-preserving mapping from X to G that lifts the input signal to G to use group convolutions. This operation is called a lifting convolution. Formally, with f : X â R and Ï : X â R a scalar-valued signal and convolutional kernel defined on X, and X a sub-group of G, the lifting convolution (âGâ) is a mapping from functions on X to functions on G defined as:
# Z
# Z
(f âGâ Ï)(g) = f (x)LgÏ(x) dµG(x) = f (x)Ï(gâ1x) dµG(x) X X (5)
Note that, the lifting convolution is also group equivariant mapping. That is, Lg(f âGâ Ï)=(Lgf âGâ Ï).
4 The problem of learning 2D convolutional kernels on the time-frequency plane CNNs have been a major breakthrough in computer vision, yielding startling results in countless applications. Due to their success, several works have proposed to treat spectro-temporal representations ârepresentations on the time-frequency planeâ as 2D images and learn 2D CNNs on top. In this section, we delve into the dif- ferences between visual and spectro-temporal representations, and assess the suitability of training 2D CNNs directly on top of spectro-temporal representations. Our analysis suggest that treating spectro-temporal rep- resentations as images and learning 2D CNNs on top might not be adequate for effective time-series learning.
To enhance clarity, we define spectro-temporal representations in separate gray boxes throughout the section to avoid interrupting the reading flow. Those already familiar with these concepts may skip these boxes.
Spectro-temporal representations. Let f (t) â L2(R) be a square integrable function on R. An spectro-temporal representation Φ[f ](t, Ï) : R2 â C of f is constructed by means of a linear time- frequency transform Φ that correlates the signal f with a dictionary D of localized time-frequency atoms D={Ït,Ï}tâR,ÏâR, Ït,Ï : R â C of finite energy and unitary norm, i.e., Ït,Ï â L2(R), â¥Ït,Ïâ¥2=1, ât â R, Ï â R. The resulting spectro-temporal representation Φ[f ] is given by:
# Z
Φ[f ](t, Ï) = â¨f, Ït,Ïâ© = R f (Ï )Ïâ t,Ï (Ï ) dÏ, (6)
with Ïâ the complex conjugate of Ï, and â¨Â·, ·⩠the dot product of its arguments. Using different time- frequency components Ït,Ï, spectro-temporal representations with different properties can be obtained.
4.1 Fundamental differences between visual representations and spectro-temporal representations There exist two fundamental distinctions between visual data and spectro-temporal representations, which are universal to all spectro-temporal representations: (i) locality and (ii) transparency. Unlike visual data, auditory signals exhibit strong non-local characteristics. Auditory signals consist of auditory objects, e.g., spoken words, which contain components resonating at multiple non-local frequencies known as the har- monics of the signal. Consequently, the spectro-temporal representations of auditory objects often occupy a significant portion of the time-frequency plane âparticularly along the frequency axis (Ï)â in a sparse manner
4
(4)
# Published in Transactions on Machine Learning Research (12/2023)
a ee ee es
es
a ee ee
Figure 3: Occlusion and superposition. Visual objects occlude each other when they appear simultaneously at a given position (left). Auditory objects, instead, superpose at all shared positions (right).
(Fig. 2). Furthermore, when considering auditory signals comprising multiple auditory objects, these objects exhibit a phenomenon known as superposition. This property is notably different from visual data, where visual objects in the same location occlude one another, resulting in only the object closest to the camera being visible (Fig. 3). This inherent property of sound is colloquially referred to as transparency.
4.2 The problem of learning 2D kernels on short-time Fourier spectro-temporal representations The short-time Fourier transform constructs a representation in which a signal is decomposed in terms of its correlation with time-frequency atoms of constant time and frequency resolution. As a result, it is effective as long as the signal f does not exhibit transient behavior âcomponents that evolve quickly over timeâ with some waveform structures being very localized in time and others very localized in frequency.
The short-time Fourier transform. The short-time Fourier transform S âalso called windowed Fourier transformâ is a linear time-frequency transform that uses a dictionary of time-frequency atoms Ït,Ï(Ï )=w(Ï â t)eâiÏÏ , tâR, ÏâR, constructed with a symmetric window w(Ï ) of local support shifted by t and modulated by the frequency Ï. The spectro-temporal representation S[f ] is given by:
# Z
# Z
S[f ](t, Ï) = â¨f, Ït,Ïâ© = R f (Ï )Ïâ t,Ï (Ï ) dÏ = R f (Ï )w(Ï â t)eâiÏÏ dÏ. (7)
Intuitively, the short-time Fourier transform divides the time-frequency plane in tiles of equal resolu- tion, whose value is given by the correlation between f and the time-frequency atom Ït,Ï (Fig. 4a).
Nevertheless, decades of research in psychology and neuroscience have shown that humans largely rely in the transient behavior of auditory signals to distinguish auditory objects (Cherry, 1953; van Noorden et al., 1975; Moore & Gockel, 2012). In addition, it has been shown that the human auditory system has high spectral resolution at low-frequencies and high temporal resolution at higher frequencies (Stevens et al., 1937; Santoro et al., 2014; Bidelman & Khaja, 2014). For example, a semitone at the bottom of the piano scale (â¼30Hz) is of about 1.5Hz, while at the top of the musical scale (â¼5kHz) it is of about 200Hz. These properties of the human auditory signal largely contrast both with (i) the inability of the short-time Fourier transform to detect transient signals, as well as with (ii) its constant spectro-temporal resolution.
To account for these differences, improved spectro-temporal representations on top of the short-time Fourier transform have been proposed such as log-Mel spectrograms (Stevens et al., 1937; Furui, 1986). These devel- opments revolve around transforming the frequency axis of the short-time Fourier transform in a logarithmic scale, thereby compressing the frequency axis and better aligning with the spectro-temporal resolution of
a oo FUSE UR WERE WE REEE WORE EEO ET â - > ee t,
Fe) . i i ee es ee
(a) (b) 5
Figure 4: Tiling of the time-frequency plane for the short-time Fourier trans- form (Fig. 4a) and the wavelet transform (Fig. 4b). The short-time Fourier trans- form divides the time-frequency plane in tiles of equal resolution. This makes it adequate for signals without tran- sient behaviour. The Wavelet trans- form, on the other hand, divides the time-frequency plane on tiless of chang- ing spectro-temporal resolution. This allows it to represent detect highly local- ized events both on time and frequency.
# Published in Transactions on Machine Learning Research (12/2023)
the human auditory system. Consequently, this adjustment enables local structures, e.g., 2D convolutional kernels, to better capture non-local relationships (Ullrich et al., 2014; Choi et al., 2016; Xu et al., 2018). However, despite their improved learning characteristics, these spectro-temporal representations remain in- complete due to their inability to modify the constant temporal resolution of the short-time Fourier transform. 4.3 The problem of learning 2D kernels on Wavelet spectro-temporal representations In contrast to the short-time Fourier transform, the Wavelet transform constructs a spectro-temporal repre- sentation in terms of correlations with time-frequency atoms, whose time and frequency resolution change. As a result, the resulting decomposition of the time-frequency plane allows the wavelet transform to correctly describe signals with transient behaviour with localized components both on time and frequency (Fig. 4b).
The wavelet transform. The wavelet transform (V is a linear time-frequency transform that uses a dictionary of time-frequency atoms ¢¢,., ()=zZev (=), t â¬R, w ⬠Rso. The function 7, is called a Wavelet and satisfies the properties of having zero mean, i.e., f vte(r)dr=0,s and being unitary, ie., lol? =1, for any t ER, w ⬠Rso. The resulting spectro-temporal representation W[f] is given by:
WiFi) = hore) = f srdezlrrar= f 100) po (+) ar (8)
# Z
# Z
Intuitively, the Wavelet transform divides the time-frequency plane in tiles of different resolutions, with high frequency resolution and low spatial resolution at low frequencies, and low frequency resolution and high spatial resolution for high frequencies (Fig. 4b).a
aImportantly, it is not possible to have high frequency and spatial resolution at the same time due to the uncertainty principle (Gabor, 1946). It states that the joint time-frequency resolution of spectro-temporal representations is limited by a minimum surface ÏÏ,tÏÏ,Ï â¥ 1
Interestingly, the modus operandi of the wavelet transform perfectly aligns with the spectro-temporal reso- lution used by the human auditory system for the processing of auditory signals. Nevertheless, despite this resemblance, training local 2D structures, e.g., convolutional kernels, directly on the wavelet transformâs output stil falls short in addressing the non-local, transparent characteristics inherent in auditory signals. Consequently, researchers have devised several strategies to overcome these challenges, e.g., by defining sep- arable kernels that span large memory horizons along the frequency and time axis independently (Pons & Serra, 2019) or by prioritizing learning along the harmonics of a given frequency (Zhang et al., 2020).
As shown in the next section, a better alternative arises from considering the symmetries appearing in time- series data. Starting from this perspective, we are led to scale-translation equivariant mappings and find striking relationships between these family of mappings and the wavelet transform. Nevertheless, our anal- ysis indicates that all layers within a neural network should be symmetry preserving âa condition not met by the methods depicted in this section. By doing so, we devise neural architectures, whose convolutional layers process the output of previous layers in a manner akin to the wavelet transform. As a result, each con- volutional layer performs spectro-temporal decompositions of the input in terms of localized time-frequency atoms able to process global and localized patterns both on time and frequency.
5 Wavelet networks: Scale-translation equivariant learning from raw waveforms We are interested in mappings that preserve the scale and translation symmetries of time-series. In this section, we start by tailoring lifting and group convolutions to the scale-translation group. Next, we outline the general form of Wavelet Networks and make concrete practical considerations for their implementation. At the end of this section, we formalize the relationship between Wavelet Networks and the wavelet transform, and provide a thorough analysis on the equivariance properties of common spectro-temporal transforms.
5.1 Scale-translation preserving mappings: group convolutions on the scale-translation group We are interested in mappings that preserve scale and translation. By imposing equivariance to the scale- translation group, we guarantee that if input patterns are scaled, translated, or both, their feature represen- tations will transform accordingly, but not be modified.
The scale-translation group. From a mathematical perspective scale and translational symmetries are described by the affine scale-translation group G=R â Râ¥0, which emerges from the semi-direct product of
6
# Published in Transactions on Machine Learning Research (12/2023)
_y âi scale rotate
Figure 5: The action of unimodular and non- unimodular groups. Most unimodular groups, e.g., rotation, mirroring, keep the volume of the objects they act upon intact. In contrast, non-unimodular groups, e.g., scaling, change it through their action.
the translation group T=(R, +) and the scale group S=(Râ¥0, Ã) acting on R. As a result, we have that the resulting group product is given by g · γ=(t, s) · (Ï, Ï)=(t + sÏ, s · Ï), with t, Ï â R and s, Ï â Râ¥0. In addition, by solving gâ1 · g=e, we obtain that the inverse of a group element g=(t, s) is given by gâ1=sâ1(ât, 1).
Semi-direct product and affine groups. When treating data defined on Rd, one is mainly interested in the analysis of groups of the form G=Rd â H resulting from the semi-direct product (â) between the translation group (Rd, +) and an arbitrary (Lie) group H acting on Rd, e.g., rotation, scale, etc. This kind of groups are called affine groups and their group product is defined as:
g1 · g2 = (x1, h1) · (x2, h2) = (x1 + Ah1 (x2), h1 · h2), (9)
with g1=(x1, h1), g2=(x2, h2) â G, x1, x2 â Rd and h1, h2 â H. A denotes the action of H on Rd.
Unimodular and non-unimodular groups. Unimodular groups, such as rotation, translation and mir- roring, are groups whose action keeps the volume of the objects on which they act intact (Fig. 5). Recall that a group convolution performs an integral over the whole group (Eq. 3). Hence, for its result to be invariant over different group actions, it is required for the Haar measure to be equal for all elements of the group âtherefore the name invariant Haar measure. Since the action of (most) unimodular groups does not alter the size of the objects on which they act, their action on infinitesimal objects keeps their size unchanged. As a consequence, for (most) unimodular groups, the Haar measure is equal to the Lebesgue measure, i.e., dµG(γ)=dγ, âγ â G, and therefore, it is often omitted in literature, e.g., in Cohen & Welling (2016). In contrast, non-unimodular groups, such as the scale group and the scale-translation group, do modify the size of objects on which they act (Fig. 5 right). Consequently, their action on infinitesimal objects changes their size. As a result, the Haar measure must be treated carefully in order to obtain equivariance to non- unimodular groups (Bekkers, 2020). The Haar measure guarantees that dµG(γ)=dµG(gγ), â g, γ â G. For the scale-translation group, it is obtained as:
1
1
dµG(γ) = dµG(gγ) = dµG(t + sÏ, sÏ) = dµG(t + sÏ )dµG(sÏ) = |s| dÏ |s| dÏ, (10)
where g=(t, s), γ=(Ï, Ï) â G, t, Ï â R, s, Ï â R>0; dÏ , dÏ are the Lebesgue measure of the respective spaces; and |s| depicts the determinant of the matrix representation of the group element.1 Intuitively, the Haar measure counteracts the growth of infinitesimal elements resulting from the action of s on RÃR>0. Scale-translation group convolutions. The general formulation of the group convolution is given in Eq. 3. Interestingly, the scale-translation group has additional properties with which this formulation can be simplified. In particular, by taking advantage of the fact that the scale-translation group is an affine group G=R â S, with S=(R>0, Ã), as well as of the definition of the Haar measure for the scale-translation group in Eq. 10 we can reformulate the group convolution for the scale-translation group as:
(f *¢ ¥)(9) =f 1ovwo" 7) due(y) recor = ff tes (t,8)"1(r,8)) a dr as=f [tm 5 gat (sr ts) ar ds =| [#05 3 alow (7 4,5) a £00) (45) ds (11)
1A member s of the scale group R>0 acting on a N -dimensional space is represented a matrix diag(s, ..., s). Since, its determinant sN depends on the value of the group element s, the factor 1 |s| = 1 sN in Eq. 10 cannot be omitted.
7
# Published in Transactions on Machine Learning Research (12/2023)
ScaleâTranslation Lifting Convolution: {250} se5 (f er W(t 8) f(r) LLnÂ¥(7) Hh ââ Ie ++i Ona == - | ler) et fb 9 fp . ScaleâTranslation Group Convolution: ELV} es Lu (7.5) f(r.s) (f *¢ w(t.) La v(r9) , Alle ene â 4k : = +R w(7,s) is i Seiemenedtadtedtamemsinamaaatl
Figure 6: Scale-translation lifting and group convolution. The lifting convolution can be seen a set of 1D convolutions with a bank of scaled convolutional kernels 1 s LsÏ, and the group convolution can be seen as a set of 1D convolutions s2 LsÏ, followed by an integral over scales Ï â R. Their main difference is with a bank of scaled convolutional kernels 1 that, for group convolutions, the input f and the convolutional kernel Ï are functions on the scale-translation group whereas for lifting convolutions these are functions on R. Lifting and group convolutions can be seen as spectro- temporal decompositions with large values of s relating to coarse features and small values to finer features.
where g=(t,h), y=(7,5) ⬠G, t,7 ⬠R, and s,¢ ⬠scale group S on a convolutional kernel w : R x words, for the scale-translation group, the group bank of scaled convolutional kernels {alo }ses, Rs; and followed Scale-translation lifting convolution. Like the grou simplified by considering the properties of the scale-trans L0(7,5)=v (s71(7,<)) is the (left) action of the Ro â R defined on the scale-translation group. In other convoluti on can be seen as a set of 1D convolutions with a by an integral over scales ¢ ⬠R (Fig. 6, bottom). convolution, the lifting convolution can also be ation group. In particular, we can rewrite it as:
# Z
# Z
(Fer wo =f Fewer") dugla) = [row gâr)dg(r) 1 (F 4c (ts) =f Fr)Â¥((3)*7) ait iG -Â¥ (slr - t)) dr= Ss (Fs iow)e (2)
where g=(t,h), y=(7,¢) ⬠G, t,7 ⬠R, and s,¢5 ⬠Ryo; and L.0(7,5)=v (s71(7,<)) is the (left) action of the scale group S on a 1D convolutional kernel 7) : R > R. In other words, for the scale-translation group, the lift- ing convolution can be seen as a set of 1D convolutions with a bank of scaled convolutional kernels {4 LeU}ses (Fig. 6, top). Note that the Haar measure imposes a normalization factor of 4 for group convolutions and of 1 for the li has an additional dimension relative to the space on whic! âting convolution. This is because space on which the group convolution is performed (R = Ryo) h the lifting convolution is performed (R).
5.2 Wavelet Networks: architecture and practical implementation The general architecture of our proposed Wavelet networks is shown in Fig. 7. Wavelet networks consist of several stacked layers that respect scale and transla- tion. They consist of a lifting group convolution layer that lifts input time-series to the scale-translation group, followed by arbitrarily many group convolutional layers. At the end of the network, a global pooling layer is used to produce scale-translation invariant representations. Due to their construction, Wavelet networks make sure that common neural operations, e.g., point-wise nonlinear- ities, do not disrupt scale and translation equivariance. This in turn, makes them broadly applicable and easily extendable to other existing neural archi-
aa Activation pesroo inal GroupConv = xLq Activation YaxPoolRd Dropout : Tine GiobaiPooling
# Figure 7: Wavelet networks.
8
# Published in Transactions on Machine Learning Research (12/2023)
1.0 08 ? T | | | 2 T | | | | . . 25° 3 . 1 15 2
14 | 115 2 25 3
Figure 8: Convolutional kernels on dis- crete and continuous bases. In red the canonical basis used for the construction of the convolutional kernel is shown: a delta Dirac for the discrete case, and a B2-spline for the continuous case. Pos- sible resulting kernels are shown in blue.
(a) Discrete bases (Dirac deltas) (b) Continuous bases (B2-splines)
(b) exponential grid Figure 9: Riemann integration of functions on R>0 using linear (9a) and exponential grids (9b, 9c).
# (a) linear grid
# (c) log-plot exponential grid
tectures, e.g., ResNets (He et al., 2016), U-Nets (Ronneberger et al., 2015).
5.2.1 Group convolutional kernels on continuous bases Although our previous derivations build upon continuous functions, in practice, computations are performed on discretized versions of these functions. Continuous bases have proven advantageous for the construction of group convolutions as the action of relevant groups often impose transformations not well-defined for discrete bases (Weiler et al., 2018; Bekkers et al., 2018; Weiler & Cesa, 2019). For instance, in the context of scale-translations, scaling a kernel [w1, w2, w3] by a factor of two results in a filter [w1, w1.5, w2, w2.5, w3] wherein the introduced values [w1.5, w2.5] do not exist in the original basis (Fig. 8a). The most adopted solution to address this problem is interpolation, i.e., deriving the value of [w1.5, w2.5] based on the neighbouring known pixels. However, interpolation introduces spurious artifacts which are particularly severe for small kernels. Instead, we adopt an alternative approach: we define convolutional kernels directly on a continuous basis (Fig. 8b). Drawing from the resemblance of gammatone filters â strongly motivated by the physiology of the human auditory system for the processing and recognition of auditory signals (Johannesma, 1972; Hewitt & Meddis, 1994; Lindeberg & Friberg, 2015a)â to B2-splines, we parameterize our filters within a B2-spline basis as in Bekkers (2020). As a result, our convolutional filters are parameterized as a linear combination of shifted B2-splines Ï(Ï ):= PN i=1 wiB2(Ï â Ïi), rather than the commonly used shifted Dirac deltaâs basis Ï(Ï ):= PN
5.2.2 Constructing a discrete scale grid From the response of the lifting layers onward, the feature representations of wavelet networks possess an additional axis s â R>0. Just like the spatial axis, this axis must be discretized in order to perform computational operations. That is, we must approximate the scale axis R>0 by a finite set of discrete scales . Inspired by previous work, we approximate the scale axis with a dyadic set {2j}jmax (Mallat, {s}smax 1999; Lindeberg & Friberg, 2015b; Worrall & Welling, 2019). Dyadic sets resemble the spectro-temporal resolution of the human auditory system, and are widely used for discrete versions of the wavelet transform.
Integrating on exponential grids. A subtlety arises with respect to integrating over the scale axis when implementing the continuous theory in a discrete setting that is suitable for numerical computations. The group convolutions include scale correction factors as part of the Haar measure, which makes the integration invariant to actions along the scale axis. That is, the integral of a signal f (s) over scale is the same as that of the same signal f (zâ1s), whose scale is changed by a factor z â R>0:
# Z
# Z
# Z
f (zâ1s) 1 s R>0 ds sâzs= f (zâ1s) 1 zs R>0 dzs = f (s) 1 s R>0 ds. (13)
9
# Published in Transactions on Machine Learning Research (12/2023)
We can translate the scale integration to the discrete setting via Riemann integrals, where we sample the function on a grid and take the weighted sum of these values with weights given by the bin-width:
# Z
# X
f (s) 1 s R>0 ds â i f (si) 1 si âi. (14)
When the scale grid is linear, the bin-widths âi are constant, as depicted in Fig. 9a. When the scale grid is exponential, e.g., si=biâ1 with b some base factor, the bin widths are proportional to the scale values at the grid points, i.e., âi â si (Fig. 9b). In this setting, the factor 1 cancels out (up to some constant) with the si bin width âi, and integration is simply done by summing the values sampled on the scale grid. Consequently, when working with an exponential grid along the scale axis, the factor in the group convolutions (Eq. 11) becomes 1 s2 . It is worth mentioning that using an exponential grid is the natural thing to do s when dealing with the scale group. The scale group is a multiplicative group with a natural distance between group elements z, s â R>0 defined by ⥠log zâ1sâ¥. Consequently, on an exponential grid, the grid points are spaced uniformly with respect to this distance, as illustrated in Fig. 9c.
Defining the discrete scale grid. In practice, Wavelet networks must define the number of scales Ns to be considered in the dyadic set as well as its limits smin, smax. Fortunately, it turns out that these values are related to the spatial dimension of the input f itself, and thus, we can use it to determine these values. Let us consider a signal f and a convolutional kernel Ï sampled on discrete grids [1, Nf ], [1, NÏ] â Z of sizes Nf , and NÏ, respectively. When we re-scale the convolutional kernel Ï, we are restricted (i) at the bottom of the scale axis by the Nyquist criterion, and (ii) at the top of the scale by the scale for which the filter becomes constant in an interval of Nf samples. The Nyquist criterion is required to avoid aliasing and intuitively restricts us to a compression factor on Ï such that it becomes as big as 2 grid samples. On the other hand, by having Ï re-scaled to an extreme to which it is constant in the support of the input signal f , the kernel will only be able to perform average operations.
Considerations regarding computational complexity. Note that the computational cost of Wavelet networks increases linearly with the number of scales considered. Hence, it is desirable to reduce the number of scales used as much as possible. To this end, we reason that using scales for which the sampled support of Ï is smaller than NÏ is unnecessary as the functions that can be described at those scales can also be described âand learnedâ at the unscaled resolution of the kernel s=1. Therefore, we define the minimum scale as smin=1. Furthermore, we reason that using scales for which the support of the filter overpasses that of the input, i.e., Nf ⤠NÏ, is also suboptimal, as the values outside of the region [1, Nf ] are unknown. Therefore, we consider the set of sensible scales to be given by the interval [1, Nf , NÏ this corresponds to the j-values given by the interval [0, 1, 2, ..., jmax s.t. NÏ 2jmax ⤠Nf ]. Effect of downsampling on the scale grids used. Neural architectures utilize pooling operations, e.g., max- pooling, to reduce the spatial dimension of the input as a function of depth. Following the rationale outlined in the previous paragraph, we take advantage of these reductions to reduce the number of scales that representations at a given depth should use. Specifically, we use the factor of downsampling as a proxy for the number of scales that can be disregarded. For example, if we use a pooling of 8 at a given layer, subsequent layers should reduce the number of scales considered by the same factor, i.e., 23. For a set of dyadic scales before a pooling layer given by {2j}jmax and a pooling layer of factor 2p, the set of dyadic scales considered after pooling will be given by {2j}jmaxâp j=jmin
# 5.2.3 Imposing wavelet structure to the learned convolutional kernels
In classical spectro-temporal analysis, wavelets are designed to have unit norm â¥Ïâ¥2=1 and zero mean R Ï(Ï ) dÏ =0. These constraints are useful for both theoretical and practical reasons including energy preser- vation, numerical stability and the ability to act as band-pass filters (Mallat, 1999). Since Wavelet networks construct time-frequency representations of the input, we experiment with an additional regularization loss that encourages the learned convolutional kernels to behave like wavelets. First, we note that lifting and s2 â in their definitions. Therefore, the group convolutions inherently incorporate a normalization term â 1 s normalization criterion is inherently satisfied. To encourage the learned kernels to have zero mean, we for- mulate a regularization term that promote this behaviour. Denoting Ïd as the convolutional kernel at the
10
# Published in Transactions on Machine Learning Research (12/2023)
d-th layer of a neural network with D convolutional layers, the regularization term Lwavelet is defined as: D X
Lwavelet = â¥mean(Ïd)â¥2. d=1 (15)
Interestingly, we observe that enforcing wavelet structure in the learned convolutional kernels consistently yields improved performance across all tasks considered (Sec. 6). This result underscores the potential value of integrating insights from classical signal processing, e.g., spectro-temporal analysis (Scharf, 1991; Mallat, 1999; Daubechies, 2006), in the design of deep learning architectures.
# 5.3 Wavelet networks perform nested non-linear time-frequency transforms
Interestingly, we can use spectro-temporal analysis to understand the modus operandi of wavelet networks. Our analysis reveals that wavelet networks perform nested time-frequency transforms interleaved with point- wise nonlinearities. In this process, each time-frequency transform emerges as a linear combination of parallel wavelet-like transformations of the input computed with learnable convolutional kernels Ï.
The relation between scale-translation equivariant mappings and the wavelet transform. The wavelet transform shows many similarities to the scale-translation group and lifting convolutions (Grossmann et al., 1985). In fact, by analyzing the definition of the wavelet transform (Eq. 8), we obtain that the Wavelet transform is equivalent to a lifting group convolution (Eq. 12 with Ï=s) up to a normalization factor
winitw) = [eZee (FE) ar = [seo ee 0) ar = [ 1 ytew (r=0) ar = (fox BL) = Fy lPracr Wow). (6)
# Z
Furthermore, if we let the input f be a function defined on the scale-translation group, and let Ï act on this group according to the group structure of the scale-translation group, we have that the scale-translation group convolution is equivalent to a Wavelet transform whose input has been obtained by a previously applied Wavelet transform, up to a normalization factor
# Ï
# Ï
# Z
# Z
# Z
# Z
w)= T. (ert T = ro) b (rT - T d¢ WIA lt) fa ev (Mr =8)5) dr ds [Ls $s) elaw (tes) dr ds = [. ( *R butâ) (t,¢) dg = AE (f *¢ Â¥) (tw) (17)
W[f ](t, Ï) =
In other words, lifting and group convolutions on the scale-translation group can be interpreted as linear time-frequency transforms that adopt time-frequency plane tiling akin wavelet transform (Fig. 4b), for which the group convolution accepts wavelet-like spectro-temporal representations as input.
Equivariance properties of common time-frequency transforms. For completeness, we also analyze the equivariance properties of common time-frequency transforms and their normalized representations, e.g., spectrogram. Careful interpretations and proofs are provided in Appx. B.
Let Lt0f = f (t â t0) and Ls0f (t) = f (sâ1 0 t), t0 â R, s0 â R>0, be translation and scaling operators. The Fourier, short-time Fourier and Wavelet transform of Lt0f and Ls0 f , f â L2(R), are given by: ⢠Fourier Transform:
F[Lt0f ](Ï) = eâiÏt0F[f ](Ï) F[f ](Ï) F[Ls0 f ](Ï) = s0Lsâ1 â |F[Lt0 f ](Ï)|2 = |F[f ](Ï)|2 â |F[Ls0 f ](Ï)|2 = |s0|2|Lsâ1 F[f ](Ï)|2 0 0 (18) (19)
Short-Time Fourier Transform: S[Lt0 f ](t, Ï) = eâiÏt0 Lt0 S[f ](t, Ï) 0 t, s0Ï) S[Ls0 f ](t, Ï) â s0 S[f ](sâ1
â |S[Lt0f ](t, Ï)|2 = |Lt0S[f ](t, Ï)|2 â |S[Ls0 f ](t, Ï)|2 â |s0|2|S[f ](sâ1 0 t, s0Ï)|2 (â) (20) (21)
11
(17)
# Published in Transactions on Machine Learning Research (12/2023)
# ⢠Wavelet Transform:
W[Lt0 W[Ls0 f ](t, Ï) = [f ]](t, Ï) = Lt0 W[f ](t, Ï) â s0 Ls0 W[f ](t, Ï) â |W[Lt0f ](t, Ï)|2 = |Lt0 W[f ](t, Ï)|2 â |W[Ls0 f ](t, Ï)|2 = |Ls0 W[f ](t, Ï)|2 (22) (23)
(â) Eq. 21 only approximately holds for large windows (see Appx. B.2 for a detailed explanation).
In other words, the Wavelet transform and the scalogram |W[·]|2 are the only time-frequency repre- sentations that exhibit both translation and scaling equivariance in a practical way.
Wavelet networks apply parallel time-frequency transforms with learned bases at every layer. So far, our analysis has been defined for scalar-valued input and convolutional kernels. However, in practice, convolutional layers perform operations between inputs f : R â RNin and convolutional kernels Ï : R â RNoutÃNin to produce outputs (f â Ï) : R â RNout as the linear combination along the Nin dimension of convolutions with several learned convolutional kernels computed in parallel:
(f â Ï)o= NinX (fi â Ïi), o â [1, 2, ..., Nout]. i=1 (24)
In practice, both lifting and group convolutional layers adhere to the same structure. In a dilation-translation convolutional layer with Nout output channels, Nout independent convolutional kernels, each consisting of Nin channels, are learned. During the forward pass, the input is group-convolved with each of these kernels in parallel. The Nout output channels are then formed by linearly combining the outcomes of the Nin channels. In other words, lifting and group convolutional layers produce linear combinations of distinct time-frequency decompositions of the input computed in parallel at each layer.
Wavelet networks are scale-translation equivariant nested non-linear time-frequency trans- forms. Just like in conventional neural architectures, the outputs of lifting and group convolutional layers are interleaved with point-wise nonlinearities. Therefore, wavelet networks compute nonlinear scale-translation equivariant feature representations that resemble nested nonlinear time-frequency transforms of the input.
6 Experiments In this section, we empirically evaluate wavelet networks. To this end, we take existing neural architectures designed to process raw signals and construct equivalent wavelet networks (W-Nets). We then compare the performance of W-Nets and the corresponding baselines on tasks defined on raw environmental sounds, raw audio and raw electric signals. We replicate as close as possible the training regime of the corresponding baselines and utilize their implementation as a baseline whenever possible. Detailed descriptions of the specific architectures as well as the hyperparameters used for each experiment are provided in Appx. C.
6.1 Classification of environmental sounds First, we consider the task of classifying environmental sounds on the UrbanSound8K (US8K) dataset (Salamon et al., 2014). The US8K dataset consists of 8732 audio clips uniformly drawn from 10 environmental sounds, e.g., siren, jackhammer, etc, of 4 seconds or less, with a total of 9.7 hours of audio.
Experimental setup. We compare the Mn-Nets of Dai et al. (2017) and the 1DCNNs of Abdoli et al. (2019) with equivalent W-Nets in terms of number of layers and parameters. Contrarily to Dai et al. (2017) we sample the audio files at 22.05kHz as opposed to 8kHz. This results from preliminary studies of the data, which indicated that some classes become indistinguishable for the human ear at such low sampling rates.2 For the comparison with the 1DCNN of Abdoli et al. (2019), we select the 50999-1DCNN as baseline, as it is the network type that requires the less human engineering. We note, however, that we were unable to replicate the results reported in Abdoli et al. (2019). In contrast to the 83±1,3% reported, we were only able to obtain a final accuracy of 62.0±6.791. This inconsistency is further detailed in Appx. C.1.
To compare to models other than Mn-nets and 1DCNNs, e.g., Pons et al. (2017a); Tokozume & Harada (2017), we also provide 10-fold cross-validation results. This is done by taking 8 of the 10 official subsets for
2See https://github.com/dwromero/wavelet_networks/blob/master/experiments/UrbanSound8K/data_analysis.ipynb.
12
# Published in Transactions on Machine Learning Research (12/2023)
# Table 1: Experimental results on UrbanSound8K.
UrbanSound8K Model 10th Fold Acc. (%) Cross-Val. Acc. (%) # Params. M3-Net W3-Net W3-Net-wl M5-Net W5-Net W5-Net-wl M11-Net W11-Net W11-Net-wl M18-Net W18-Net W18-Net-wl M34-Net W34-Net W34-Net-wl 54.48 61.05 63.08 69.89 72.28 74.55 74.43 79.33 80.41 69.65 75.87 78.26 75.15 76.22 78.38 - - - - - - - 66.97 ± 5.178 68.47 ± 4.914 - 64.02 ± 4.645 65.01 ± 5.431 - 65.69 ± 5.780 66.77 ± 4.771 220.67k 219.45k 558.08k 558.03k 1.784m 1.806m 3.680m 3.759m 3.978m 4.021m 1DCNN W-1DCNN W-1DCNN-wl - - - 62.00 ± 6.791 62.47 ± 4.925 62.64 ± 4.979 453.42k 458.61k
Comparison With Other Approaches
Model Type Cross-Val. Acc. (%) # Params. W11-Net-wl PiczakCNN Piczak (2015) VGG Pons & Serra (2019) Raw Mel Spectrogram 68.47 ± 4.914 73.7 70.74 1.806m 26m 77m Raw (Bagging) 78 101m
# EnvNet-v2 Tokozume & Harada (2017)
training, one for validation and one for test. We consistently select the (nâ1)mod10 subset for validation when testing on the n-th subset. We note that this training regime might be different from those used in other works, as previous works often do not disclose which subsets are used for validation.
Results. Our results (Tab. 1) show that wavelet networks consistently outperform CNNs on raw waveforms. In addition, they are competitive to spectrogram-based approaches, while using significantly fewer parameters and bypassing the need for preprocessing. Furthermore, we observe that encouraging wavelet structure to the convolutional kernels âdenoted by the WL suffixâ consistently leads to improved accuracy.
6.2 Automatic music tagging Next, we consider the task of automatic music tagging on the MagnaTagATune (MTAT) dataset (Law et al., 2009). The MTAT dataset consists of 25879 audio clips with a total of 170 hours of audio, along with several per-song tags. The goal of the task is to provide the right tags to each of the songs in the dataset.
Experimental setup. Following Lee et al. (2017), we extract the most frequently used 50 tags and trim the audios to 29.1 seconds at a sample-rate of 22.05kHz. Following the convention in literature, we use ROC- curve (AUC) and mean average precision (MAP) as performance metrics. We compare the best performing model of Lee et al. (2017), the 39-Net with a corresponding wavelet network denoted W39-Net.
Results. Our results (Tab. 2) show that wavelet networks consistently outperform CNNs on raw waveforms and perform competitively to spectrogram-based approaches in this dataset as well. In addition, we observe that encouraging the learning of wavelet-like kernels consistently results in increased accuracy as well.
6.3 Bearing fault detection Finally, we also validate Wavelet networks for the task of condition monitoring in induction motors. To this end, we classify healthy and faulty bearings from raw data provided by Samotics. The dataset consists of 246 clips of 15 seconds sampled at 20kHz. The dataset is slightly unbalanced containing 155 healthy and 91 faulty recordings [155, 91]. The dataset is previously split into a training set of [85, 52] and a test
13
# Published in Transactions on Machine Learning Research (12/2023)
# Table 2: Experimental results on MTAT.
MagnaTagATune Average AUC MAP Model Per-class Per-clip Per-class Per-clip # Params. 39-Net W39-Net W39-Net-wl 0.893 0.895 0.899 0.936 0.941 0.943 0.385 0.397 0.404 0.700 0.719 0.723 2.394m 2.404m
Comparison With Other Approaches
Average AUC MAP Model Per-class Per-clip Per-class Per-clip # Params. PCNN Liu et al. (2016) CNN Pons et al. (2017a)â (Raw) CNN Pons et al. (2017a)â (Spect.) CNN Pons et al. (2017b) (Spect.) 0.9013 0.8905 0.9040 0.893 0.9365 - - - 0.4267 0.3492 0.3811 - 0.6902 - - - - 11.8m 5m 191k
â Reported results are obtained in a more difficult version of this dataset.
Table 3: Experimental results on bearing fault detection.
Model Acc. (%) # Params. M11-Net W11-Net W11-Net-wl 65.1376 68.8073 70.207 1.806m 1.823m
set of [70, 39] samples, respectively. These splits are provided ensuring that measurements from the same motor are not included both in the train and the test set. We utilize 20% of the training set for validation. Each clip is composed of 6 channels measuring both current and voltage on the 3 poles of the motor.
Experimental setup. We take the best performing networks on the US8K dataset: the M-11 and W-11 networks, and utilize variants of these architectures for our experiments on this dataset.
Results. Once again we observe that Wavelet networks outperform CNNs on raw waveforms and encouraging the learning of wavelet-like kernels consistently improves accuracy (Tab. 3).
6.4 Discussion Our empirical results firmly establish wavelet networks as a promising avenue for learning from raw time-series data. Notably, these results highlight that considering the symmetries inherent to time-series data ânamely translation and scaleâ for the development of neural networks consistently leads to improved outcomes. Furthermore, we observe that the benefits of wavelet networks extend beyond sound and audio domains. This result advocates for the use of wavelet networks and scale-translation equivariance for learning on time- series data from different sources, e.g., financial data, sensory data. Finally, we also note that promoting the learning of wavelet-like convolutional kernels consistently leads to improved outcomes. We posit that this discovery may hold broader implications for group equivariant networks in general.
Relation to scale-equivariant models of images and 2D signals. In the past, multiple scale-equivariant models have been proposed for the processing of images and 2D signals (Worrall & Welling, 2019; Sosnovik et al., 2020; 2021). Interestingly, we find that the difference in the lengths of the inputs received by image and time-series models leads to very different insights per modality. For comparison, Sosnovik et al. (2020) considers images up to 96Ã96 pixels, whereas audio files in the US8K dataset are 32.000 samples long. We find that this difference in input lengths has crucial implications for how scale interactions within scale- equivariant models function. Sosnovik et al. (2020) mentions that using inter-scale interactions introduces additional equivariance errors due to the truncation of the set S. Therefore, their networks are built with either no scale interaction or interactions of maximum 2 scales. This strongly contrasts with time-series where incorporating inter-scale interactions consistently leads to performance improvements. In our case, the number of scales and inter-scale interactions is rather constrained by the size and computational cost of convolutional kernels (Sec. 5.2.2) rather than their potential negative impact on the modelâs accuracy.
14
# Published in Transactions on Machine Learning Research (12/2023)
7 Limitations and future work Memory and time consumption grows proportionally to the number of scales considered. The biggest limitation of our approach is the increase in memory and time demands as the number of scales considered grows. One potential avenue to mitigate this could involve adopting Monte-Carlo approximations for the computation of group convolutions (Finzi et al., 2020). This strategy might not only establish equivariance to the continuous scale group âin expectationâ, but also dramatically reduce the number of scales considered in each forward pass. Another intriguing direction lies in the extension of partial equivariance (Romero & Lohit, 2022) to the scale group. This extension would enable learning the subset of scales to which the model is equivariant, which in turn could lead to faster execution and enhanced adaptability. Lastly, the adaptation of separable group convolutions (Knigge et al., 2022) offers a means to reduce the computational and memory requirements of wavelet networks.
Convolutions with large convolutional kernels: parameterization and efficiency. The foundation of our approach hinges on computing convolutions with banks of dilated convolutional kernels (Eq. 12, 11). Consequently, considering how these kernels are parameterized as well as how these convolutions are com- puted can unveil avenues for future improvement. Recently, Romero et al. (2021) introduced an expressive continuous parameterization for (large) convolutional kernels that has proven advantageous for complex tasks such as large language modelling (Poli et al., 2023) and processing DNA chains (Nguyen et al., 2023). Exploring the use of this parameterization for wavelet networks could lead to valuable insights and improve- ments, potentially surpassing the current utilization of B2-spline bases. Furthermore, convolutional networks that rely on convolutions with very large convolutional kernels, e.g., Romero et al. (2021); Poli et al. (2023); Nguyen et al. (2023), leverage the Fourier transform to compute convolutions in the frequency domain. In the context of wavelet networks, dynamically selecting between spatial and Fourier convolutions based on the size of convolutional kernels has the potential to significantly improve their efficiency.
8 Conclusion In conclusion, this study introduces Wavelet Networks, a new class of neural networks for raw time-series processing that harness the symmetries inherent to time-series data âscale and translationâ for the construc- tion of neural architectures that respect them. We observe a clear connection between the wavelet transform and scale-translation group convolutions, establishing a profound link between our approach and classical spectro-temporal analysis. In contrast to the usual approach, which uses spectro-temporal representations as a frontend for the subsequent use of 2D CNNs, wavelet networks consistently preserve these symmetries across the whole network through the use of convolutional layers that resemble the wavelet transform. Our analysis reveals that wavelet networks combine the benefits of wavelet-like time-frequency decompositions with the adaptability and non-linearity of neural networks.
Our empirical results demonstrate the superiority of Wavelet Networks over conventional CNNs on raw time-series data, achieving comparable performance to approaches that rely on engineered spectrogram- based methods, e.g., log-Mel spectrograms, with reduced parameters and no need for preprocessing.
This work pioneers the concept of scale-translation equivariant neural networks for time-series analysis, opening new avenues for time-series processing.
# References
Sajjad Abdoli, Patrick Cardinal, and Alessandro Lameiras Koerich. End-to-end environmental sound classi- fication using a 1d convolutional neural network. Expert Systems with Applications, 136:252â263, 2019.
Joakim Andén and Stéphane Mallat. Deep scattering spectrum. IEEE Transactions on Signal Processing, 62(16):4114â4128, 2014.
Shaojie Bai, J Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.
Erik J Bekkers. B-spline {cnn}s on lie groups. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=H1gBhkBFDH.
15
# Published in Transactions on Machine Learning Research (12/2023)
Erik J Bekkers, Maxime W Lafarge, Mitko Veta, Koen AJ Eppenhof, Josien PW Pluim, and Remco Duits. Roto-translation covariant convolutional networks for medical image analysis. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 440â448. Springer, 2018.
Gavin M Bidelman and Ameenuddin Syed Khaja. Spectrotemporal resolution tradeoff in auditory processing as revealed by human auditory brainstem responses and psychophysical indices. Neuroscience letters, 572: 53â57, 2014.
Joan Bruna and Stéphane Mallat. Invariant scattering convolution networks. IEEE transactions on pattern analysis and machine intelligence, 35(8):1872â1886, 2013.
E Colin Cherry. Some experiments on the recognition of speech, with one and with two ears. The Journal of the acoustical society of America, 25(5):975â979, 1953.
Keunwoo Choi, George Fazekas, and Mark Sandler. Automatic tagging using deep convolutional neural
networks. arXiv preprint arXiv:1606.00298, 2016.
Taco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pp. 2990â2999, 2016.
Taco S Cohen, Mario Geiger, and Maurice Weiler. A general theory of equivariant cnns on homogeneous spaces. In Advances in Neural Information Processing Systems, pp. 9142â9153, 2019.
Wei Dai, Chia Dai, Shuhui Qu, Juncheng Li, and Samarjit Das. Very deep convolutional neural networks for raw waveforms. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 421â425. IEEE, 2017.
Ingrid Daubechies. Fundamental papers in wavelet theory. Princeton University Press, 2006.
Sander Dieleman and Benjamin Schrauwen. End-to-end learning for music audio. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6964â6968. IEEE, 2014.
Sander Dieleman, Jeffrey De Fauw, and Koray Kavukcuoglu. Exploiting cyclic symmetry in convolutional neural networks. arXiv preprint arXiv:1602.02660, 2016.
Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolutional neural networks for equivariance to lie groups on arbitrary continuous data. arXiv preprint arXiv:2002.12880, 2020.
Fabian B Fuchs, Daniel E Worrall, Volker Fischer, and Max Welling. Se (3)-transformers: 3d roto-translation equivariant attention networks. arXiv preprint arXiv:2006.10503, 2020.
Sadaoki Furui. Speaker-independent isolated word recognition based on emphasized spectral dynamics. In ICASSPâ86. IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 11, pp. 1991â1994. IEEE, 1986.
Dennis Gabor. Theory of communication. part 1: The analysis of information. Journal of the Institution of Electrical Engineers-Part III: Radio and Communication Engineering, 93(26):429â441, 1946.
Karan Goel, Albert Gu, Chris Donahue, and Christopher Ré. Itâs raw! audio generation with state-space models. In International Conference on Machine Learning, pp. 7616â7633. PMLR, 2022.
Alex Grossmann, Jean Morlet, and T Paul. Transforms associated to square integrable group representations.
i. general results. Journal of Mathematical Physics, 26(10):2473â2479, 1985.
Eric Guizzo, Tillman Weyde, and Jack Barnett Leveson. Multi-time-scale convolution for emotion recognition from speech audio signals. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6489â6493. IEEE, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770â778, 2016.
16
# Published in Transactions on Machine Learning Research (12/2023)
Michael J Hewitt and Ray Meddis. A computer model of amplitude-modulation sensitivity of single units in
the inferior colliculus. The Journal of the Acoustical Society of America, 95(4):2145â2159, 1994.
PLM Johannesma. The pre-response stimulus ensemble of neurons in the cochlear nucleus. In Symposium on Hearing Theory, 1972. IPO, 1972.
Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
David M Knigge, David W Romero, and Erik J Bekkers. Exploiting redundancy: Separable group convolu- tional networks on lie groups. In International Conference on Machine Learning, pp. 11359â11386. PMLR, 2022.
Edith Law, Kris West, Michael I Mandel, Mert Bay, and J Stephen Downie. Evaluation of algorithms using games: The case of music tagging. In ISMIR, pp. 387â392, 2009.
Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1 (4):541â551, 1989.
Jongpil Lee, Jiyoung Park, Keunhyoung Luke Kim, and Juhan Nam. Sample-level deep convolutional neural networks for music auto-tagging using raw waveforms. arXiv preprint arXiv:1703.01789, 2017.
Tony Lindeberg and Anders Friberg. Idealized computational models for auditory receptive fields. PLoS one, 10(3), 2015a.
Tony Lindeberg and Anders Friberg. Scale-space theory for auditory signals. In International Conference on Scale Space and Variational Methods in Computer Vision, pp. 3â15. Springer, 2015b.
Jen-Yu Liu, Shyh-Kang Jeng, and Yi-Hsuan Yang. Applying topological persistence in convolutional neural network for music audio signals. arXiv preprint arXiv:1608.07373, 2016.
Xugang Lu, Peng Shen, Sheng Li, Yu Tsao, and Hisashi Kawai. Deep progressive multi-scale attention for acoustic event classification. arXiv preprint arXiv:1912.12011, 2019.
Stéphane Mallat. A wavelet tour of signal processing. Elsevier, 1999.
Stéphane Mallat. Group invariant scattering. Communications on Pure and Applied Mathematics, 65(10): 1331â1398, 2012.
Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. arXiv preprint arXiv:1812.09902, 2018.
Brian CJ Moore and Hedwig E Gockel. Properties of auditory stream formation. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1591):919â931, 2012.
Eric Nguyen, Michael Poli, Marjan Faizi, Armin Thomas, Callum Birch-Sykes, Michael Wornow, Aman Patel, Clayton Rabideau, Stefano Massaroli, Yoshua Bengio, et al. Hyenadna: Long-range genomic sequence modeling at single nucleotide resolution. arXiv preprint arXiv:2306.15794, 2023.
Vijayaditya Peddinti, TaraN Sainath, Shay Maymon, Bhuvana Ramabhadran, David Nahamoo, and Vaib- hava Goel. Deep scattering spectrum with deep neural networks. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 210â214. IEEE, 2014.
Karol J Piczak. Environmental sound classification with convolutional neural networks. In 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1â6. IEEE, 2015.
Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. Hyena hierarchy: Towards larger convolutional language models. arXiv preprint arXiv:2302.10866, 2023.
17
# Published in Transactions on Machine Learning Research (12/2023)
Jordi Pons and Xavier Serra. Randomly weighted cnns for (music) audio classification. In ICASSP 2019- 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 336â340. IEEE, 2019.
Jordi Pons, Oriol Nieto, Matthew Prockup, Erik Schmidt, Andreas Ehmann, and Xavier Serra. End-to-end learning for music audio tagging at scale. arXiv preprint arXiv:1711.02520, 2017a.
Jordi Pons, Olga Slizovskaia, Rong Gong, Emilia Gómez, and Xavier Serra. Timbre analysis of music audio signals with convolutional neural networks. In 2017 25th European Signal Processing Conference (EUSIPCO), pp. 2744â2748. IEEE, 2017b.
Dario Rethage, Jordi Pons, and Xavier Serra. A wavenet for speech denoising. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5069â5073. IEEE, 2018.
David W Romero and Suhas Lohit. Learning partial equivariances from data. Advances in Neural Information Processing Systems, 35:36466â36478, 2022.
David W Romero, Erik J Bekkers, Jakub M Tomczak, and Mark Hoogendoorn. Attentive group equivariant convolutional networks. arXiv preprint arXiv:2002.03830, 2020.
David W Romero, Anna Kuzina, Erik J Bekkers, Jakub M Tomczak, and Mark Hoogendoorn. Ckconv: Continuous kernel convolution for sequential data. arXiv preprint arXiv:2102.02611, 2021.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted InterventionâMICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, pp. 234â241. Springer, 2015.
David E Rumelhart, Geoffrey E Hinton, Ronald J Williams, et al. Learning internal representations by error propagation, 1985.
Justin Salamon and Juan Pablo Bello. Feature learning with deep scattering for urban sound analysis. In 2015 23rd European Signal Processing Conference (EUSIPCO), pp. 724â728. IEEE, 2015.
Justin Salamon, Christopher Jacoby, and Juan Pablo Bello. A dataset and taxonomy for urban sound research. In Proceedings of the 22nd ACM international conference on Multimedia, pp. 1041â1044, 2014.
Roberta Santoro, Michelle Moerel, Federico De Martino, Rainer Goebel, Kamil Ugurbil, Essa Yacoub, and Elia Formisano. Encoding of natural sounds at multiple spectral and temporal resolutions in the human auditory cortex. PLoS computational biology, 10(1), 2014.
Vıctor Garcia Satorras, Emiel Hoogeboom, and Max Welling. E (n) equivariant graph neural networks. In International conference on machine learning, pp. 9323â9332. PMLR, 2021.
Louis L Scharf. Statistical signal processing, volume 98. Addison-Wesley Reading, MA, 1991.
Ivan Sosnovik, MichaÅ Szmaja, and Arnold Smeulders. Scale-equivariant steerable networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJgpugrKPS.
Ivan Sosnovik, Artem Moskalev, and Arnold WM Smeulders. Scale equivariance improves siamese tracking. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 2765â2774, 2021.
Stanley Smith Stevens, John Volkmann, and Edwin B Newman. A scale for the measurement of the psycho- logical magnitude pitch. The Journal of the Acoustical Society of America, 8(3):185â190, 1937.
Daniel Stoller, Sebastian Ewert, and Simon Dixon. Wave-u-net: A multi-scale neural network for end-to-end audio source separation. arXiv preprint arXiv:1806.03185, 2018.
18
# Published in Transactions on Machine Learning Research (12/2023)
Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Tensor Field Networks: Rotation-and Translation-Equivariant Neural Networks for 3D Point Clouds. arXiv preprint arXiv:1802.08219, 2018.
Yuji Tokozume and Tatsuya Harada. Learning environmental sounds with end-to-end convolutional neural network. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2721â2725. IEEE, 2017.
Karen Ullrich, Jan Schlüter, and Thomas Grill. Boundary detection in music structure analysis using convolutional neural networks. In ISMIR, pp. 417â422, 2014.
Leo Paulus Antonie Servatius van Noorden et al. Temporal coherence in the perception of tone sequences, volume 3. Institute for Perceptual Research Eindhoven, the Netherlands, 1975.
Patrick von Platen, Chao Zhang, and Philip Woodland. Multi-span acoustic modelling using raw waveform signals. arXiv preprint arXiv:1906.11047, 2019.
Maurice Weiler and Gabriele Cesa. General e (2)-equivariant steerable cnns. In Advances in Neural Infor- mation Processing Systems, pp. 14334â14345, 2019.
Maurice Weiler, Fred A Hamprecht, and Martin Storath. Learning steerable filters for rotation equivariant cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 849â858, 2018.
Daniel E Worrall and Max Welling. Deep scale-spaces: Equivariance over scale. arXiv preprint arXiv:1905.11697, 2019.
Yong Xu, Qiuqiang Kong, Wenwu Wang, and Mark D Plumbley. Large-scale weakly supervised audio clas- sification using gated convolutional neural network. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 121â125. IEEE, 2018.
Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexan- der J Smola. Deep sets. Advances in neural information processing systems, 30, 2017.
Matthew D Zeiler. Adadelta: an adaptive learning rate method. arXiv preprint arXiv:1212.5701, 2012.
Chiyuan Zhang, Stephen Voinea, Georgios Evangelopoulos, Lorenzo Rosasco, and Tomaso Poggio. Discrimi- native template learning in group-convolutional networks for invariant speech representations. In Sixteenth Annual Conference of the International Speech Communication Association, 2015.
Zhoutong Zhang, Yunyun Wang, Chuang Gan, Jiajun Wu, Joshua B. Tenenbaum, Antonio Torralba, and William T. Freeman. Deep audio priors emerge from harmonic convolutional networks. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rygjHxrYDB.
Wei Zhu, Qiang Qiu, Robert Calderbank, Guillermo Sapiro, and Xiuyuan Cheng. Scaling-translation- equivariant networks with decomposed convolutional filters. The Journal of Machine Learning Research, 23(1):2958â3002, 2022.
Zhenyao Zhu, Jesse H Engel, and Awni Hannun. Learning multiscale features directly from waveforms. arXiv preprint arXiv:1603.09509, 2016.
19
# Published in Transactions on Machine Learning Research (12/2023)
# Appendix
# A Group and group action
Group. A group is an ordered pair (G, ·) where G is a set and · : G ÃG â G is a binary operation on G, such that (i) the set is closed under this operation, (ii) the operation is associative, i.e., (g1 · g2) · g3 = g1 · (g2 · g3), g1, g2, g3 â G, (iii) there exists an identity element e â G such that âg â G we have e · g = g · e = g, and (iv) for each g â G, there exists an inverse gâ1 such that g · gâ1 = e. Subgroup. Given a group (G, ·), we say that a subset H is a subgroup of G if the tuple (H, ·) also complies to the group axioms. For example, the set of rotations by 90â¦, H={0â¦, 90â¦, 180â¦, 270â¦}, is a subgroup of the continuous rotation group as it also complies to the group axioms.
Group action. Let G be a group and X be a set. The (left) group action of G on X is a function
A : G à X â X, Ag : x â xâ², =Ag2 ⦠Ag1
(25) . In other words, the action of G on X describes how the
such that for any g1, g2 â G, Ag2g1 elements in the set x â X are transformed by elements g â G. For brevity, Ag(x) is written as gx.
# B Equivariance properties of common time-frequency transforms
B.1 The Fourier transform The Fourier transform represents a function with finite energy f â L2(R) as a sum of complex sinusoidal waves eiÏt= cos Ït + i sin Ït:
f (t) = 1 2Ï Z â ââ Ëf (Ï) eiÏt dÏ,
where, Ëf (Ï) depicts the amplitude of each component eiÏt in f . The Fourier transform F is defined as:
Z â
F[f ](Ï) = Ëf (Ï) = â¨f, eiÏtâ© = f (t) eâiÏt dt. ââ
In other words, the Fourier transform encodes f into a time-frequency dictionary D={eiÏt}ÏâR. Input translation. Let Lt0
[f ](t)=f (t â t0) be a translated version of f . Its Fourier transform is given by: Z â
[f]|(w) = [. f(t âto) e**" dt | t=t â to; dt=dt -[ f@ eli ara eee [ f(b) et! dé =e" F [f](w) (26)
# F[Lt0
In other words, a translation of t0 corresponds to a phase modulation of eâiÏt0 in the frequency domain. 0 t), s0 â R>0, be a scaled version of f . Its Fourier transform equals: Input scaling. Let Ls0 Z â
F [Leo Lf] (@) = [. f(sp't) edt | t=s, ât; dt=sy âdt = [. ro e #(808) d (sof) = 5 [-. r@ en i(sow)é gf = s0F[f](sow) = soL,â1[F[f]]() (27)
In other words, we observe that a dilation on the time domain produces a compression in the Fourier domain times the inverse of the dilation.
Simultaneous input translation and scaling. Following the same derivation procedure, we can show the behavior of the Fourier transform to simultaneous translations and dilations of the input:
F[Ls0Lt0 [f ]](Ï) = s0eâiÏt0F[f ](s0Ï) = eâiÏt0 s0Lsâ1 0 [F[f ]](Ï). (28)
This corresponds to the superposition of the previously exhibited behaviours.
20
# Published in Transactions on Machine Learning Research (12/2023)
Effect of input transformations on the spectral density. The spectral density of a function f â L2(R) is given by |F[f ](Ï)|2. Input translations and dilations produce the following transformations:
# |F[Lt0 |F[Ls0
[f ]](Ï)|2 = |F[f ](Ï)|2 [f ]](Ï)|2 = |s0|2|Lsâ1
IF [Leg FI)? = (FLA)? (29)
[F[f ]](Ï)|2 0 (30)
Equivariance and invariance properties of the Fourier transform. From Eq. 26 we can see that the Fourier transform is translation equivariant as it encodes translations of the input as a phase modulation In addition, it is also scale equivariant (Eq. 27), as it encodes dilations of the input as of the output. a modulation of the frequency components in the output. We can prove that the Fourier transform is dilation and translation equivariant by showing that the output transformations eâiÏt0 and s0Lsâ1 are group representations of the translation and scaling group in the Fourier space.
Group representation. Let G be a group and f be a function on a given functional space LV(X). The (left) regular representation of G is a linear transformation L : G Ã LV(X) â LV(X) which extends group actions to functions on LV(X) by:
Lg : f â f â², f â²(Ag(x)) = f (x) â f â²(x) = f (gâ1x),
such that for any g1, g2 â G, Lg2g1 . In other words, the group representation describes how a function on a functional space f â LV(X) is modified by the effect of group elements g â G.
We can show that the combination of input translations t0, t1 â R or dilations s0, s1 â R>0 produces a trans- formation on the Fourier domain that preserves the group structure. In other words, that the transformations previously outlined are group representations. Specifically, for Lt1
F (Le, [Lig fll] (@) = ee F[f[(w) = OME [F](w) = LS FL) F [Loy [Leo LFI] () = srLe [s0L.- LF LET] ) = (8081) F [f](sisow) = LE" FLAT)
with LFourier t and LFourier s Unfortunately, the resulting group representations rapidly become cumbersome specially in the presence of several input components. In addition, although the calculation of the spectral density leaves the scale equivariance property of the transformation unaffected, Eq. 29 shows that it reduces translation equivariance of the Fourier transform to translation invariance. This is why the Fourier transform is commonly considered not to carry positional information.
B.2 The short-time Fourier transform The short-time Fourier transform of a signal f â L2(R) is given by:
S[f ](t, Ï) = Z +â f (Ï )w(Ï â t) eâiÏÏ dÏ. ââ
In other words, it encodes the input f into a time-frequency dictionary D={Ït,Ï}, Ït,Ï=w(Ï â t) eâiÏÏ . Input translation. Let Lt0 is given by:
Z â
[Li [f]](t,w) = [: f(r â to)w(r ât) e**T dr t=r â to; dt=dr = [40 w(t+to ât)e ~i8(E+to) qf â rae [7 f()w(r â (t- to)) e et dt eo OS[f](t â to,w) =e L,,[8[f]](t,0) (31)
# S[Lt0
In other words, a translation by t0 in the time domain, corresponds to a shift by t0 on the time axis of the short-time Fourier transform, and an additional phase modulation of eâiξt0 similar to that of the Fourier transform (Eq. 26).
21
(29)
# Published in Transactions on Machine Learning Research (12/2023)
I(t) = cos 3t + cos Tt F(t) = cos $t-+ cos Ft S(t) = cos 8t + cos ft i| Wd \ N\, 9/\ i I {\ N \ I i Hy i Wy] lal \W \ | y A /\ , N\ l\_f Nal lap â w(t) w(0) /⢠/~ / J \ y, \ / \ a (fw) /\ i) i) \ | \ adhd 1 Vip ââ } \ ~ â_â a â a V Ny 4 Freq. Dec. + Freq. Dec. tL Freq. Dec. et eb wai eb) \ || | \ allt | ln / \ âil VL LPS ~\ J \ {i VW I \ VV VI w=38 w=3 /\ w= wae /\ ri 1 2 f\ 7 s / \ ~f\Vpr A | \ on / / \/ \ \ VV
Figure 10: Scale equivariance of the short-time Fourier transform. Consider a function f (t)= cos Ï1t + cos Ï2t composed of two frequencies Ï1=3 and Ï2=7, and a window function w(t), with which the short-time Fourier transform is calculated. For relatively high frequencies (left column), the dot-product of f and w, â¨f, wâ©, is able to capture sufficient spectral information from f to correctly extract the frequencies Ï1, Ï2 from it. However, for dilated versions of the same signal f (right columns) obtained by reducing the frequency of the spectral components Ï1, Ï2 of f , the capacity of the dot-product â¨f, wâ© to capture the spectral information in the input gradually degrades and, eventually, is entirely lost. Consequently, scale equivariance holds (approximately) for scales for which all of the spectral components of the signal f lie within the range of the window w.
Input scaling. Let Ls0 is given by: [f ](Ï )=f (sâ1 0 Ï ), s0 â R>0, be a scaled version of f . Its short-time Fourier transform
Z â
S[L5,[f]](,w) = / f (sq 't)w(r ât) e'*7 dr | t=s, â7; df=s) âdr 0° - 0 . ao ~ / f(â¬)w(sot â t) eW*#6) d(sof) = | f(Dw(sot â t) eW6)" dé | t=sq tsot Hos 05 of f(Bw(so(é â sp 1t)) 1 %)! dé | w(sT) © w(T) 80 [ sue- sp lt) e160") dé = 89 S[f] (sp lt, sow) (32)
# S[Ls0
In other words, a dilation in the time domain produces a compression in the frequency domain analogous to the Fourier transform (Eq. 27). However, it is important to note that we rely on the approximate w(x)âw(sx) to arrive to the final expression. Nevertheless, it is important to note that this approximate does not generally holds in practice. This approximation implies that the window function w is invariant to scaling, which holds only for increasing window sizes, i.e., when the short-term Fourier transform starts to approximate the (global) Fourier transform.
Simultaneous input translation and scaling. Following the same derivation procedure, we can show the behavior of the short-time Fourier transform to simultaneous translations and scaling. We have that: [f ]](t, Ï) = s0eâiÏt0S[f ](sâ1 0
22
# Published in Transactions on Machine Learning Research (12/2023)
Effect of input transformations on the spectrogram. The spectrogram of a function f â L2(R) is given by |S[f ](t, Ï)|2. Input translations and dilations produce the following transformations: [f ]](t, Ï)|2 = |Lt0 [f ]](t, Ï)|2 = |s0|2|S[f ](sâ1
# |F[Lt0 |F[Ls0
Equivariance and invariance properties of the short-time Fourier transform. The short-time Fourier transform is approximately translation and scale equivariance in a manner similar to that of the Fourier transform. In contrast to the Fourier transform, however, it decomposes input translations into a translation t â t0 and a phase shift eâiÏt0 in the output (Eq. 31). This decomposition can be interpreted as a rough estimate t â t0 signalizing the position in which the window w is localized, and a fine grained localization within that window given by the phase shift eâiÏt0 indicating the relative position of the pattern within the window L(tât0)[w](Ï ). Equivariance to dilations is analogous to the Fourier transform up to the fact that time and frequency are now jointly described. However, since the window itself does not scale with the sampled frequency âas is the case in wavelet transformsâ, exact equivariance is not obtained. Note that equivariance to dilations is only approximate, and is restricted to the set of scales that can be detected with the width of the window used (see Fig. 10 for a visual explanation). Since this is not generally the case, the short-time Fourier transform is not scale equivariant.
The calculation of the spectrogram leaves the scale equivariance property of the transformation unaffected and is equivalent in a join manner to the scale equivariance property of the Fourier transform (Eq. 32). Differently however, Eq. 34 shows that translation equivariance is partially preserved and only information about the phase shift within the window is lost. This is why the short-time Fourier transform is said to carry positional information, i.e., to be (approximately) translation equivariant.
# B.3 The Wavelet Transform
The wavelet transform of a signal f â L2(R) is given by: Z +â
Wists) = ade) =f 400) 4 (1) on â0o
and is equivalent to encoding f into a time-frequency dictionary D = {Ït,s}uâR,sâR>0 Input translation. Let Lt0 Z â , Ït,s(Ï )= 1â s Ïâ( Ï ât s [f ](Ï )=f (Ï â t0) be a translated version of f . Its wavelet transform is given by: â ).
[Lr Lfll(t,s) = [tc ~ ty) V8 *U"(s-(r 8) dr | Fer ~ ty; dé=ar _ [fOvE WOT t â1)) at= [sovetwoe- (tâto))) dé = W[f|(t â to, 8) = La WLsl(t.s) (36)
# W[Lt0
In other words, a translation of the input produces an equivalent translation in the wavelet domain.
Input scaling. Let Ls0 [f ](t)=f (sâ1
ât) be a scaled version of f. The corresponding wavelet transform is: [Hey 'VE WE Mr =) ar | Fay tes dé=s5 "ar
[Lal flttss) = [Hey 'VE WE Mr =) ar | Fay tes dé=s5 "ar = [fove'e( ot - t)) (sot) | t=so | sot 00 -1 = | fOVs âsob"(s7!s0(E â sy 't)) dé | so=\/ 89/50" 00 ââ-1 = vs ff @y/s5's u"((so's) (E- spt) dt = Â¥50 WLf](so't. 59's) = V0 Lo MLA](t, 5)
# W[Ls0
23
(37)
# Published in Transactions on Machine Learning Research (12/2023)
In other words, a dilation s0 in the input domain produces an equivalent dilation in the wavelet domain on both components (t, s), multiplied by a factor â s0. That is, the wavelet transform is translation equivariant. Simultaneous input translation and scaling. Following the same procedure, we can show the behavior of the wavelet transform to simultaneous translations and dilations of the input: â
â
W[f (sâ1
(Ï â t0)](t, s) = s0 W[f ](sâ1 0 (t â t0), sâ1 0 s) = s0 Lt0 Ls0 W[f ](t, s) 0 (38)
We observe that the Wavelet transform is the only time-frequency transform that respects equivariance with equivalent group representations in the input and output domain.
Effect of input transformations on the scalogram. The scalogram of a function f â L2(R) is given by |W[f ](u, s)|2. Input translations and dilations produce the following transformations on the scalogram: [f ]](u, s)|2 = |Lt0 [f ]](u, s)|2 = |Ls0
# |W[Lt0 |W[Ls0
[W[f ]](u, s)|2 [W[f ]](u, s)|2
In other words the scalogram is exactly equivariant to both translations and dilations.
Equivariance and invariance properties of the wavelet transform. From Eq. 36, we can see that the wavelet transform is exactly equivariant to translations and the group representation of the output space equals that of the input space. Furthermore, translation equivariance is preserved in the scalogram as well (Eq. 39). Similarly, scale equivariance is preserved on the wavelet transform up to a multiplicative factor (Eq. 37). However, the scalogram preserves both translation and dilation equivariance exactly (Eq. 40).
We emphasize that the group representation on the output space resembles that of the input space. This behavior leads to much more straightforward group representations than that exhibited by the Fourier transform and the short-time Fourier transform. Additionally, exact scale equivariance is only obtained on the scalogram (Eq. 40), whilst for the wavelet transform it is retained up to multiplicative factor (Eq. 37). This behavior elucidates the fact that time-frequency transforms have been optimized for energy density representations rather than for the time-frequency representations themselves.
C Experimental details Whenever possible, we use existing code for the baselines of our wavelet networks as a starting point for the general infrastructure of our model. Specifically, we utilize the PyTorch implementation provided in https://github.com/philipperemy/very-deep-convnets-raw-waveforms and https://github.com/ kyungyunlee/sampleCNN-pytorch as baseline for the US8K experiments and the MTAT experiments Lee et al. (2017), respectively. By doing so, we aim to preserve the reproducibility of the experiments in the baseline papers during our own experiments, as some important training factors are not specified in the baseline papers, e.g., the learning rate used in Dai et al. (2017). Unfortunately, Abdoli et al. (2019) do not provide code and we were forced to interpret some of the ambiguities in the paper, e.g., the pooling type utilized in the pooling layers and the loss metric used.
Any omitted parameters can safely be considered to be the default values in PyTorch 1.5.0. Our experi- ments are carried out in a Nvidia TITAN RTX GPU.
C.1 UrbanSound8K Wn-Nets. We use a sampling rate of 22.05kHz as opposed to the 8kHz used in Dai et al. (2017). An early study that indicated that some classes were indistinguishable for the human ear at this sampling rate.3 We zero-pad signals shorter than 4 seconds so that all input signals have a constant length of 80200 samples. Following the implementation of Dai et al. (2017), we utilize the Adam optimizer (Kingma & Ba, 2014) with lr=1e-2 and weight_decay=1e-4, and perform training on the official first 9 folds and test on the 10th fold. We noticed that reducing the learning rate from 1e-2 to 1e-3 increased the performance of our W-Nets. The reported results of the W-Net variants are obtained with this learning rate.
We utilize batches of size 16 and perform training for 400 epochs. The learning rate is reduced by half after 20 epochs of no improvement in validation loss. The Wn-nets used are specified in Table 4. See Dai et al. (2017, Tab. 1) for comparison.
3See https://github.com/dwromero/wavelet_networks/experiments/UrbanSound8K/data_analysis.ipynb.
24
# Published in Transactions on Machine Learning Research (12/2023)
Table 4: Wn-networks. W3-Net (0.219M) denotes a 3-layer network with 0.219M parameters. [79/4, 150, 3] denotes a group convolutional layer with a nominal kernel size of 79 samples, 150 filters and 3 scales, with a stride of 4. Stride is omitted for stride 1 (e.g., [3, 150, 3] has stride 1). Each convolutional layer uses batch normalization right after the convolution, after which ReLU is applied. Following the findings of Romero et al. (Romero et al., 2020, Appx. C) on the influence of stride in the equivariance of the network, we replace strided convolutions by normal convolutions, followed by spatial pooling. [. . . ] Ã k denotes k stacked layers and double layers in brackets denote residual blocks as defined in (Dai et al., 2017, Fig. 1b). In each of the levels of convolutional layers and residual blocks, the first convolution of the first block has scale 3 and the remaining convolutional layers at that level has scale 1.
W3-NET W5-NET W11-NET W18-NETr W34-NEr (0.219M) â (0.558M) (1.806M) (3.759M) (4.021M) Input: 80200X 1 TIME-DOMAIN WAVEFORM LirTING LAYER ( 9 SCALES) (79/4,150] [79/4,74] (79/4, 511] (79/4, 57] (79/4, 45] MAXPOoL: 4x1 (OUTPUT: 80200x 9x 1) (3,150,3] [3,743] [3,51,3)x2 [B,57,3) x4 [Pa] «8 MAXPOOL: 4x1 (OUTPUT: 80200x 7x 1) (3,148,3) [3,102,3) x2 [3,114,3] x 4 [39 x4 MAXPooL: 4x1 (ouTPUT: 80200x 5x 1) , ; 2 3,180] - « (3,296, 3] [3,204,3] x 3. [3, 228, 3] « 4 [e139 x6 MAxPooL: 4x1 (OUTPUT: 80200x 3x 1) [3, 408, 3] x 2 GLOBAL AVERAGE POOLING (OUTPUT: 1 X N) Sorrmax [110] (ourpuT: 1 x N)
25
# Published in Transactions on Machine Learning Research (12/2023)
Table 5: Wavelet network variant of the 50999-1DCNN (Abdoli et al., 2019). [31/2, 24, 3] denotes a group convolutional layer with a nominal kernel size of 31 samples, 24 filters and 3 scales, with a stride of 2. FC: [96, 48] denotes a fully-connected layer with 96 input channels and 48 output channels. Each convolutional layer uses batch normalization right after the convolution followed by ReLU. All fully connected layers expect for the last one use dropout of 0.25 and ReLU. Following the findings of Romero et al. (2020, Appx. C) on the influence of stride in the equivariance of the network, we replace strided convolutions with normal convolutions followed by spatial pooling. We note that the input size of our network is (presumably) different from that in Abdoli et al. (2019). Consequently, the last pooling layer utilizes a region of 5, in contrast to 4 as used in Abdoli et al. (2019). However, as it is not clear how the input dimension is reduced from 64000 to 50999 in Abdoli et al. (2019) and we stick to their original sampling procedure. We interpret their poling layers as max-pooling ones.
W-1DCNN (0.549m) Input: 64000x 1 Lifting Layer ( 9 scales) [63/2, 12] Maxpool: 8x1 [31/2, 24, 3] Maxpool: 8x1 [15/2, 48, 3] [7/2, 96, 3] [3/2, 408, 3] Maxpool: 5x1 Flatten 196 Ã 6 â 1152 FC: [1152, 96] FC: [96, 48] FC: [48, 10] Softmax
W-1DCNN. Following Abdoli et al. (2019), we utilize a sampling rate of 16kHz during our experiments. We zero-pad signals shorter than 4 seconds so that all input signals have a constant length of 64000 samples. Following the experimental description of the paper, we utilize the AdaDelta optimizer (Zeiler, 2012) with lr=1.0 and perform training in a 10-fold cross validation setting as described in Sec. 6. We use batches of size 100 and perform training for 100 epochs. We utilize the 50999-1DCNN variant of Abdoli et al. (2019), as it is the variant that requires the less human engineering.4
Unfortunately, we were not able to replicate the results reported in Abdoli et al. (2019) (83±1.3%) in our experiments. Our replication of Abdoli et al. (2019) lead to a 10-cross fold accuracy of 62±1.3%, which is 21 accuracy points less relative to the results reported. We experiment with our interpretation of the mean squared logarithmic error (MSLE) loss defined in (Abdoli et al., 2019, Eq. 4). However, we find that the conventional cross-entropy loss leads to better results. Consequently, all our reported results are based on training with this loss.5
The description of the Wavelet 50999-1DCNN Abdoli et al. (2019) is provided in Table 5 (see (Abdoli et al., 2019, Tab. 1) for comparison).
4The remaining architectures partition the input signal into overlapping windows after which the predictions of each windows are summarized via a voting mechanism. Consequently, one could argue that the 50999-1DCNN is the only variant that truly receives the raw waveform signal. Nevertheless it is not clear from the paper how the input signal of 64000 samples is reduced to 50999 samples, which is the input dimension of the raw signal for this architecture type.
# PN
2
, where pi, ai and N are the predicted class, the actual class, and the number of samples respectively. Note, however, that obtaining the predicted class pi, i.e., pi=argmaxof (xi), where f (xi) â RO is the output of the network for a classification problem with O classes and input xi, is a non-differentiable function. Consequently, it is not possible to train the network based on the formulation provided there. In order to train our model with this loss, we re-formulate the MSLE loss as 1 o=1 is a N one-hot encoded version of the label ai. That is, we measure the difference between the one-hot encoded label and the output.
26
# Published in Transactions on Machine Learning Research (12/2023)
Table 6: W39-network. [3/1, 90, 3] denotes a group convolutional layer with a nominal kernel size of 3 samples, 90 filters and 3 scales, with a stride of 1. MP:3x1 denotes a max-pooling layer of size 3. FC: [360, 50] denotes a fully-connected layer with 360 input channels and 50 output channels. Each convolutional layer uses batch normalization after the convolution followed by ReLU. Dropout of 0.5 is used after the 6th and 11th layer. Following the findings of Romero et al. (2020, Appx. C) on the influence of stride in the network equivariances, we replace strided convolutions by normal convolutions followed by spatial pooling.
W-39 Net (2.404m) Input: 59049x 1 Lifting Layer ( 9 scales) [3/3, 90] [3/1, 90, 3], MP:3x1 [3/1, 90, 1], MP:3x1 [3/1, 180, 1], MP:3x1 [3/1, 180, 3], MP:3x1 [3/1, 180, 1], MP:3x1 [3/1, 180, 1], MP:3x1 [3/1, 180, 3], MP:3x1 [3/1, 180, 1], MP:3x1 [3/1, 360, 1], MP:3x1 [3/1, 360, 3] FC: [360, 50] Sigmoid
C.2 MagnaTagATune W39-Network. For the experiments in the MTAT dataset, we utilize the PyTorch code provided by Lee et al. Lee et al. (2017). We use the data and tag preprocessing used in Lee et al. (2017). We utilize the SGD optimizer with lr=1e-2, weight_decay=1e-6 and nesterov=True. We use batches of size 23 and perform training for 100 epochs. The learning rate is reduced by 5 after 3 epochs of no improvement in the validation loss. Early stopping is used if the learning rate drops under 1e-7. We were unable to replicate the per-class AUC results reported in Lee et al. (2017). Our experiments indicated a per-class AUC of 0.893 instead of the 0.905 reported in Lee et al. (2017). Details of the W39-Net used are given in Table 6 (see (Lee et al., 2017, Tab. 1) for comparison).
27 | {
"id": "2102.02611"
} |
2006.05398 | Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from an Initial Scene Image | In this paper, we propose a deep convolutional recurrent neural network that
predicts action sequences for task and motion planning (TAMP) from an initial
scene image. Typical TAMP problems are formalized by combining reasoning on a
symbolic, discrete level (e.g. first-order logic) with continuous motion
planning such as nonlinear trajectory optimization. Due to the great
combinatorial complexity of possible discrete action sequences, a large number
of optimization/motion planning problems have to be solved to find a solution,
which limits the scalability of these approaches.
To circumvent this combinatorial complexity, we develop a neural network
which, based on an initial image of the scene, directly predicts promising
discrete action sequences such that ideally only one motion planning problem
has to be solved to find a solution to the overall TAMP problem. A key aspect
is that our method generalizes to scenes with many and varying number of
objects, although being trained on only two objects at a time. This is possible
by encoding the objects of the scene in images as input to the neural network,
instead of a fixed feature vector. Results show runtime improvements of several
magnitudes. Video: https://youtu.be/i8yyEbbvoEk | http://arxiv.org/pdf/2006.05398 | Danny Driess, Jung-Su Ha, Marc Toussaint | cs.LG, cs.AI, cs.CV, cs.RO, stat.ML | Robotics: Science and Systems (R:SS) 2020 | null | cs.LG | 20200609 | 20200609 | 0 2 0 2 n u J 9 ] G L . s c [
1 v 8 9 3 5 0 . 6 0 0 2 : v i X r a
# Deep Visual Reasoning: Learning to Predict Action Sequences for Task and Motion Planning from an Initial Scene Image
Danny Driess
Jung-Su Ha
Marc Toussaint
Machine Learning and Robotics Lab, University of Stuttgart, Germany Max-Planck Institute for Intelligent Systems, Stuttgart, Germany Learning and Intelligent Systems Group, TU Berlin, Germany
AbstractâIn this paper, we propose a deep convolutional recurrent neural network that predicts action sequences for task and motion planning (TAMP) from an initial scene image. Typical TAMP problems are formalized by combining reasoning on a symbolic, discrete level (e.g. ï¬rst-order logic) with continuous motion planning such as nonlinear trajectory optimization. Due to the great combinatorial complexity of possible discrete action sequences, a large number of optimization/motion planning problems have to be solved to ï¬nd a solution, which limits the scalability of these approaches.
To circumvent this combinatorial complexity, we develop a neural network which, based on an initial image of the scene, directly predicts promising discrete action sequences such that ideally only one motion planning problem has to be solved to ï¬nd a solution to the overall TAMP problem. A key aspect is that our method generalizes to scenes with many and varying number of objects, although being trained on only two objects at a time. This is possible by encoding the objects of the scene in images as input to the neural network, instead of a ï¬xed feature vector. Results show runtime improvements of several magnitudes. Video: https://youtu.be/i8yyEbbvoEk
~~ a! : ~ ~ (a) initial scene (b) action 1 (grasp) (c) action 2 (place) \ mo !
: ~ (b) action 1 (grasp)
~ (c) action 2 (place)
~~ a! (a) initial scene
\ (f) action 5 (place)
mo ! (d) action 3 (grasp)
(e) action 4 (grasp)
action 3 (grasp) (e) action 4 (grasp) (f) action 5 (place)
Fig. 1. Typical scene: The yellow object should be placed on the red spot, which is, however, occupied by the blue object. Furthermore, the yellow object cannot be reached by the robot arm that is able to place it on the red spot.
# I. INTRODUCTION
problems that need to be evaluated. Ideally, we seek to directly predict a feasible action sequence, requiring only a single motion planning problem to be solved.
A major challenge of sequential manipulation problems is that they inherently involve discrete and continuous aspects. To account for this hybrid nature of manipulation, Task and Motion Planning (TAMP) problems are usually formalized by combining reasoning on a symbolic, discrete level with continuous motion planning. The symbolic level, e.g. deï¬ned in terms of ï¬rst-order logic, proposes high level discrete action sequences for which the motion planner, for example nonlinear trajectory optimization or a sampling-based method, tries to ï¬nd motions that fulï¬ll the requirements induced by the high level action sequence or return that the sequence is infeasible. Due to the high combinatorial complexity of possible dis- crete action sequences, a large number of motion planning problems have to be solved to ï¬nd a solution to the TAMP problem. This is mainly caused by the fact that many TAMP problems are difï¬cult, since the majority of action sequences are actually infeasible, mostly due to kinematic limits or geo- metric constraints. Moreover, it takes more computation time for a motion planner to reliably detect infeasibility of a high level action sequence than to ï¬nd a feasible motion when it exists. Consequently, sequential manipulation problems, which intuitively seem simple, can take a very long time to solve.
To overcome this combinatorial complexity, we aim to learn to predict promising action sequences from the scene as input. Using this prediction as a heuristic on the symbolic level, we can drastically reduce the number of motion planning
However, learning to predict such action sequences imposes multiple challenges. First of all, the objects in the scene and the goal have to be encoded as input to the predictor in a way that enables similar generalization capabilities to classical TAMP approaches with respect to scenes with many and changing number of objects and goals. Secondly, the large variety of such scenes and goals, especially if multiple objects are involved, makes it difï¬cult to generate a sufï¬cient dataset. Recently, [48] and [12] propose a classiï¬er that predicts the feasibility of a motion planning problem resulting from a discrete decision in the task domain. However, a major limita- tion of their approaches is that the feasibility for only a single action is predicted, whereas the combinatorial complexity of TAMP especially arises from action sequences and it is not straightforward to utilize such a classiï¬er for action sequence prediction within TAMP.
To address these issues, we develop a neural network that predicts action sequences from the initial scene and the goal as input. An important question is how the objects in the scene can be encoded as input to the predictor in a way that shows similar generalization capabilities of classical TAMP approaches. By encoding the objects (and the goal) in the image space, we show that the network is able to generalize to scenes with many and changing number of objects with only little runtime increase, although it has been trained on only a ï¬xed number of objects. Compared to a purely
discriminative model, since the predictions of our network are goal-conditioned, we do not need to use the network to search over many sequences, but can directly generate them with it. The predicted action sequences parameterize a nonlinear trajectory optimization problem that optimizes a globally con- sistent paths fulï¬lling the requirements induced by the actions. To summarize, our main contributions are
⢠A convolutional, recurrent neural network that predicts from an initial scene image and a task goal promising action sequences, which parameterize a nonlinear trajec- tory optimization problem, to solve the TAMP problem. ⢠A way to integrate this network into the tree search
algorithm of the underlying TAMP framework.
⢠We demonstrate that the network generalizes to situations with many and varying numbers of objects in the scene, although it has been trained on only two objects at a time. From a methodological point of view, this work contains nonlinear trajectory optimization, ï¬rst-order logic reasoning and deep convolutional, recurrent neural networks.
# II. RELATED WORK
A. Learning to Plan
There is great interest in learning to mimic planning itself. The architectures in [41, 33, 39, 1] resemble value iteration, path integral control, gradient-based trajectory optimization and iterative LQR methods, respectively. For sampling-based motion planning, [23] learn an optimal sampling distribution conditioned on the scene and the goal to speed up planning. To enable planning with raw sensory input, there are several works that learn a compact representation and its dynamics in sensor space to then apply planning or reinforcement learning (RL) in the learned latent space [3, 50, 17, 22, 47, 15, 29, 38]. Another line of research is to learn an action-conditioned predictive model [14, 50, 35, 13, 9, 34, 36]. With this model, the future state of the environment for example in image space conditioned on the action is predicted, which can then be utilized within MPC [14, 50] or to guide tree search [35]. The underlying idea is that learning the latent representation and dynamics enables reasoning with high-dimensional sensory data. However, a disadvantage of such predictive models is that still a search over actions is necessary, which grows exponen- tially with sequence length. For our problem which contains handovers or other complex behaviors that are induced by an action, learning a predictive model in the image space seems difï¬cult. Most of these approaches focus on low level actions. Furthermore, the behavior of our trajectory optimizer is only deï¬ned for a complete action sequence, since future actions have an inï¬uence on the trajectory of the past. Therefore, state predictive models cannot directly be applied to our problem. The proposed method in the present work learns a relevant representation of the scene from an initial scene image such that a recurrent module can reason about long-term action effects without a direct state prediction.
B. Learning Heuristics for TAMP and MIP in Robotics
A general approach to TAMP is to combine discrete logic search with a sampling-based motion planning algorithm [24,
7, 40, 6] or constraint satisfaction methods [27, 28, 31]. A major difï¬culty arises from the fact that the number of feasible symbolic sequences increases exponentially with the number of objects and sequence length. To reduce the large number of geometric problems that need to be solved, many heuristics have been developed, e.g. [24, 37, 10], to efï¬ciently prune the search tree. Another approach to TAMP is Logic Geometric Programming (LGP) [42, 43, 44, 45, 18], which combines logic search with trajectory optimization. The advantage of an optimization based approach to TAMP is that the trajectories can be optimized with global consistency, which, e.g., allows to generate handover motions efï¬ciently. LGP will be the underlying framework of the present work. For large-scale problems, however, LGP also suffers from the exponentially increasing number of possible symbolic action sequences [19]. Solving this issue is one of the main motivations for our work. there are several ap- proaches to integrate learning into TAMP to guide the discrete search in order to speed up ï¬nding a solution [16, 5, 26, 25, 46]. However, these mainly act as heuristics, meaning that one still has to search over the discrete variables and probably solve many motion planning problems. In contrast, the network in our approach generates goal-conditioned action sequences, such that in most cases there is no search necessary at all. Similarly, in optimal control for hybrid domains mixed- integer programs suffer from the same combinatorial complex- ity [20, 21, 8]. LGP also can be viewed as a generalization of mixed-integer programs. In [4] (footstep planning) and [21] (planar pushing), learning is used to predict the integer assignments, however, this is for a single task only with no generalization to different scenarios.
A crucial question in integrating learning into TAMP is how the scene and goals can be encoded as input to the learning algorithm in a way that enables similar generalization capabilities of classical TAMP. For example, in [35] the considered scene contains always the same four objects with the same colors, which allows them to have a ï¬xed input vector of separate actions for all objects. In [49] convolutional (CNN) and graph neural networks are utilized to learn a state repre- sentation for RL, similarly in [30]. In [2], rendered images from a simulator are used as state representation to exploit the generalization ability of CNNs. In our work, the network learns a representation in image space that is able to reason over complex action sequences from an initial observation only and is able to generalize over changing numbers of objects.
The work of Wells et al. [48] and Driess et al. [12] is most related to our approach. They both propose to learn a classiï¬er which predicts the feasibility of a motion planning problem resulting from a single action. The input is a feature representation of the scene [48] or a scene image [12]. While both show generalization capabilities to multiple objects, one major challenge of TAMP comes from action sequences and it is, however, unclear how a single step classiï¬er as in [48] and [12] could be utilized for sequence prediction.
To our knowledge, the our work is the ï¬rst that learns to generate action sequences for an optimization based TAMP approach from an initial scene image and the goal as input,
while showing generalization capabilities to multiple objects.
III. LOGIC GEOMETRIC PROGRAMMING FOR TASK AND MOTION PLANNING This work relies on Logic Geometric Programming (LGP) [42, 43] as the underlying TAMP framework. The main idea behind LGP is a nonlinear trajectory optimization problem over the continuous variable x, in which the constraints and costs are parameterized by a discrete variable s that represents the state of a symbolic domain. The transitions of this variable induces a to a ï¬rst-order logic language that are subject decision tree. Solving an LGP involves a tree search over the discrete variable, where each node represents a nonlinear trajectory optimization program (NLP). If a symbolic leaf node, i.e. a node which state s is in a symbolic goal state, is found and its corresponding NLP is feasible, a solution to the TAMP problem has been obtained. In this section, we brieï¬y describe LGP for the purpose of this work.
Let Â¥ = 4(S) CR") x SE(3)'â¢S) be the configuration space of all objects and articulated structures (robots) as a function of the scene S. The idea is to find a global path a in the configuration space which minimizes the LGP KT
KT P(g, S)= min [ c(x(t), &(t), #(), se(r),.S) dt (la) eRe S.t.
s.t. âtâ[0,KT ] : âtâ[0,KT ] : âk=1,...,K : âk=1,...,K : âk=1,...,K :
Neq(x(t), &(t), Sat), S) = 0 (x(t), &(t), 84(4),9) <0 hsw(x(kT), &(kT), an, S) = 0
(1b)
heq hineq hsw ak â A(skâ1, S) sk = succ(skâ1, ak) x(0) = Ëx0(S) s0 = Ës0(S) sK â Sgoal(g).
(1c)
hsw(x(kT), &(kT), an, S) = 0 (1d)
(1d) (1e) (1f) (1g) (1h) (1i)
ap ⬠A(spâ1, 5) (le)
Sk = SUCC(SKâ1, Ak) (if)
x(0) = %(S) (1g)
80 = 50(S) (1h)
8K © Sgoai(g)- (li)
The path is assumed to be globally continuous (x ⬠C((0, 7K], V)) and consists of K ⬠N phases (the number is part of the decision problem itself), each of fixed duration T > 0, in which we require smoothness x ⬠C?({(k â 1)T,kT)). These phases are also referred to as kinematic modes [32] [44]. The functions c, heg, hineg and hence the objectives in phase k of the motion (k(t) = |t/T'|) are parameterized by the discrete variable (or integers in mixed-integer programming) 5x ⬠S, representing the state of the symbolic domain. The time discrete transitions between s;,â; and s;, are determined by the successor function succ(-,-), which is a function of the previous state s,_ and the discrete action a, at phase k. The actions are grounded action operators. Which actions are possible at which symbolic state is determined by the logic and expressed in the set A(s,â1, 5). Asw is a function that imposes transition conditions between the kinematic modes. The task or goal of the TAMP problem is defined symbolically through the set Sgoa(g) for the symbolic goal (a set of grounded literals) g ⬠G(S), e.g. placing an object on a table. The quantities %($) and S9($) are the scene dependent initial continuous and symbolic states, respectively. For fixed s it is assumed
that c, heq and hineq are differentiable. Finally, we deï¬ne the feasibility of an action sequence a1:K = (a1, . . . , aK) as
FS (a1:K) = 1 âx : [0, KT ] â X : (1b) â (1h) 0 else (2)
A. Multi-Bound LGP Tree Search and Lower Bounds
The logic induces a decision tree (called LGP-tree) through (1e) and (1f). Solving a path problem as a heuristic to guide the tree search is too expensive. A key contribution of [43] is therefore to introduce relaxations or lower bounds on (1) in the sense that the feasibility of a lower bound is a necessary condition on the feasibility of the complete problem (1), while these lower bounds should be computationally faster to compute. Each node in the LGP tree deï¬nes several lower bounds of (1). Still, as we will show in the experiments, a large number of NLPs have to be solved to ï¬nd a feasible solution for problems with a high combinatorial complexity. This is especially true if many decisions are feasible in early phases of the sequence, but then later become infeasible.
# IV. DEEP VISUAL REASONING
The main idea of this work is, given the scene and the task goal as input, to predict a promising discrete action sequence a1:K = (a1, . . . , aK) which reaches a symbolic goal state and its corresponding trajectory optimization problem is feasible. An ideal algorithm would directly predict an action sequence such that only a single NLP has to be solved to ï¬nd an overall feasible solution, which consequently would lead to a signiï¬cant speedup in solving the LGP (1).
We will ï¬rst describe more precisely what should be pre- dicted, then how the scene, i.e. the objects and actions with them, and the goal can be encoded as input to a neural network that should perform the prediction. Finally, we discuss how the network is integrated into the tree search algorithm in a way that either directly predicts a feasible sequence or, in case the network is mistaken, acts as a heuristic to further guide the search without losing the ability to ï¬nd a solution if one exists. We additionally propose an alternative way to integrate learning into LGP based on a recurrent feasibility classiï¬er.
A. Predicting Action Sequences
First of all, we define for the goal g the set of all action sequences that lead to a symbolic goal state in the scene S' as T (9,8)={auK : WIE, a; ⬠A(si-1,$), 8; = suce(sj_1, a) 80 = 50(S), sK ⬠Sgoai(g)}- (3)
In relation to the LGP-tree, this is the set of all leaf nodes and hence candidates for an overall feasible solution. One idea is to learn a discriminative model which predicts whether a complete sequence leads to a feasible NLP and hence to a solution. To predict an action sequence one would then choose the sequence from T (g, S) where the discriminative model has the highest prediction. However, computing T (·, ·) (up to a maximum K) and then checking all sequences with the discriminative model is computationally inefï¬cient.
| Action Action ap Encoder -â¢~_ Action-object |_| Object-image image (Ox, S) CNN ~| RNN <2) Pr Goal Goal _âââ} g Encoder Goal-object |_| Object-image "| Pea image (Og, 8) CNN
Fig. 2. Proposed neural network architecture.
Instead, we propose to learn a function 7 (a neural network) that, given a scene description S, the task goal g and the past decisions a 1.,~1, predicts whether an action a, at the current time step k is promising in the sense of the probability that there exist future actions a,+41.« such that the complete sequence a1. leads to a feasible NLP that solves the original TAMP problem. Formally, T(ak, GJ, Q1;-+-,Ak-1, 8) =
T(ak, GJ, Q1;-+-,Ak-1, 8) = | Qk, J, Q1,-+-,Qk-1, ). (4)
This way, Ï generates an action sequence by choosing the action at each step where Ï has the highest prediction.
B. Training Targets
The crucial question arises how 7 as defined in (7) can be trained. The semantics of z is related to a universal Q-function, but it evaluates actions a, based on an implicit representation of state (see Sec. . Furthermore, it turns out that we can cast the problem into supervised learning by transforming the data into suitable training targets. Assume that one samples scenes S", goals g' as well as goal-reaching action sequences ain: ⬠T (g', S"), e.g. with breadth-first search. For each of these sampled sequences, the feasibility of the resulting NLP is determined and saved in the set
Daata = {(S' ai scrg' Fs (ah) Po ®)
Based on this dataset, we deï¬ne the training dataset for Ï as
Darain = { (Sah. jcc! f*) } 6) n i=l
where f i â {0, 1}Ki is a sequence of binary labels. Its jth j indicates for every subsequence ai component f i 1:j whether it should be classiï¬ed as promising as follows
1 Fg (a). «+) =1 1 3 (Sa) jg! F!) © Daata : f= Fl= Fs (a0) =l A gag \ ayy = ah; 0. else
If the action sequence is feasible and solves the problem speciï¬ed by gi, then f i j = 1 for all j = 1, . . . , Ki (ï¬rst case). This is the case where Ï should predict a high probability at each step k to follow a feasible sequence. If the action
(7)
sequence with index i is not feasible, but there exists a feasible one in Ddata (index l) which has an overlap with the other sequence up to step j, i.e. al j = 1 as well (second case). Also in this case the network should suggest to follow this decision, since it predicts that there exist future decisions which lead to a feasible solution. Finally, in the last and third case where the sequence is infeasible and has no overlap with other feasible sequences, f i j = 0, meaning that the network should predict to not follow this decision. This data transformation is a simple pre-processing step that allows us to train Ï in a supervised sequence labeling setting, with the standard (weighted) binary cross-entropy loss. Another advantageous side-effect of this transformation is that it creates a more balanced dataset with respect to the training targets.
C. Input to the Neural Network â Encoding a, g and S
So far, we have formulated the predictor Ï in (4) in terms of the scene S, symbolic actions a and the goal g of the LGP (1). In order to represent Ï as a neural network, we need to ï¬nd a suitable encoding of a, g and S.
1) Splitting actions into action operator symbols and ob- jects: An action a is a grounded action operator, i.e. it is a combination of an action operator symbol and objects in the scene it refers to, similarly for a goal g. While the number of action operators is assumed to be constant, the number of objects can be completely different from scene to scene. Most neural networks, however, expect inputs of ï¬xed dimension. In order to achieve the same generalization capabilities of TAMP approaches with respect to changing numbers of objects, we encode action and goal symbols very differently to the objects they operate on. In particular, object references are encoded in a way that includes geometric scene information.
into a = (¯a, O) â AO(s, S) â A à P(O(S)), where ¯a â A is its discrete action operator symbol and O â P(O(S)) the tuple of objects the action operates on. The goal is similarly decomposed into g = (¯g, Og), ¯g â G, Og â P(O(S)). This separation seems to be a minor technical detail, which is, however, of key importance for the generalizability of our approach to scenes with changing numbers of objects.
Through that separation, since, as mentioned before, the cardinality of A and G is constant and independent from the scene, we can input ¯a and ¯g directly as a one-hot encoding to the neural network.
2) Encoding the objects O and Og in the image space: For our approach it is crucial to encode the information about the objects in the scene in a way that allows the neural network to generalize over objects (number of objects in the scene and their properties). By using the separation of the last paragraph, we can introduce the mapping I : (O,S) 1 R(metno)xwxh, which encodes any scene S and object tuple O to a so-called action-object-image encoding, namely an n, + no-channel image of width w and height h, where the first n, channels represent an image of the initial scene and the last no channels are binary masks which indicate the subset of objects that are involved in the action. These last mask channels not only encode object identity, but substantial geometric and relational
information, which is a key for the predictor to predict feasible action sequences. In the experiments, the scene image is a depth image, i.e. nc = 1 and the maximum number of objects that are involved in a single action is two, hence nO = 2. If an action takes less objects into account than nO, this channel is zeroed. Since the maximum number nO depends on the set of actions operator symbols A, which has a ï¬xed cardinality independent from the scene, this is no limitation. The masks create an attention mechanism which is the key to generalize to multiple objects [12]. However, since each action object image I(O, S) always contains a channel providing information of the complete scene, also the geometric relations to other objects are taken into account. Being able to generate such masks is a reasonable assumption, since there are many methods for that. Please note that these action-object-images always correspond to the initial scene.
3) Network Architecture: Fig. 2 shows the network archi- tecture that represents Ï as a convolutional recurrent neural network. Assume that in step k the probability should be predicted whether an action ak = (¯ak, Ok) for the goal g = (¯g, Og) in the scene S is promising. The action object images I(Ok, S) as well as the goal object images I(Og, S) are encoded by a convolutional neural network (CNN). The discrete action/goal symbols ¯a, ¯g are encoded by fully con- nected layers with a one-hot encoding as input. Since the only information the network has access to is the initial conï¬guration of the scene, a recurrent neural network (RNN) takes the current encoding of step k and the past encodings, which it has to learn to represent in its hidden state hkâ1, into account. Therefore, the network has to implicitly generate its own predictive model about the effects of the actions, without explicitly being trained to reproduce some future state. The symbolic goal ¯g and its corresponding goal object image I(Og, S) are fed into the neural network at each step, since it is constant for the complete task. The weights of the CNN action-object-image encoder can be shared with the CNN of the goal-object-image encoder, since they operate on the same set of object-images. To summarize,
(Pry he) = mn (Gk, T(Ox, S),9, (Og, 5), he-1) = T (ak, 9, G1:k-1; 8) ®)
D. Relation to Q-Functions
In principle, one can view the way we deï¬ne Ï in (4) and how we propose to train it with the transformation (7) as learning a goal-conditioned Q-function in a partially observable Markov decision process (POMDP), where a binary reward of 1 is assigned if a complete action sequence is feasible and reaches the symbolic goal. However, there are important differences. For example, a Q-function usually relies on a clear notion of state. In our case, the symbolic state s does not contain sufï¬cient information, since it neither includes geometry nor represents the effects of all past decisions on the NLP. Similarly for the continuous state x, which is only deï¬ned for a complete action sequence. Therefore, our network has to learn a state representation from the past action- object image sequence, while only observing the initial state
(8)
in form of the depth image of the scene as input. Furthermore, we frame learning Ï as a supervised learning problem.
# E. Algorithm
Algo. 1 presents the pseudocode how Ï is integrated in the tree search algorithm. The main idea is to maintain the set E of expand nodes. A node n = (s, (¯a, O), k, pÏ, h, nparent) in the tree consists of its symbolic state s(n), action-object pair (¯a, O), depth k(n), i.e. the current sequence length, the prediction of the neural network pÏ(n), the hidden state of the neural network h(n) and the parent node nparent. At each iteration, the algorithm chooses the node nâ E of the expand list where the network has the highest prediction (line 5). For all possible next actions, i.e. children of nâ E, the network is queried to predict their probability leading to a feasible solution, which creates new nodes (line 10). If a node reaches a symbolic goal state (line 11), it is added to the set of leaf nodes L, else to the expand set. Then the already found leaf nodes are investigated. Since during the expansion of the tree leaf nodes which are unlikely to be feasible are also found, only those trajectory optimization problems are solved where the prediction pÏ is higher than the feasibility threshold fthresh (set to 0.5 in the experiments). This reduces the number of NLPs that have to be solved. However, one cannot expect that the network never erroneously has a low prediction although the node would be feasible. In order to prevent not ï¬nding a feasible solution in such cases, the function adjustFeasibilityThreshold(·) reduces this threshold with a discounting factor or sets it to zero if all leaf nodes with a maximum depth of Kmax have been found. This gives us the following.
Proposition 1: Algorithm 1 is complete in the sense that if a scene contains at least one action sequence with maximum length Kmax for which the nonlinear trajectory optimizer can ï¬nd a feasible motion, the neural network does not prevent ï¬nding this solution, even in case of prediction errors.
As an important remark, we store the hidden state of the recurrent neural network in its corresponding node. Further- more, the object image and action encodings also have to be computed only once. Therefore, during the tree search, only one pass of the recurrent (and smaller) part of the complete ÏNN has to be queried in each step.
F. Alternative: Recurrent Feasibility Classiï¬er
The method of [12] and [48] to learn a feasibility classiï¬er considers single actions only, i.e. no sequences. To allow for a comparision we here present an approach to extend the idea of a feasibility classiï¬er to action sequences and how it can be integrated into our TAMP framework. The main idea is to classify the feasibility of an action sequence with a recurrent classiï¬er, independently from the fact whether it has reached a symbolic goal or not. This way, during the tree search solving an NLP can be replaced by evaluating the classiï¬er, which usually is magnitudes faster. Technically, this classiï¬er ÏRC (¯ak, I(Ok, S), hkâ1) has a similar architecture as ÏNN, but only takes the current action-object-image pair as well as the hidden state of the previous step as input and predicts whether
# Algorithm 1 LGP with Deep Visual Reasoning
1: Input: Scene S, goal g and max sequence length Kmax 2: L = â
3: E = {n0} 4: while no solution found do
D set of leaf nodes D set of nodes to be expanded, no is root node
3, B= {no} D set of nodes to be expanded, no is root node 4: while no solution found do > choose node from expand set with highest prediction 5: Ne = argmax p(n) nâ¬E A k(n)<Kinx 6 E+ E\{ny} 7 for all (@,O) ⬠AO(s(nj,), S) d 8 (Desh) = ny (a, (0.5) 9.1(Oy,8)-h(n) 9: s = succ(s(njz),(@,O)), k=k(nz) +1 10: n= (s, (a, O), k,px,h, ne > new node 11: if s © Spoa(g) then 12: LeLu {n} D if goal state, add to leaf node set 13: else 14: EO EU {n} > if no goal state, add to expand set 15: end if 16: end for > consider already found leaf nodes 17: while |L| > 0 do > choose node from leaf node set with highest prediction 18: ny, = argmax p(n) néL 19: if p(n) < finresn then 20: finresh < adjustFeasibilityThreshold( fihresh) 21: break 22: end if 23: L+< L\{ni} 24: (a, O)1: K= (a, On. k(n nz) (M7) D extract action seq. 25: solve NLP x = P ((a@, O)1:«,9, Og, 5) 26 if feasible, ie. Fs5((@,O)1.«) = 1 then 27: solution (@,O)1:« with trajectory « found 28: break 29: end if 30: end while 31: end while
bWVLPL?Le -p YP FG
-p
YP
FG
Fig. 3. The four different integer assignments of the grasp operator.
the action sequence up to this step is feasible. A disadvantage is that just because an action is feasible does not necessarily mean that following it will solve the TAMP problem in the long term. Sec. V-E presents an empirical comparison.
V. EXPERIMENTS The video https://youtu.be/i8yyEbbvoEk demonstrates the
planned motions both in simulation and with a real robot.
A. Setup and Task
We consider a tabletop scenario with two robot arms (Franka Emika Panda) and multiple box-shaped objects, see Fig. 1 for a typical scene, in which the goal is to move an object to different target locations, visualized by a red square in Fig. 1.
1) Action Operators and Optimization Objectives: The logic is described by PDDL-like rules. There are two ac- tion operators grasp and place. The grasp action takes as parameters the robot arm and one of four integers η, represented in its discrete action symbol ¯a, and one object O. Depending on the integer, the end-effector is aligned to different surfaces of the box (equality constraint). Furthermore, an inequality constraint ensures that the center point between the two grippers is inside of the object (with a margin). In Fig. 3 these four discrete ways of grasping are visualized for one robot arm. The exact grasping location relative to the object is still subject to the optimizer. The place action has the robot arm (also encoded in the discrete symbol ¯a) and two objects as the tuple O as parameters. The effects on the optimization objectives are that the bottom surface of object one touches and is parallel to object two. In our case, object one is a box, whereas object two is the table or the goal location. Preconditions for grasp and place ensure that one robot arm attempts to grasp only one object simultaneously and that an arm can only place an object if it is holding one. Path costs are squared accelerations on x. There are collisions and joint limits as inequality constraints (with no margin).
2) Properties of the Scene: There are multiple properties which make this (intuitively simple) task challenging for task and motion planning algorithms. First of all, the target location can fully or partially be occupied by another object. Secondly, the object and/or the target location can be out of reach for one of the robot arms. Hence, the algorithm has to ï¬gure out which robot arm to use at which phase of the plan and the two robot arms possibly have to collaborate to solve the task. Thirdly, apart from position and orientation, the objects vary in size, which also inï¬uences the ability to reach or place an object. In addition, grasping box-shaped objects introduces a combinatorics that is not handled well by nonlinear trajectory optimization due to local minima and also joint limits. Therefore, as described in the last paragraph, we introduce integers as part of the discrete action that inï¬uence the grasping geometry. This greatly increases the branching factor of the task. For example, depending on the size of the object, it has to be grasped differently or a handover between the two arms is possible or not, which has a signiï¬cant inï¬uence on the feasibility of action sequences.
Indeed, Tab. I shows the number of action sequences with a certain length that lead to a symbolic goal state over the number of objects in the scene. This number corresponds to candidate sequences for a feasible solution (the set T (g, S)) which demonstrates the great combinatorial complexity of the task, not only with respect to sequence length, but also number of objects. One could argue that an occupied and reachability predicate could be introduced in the logic to reduce the branching of the tree. However, this requires a reasoning engine which decides those predicates for a given scene, which is not trivial for general cases. More importantly, both reachability and occupation by another object is something that is also dependent on the geometry of the object that should be grasped or placed and hence not something that can be precomputed in all cases [12, 11]. For example, if the object
TABLE I NUMBER OF ACTION SEQUENCES THAT REACH A SYMBOLIC GOAL STATE
# of objects in the scene 2 length of the action sequence 3 4 5 6 1 2 3 4 5 8 8 8 8 8 32 96 160 224 288 192 704 1,216 1,728 2,240 1,024 6,400 15,872 29,440 47,104 5,632 51,200 145,920 289,792 482,816
that is occupying the target location is small and the object that should be placed there also, then it can be placed directly, while a larger object that should be placed requires to ï¬rst remove the occupying object. Our algorithm does not rely on such non-general simpliï¬cations, but decides promising action sequences based on the real relational geometry of the scene.
B. Training/Test Data Generation and Network Details
We generated 30,000 scenes with two objects present at a time. The sizes, positions and orientations of the objects as well as the target location are sampled uniformly within a certain range. For half of the scenes, one of the objects (not the one that is part of the goal) is placed directly on the target, to ensure that at least half of the scenes contain a situation where the target location is occupied. The dataset Ddata is determined by a breadth-ï¬rst search for each scene over the action sequences, until either 4 solutions have been found or 1,000 leaf nodes have been considered. In total, for 25,736 scenes at least one solution was found, which were then the scenes chosen to create the actual training dataset Dtrain as described in Sec. IV-B. 102,566 of the action sequences in Ddata were feasible, 2,741,573 completely infeasible. This shows the claim of the introduction that the majority of action sequences are actually infeasible. Furthermore, such an imbalanced dataset imposes difï¬culties for a learning algorithm. With the data transformation from Sec. IV-B, there are 7,926,696 fj = 0 and 1,803,684 fj = 1 training targets in Dtrain, which is more balanced.
The network is trained with the ADAM optimizer (learn- ing rate 0.0005) with a batch size of 48. To account for the aforementioned imbalance in the dataset, we oversample feasible action sequences such that at least 16 out of the 48 samples in one batch come from a feasible sequence. The image encoder consists of three convolutional layers with 5, 10, 10 channels, respectively, and ï¬lter size of 5x5. The second and third convolutional layer has a stride of 2. After the convoultional layers, there is a fully connected layer with an output feature size of 100. The same image encoder is used to encode the action images and the goal image. The discrete action encoder is one fully connected layer with 100 neurons and relu activations. The recurrent part consists of one layer with 300 GRU cells, followed by a linear layer with output size 1 and a sigmoid activation as output for Ï. Since the task is always to place an object at varying locations, we left out the discrete goal encoder in the experiments presented here.
To evaluate the performance and accuracy of our method, we sampled 3000 scenes, again containing two objects each, with the same algorithm as for the training data, but with a dif- ferent random seed. Using breadth-ï¬rst search, we determined
ax w T L T L Number of solved NLPs HNw Ao T L Time (total solution time) [s] ornNwWwA T L I aot â | | | | =} | | | | | 2 3 4 5 6 2 3 4 5 6 Action sequence length Action sequence length
Fig. 4. Total time (left) and number of solved NLPs (right) to ï¬nd an overall feasible solution for the test scenes with neural network
3,000 FT T T T | 2,000 FT T 74 g 5 1,500 | | g 2,000 | |s 3 3 1,000 |- 4 g a = 1,000 |- ya g yt 500 - 4 J z= T 2 of/â - & 4 op â- SS li [=I n n | L | n N | L 2 3 4 5 6 3 4 5 6 Action sequence length Action sequence length
Fig. 5. Comparison to LGP tree search. Left: total runtime. Right: the speedup of our neural network (sol. time with NN / sol. time with LGP tree search).
2705 feasible scenes, which serve as the actual test scenes.
C. Performance â Results on Test Scenarios
Fig. 4 shows both the total runtime and the number of NLPs that have to be solved to ï¬nd a feasible solution. When we report the total runtime, we refer to everything, meaning computing the image/action encodings, querying the neural network during the search and all involved NLPs that are solved. As one can see, for all cases with sequence lengths of 2 and 3, the ï¬rst predicted sequence is feasible, such that there is no search and only one NLP has to be solved. For length 3, the median is still 1, but also for sequences of lengths 5 and 6 in half of the cases less than two NLPs have to be solved. Generally, with a median runtime of about 2.3 s for even sequence length of 6, the overall framework with the neural network has a high performance. Furthermore, the upper whiskers are also below 7 s. All experiments have been performed with an Intel Xeon W-2145 CPU @ 3.70GHz.
D. Comparison to Multi-Bound LGP Tree Search
In Fig. 5 (left) the runtimes for solving the test cases with LGP tree search are presented, which shows the difï¬culty of the task. In 132 out of the 2,705 test cases, LGP tree search is not able to ï¬nd a solution within the timeout, compared to only 3 with the neural network. Fig. 5 (right) shows the speedup that is gained by using the neural network. For sequence length 4, the network is 46 times faster, 100 times for length 5 and for length 6 even 705 times (median). In this plot, only those scenes where LGP tree search and the neural network have found the same sequence lengths are compared.
E. Comparison to Recurrent Classiï¬er
Fig. 6 (left) shows a comparison of our proposed goal- conditioned network that generates sequences to a recurrent classiï¬er that only predicts feasibility of an action sequence, independent from the task goal. As one can see, while such a
Time (total solution time) [s] 40 20 ot aA § g Network query time [s] 2 3 4 5 6 Action sequence length Action sequence length
Fig. 6. Comparison to recurrent classiï¬er (orange). Blue is our network.
classiï¬er also leads to a signiï¬cant speedup compared to LGP tree search, our goal-conditioned network has an even higher speedup, which also stays relatively constant with respect to increasing action sequence lengths. Furthermore, with the classiï¬er 22 solutions have not been found, compared to 3 with our approach. While the network query time is neglectable for our network, as can be seen in Fig. 6 (right), the time to query the recurrent classiï¬er becomes visible.
F. Generalization to Multiple Objects
Creating a rich enough dataset containing combinations of different numbers of objects is infeasible. Instead, we now take the network that has been trained as described in Sec. V-B with only two objects present at a time and test whether it generalizes to scenes with more than two (and also only one) objects. The 200 test scenes are always the same, but more and more objects are added. Fig. 7a reports the total runtime to ï¬nd a feasible solution with our proposed neural network over the number of objects present in the scene. These runtimes include all scenes with different action sequence lengths. There was not a single scene where no solution is found. While the upper quartile increases, the median is not signiï¬cantly affected by the presence of multiple objects. Generally, the performance is remarkable, especially when looking at Tab. I where for sequence length 6 there are nearly half a million possible action sequences. The fact that the runtime increases for more objects is not only caused by the fact that the network inevitable does some mistakes and hence more NLPs have to be solved. Solving (even a feasible) NLP with more objects can take more time due to increased collision queries and increased non-convexity.
G. Generalization to Cylinders
Although the network has been trained on box-shaped objects only, we investigate if the network can generalize to scenes which contain other shapes like cylinders. Since the objects are encoded in the image space, there is a chance that, as compared to a feature space which depends on a less general parameterization of the shape, this is possible. We generated 200 test scenes that either contain two cylinders, three cylinders or a mixture of a box and a cylinder, all of different sizes/positions/orientations and targets. If the goal is to place a cylinder on the target, we made sure in the data generation that the cylinder has an upper limit on its radius in order to be graspable. These cylinders, however, have a relatively similar appearance in the rasterized image as boxes. Therefore, the scenes also contain cylinders which have
Fig. 7. Generalization experiments.
larger radii such that they have a clearly different appearance than what is contained in the dataset. Fig. 7b shows the total solution time with the neural network. As one can see, except for action sequences of length 6, there is no drop in performance compared to box-shaped objects, which indicates that the network is able to generalize to other shapes. Even for length 6, the runtimes are still very low, especially compared to LGP tree search. Please note that our constraints for the nonlinear trajectory optimization problem are general enough to deal with boxes and cylinders. However, one also has to state that for even more general shapes the trajectory optimization becomes a problem in its own.
H. Real Robot Experiments
Fig. 1 shows our complete framework in the real world. In this scene the blue object occupies the goal location and the target object (yellow) is out of reach for the robot arm that is be able to place it on the goal. Since the yellow object is large enough, the network proposed a handover solution (Fig. 1c). The presence of an additional object (green) does not confuse the predictions. The planned trajectories are executed open-loop. The images as input to the neural network are rendered from object models obtained by a perception pipeline, therefore, the transfer to the real robot is directly possible.
VI. CONCLUSION In this work, we proposed a neural network that learned to predict promising discrete action sequences for TAMP from an initial scene image and the task goal as input. In most cases, the ï¬rst sequence generated by the network was feasible. Hence, despite the fact that the network could act as a heuristic, there was no real search over the discrete variable and consequently only one trajectory optimization problem had to be solved to ï¬nd a solution to the TMAP problem.
Although being trained on only two objects present at a time, the learned representation of the network enabled to generalize to scenes with multiple objects (and other shapes to some extend) while still showing a high performance.
The main assumption and therefore limitation of the pro- posed method is that the initial scene image has to contain sufï¬cient information to solve the task, which means no total occlusions or other ambiguities.
ACKNOWLEDGMENTS Danny Driess thanks the International Max-Planck Research School for Intelligent Systems (IMPRS-IS) for the support. Marc Toussaint thanks the Max-Planck Fellowship at MPI-IS.
# REFERENCES
[1] Brandon Amos, Ivan Jimenez, Jacob Sacks, Byron Boots, and J Zico Kolter. Differentiable mpc for end-to-end planning and control. In Advances in Neural Information Processing Systems, pages 8289â8300, 2018. [2] W Bejjani, MR Dogar, and M Leonetti.
Learning physics-based manipulation in clutter: Combining image- based generalization and look-ahead planning. In Inter- national Conference on Intelligent Robots and Systems (IROS). IEEE, 2019.
[3] Byron Boots, Sajid M Siddiqi, and Geoffrey J Gordon. Closing the learning-planning loop with predictive state representations. The International Journal of Robotics Research, 2011.
and Nicolas Mansard. Learning feasibility constraints for multicon- tact locomotion of legged robots. In Robotics: Science and Systems, 2017.
[5] Rohan Chitnis, Dylan Hadï¬eld-Menell, Abhishek Gupta, Siddharth Srivastava, Edward Groshev, Christopher Lin, and Pieter Abbeel. Guided search for task and motion plans using learned heuristics. In International Confer- ence on Robotics and Automation (ICRA), pages 447â 454. IEEE, 2016.
[6] Neil T. Dantam, Zachary K. Kingston, Swarat Chaudhuri, and Lydia E. Kavraki. An incremental constraint-based framework for task and motion planning. International Journal on Robotics Research, 2018.
[7] Lavindra de Silva, Amit Kumar Pandey, Mamoun Gharbi, and Rachid Alami. Towards combining HTN planning and geometric task planning. CoRR, 2013.
[8] Neel Doshi, Francois R Hogan, and Alberto Rodriguez. Hybrid differential dynamic programming for planar manipulation primitive. In International Conference on Robotics and Automation (ICRA). IEEE, 2020.
[9] Alexey Dosovitskiy and Vladlen Koltun. Learning to act by predicting the future. In International Conference on Learning Representations ICLR, 2017.
[10] Danny Driess, Ozgur Oguz, and Marc Toussaint. Hier- archical task and motion planning using logic-geometric programming (HLGP). RSS Workshop on Robust Task and Motion Planning, 2019.
[11] Danny Driess, Syn Schmitt, and Marc Toussaint. Active learning with error and reachable set inverse model estimates. In Proc. of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2019. [12] Danny Driess, Ozgur Oguz, Jung-Su Ha, and Marc Toussaint. Deep visual heuristics: Learning feasibility of mixed-integer programs for manipulation planning. In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 2020.
[13] Frederik Ebert, Chelsea Finn, Alex X. Lee, and Sergey Levine. Self-supervised visual planning with temporal In Conference on Robot Learning, skip connections. 2017.
[14] Chelsea Finn and Sergey Levine. Deep visual foresight
for planning robot motion. In International Conference on Robotics and Automation (ICRA), pages 2786â2793. IEEE, 2017.
[15] Chelsea Finn, Xin Yu Tan, Yan Duan, Trevor Darrell, Sergey Levine, and Pieter Abbeel. Deep spatial au- In International toencoders for visuomotor learning. Conference on Robotics and Automation (ICRA). IEEE, 2016.
[16] Caelan Garrett, Leslie Kaelbling, and Tomas Lozano- Perez. Learning to rank for synthesizing planning heuristics. In Proc. of the Int. Joint Conf. on Artiï¬cial Intelligence (IJCAI), 2016.
[17] Jung-Su Ha, Young-Jin Park, Hyeok-Joo Chae, Soon- Seo Park, and Han-Lim Choi. Adaptive path-integral autoencoders: Representation learning and planning for dynamical systems. In Advances in Neural Information Processing Systems, pages 8927â8938, 2018.
[18] Jung-Su Ha, Danny Driess, and Marc Toussaint. Prob- abilistic framework for constrained manipulations and In Proc. task and motion planning under uncertainty. of the IEEE International Conference on Robotics and Automation (ICRA), 2020.
[19] Valentin N Hartmann, Ozgur S Oguz, Danny Driess, Marc Toussaint, and Achim Menges. Robust task and motion planning for long-horizon architectural construc- tion planning. arXiv:2003.07754, 2020.
[20] Franc¸ois Robert Hogan and Alberto Rodriguez. Feedback control of the pusher-slider system: A story of hybrid and underactuated contact dynamics. In Proceedings of the Workshop on Algorithmic Foundation Robotics (WAFR), 2016.
[21] Francois Robert Hogan, Eudald Romo Grau, and Alberto Rodriguez. Reactive planar manipulation with convex In International Conference on Robotics hybrid MPC. and Automation (ICRA), pages 247â253. IEEE, 2018. URL https://doi.org/10.1109/ICRA.2018.8461175. [22] Brian Ichter and Marco Pavone. Robot motion planning Robotics and Automation
in learned latent spaces. Letters, 4(3):2407â2414, 2019.
[23] Brian Ichter, James Harrison, and Marco Pavone. Learn- ing sampling distributions for robot motion planning. In International Conference on Robotics and Automation (ICRA), pages 7087â7094. IEEE, 2018.
[24] Leslie Pack Kaelbling and Tom´as Lozano-P´erez. Hi- In Proc. of the IEEE erarchical planning in the now. International Conference on Robotics and Automation (ICRA), 2011.
[25] Beomjoon Kim, Leslie Pack Kaelbling, and Tom´as Lozano-P´erez. Guiding search in continuous state-action spaces by learning an action sampler from off-target In Thirty-Second AAAI Conference search experience. on Artiï¬cial Intelligence, 2018.
[26] Beomjoon Kim, Zi Wang, Leslie Pack Kaelbling, and Tom´as Lozano-P´erez. Learning to guide task and motion planning using score-space representation. The Inter- national Journal of Robotics Research, 38(7):793â812, 2019.
[27] F. Lagriffoul, D. Dimitrov, A. Safï¬otti, and L. Karlsson. Constraint propagation on interval bounds for dealing In International Confer- with geometric backtracking. ence on Intelligent Robots and Systems, 2012.
Julien Bidot, Alessandro Safï¬otti, and Lars Karlsson. Efï¬ciently combining task and motion planning using geometric constraints. International Journal on Robotics Research, 2014.
and Arne Voigtl¨ander. Autonomous reinforcement learning on raw visual input data in a real world application. In IJCNN, 2012.
[30] Richard Li, Allan Jabri, Trevor Darrell, and Pulkit Agrawal. Towards practical multi-object manipulation using relational reinforcement learning. In International Conference on Robotics and Automation (ICRA). IEEE, 2020.
[31] Tom´as Lozano-P´erez and Leslie Pack Kaelbling. A constraint-based method for solving sequential manipu- lation planning problems. In Proc. of the Int. Conf. on Intelligent Robots and Systems (IROS), 2014. [32] Matthew Mason. The mechanics of manipulation.
In Int. Conf. on Robotics and Automation (ICRAâ85). IEEE, 1985.
[33] Masashi Okada, Luca Rigazio, and Takenobu Aoshima. Path integral networks: End-to-end differentiable optimal control. arXiv preprint arXiv:1706.09597, 2017. [34] Razvan Pascanu, Yujia Li, Oriol Vinyals, Nicolas Heess, Lars Buesing, S´ebastien Racani`ere, David P. Re- ichert, Theophane Weber, Daan Wierstra, and Peter W. Battaglia. Learning model-based planning from scratch. CoRR, abs/1707.06170, 2017.
[35] Chris Paxton, Yotam Barnoy, Kapil D. Katyal, Raman Arora, and Gregory D. Hager. Visual robot task planning. In International Conference on Robotics and Automation (ICRA), pages 8832â8838. IEEE, 2019.
[36] S´ebastien Racani`ere, Th´eophane Weber, David Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdom`enech Badia, Oriol Vinyals, Nicolas Imagination-augmented agents Heess, Yujia Li, et al. for deep reinforcement learning. In Advances in neural information processing systems, 2017.
[37] Ismael Rodriguez, Korbinian Nottensteiner, Daniel Lei- dner, Michael Kasecker, Freek Stulp, and Alin Albu- Sch¨affer. Iteratively reï¬ned feasibility checks in robotic assembly sequence planning. Robotics and Automation Letters, 2019.
[38] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. The predictron: End-to-end learning and planning. In International Conference on Machine Learning, 2017.
[39] Aravind Srinivas, Allan Jabri, Pieter Abbeel, Sergey Levine, and Chelsea Finn. Universal planning net- works: Learning generalizable representations for visuo- motor control. In International Conference on Machine
Learning (ICML), pages 4739â4748, 2018. URL http: //proceedings.mlr.press/v80/srinivas18b.html.
[40] Siddharth Srivastava, Eugene Fang, Lorenzo Riano, Ro- han Chitnis, Stuart J. Russell, and Pieter Abbeel. Com- bined task and motion planning through an extensible planner-independent interface layer. In Proc. of the Int. Conf. on Robotics and Automation (ICRA), 2014. [41] Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and In Advances Pieter Abbeel. Value iteration networks. in Neural Information Processing Systems, pages 2154â 2162, 2016. [42] Marc Toussaint.
Logic-geometric programming: An optimization-based approach to combined task and mo- the Twenty-Fourth tion planning. International Joint Conference on Artiï¬cial Intelligence, IJCAI, pages 1930â1936. AAAI Press, 2015. URL http://ijcai.org/Abstract/15/274.
[43] Marc Toussaint and Manuel Lopes. Multi-bound tree search for logic-geometric programming in cooperative In International Conference on manipulation domains. Robotics and Automation (ICRA), pages 4044â4051. IEEE, 2017.
[44] Marc Toussaint, Kelsey R Allen, Kevin A Smith, and Josh B Tenenbaum. Differentiable physics and stable modes for tool-use and manipulation planning. In Proc. of Robotics: Science and Systems (R:SS), 2018.
[45] Marc Toussaint, Jung-Su Ha, and Danny Driess. Describ- ing physics for physical reasoning: Force-based sequen- tial manipulation planning. arXiv:2002.12780, 2020. [46] Zi Wang, Caelan Reed Garrett, Leslie Pack Kaelbling, and Tom´as Lozano-P´erez. Active model learning and diverse action sampling for task and motion planning. In International Conference on Intelligent Robots and Systems (IROS), pages 4107â4114. IEEE, 2018.
Joschka Boedecker, and Martin A. Riedmiller. Embed to con- trol: A locally linear latent dynamics model for control In Advances in Neural Information from raw images. Processing Systems, pages 2746â2754, 2015.
[48] Andrew M Wells, Neil T Dantam, Anshumali Shrivas- tava, and Lydia E Kavraki. Learning feasibility for task and motion planning in tabletop environments. Robotics and Automation Letters, 4(2):1255â1262, 2019.
Learning to manipulate object collections using grounded state rep- resentations. Conference on Robot Learning, 2019. URL https://arxiv.org/abs/1909.07876.
[50] Annie Xie, Frederik Ebert, Sergey Levine, and Chelsea Improvisation through physical understanding: In Finn. Using novel objects as tools with visual foresight. Proc. of Robotics: Science and Systems (R:SS), 2019. | {
"id": "2003.07754"
} |
2006.04768 | Linformer: Self-Attention with Linear Complexity | Large transformer models have shown extraordinary success in achieving
state-of-the-art results in many natural language processing applications.
However, training and deploying these models can be prohibitively costly for
long sequences, as the standard self-attention mechanism of the Transformer
uses $O(n^2)$ time and space with respect to sequence length. In this paper, we
demonstrate that the self-attention mechanism can be approximated by a low-rank
matrix. We further exploit this finding to propose a new self-attention
mechanism, which reduces the overall self-attention complexity from $O(n^2)$ to
$O(n)$ in both time and space. The resulting linear transformer, the
\textit{Linformer}, performs on par with standard Transformer models, while
being much more memory- and time-efficient. | http://arxiv.org/pdf/2006.04768 | Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma | cs.LG, stat.ML | null | null | cs.LG | 20200608 | 20200614 | 0 2 0 2
n u J 4 1 ] G L . s c [
3 v 8 6 7 4 0 . 6 0 0 2 : v i X r a
# Linformer: Self-Attention with Linear Complexity
Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, Hao Ma Facebook AI, Seattle, WA {sinongwang, belindali, hanfang, mkhabsa, haom}@fb.com
# Abstract
Large transformer models have shown extraordinary success in achieving state-of- the-art results in many natural language processing applications. However, training and deploying these models can be prohibitively costly for long sequences, as the standard self-attention mechanism of the Transformer uses O(n2) time and space with respect to sequence length. In this paper, we demonstrate that the self-attention mechanism can be approximated by a low-rank matrix. We further exploit this ï¬nding to propose a new self-attention mechanism, which reduces the overall self-attention complexity from O(n2) to O(n) in both time and space. The resulting linear transformer, the Linformer, performs on par with standard Transformer models, while being much more memory- and time-efï¬cient.
# Introduction
Transformer models (Vaswani et al., 2017) have become ubiquitous for wide variety of problems in natural language processing (NLP), including translation (Ott et al., 2018), text classiï¬cation, question answering, among others (Raffel et al., 2019; Mohamed et al., 2019). Over the last couple of years, the number of parameters in state-of-the-art NLP transformers has grown drastically, from the original 340 million introduced in BERT-Large to 175 billion in GPT-3 (Brown et al., 2020). Although these large-scale models yield impressive results on wide variety of tasks, training and deploying such model are slow in practice. For example, the original BERT-Large model (Devlin et al., 2019) takes four days to train on 16 Cloud TPUs, and the recent GPT-3 (Brown et al., 2020) consumed orders of magnitude more petaï¬ops / day to train compared to its predecessor, GPT-2 (Radford et al., 2019). Beyond training, deploying Transformer models to real world applications is also expensive, usually requiring extensive distillation (Hinton et al., 2015) or compression.
The main efï¬ciency bottleneck in Transformer models is its self-attention mechanism. Here, each tokenâs representation is updated by attending to all other tokens in the previous layer. This operation is key for retaining long-term information, giving Transformers the edge over recurrent models on long sequences. However, attending to all tokens at each layer incurs a complexity of O(n2) with respect to sequence length. Thus, in this paper, we seek to answer the question: can Transformer models be optimized to avoid this quadratic operation, or is this operation required to maintain strong performance?
Prior work has proposed several techniques for improving the efï¬ciency of self-attention. One popular technique is introducing sparsity into attention layers (Child et al., 2019; Qiu et al., 2019; Beltagy et al., 2020) by having each token attend to only a subset of tokens in the whole sequence. This reduces the overall complexity of the attention mechanism to O(n n) (Child et al., 2019). However, as shown in Qiu et al. (2019), this approach suffers from a large performance drop with limited efï¬ciency gains, i.e., a 2% drop with only 20% speed up. More recently, the Reformer (Kitaev et al., 2020) used locally-sensitive hashing (LSH) to reduce the self-attention complexity to O(n log(n)). However, in practice, the Reformerâs efï¬ciency gains only appear on sequences with length > 2048 (Figure 5 in Kitaev et al. (2020)). Furthermore, the Reformerâs multi-round hashing approach actually increases the number of sequential operations, which further undermines their ï¬nal efï¬ciency gains.
Preprint. Under review.
In this work, we introduce a novel approach for tackling the self-attention bottleneck in Transformers. Our approach is inspired by the key observation that self-attention is low rank. More precisely, we show both theoretically and empirically that the stochastic matrix formed by self-attention can be approximated by a low-rank matrix. Empowered by this observation, we introduce a novel mechanism that reduces self-attention to an O(n) operation in both space- and time-complexity: we decompose the original scaled dot-product attention into multiple smaller attentions through linear projections, such that the combination of these operations forms a low-rank factorization of the original attention. A summary of runtimes for various Transformer architectures, including ours, can be found in Table 1.
One predominant application of Transformers, that has seen the most gains, is using them as pretrained language models, whereby models are ï¬rst pretrained with a language modeling objective on a large corpus, then ï¬netuned on target tasks using supervised data (Devlin et al., 2019; Liu et al., 2019; Lewis et al., 2019). Following Devlin et al. (2019), we pretrain our model on BookCorpus (Zhu et al., 2015) plus English Wikipedia using masked-language-modeling objective. We observe similar pretraining performance to the standard Transformer model. We then ï¬netune our pretrained models on three tasks from GLUE (Wang et al., 2018) and one sentiment analysis task, IMDB reviews (Maas et al., 2011). On these tasks, we ï¬nd that our model performs comparably, or even slightly better, than the standard pretrained Transformer, while observing signiï¬cant training and inference speedups.
Model Architecture Complexity per Layer Sequential Operation Recurrent Transformer, (Vaswani et al., 2017) Sparse Tansformer, (Child et al., 2019) Reformer, (Kitaev et al., 2020) Linformer O(n) O(n2) â O(n n) O(n log(n)) O(n) O(n) O(1) O(1) O(log(n)) O(1)
Table 1: Per-layer time complexity and minimum number of sequential operations as a function of sequence length (n) for various architectures.
# 2 Backgrounds and Related works
# 2.1 Transformer and Self-Attention
The Transformer is built upon the idea of Multi-Head Self-Attention (MHA), which allows the model to jointly attend to information at different positions from different representation subspaces. MHA is deï¬ned as
MultiHead(Q, K, V ) = Concat(head1, head2, . . . , headh)W O, (1) where Q, K, V â RnÃdm are input embedding matrices, n is sequence length, dm is the embedding dimension, and h is the number of heads. Each head is deï¬ned as:
headi = Attention(QW Q i , KW K i , V W V i ) = softmax QW Q i (KW K dk i )T â V W V i , (2)
# P
where W Q i â RdmÃdv , W O â RhdvÃdm are learned matrices and dk, dv are the hidden dimensions of the projection subspaces. For the rest of this paper, we will not differentiate between dk and dv and just use d. The self-attention deï¬ned in (2) refers to a context mapping matrix P â RnÃn. The Transformer uses P to capture the input context for a given token, based on a combination of all tokens in the sequence. However, computing P is expensive. It requires multiplying two n à d matrices, which is O(n2) in time and space complexity. This quadratic dependency on the sequence length has become a bottleneck for Transformers.
# 2.2 Related works
There has been much prior literature on improving the efï¬ciency of Transformers, especially the self-attention bottleneck. The most common techniques for model efï¬ciency that can be applied to Transformers (some speciï¬c to Transformers, others more general-purpose) include:
2
Mixed Precision (Micikevicius et al., 2017): Using half-precision or mixed-precision representations of ï¬oating points is popular in deep learning, and is also widely used in training Transformers (Ott et al., 2019). This technique can be further improved through Quantization Aware Training (Jacob et al., 2018; Fan et al., 2020), where the weights are quantized during training and the gradients are approximated with the Straight-Through Estimator. This line of work is orthogonal to our approach, and we use mixed-precision training by default.
Knowledge Distillation (Hinton et al., 2015): Knowledge distillation aims to transfer the âknowl- edge" from a large teacher model to a lightweight student model. The student model is then used during inference. However this approach has drawbacks: It does not address speeding up the teacher model during training, and moreover, student models usually suffer performance degradation com- pared to the teacher model. For example, when distilling a 12-layer BERT to a 6-layer BERT, the student model experiences an average 2.5% performance drop on several benchmark tasks (Sanh et al., 2019).
Sparse Attention (Child et al., 2019): This technique improves the efï¬ciency of self-attention by adding sparsity in the context mapping matrix P . For example, the Sparse Transformer (Child et al., 2019) only computes Pij around the diagonal of matrix P (instead of the all Pij). Meanwhile, blockwise self-attention (Qiu et al., 2019) divides P into multiple blocks and only computes Pij within the selected blocks. However, these techniques also suffer a large performance degradation, while having only limited additional speed-up, i.e., 2% drop with 20% speed up.
LSH Attention (Kitaev et al., 2020): Locally-sensitive hashing (LSH) attention utilizes a multi-round hashing scheme when computing dot-product attention, which in theory reduces the self-attention complexity to O(n log(n)). However, in practice, their complexity term has a large constant 1282 and it is only more efï¬cient than the vanilla transformer when sequence length is extremely long.
Improving Optimizer Efï¬ciency: Microbatching (Huang et al., 2019) splits a batch into small microbatches (which can be ï¬t into memory), and then separately runs forward and backward passes on them with gradient accumulation. Gradient checkpointing (Chen et al., 2016) saves memory by only caching activations of a subset of layers. The uncached activations are recomputed during backpropagation from the latest checkpoint. Both techniques trade off time for memory, and do not speed up inference.
As weâve noted, most common techniques have limitations in reducing both the training and inference time/memory consumption, we investigate how to optimize the self-attention layers and introduce our approach next.
# 3 Self-Attention is Low Rank
In this section, we demonstrate that the self-attention mechanism, i.e., the context mapping matrix P , is low-rank.
We ï¬rst provide a spectrum analysis of the context mapping matrix P . We use two pretrained trans- former models, RoBERTa-base (12-layer stacked transformer) and RoBERTa-large (24-layer stacked transformer) (Liu et al., 2019) on two tasks: masked-language-modeling task on Wiki103 (Merity et al., 2016) and classiï¬cation task on IMDB (Maas et al., 2011). In Figure 1 (left), we apply singular value decomposition into P across different layers and different heads of the model, and plot the normalized cumulative singular value averaged over 10k sentences. The results exhibit a clear long-tail spectrum distribution across each layer, head and task. This implies that most of the information of matrix P can be recovered from the ï¬rst few largest singular values. In Figure 1 (right), we plot a heatmap of the normalized cumulative singular value at the 128-th largest singular value (out of 512). We observe that the spectrum distribution in higher layers is more skewed than in lower layers, meaning that, in higher layers, more information is concentrated in the largest singular values and the rank of P is lower.
Below, we provide a theoretical analysis of the above spectrum results. Theorem 1. (self-attention is low rank) For any Q, K, V â RnÃd and W Q any column vector w â Rn of matrix V W V
i â RdÃd, for i , W K i , W V i , there exists a low-rank matrix ËP â RnÃn such that
Pr(||Pw? â Pw || < [Pw ||) > 1 â 0(1) and rank(P) = O(log(n)), (3) where the context mapping matrix P is defined in (2).
3
i) 12-layers Transformer 24-layers Transformer g 0.96 z 0.9 ED 0.94 208 5 2 z | a 0.92 207 5 E 3 2 06 0.90 g £05 | â woe i â iwoB 1 0.88 S i â wiki103 Hi â wiki1o3 1 404 â0 128 512 0 128 512 7 Eigenvalue index Eigenvalue index Head index
Figure 1: Left two ï¬gures are spectrum analysis of the self-attention matrix in pretrained transformer model (Liu et al., 2019) with n = 512. The Y-axis is the normalized cumulative singular value of context mapping matrix P , and the X-axis the index of largest eigenvalue. The results are based on both RoBERTa-base and large model in two public datasets: Wiki103 and IMDB. The right ï¬gure plots the heatmap of normalized cumulative eigenvalue at the 128-th largest eigenvalue across different layers and heads in Wiki103 data.
Proof. Based on the deï¬nition of the context mapping matrix P , we can write
Q OW ney exp(A)- Dj", (4) eeâ~_âS A P= softmax
where D, is an n x n diagonal matrix. The main idea of this proof is based on the distributional Johnson-Lindenstrauss lemma 1984) (JL for short). We construct the approximate low rank matrix as P = exp (A) - D3" RT R, where R ⬠R**â with iid. entries from N(0,1/k). We can then use the JL lemma to show that, for any column vector w ⬠Râ of matrix Vwy, when k = 5log(n)/(e? â â¬Â°), we have
Pr (PR? Rw" â Pw" || < â¬l|Pw" ||) > 1-0(1). (5)
For more details, refer to the supplementary materials.
Given the low-rank property of the context mapping matrix P , one straightforward idea is to use singular value decomposition (SVD) to approximate P with a low-rank matrix Plow, as follows
U1 diag{o1,--- , ox} : k (6) Uk k T P® Pow = y O;UjU; = |U1,+++ Uk i=1 a k
where o;, u; and v; are the 7 largest singular values and their corresponding singular vectors. Based on the results in Theorem[IJand the Eckart~Young-Mirsky Theorem (Eckart & Young]|I936), one can use Prow to approximate self-attention (2) with ¢ error and O(nk) time and space complexity. However, this approach requires performing an SVD decomposition in each self-attention matrix, which adds additional complexity. Therefore, we propose another approach for low-rank approximation that avoids this added complexity.
# 4 Model
In this section, we propose a new self-attention mechanism which allows us to compute the contextual mapping P · V W V i
The main idea of our proposed linear self-attention (Figure 2) is to add two linear projection matrices Ei, Fi â RnÃk when computing key and value. We ï¬rst project the original (n à d)-dimensional
4
120 + ââ Linformer, k=2048 â*â Linformer, k=1024 âteâ Linformer, k=512 --- Linformer, k=256 â-= Linformer, k=128 Transformer a 3 nN is} inference time (s) . Scaled Dot-Product ) ) Attention iz b i=) 512/128 1024/64 2048/32 4096/16 8192/8 16384/4 32768/2 65536/1 Sequence length/ batch size Linformer KW) wk (WY) kxn din X dk kxd, Vv K Q nX dm
Figure 2: Left and bottom-right show architecture and example of our proposed multihead linear self-attention. Top right shows inference time vs. sequence length for various Linformer models.
key and value layers KW K into (k à d)-dimensional projected key and value layers. We i then compute an (n à k)-dimensional context mapping matrix ¯P using scaled dot-product attention.
head; = Attention(QW2, E:.KWF, FVW/â) Vo (E.KWE)T softmax QW (E:KW,") Vdk P:nxk ) -EVWY, (7) Se kxd
Finally, we compute context embeddings for each head; using P - (F;VW,â). Note the above operations only require O(nk) time and space complexity. Thus, if we can choose a very small projected dimension k, such that k < n, then we can significantly reduce the memory and space consumption. The following theorem states that, when k = O(d/e7) (independent of n), one can approximate P - VW/" using linear self-attention (7) with ¢ error. Theorem 2. (Linear self-attention) For any Q;, Ki,V; ⬠R"** and W2,WE,WY ⬠RYâ, if k = min{O(9dlog(d)/e?), 50 (log(n)/e?)}, then there exists matrices E;, F; ⬠R"** such that, for any row vector w of matrix QWe (KW)? /Va, we have
# i (KW K i â softmax(w)V W V
we have || < â¬||softmax(w)||||VW,"
Pr (||softmax(wE? )F;,VW," â softmax(w)VW;' || < â¬||softmax(w)||||VW," ||) > 1 â0(1) (8)
Proof. The main idea of proof is based on the distributional JohnsonâLindenstrauss lemma 1984). We first prove that for any row vector 2 ⬠Râ of matrix QW2 (KW)? / Vd, and column vector y ⬠Râ of matrix VW,â, Pr (|lexp(xE?) Fy? â exp(x)y"|| < ell exp(x)y⢠||) > 1 â 2-H, 9)
(9) where Ei = δR and Fi = eâδR, where R â RkÃn with i.i.d. entries from N (0, 1/k) and δ is a small constant. Applying the result in (9) to every row vector of matrix A and every column vector of matrix V , one can directly prove that, for any row vector Ai of matrix A,
# Pr (|lexp(AiE7
)FiV â exp(Ai)V|| < ell exp(Aa) ||) > 1 - 0(1),
Pr (|lexp(AiE7 )FiV â exp(Ai)V|| < ell exp(Aa) ||) > 1 - 0(1), (10) by setting k = 5 log(nd)/(e? â â¬?). This result does not utilize the low rank property of matrix A (rank(A)=d) and the resultant k has a dependency on sequence length n. We will further utlize the fact that rank(A)=d to prove the choice of k can be constant and independent of sequence length n. For more details, refer to the supplementary materials.
5
In Figure 2 (top right), we plot the inference speed of Linformer and standard Transformer versus sequence length, while holding the total number of tokens ï¬xed. We see that while standard Transformer becomes slower at longer sequence lengths, the Linformer speed remains relatively ï¬at and is signiï¬cantly faster at long sequences.
Additional Efï¬ciency Techniques Several additional techniques can be introduced on top of Linformer to further optimize for both performance and efï¬ciency:
Parameter sharing between projections: One can share parameters for the linear projection matri- ces Ei, Fi across layers and heads. In particular, we experimented with 3 levels of sharing:
⢠Headwise sharing: for each layer, we share two projection matrices E and F such that Ei = E and Fi = F across all heads i.
⢠Key-value sharing: we do headwise sharing, with the additional constraint of sharing the key and value projections. For each layer, we create a single projection matrix E such that Ei = Fi = E for each key-value projection matrix across all head i.
⢠Layerwise sharing: we use a single projection matrix E across all layers, for all heads, and for both key and value.
For example, in a 12-layer, 12-head stacked Transformer model, headwise sharing, key-value sharing and layerwise sharing will introduce 24, 12, and 1 distinct linear projection matrices, respectively.
Nonuniform projected dimension: One can choose a different projected dimension k for different heads and layers. As shown in Figure 1 (right), the contextual mapping matrices in different heads and layers have distinct spectrum distributions, and heads in higher layer tend towards a more skewed distributed spectrum (lower rank). This implies one can choose a smaller projected dimension k for higher layers.
General projections: One can also choose different kinds of low-dimensional projection methods instead of a simple linear projection. For example, one can choose mean/max pooling, or convolution where the kernel and stride is set to n/k. The convolutional functions contain parameters that require training.
# 5 Experiments
In this section, we present experimental results for the the techniques described above. We analyze the techniques one-by-one and explore how they impact performance.
# 5.1 Pretraining Perplexities
We ï¬rst compare the pretraining performance of our proposed architecture against RoBERTa (Liu et al., 2019), which is based on the Transformer. Following Devlin et al. (2019), we use BookCor- pus (Zhu et al., 2015) plus English Wikipedia as our pretraining set (3300M words). All models are pretrained with the masked-language-modeling (MLM) objective, and the training for all experiments are parallelized across 64 Tesla V100 GPUs with 250k updates.
Effect of projected dimension: We experiment with various values for the projected dimension k. (We use the same k across all layers and heads of Linformer.) In the Figure 3(a) and (b), we plot the validation perplexity curves for both the standard Transformer and the Linformer across different k, for maximum sequence lengths n = 512 and n = 1024. As expected, the Linformer performs better as projected dimension k increases. However, even at k = 128 for n = 512 and k = 256 for n = 1024, Linformerâs performance is already nearly on par with the original Transformer. Effect of sharing projections: In Figure 3(c), we plot the validation perplexity curves for the three parameter sharing strategies (headwise, key-value, and layerwise) with n = 512. Note that when we use just a single projection matrix (i.e. for layerwise sharing), the resulting Linformer modelâs validation perplexity almost matches that of the the non-shared model. This suggests that we can decrease the number of additional parameters in our model, and consequently, itâs memory consumption, without much detriment to performance.
Effect of longer sequences: We evaluate the effect of sequence length during Linformer pretraining. In the Figure 3(d), we plot the validation perplexity for Linformer with n â {512, 1024, 2048, 4096},
6
ââ standard transformer, n=512 14 ââ standard transformer, n=1024. 9 === Linformer, n=512, k=64 = Linformer, n=1024, k=256 8 â Linformer, n=512, k=128 12 â-- Linformer, n=1024, k=128 â-- Linformer, n=512, k=256 <7 = 10 Eq 6 ZR 3° 5 6 4 3 4 2 2 OK 50K 100K 150K 200K 250K OK 50K 100K 150K 200K 250K Number of updates Number of updates 25 = 44 â Layemise sharing ifi â informer, n=4096, k=256 --- Headwise sharing i 1 =-- Linformer, n=2048, k=256 42 Key-value sharing 20 i i â informer, n=1024, k=256 ty â = = 40 i] Linformer, n=512, k=256 Bis] jl & 3.8 2 \ g 3.6 > i to} | 3.4 â4 3.2 5 30 â50K 100K 150K 200K 250K OK 10K 20K 30K 40K SOK 60K Number of updates Number of updates
=
# B 8
# 2 Es 2 S >
Figure 3: Pretraining validation perplexity versus number of updates.
holding projected dimension k ï¬xed at 256. Note that as sequence length increases, even though our projected dimension is ï¬xed, the ï¬nal perplexities after convergence remain about the same. This further empirically supports our assertion that the Linformer is linear-time.
n Model SST-2 IMDB QNLI QQP Average 512 Liu et al. (2019), RoBERTa-base Linformer, 128 Linformer, 128, shared kv Linformer, 128, shared kv, layer Linformer, 256 Linformer, 256, shared kv Linformer, 256, shared kv, layer 93.1 92.4 93.4 93.2 93.2 93.3 93.1 94.1 94.0 93.4 93.8 94.0 93.6 94.1 90.9 90.4 90.3 90.1 90.6 90.6 91.2 90.9 90.2 90.3 90.2 90.5 90.6 90.8 92.25 91.75 91.85 91.83 92.08 92.03 92.30 512 Devlin et al. (2019), BERT-base Sanh et al. (2019), Distilled BERT 92.7 91.3 93.5 92.8 91.8 89.2 89.6 88.5 91.90 90.45 1024 Linformer, 256 Linformer, 256, shared kv Linformer, 256, shared kv, layer 93.0 93.0 93.2 93.8 93.6 94.2 90.4 90.3 90.8 90.4 90.4 90.5 91.90 91.83 92.18
Table 2: Dev set results on benchmark natural language understanding tasks. The RoBERTa-base model here is pretrained with same corpus as BERT.
# 5.2 Downstream Results
Thus far, we have only examined the pretraining perplexities of our model. However, we wish to show that our conclusions hold after ï¬netuning on downstream tasks. We ï¬netune our Linformer on IMDB (Maas et al., 2011) and SST-2 (Socher et al., 2013) (sentiment classiï¬cation), as well as QNLI (natural language inference) (Rajpurkar et al., 2016), and QQP (textual similarity) (Chen et al., 2018) We do the same with RoBERTa, 12-layer BERT-base and 6-layer distilled BERT. All of our models, including the Transformer baselines, were pretrained with the same objective, pretraining corpus, and up to 250k updates (although our Linformer takes much less wall-clock time to get to 250k updates, and was consequently trained for less time). Results are listed in Table 2.
7
We observe that the Linformer model (n = 512, k = 128) has comparable downstream performance to the RoBERTa model, and in fact even slightly outperforms it at k = 256. Moreover, we note that although the Linformerâs layerwise sharing strategy shares a single projection matrix across the entire model, it actually exhibits the best accuracy result of all three parameter sharing strategies. Furthermore, the Linformer pretrained with longer sequence length (n = 1024, k = 256) has similar results to the one pretrained with shorter length (n = 512, k = 256), this empirically supports the notion that the performance of Linformer model is mainly determined by the projected dimension k instead of the ratio n/k.
# Inference-time Efï¬ciency Results
In Table 3, we report the inference efï¬ciencies of Linformer (with layerwise sharing) against a standard Transformer. We benchmark both modelsâ inference speed and memory on a 16GB Tesla V100 GPU card. We randomly generate data up to some sequence length n and perform a full forward pass on a multiple batches. We also choose batch size based on the maximum batch size that can ï¬t in memory, and our memory savings are computed based on this number.
length n 128 projected dimensions k 512 256 1024 2048 length n 128 projected dimensions k 512 256 1024 512 1024 2048 4096 8192 16384 32768 65536 1.5x 1.7x 2.6x 3.4x 5.5x 8.6x 13x 20x 1.3x 1.6x 2.4x 3.2x 5.0x 7.8x 12x 18x - 1.3x 2.1x 2.8x 4.4x 7.0x 11x 16x - - 1.3x 2.2x 3.5x 5.6x 8.8x 14x - - - 1.3x 2.1x 3.3x 5.0x 7.9x 512 1024 2048 4096 8192 16384 32768 65536 1.7x 3.0x 6.1x 14x 28x 56x 56x 60x 1.5x 2.9x 5.6x 13x 26x 48x 48x 52x - 1.8x 3.6x 8.3x 17x 32x 36x 40x - - 2.0x 4.3x 8.5x 16x 18x 20x 2048 - - - 2.3x 4.5x 8x 16x 18x
Table 3: Inference-time efï¬ciency improvements of the Linformer over the Transformer, across various projected dimensions k and sequence lengths n. Left table shows time saved. Right table shows memory saved.
From Table 3, we see that even with n = 512 and k = 128, Linformer has 1.5Ã faster inference time and allows for a 1.7Ã larger maximum batch size than the Transformer. As sequence length increases, the inference-time speed-up and memory savings are even more dramatic. We also plot inference times of both Linformer and Transformer on the 100 data samples in the top right of Figure 2.
# 6 Conclusion
Transformer models are notoriously slow to train and deploy in practice since their self-attention operations have O(n2) time and space complexity with respect to sequence length n. In this paper, we demonstrate, both theoretically and empirically, that the stochastic matrix formed by self-attention mechanism is low-rank. We further leverage this observation to propose a new, highly efï¬cient self- attention mechanism. Through a combination of theoretical and empirical analysis, we demonstrate that our proposed approach is O(n) with respect to sequence length.
# Broader Impact
Our work focuses on making Transformers more efï¬cient by introducing a mechanism that reduces self-attention to linear-time complexity. Potential positive impacts of efï¬cient transformers include increasing the accessibility of our models, both for deployment on devices, as well as during training for research purposes. It also has potential impact on training transformer on images since we can support very long sequences. Furthermore, there are positive environmental beneï¬ts associated with decreasing the power consumption of models. As such, we see no immediate negative ethical or societal impacts of our work beyond what applies to other core building blocks of deep learning.
8
# References
Rosa I Arriaga and Santosh Vempala. An algorithmic theory of learning: Robust concepts and random projection. Machine Learning, 63(2):161â182, 2006.
Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174, 2016.
Zihan Chen, Hongbo Zhang, Xiaoji Zhang, and Leqi Zhao. Quora question pairs, 2018.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171â4186, 2019.
Carl Eckart and Gale Young. The approximation of one matrix by another of lower rank. Psychome- trika, 1(3):211â218, 1936.
Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Remi Gribonval, Herve Jegou, and Armand Joulin. Training with quantization noise for extreme ï¬xed-point compression. arXiv preprint arXiv:2004.07320, 2020.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efï¬cient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems, pp. 103â112, 2019.
Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efï¬cient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2704â2713, 2018.
Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efï¬cient transformer. International Conference on Learning Representations, 2020. In
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. ACL, 2019.
W Johnson J Lindenstrauss. Extensions of lipschitz maps into a hilbert space. Contemp. Math, 26: 189â206, 1984.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, pp. 142â150. Association for Computational Linguistics, 2011.
9
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843, 2016.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. Mixed precision training. arXiv preprint arXiv:1710.03740, 2017.
Abdelrahman Mohamed, Dmytro Okhonko, and Luke Zettlemoyer. Transformers with convolutional context for asr. arXiv preprint arXiv:1904.11660, 2019.
Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 1â9, 2018.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pp. 48â53, 2019.
Jiezhong Qiu, Hao Ma, Omer Levy, Scott Wen-tau Yih, Sinong Wang, and Jie Tang. Blockwise self-attention for long document understanding. arXiv preprint arXiv:1911.02972, 2019.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a uniï¬ed text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383â2392, 2016.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631â1642, 2013.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz In Advances in neural information Kaiser, and Illia Polosukhin. Attention is all you need. processing systems, pp. 5998â6008, 2017.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. CoRR, abs/1804.07461, 2018. URL http://arxiv.org/abs/1804.07461.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pp. 19â27, 2015.
10
# A Proof of Theorem 1
Proof. The main proof idea is based on the distributional JohnsonâLindenstrauss lemma (Linden- strauss, 1984) (JL, for short), the following version is from (Arriaga & Vempala, 2006).
Lemma 1. Let R be an k à n matrix, 1 ⤠k ⤠n, with i.i.d. entries from N (0, 1/k). For any x, y â Rn, we have
(11)
Pr(||Rel| < (1-+e)|la]) > 1-4, Pr (aR? Ry? â xy? || < ¢llayl]) > 1-284,
(12)
For simplicity, we will omit the subscript i for matrix W K i as QW Q, K as KW K and V as V W V . Deï¬ne , W Q i , W V i , Ei and Fi. We will regard Q
A = QW Q i (KW K â d i )T (13)
Based on the deï¬nition of contextual mapping matrix P , we have
v2(K WE) P = softmax QW(KW") vd =exp(A)- D;', (14)
where DA is an n à n diagonal matrix such that
n (Da)ii = > exp (Aji) (5) j=l
Here we provide a constructive proof. Given any approximation error ⬠> 0, define the following matrix.
ËP = exp (A) · Dâ1 A RT R, (16)
where R be an k à n matrix, 1 ⤠k ⤠n, with i.i.d. entries from N (0, 1/k). Clearly the rank of matrix ËP satisiï¬es
rank( ËP ) ⤠rank(R) = k.
(17)
We further show that, when k = log(n), we have that, for any column vector w â Rn,
Pr (Ph â Phil < el|Phil) >1-o(). (18)
This concludes the theorem. For any row vector u â Rn of matrix P and any column vector w â Rn of matrix V W V , applying the JL Lemma, we can obtain
Pr (|\uR'Rw? â uw? || < â¬lluw" ||) > 1- Dele, (19)
Therefore, we have
Pr (Put â Pw" || < |Pw"||) =Pr (||PRâ Rw? â Pw" || < el|Pw" ||) (@) >1- Ss Pr (||cR? Rw? â aw" || > ellew⢠|) EP b) 3) p Oy one (Pe /A, (20)
The above, step (a) is based on the union bound. The step (b) is utilizing the result of JL Lemma. Let k = 5log(n)/(e? â â¬*), then theorem follows.
11
# B Proof of Theorem 2
Proof. Deï¬ne E = δR and F = eâδR, where R â RnÃk with i.i.d. entries from N (0, 1/k), δ is a constant with δ = 1/2n. We will ï¬rst prove that for any row vector x â Rn of matrix QK T and column vector y â Rn of matrix V ,
Pr (||exp(xE") Fy" âexp(x)y7|| < ell exp(x)y" ||) > 1a en Pe )K/4, (21) on the triangle inequality, we have
Based on the triangle inequality, we have || exp(wEâ¢) Fy exp(x)y" || < || exp(w@Eâ¢) Fy â exp(x)R? Ry|| + || exp() R? Ry â exp(x)y" |
|| Fy R? Ry | (a) (1 + â¬)|lyllll exp(@E7) â exp(x)R7|| + || exp(x)R7 Ry â exp(x)y" | < || exp(«) RT Ry â exp(x)y7 || + o(|| exp()|Illyll) ©) < || exp(x)|[llyll + oC exp@)Illlyll) (22)
(22) The above, step (a) is based on the Cauchy inequality and JL Lemma in (11). The step (b) utilizes the fact that exponential function is Lipchitz continuous in a compact region. Then we can choose a small enough δ, i.e., δ = θ(1/n) such that
|| exp(6xR) â exp(dx)R|| = o(|| exp(x)||) (23)
The step (c) is based on the JL Lemma deï¬ned in (12).
Applying the result in (21) to every row vector of matrix A and every column vector of matrix V , one can directly prove that, for any row vector Ai of matrix A,
Pr (|| exp(AiBâ )FV â exp(Ai)V|| < ell exp(As)|II|V II) > 1 â 0(1), (24) by setting k = 5 log(nd)/(e? â â¬?). This result does not utilize the low rank property of matrix A (rank(A)=d) and the resultant k has a dependency on sequence length n. We will further prove the choice of k can be constant and independent of sequence length n. Based on the fact that rank(A)=d, we can find a row submatrix A, ⬠R?¢*¢ of matrix exp(AE7)FH such that rank(A,)=d. Applying the result in (21) to every row vector of matrix A, and every column vector of matrix V, and k = 9 log(d)/(e? â â¬), we can obtain that, for any row vector A$ of matrix Aâ,
Pr (|| exp(AZET)FV â exp(Af)V|| < «| exp(A#)||||V|]) > 1-0(1), (25)
i ET )F V â exp(As Furthermore, deï¬ne the matrix Î â RnÃ2d as
re eee | Pee | 7 (26)
We have that, for any row vector Ai of matrix A, 1 ⤠i ⤠n.
\|exp(AiE?)FV â exp(Ai)V|| =|[Ti exp(4°E7)FV â Ty exp(*)V|| (a) ; < \|[exp(A°B7) FV _ exp(A*)V]" ||, Till (b) <O(d)|| exp(ASE*) FV â exp(A*)V |e 2d =O(d) > |lexp(Ai EBâ )FV â exp(A})V |) i=1 <<0(d (Yo bon (AMIIV I <cO(d iI expla ymIV
The above, step (a) utilizes the inequality || Az|| < || Ajl2 - ||a|], where ||.Al]2 = \/Amax(AT A) (max(-) is the largest eigenvalue) is the spectrum norm of a matrix A. The step (b) is based on matrix norm inequality || All2 < || All, where ||All7 = (Yy<;, <n Ai))!/? is the Frobenius norm of matrix A. The step (c) is based on the results of (24] (24).
12 | {
"id": "1503.02531"
} |
2006.03955 | Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases | With the starting point that implicit human biases are reflected in the
statistical regularities of language, it is possible to measure biases in
English static word embeddings. State-of-the-art neural language models
generate dynamic word embeddings dependent on the context in which the word
appears. Current methods measure pre-defined social and intersectional biases
that appear in particular contexts defined by sentence templates. Dispensing
with templates, we introduce the Contextualized Embedding Association Test
(CEAT), that can summarize the magnitude of overall bias in neural language
models by incorporating a random-effects model. Experiments on social and
intersectional biases show that CEAT finds evidence of all tested biases and
provides comprehensive information on the variance of effect magnitudes of the
same bias in different contexts. All the models trained on English corpora that
we study contain biased representations.
Furthermore, we develop two methods, Intersectional Bias Detection (IBD) and
Emergent Intersectional Bias Detection (EIBD), to automatically identify the
intersectional biases and emergent intersectional biases from static word
embeddings in addition to measuring them in contextualized word embeddings. We
present the first algorithmic bias detection findings on how intersectional
group members are strongly associated with unique emergent biases that do not
overlap with the biases of their constituent minority identities. IBD and EIBD
achieve high accuracy when detecting the intersectional and emergent biases of
African American females and Mexican American females. Our results indicate
that biases at the intersection of race and gender associated with members of
multiple minority groups, such as African American females and Mexican American
females, have the highest magnitude across all neural language models. | http://arxiv.org/pdf/2006.03955 | Wei Guo, Aylin Caliskan | cs.CY, cs.AI, cs.CL | 19 pages, 2 figures, 4 tables | AAAI/ACM Conference on Artificial Intelligence, Ethics, and
Society 2021 | cs.CY | 20200606 | 20210519 | # Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases
Wei Guo1 and Aylin Caliskan1, 2 1George Washington University 1Department of Computer Science 2Institute for Data, Democracy & Politics [email protected] and [email protected]
# Abstract
With the starting point that implicit human biases are reï¬ected in the statistical regularities of language, it is possible to measure biases in English static word embeddings. State-of-the-art neural language models generate dynamic word embeddings dependent on the context in which the word appears. Current methods measure pre-deï¬ned social and intersectional biases that appear in particular contexts deï¬ned by sentence templates. Dispensing with templates, we introduce the Contextualized Embedding Association Test (CEAT), that can summarize the magnitude of overall bias in neural language models by incorporating a random- effects model. Experiments on social and intersectional biases show that CEAT ï¬nds evidence of all tested biases and provides comprehensive information on the variance of effect magnitudes of the same bias in different contexts. All the models trained on English corpora that we study contain biased representations. GPT-2 contains the smallest magnitude of overall bias followed by GPT, BERT, and then ELMo, negatively correlating with how contextualized the models are. Furthermore, we develop two methods, Intersectional Bias Detection (IBD) and Emergent Intersectional Bias Detection (EIBD), to automatically identify the in- tersectional biases and emergent intersectional biases from static word embeddings in addition to measuring them in contextualized word embeddings. We present the ï¬rst algorithmic bias detection ï¬ndings on how intersectional group members are strongly associated with unique emergent biases that do not overlap with the biases of their constituent minority identities. IBD achieves an accuracy of 81.6% and 82.7%, respectively, when detecting the intersectional biases of African American females and Mexican American females, where the random correct identiï¬cation rates are 14.3% and 13.3%. EIBD reaches an accuracy of 84.7% and 65.3%, respectively, when detecting the emergent inter- sectional biases unique to African American females and Mexican American females, where the random correct identiï¬cation rates are 9.2% and 6.1%. Our re- sults indicate that intersectional biases associated with members of multiple minority groups, such as African American females and Mexican American females, have the highest magnitude across all neural language models.
Introduction State-of-the-art off-the-shelf neural language models such as the multi-million dollar GPT-3, associates men with com- petency and occupations demonstrating higher levels of ed- ucation, in downstream natural language processing (NLP) tasks such as sequence prediction (Brown et al. 2020). When GPT-3âs user interface for academic access is prompted for language generation with the input âWhat is the gender of a doctor,â the ï¬rst answer is âA: Doctor is a masculine noun;â whereas when prompted with âWhat is the gender of a nurse,â the ï¬rst answer is âItâs female.â Propagation of social group bias in NLP applications such as automated resume screening, that shapes the workforce by making consequential decisions about job candidates, would not only perpetuate existing bi- ases but potentially exacerbate harmful bias in society to affect future generations (De-Arteaga et al. 2019; Raghavan and Barocas). To enhance transparency in NLP, we use the representations of words learned from word co-occurrence statistics to discover social biases. Our methods uncover unique intersectional biases associated with individuals that are members of multiple minority groups. After identifying these emergent biases, we use numeric representations of words that vary according to neighboring words to analyze how prominent bias is in different contexts. Recent work has shown that human-like biases are embedded in the statistical regularities of language that are learned by word represen- tations, namely word embeddings (Caliskan, Bryson, and Narayanan 2017; Blodgett et al. 2020). We build a method on this work to automatically identify intersectional biases, such as the ones associated with African American and Mexican American women from static word embeddings (SWE). Then, we measure how human-like biases manifest themselves in contextualized word embeddings (CWE), which are dynamic word representations generated by neural language models that adapt to their context.
Artiï¬cial intelligence systems are known not only to per- petuate social biases, but they may also amplify existing cultural assumptions and inequalities (Campolo et al. 2017). While most work on biases in word embeddings focuses on a single social category (e.g., gender, race) (Caliskan, Bryson, and Narayanan 2017; Bolukbasi et al. 2016; Garg et al. 2018; Zhao et al. 2018; Gonen and Goldberg 2019), the lack of work on identifying intersectional biases, the bias associated with populations deï¬ned by multiple categories
(Cabrera et al.), leads to an incomplete measurement of so- cial biases (Hancock 2007; Hurtado and Sinha 2008). For example, Caliskan, Bryson, and Narayanan (2017)âs Word Embedding Association Test (WEAT) quantiï¬es biases docu- mented by the validated psychological methodology of the Implicit Association Test (IAT) (Greenwald, McGhee, and Schwartz 1998; Greenwald, Nosek, and Banaji 2003). The IAT provides the sets of words to represent social groups and attributes to be used while measuring bias. Consequently, the analysis of bias via WEAT is limited to the types of IATs and their corresponding words contributed by the IAT literature, which happens to include intersectional represen- tation for only African American women. To overcome these constraints of WEATs, we extend WEAT to automatically identify attributes associated with individuals that are mem- bers of more than one social group. While this allows us to discover emergent intersectional biases, it is also a promising step towards automatically identifying all biased associations embedded in the regularities of language. To ï¬ll the gap in understanding the complex nature of intersectional bias, we develop a method called Intersectional Bias Detection (IBD) to automatically identify intersectional biases without relying on pre-deï¬ned attribute sets from the IAT literature.
Biases associated with intersectional group members con- tain emergent elements that do not overlap with the biases of their constituent minority identities (Ghavami and Peplau 2013; Arrington-Sanders et al. 2015). For example, "hair weaves" is stereotypically associated with African Amer- ican females but not with African Americans or females. We extend IBD and introduce a method called Emergent Intersectional Bias Detection (EIBD) to identify the emer- gent intersectional biases of an intersectional group in SWE. Then, we construct new tests to quantify these intersectional and emergent biases in CWE. To investigate the inï¬uence of different contexts, we use a ï¬ll-in-the-blank task called masked language modeling. The goal of the task is to gen- erate the most probable substitution for the [MASK] that is surrounded with neighboring context words in a given sen- tence. BERT, a widely used language model trained on this task, substitutes [MASK] in âMen/women excel in [MASK].â with âscienceâ and âsportsâ, reï¬ecting stereotype-congruent associations. However, when we feed in similar contexts âThe man/woman is known for his/her [MASK],â BERT ï¬lls âwitâ in both sentences, which indicates gender bias may not appear in these contexts. Prior methods use templates analo- gous to masked language modeling to measure bias in CWE (May et al. 2019; Tan and Celis 2019; Kurita et al. 2019). The templates are designed to substitute words from WEATâs sets of target words and attributes in a simple manner such as "This is [TARGET]" or "[TARGET] is a [ATTRIBUTE]".In this work, we propose the Contextualized Embedding Asso- ciation Test (CEAT), a test eschewing templates and instead generating the distribution of effect magnitudes of biases in different contexts from a control corpus. To comprehensively measure the social and intersectional biases in this distri- bution, a random-effects model designed to combine effect sizes of similar bias interventions summarizes the overall ef- fect size of bias in the neural language model (DerSimonian and Kacker 2007). As a result, instead of focusing on biases
template-based contexts, CEAT measures the distribution of biased associations in a language model. Contributions. In summary, this paper presents three novel contributions along with three complementary methods (CEAT, IBD, and EIBD) to automatically identify intersec- tional biases as well as emergent intersectional biases in SWE, then use these ï¬ndings to measure all available types of so- cial biases in CWE. We ï¬nd that ELMo is the most biased, followed by BERT, then GPT, with GPT-2 being the least biased. The overall level of bias correlated with how contex- tualized the CWE generated by the models are. Our results indicate that the strongest biased associations are embedded in the representations of intersectional group members such as African American women. Data, source code, and detailed results are available. Intersectional Bias Detection (IBD). We develop a novel method for SWE to detect words that represent biases associ- ated with intersectional group members. To our knowledge, IBD is the ï¬rst algorithmic method to automatically iden- tify individual words that are strongly associated with inter- sectionality. IBD reaches an accuracy of 81.6% and 82.7%, respectively, when evaluated on intersectional biases associ- ated with African American females and Mexican American females that are provided in Ghavami and Peplau (2013)âs validation dataset. In these machine learning settings, the ran- dom chances of correct identiï¬cation are 14.3% and 13.3%. Currently, the validation datasets represent gender as a binary label. Consequently, our method uses binary categorization when evaluating for gender related biases. However, we stress that our method generalizes to multiple categories from bi- nary. In future work, we aim to design non-categorical meth- ods that donât represent individuals as members of discrete categories compared to potentially using continuous repre- sentations. Accordingly, we also plan to compile validation datasets that wonât constrain our evaluation to categorical assumptions about humans. Emergent Intersectional Bias Detection (EIBD). We con- tribute a novel method to identify emergent intersectional biases that do not overlap with biases of constituent social groups in SWE. To our knowledge, EIBD is the ï¬rst algorith- mic method to detect the emergent intersectional biases in word embeddings automatically. EIBD reaches an accuracy of 84.7% and 65.3%, respectively, when validating on the emergent intersectional biases of African American females and Mexican American females that are provided provided in Ghavami and Peplau (2013)âs validation dataset. In these machine learning settings, the random chances of correct identiï¬cation are 9.2% and 6.1%. Contextualized Embedding Association Test (CEAT). WEAT measures human-like biases in SWE. We extend WEAT to the dynamic setting of neural language models to quantify the distribution of effect magnitudes of social and intersectional biases in contextualized word embeddings and summarize the combined magnitude of bias by pooling effect sizes with the validated random-effects methodology (Hedges 1983; Borenstein, Hedges, and Rothstein). We show that the magnitude of bias greatly varies according to the con- text in which the stimuli of WEAT appear. Overall, the pooled mean effect size is statistically signiï¬cant in all CEAT tests
including intersectional bias measurements and all models contain biased representations.
Related Work SWE are trained on word co-occurrence statistics of corpora to generate numeric representations of words so that ma- chines can process language (Mikolov et al. 2013; Penning- ton, Socher, and Manning 2014). Previous work on bias in SWE has shown that human-like biases that have been docu- mented by the IAT are embedded in the statistical regularities of language (Caliskan, Bryson, and Narayanan 2017). The IAT (Greenwald, McGhee, and Schwartz 1998) is a widely used measure of implicit bias in human subjects that quanti- ï¬es the differential reaction time to pairing two concepts. Analogous to the IAT, Caliskan, Bryson, and Narayanan (2017) developed the WEAT to measure the biases in SWE by quantifying the relative associations of two sets of target words (e.g., African American and European American) that represent social groups with two sets of polar attributes (e.g., pleasant and unpleasant). WEAT computes an effect size (Cohenâs d) that is a standardized bias score and its p-value based on a one-sided permutation test. WEAT measures bi- ases pre-deï¬ned by the IAT such as racism, sexism, ableism, and attitude towards the elderly, as well as widely shared non-discriminatory non-social group associations. Swinger et al. (2019) presented an adaptation of the WEAT to identify biases associated with clusters of names.
Regarding the biases of intersectional groups categorized by multiple social categories, there is prior work in the so- cial sciences focusing on the experiences of African Amer- ican females (Crenshaw 1989; Hare-Mustin and Marecek 1988; Kahn and Yoder 1989; Thomas and Miles 1995). Buo- lamwini et al. demonstrated intersectional accuracy dispari- ties in commercial gender classiï¬cation in computer vision (Buolamwini and Gebru 2018). May et al. (2019) and Tan and Celis (2019) used the attributes presented in Caliskan, Bryson, and Narayanan (2017) to measure emergent inter- sectional biases of African American females in CWE. We develop the ï¬rst algorithmic method to automatically iden- tify intersectional bias and emergent bias attributes in SWE, which can be measured in both SWE and CWE. Furthermore, we construct new embedding association tests for the inter- sectional groups. As a result, our work is the ï¬rst to discuss biases regarding Mexican American females in word em- beddings. Ghavami and Peplau (2013) used a free-response procedure in human subjects to collect words that represent intersectional biases. They show that emergent intersectional biases exist in several gender-by-race groups in the U.S. We use the validation dataset constructed by Ghavami and Peplau (2013) to evaluate our methods.
# eee
Recently, neural language models, which use neural net- works to assign probability values to sequences of words, have achieved state-of-the-art results in NLP tasks with their dynamic word representations, CWE (Edunov et al. 2018; Bohnet et al. 2018; Yang et al. 2019). Neural language mod- els typically consist of an encoder that generates CWE for each word based on its accompanying context in the input sequence. Speciï¬cally, the collection of values on a particular layerâs hidden units forms the CWE (Tenney et al. 2019),
which has the same vector shape as a SWE. However, unlike SWE that represent each word, including polysemous words, with a ï¬xed vector, CWE of the same word vary according to its context window that is encoded into its representation by the neural language model. Ethayarajh, Duvenaud, and Hirst (2019) demonstrate how these limitations of SWE impact measuring gender biases. With the wide adaption of neural language models (Edunov et al. 2018; Bohnet et al. 2018; Yang et al. 2019), human-like biases were observed in CWE (Kurita et al. 2019; Zhao et al. 2019; May et al. 2019; Tan and Celis 2019). To measure human-like biases in CWE, May et al. (2019) applied the WEAT to contextualized representa- tions in template sentences. Tan and Celis (2019) adopted the method of May et al. (2019) by applying Caliskan, Bryson, and Narayanan (2017)âs WEAT to the CWE of the stimuli tokens in templates such as âThis is a [TARGET]â. Kurita et al. (2019) measured biases in BERT based on the predic- tion probability of the attribute in a template that contains the target and masks the attribute, e.g., [TARGET] is [MASK]. Hutchinson et al. (2020) reveal biases associated with dis- abilities in CWE and demonstrate undesirable biases towards mentions of disability in applications such as toxicity predic- tion and sentiment analysis.
Nadeem, Bethke, and Reddy (2020) present a large-scale natural language dataset in English to measure stereotypical biases in the domains of gender, profession, race, and religion. Their strategy cannot be directly compared to ours since it is not aligned with our intersectional bias detection method, which is complementary to CEAT. The majority of prior work measures bias in a limited selection of contexts to report the unweighted mean value of bias magnitudes, which does not reï¬ect the scope of contextualization of biases embedded in a neural language model.
Data Identifying and measuring intersectional and social biases in word embeddings as well as neural language models requires four types of data sources that are detailed in this section. (1) SWE carry the signals for individual words that have statisti- cally signiï¬cant biased associations with social groups and intersectionality. Application of our methods IBD and EIBD to SWE automatically retrieves biased associations. (2) CWE extracted from sentence encodings of neural language mod- els provide precise word representations that depend on the context of word occurrence. We apply CEAT to summarize magnitude of bias in neural language models. (3) A corpus provides the samples of sentences used in CEAT when mea- suring the overall bias and analyzing the variance of contexts in CWE of neural language models. (4) Stimuli designed by experts in social psychology represent validated concepts in natural language including social group and intersectional targets in addition to their corresponding attributes.
# Static Word Embeddings (SWE)
We use GloVe (Pennington, Socher, and Manning 2014) SWE trained on the word co-occurrence statistics of the Common Crawl corpus to automatically detect words that are highly associated with intersectional group members. The Common
Crawl corpus consists of 840 billion tokens and more than 2 million unique vocabulary words collected from a crawl of the world wide web. Consequently, GloVe embeddings capture the language representation of the entire Internet pop- ulation that contributed to its training corpus. GloVe embed- dings learn ï¬ne-grained semantic and syntactic regularities (Pennington, Socher, and Manning 2014). Caliskan, Bryson, and Narayanan (2017) have shown that social biases are em- bedded in the linguistic regularities learned by GloVe.
Contextualized Word Embeddings (CWE) We generate the CWE by widely used neural language model implementations of ELMo from https://allennlp.org/ elmo, BERT, GPT and GPT-2 from https://huggingface.co/ transformers/v2.5.0/model_doc/ (Peters et al. 2018; Devlin et al. 2018; Radford et al. 2018, 2019). Speciï¬cally, CWE is formed by the collection of values on a particular layerâs hidden units in the neural language model. BERT, GPT and GPT-2 use subword tokenization. Since GPT and GPT-2 are unidirectional language models, CWE of the last subtokens contain the information of the entire word (Radford et al. 2019). We use the CWE of the last subtoken in the word as its representation in GPT and GPT-2. For consistency, we use the CWE of the last subtoken in the word as its represen- tation in BERT. BERT and GPT-2 provide several versions. We use BERT-small-cased and GPT-2-117m trained on cased English text. The sizes of the training corpora detailed below have been veriï¬ed from AÃenmacher and Heumann (2020). We obtained academic access to GPT-3âs API which does not provide training data or the CWE. Accordingly, we are not able to systematically study GPT-3.
ELMo is a 2-layer bidirectional long short term mem- ory (Bi-LSTM) (Hochreiter and Schmidhuber 1997) lan- guage model trained on the Billion Word Benchmark dataset (Chelba et al. 2013) that takes up â¼9GB memory. ELMo has 93.6 million parameters. It is different from the three other models since CWE in ELMo integrate the hidden states in all layers instead of using the hidden states of the top layer. We follow standard usage and compute the summation of hidden units over all aggregated layers of the same token as its CWE (Peters et al. 2018). CWE of ELMo have 1,024 dimensions. BERT (Devlin et al. 2018) is a bidirectional transformer encoder (Vaswani et al. 2017) trained on a masked language model and next sentence prediction. BERT is trained on BookCorpus (Zhu et al. 2015) and English Wikipedia dumps that take up â¼16GB memory (Bender et al. 2021). We use BERT-small-case with 12 layers that has 110 million param- eters. We extract the values of hidden units on the top layer corresponding to the token as its CWE of 768 dimensions.
GPT (Radford et al. 2018) is a 12-layer transformer de- coder trained on a unidirectional language model on Book- Corpus that takes up â¼13GB memory (Zhu et al. 2015). We use the values of hidden units on the top layer corresponding to the token as its CWE. This implementation of GPT has 110 million parameters. The CWE have 768 dimensions.
GPT-2 (Radford et al. 2019) is a transformer decoder trained on a unidirectional language model and is a scaled-up version of GPT. GPT-2 is trained on WebText that takes up â¼40GB memory (Radford et al. 2019). We use GPT-2-small
which has 12 layers and 117 million parameters. We use the values of hidden units on the top layer corresponding to the token as its CWE. CWE of GPT-2 have 768 dimensions.
We provide the source code, detailed information, and documentation in our open source repository at https://github. com/weiguowilliam/CEAT.
Corpus We need a comprehensive representation of all contexts a word can appear in naturally occurring sentences in order to investigate how bias associated with individual words varies across contexts. Identifying the potential contexts in which a word can be observed is not a trivial task. Consequently, we simulate the distribution of contexts a word appears in, by randomly sampling sentences that the word occurs in a large corpus.
Voigt et al. (2018) have shown that social biases are projected into Reddit comments. Consequently, we use a Reddit corpus to generate the distribution of contexts that words of interest appear in. The corpus consists of 500 mil- lion comments made in the period between 1/1/2014 and 12/31/2014. We take all the stimuli used in Caliskan, Bryson, and Narayanan (2017)âs WEAT that measures effect size of bias for social groups and related attributes. For each WEAT type, we retrieve the sentences from the Reddit corpus that contain one of these stimuli. In this way, we collect a great variety of CWE from the Reddit corpus to measure bias com- prehensively in a neural language model while simulating the natural distribution of contexts in language. We discuss the justiï¬cation of sampling 10,000 sentences from the Reddit corpus in the upcoming sections.
Stimuli Caliskan, Bryson, and Narayanan (2017)âs WEAT is inspired by the IAT literature (Greenwald and Banaji 1995; Green- wald, McGhee, and Schwartz 1998; Greenwald, Nosek, and Banaji 2003) that measures implicit associations of concepts by representing them with stimuli. Experts in social psychol- ogy and cognitive science select stimuli which are words typically representative of various concepts. These linguistic or sometimes picture-based stimuli are proxies to overall rep- resentations of concepts in cognition. Similarly, in the word embedding space, WEAT uses these unambiguous stimuli as semantic representations to study biased associations related to these concepts. Since the stimuli are chosen by experts to most accurately represent concepts, they are not polysemous or ambiguous words. Each WEAT, designed to measure a certain type of association or social group bias, has at least 32 stimuli. There are 8 stimuli for each one of the four concepts. Two of these concepts represent target groups and two of them represent polar attributes. WEAT measures the mag- nitude of bias by quantifying the standardized differential association or targets with attributes. The larger the set of appropriate stimuli to represent a concept, the more statis- tically signiï¬cant and accurate the representation becomes (Caliskan, Bryson, and Narayanan 2017). Validation data for intersectional bias. To investigate inter- sectional bias with respect to race and gender, we represent members of social groups with target words provided by
WEAT and Parada et al. (Caliskan, Bryson, and Narayanan 2017; Parada 2016). WEAT and Parada et al. represent racial categories with frequent given names that signal group mem- bership. WEAT contains a balanced combination of common female and male names of African Americans and European Americans whereas Parada et al. presents the Mexican Amer- ican names for women and men combined. The intersectional bias detection methods identify attributes that are associated with these target group representations. Human subjects pro- vide the validation set of intersectional attributes with ground truth information in prior work (Ghavami and Peplau 2013). The evaluation of intersectional bias detection methods uses this validation set. One limitation of these validation sets is the way they represent gender as a binary category. We will address this constraint in future work by constructing our own validation sets that wonât have to represent people by discrete categorical labels of race and gender.
Approach Our approach includes four components. (1) (Caliskan, Bryson, and Narayanan 2017)âs WEAT for SWE is the foun- dation of our approach to summarizing overall bias in CWE generated by neural language models. (2) Random-effects models from the meta analysis literature summarizes the com- bined effect size for a neural language modelâs CWE via com- bining 10,000 WEAT samples by weighting each result with the within-WEAT and between-WEAT variances (Hedges 1983). (3) Our novel method IBD automatically detects words associated with intersectional biases. (4) Our novel method EIBD automatically detects words that are uniquely associ- ated with members of multiple minority or disadvantaged groups, but do not overlap with the biases of their constituent minority identities.
Supplementary materials includes the details of all the bias types studied in this paper, namely, WEAT biases intro- duced by Caliskan, Bryson, and Narayanan (2017) as well as intersectional biases and their validation set introduced by Ghavami and Peplau (2013) and Parada (2016).
# Word Embedding Association Test (WEAT)
WEAT, designed by Caliskan, Bryson, and Narayanan (2017), measures the effect size of bias in SWE, by quantifying the relative associations of two sets of target words (e.g., career, professional; and family, home) with two sets of polar at- tributes (e.g., woman, female; and man, male). Two of these WEATs measure baseline associations that are widely ac- cepted such as the attitude towards ï¬owers vs. insects or the attitude towards musical instruments vs. weapons. Human subjects and word embeddings tend to associate ï¬owers and musical instruments with pleasantness that corresponds to positive valence. However, human subjects associate insects and weapons with unpleasantness that corresponds to neg- ative valence. Greenwald, McGhee, and Schwartz (1998) refers to these as universally accepted stereotypes since they are widely shared across human subjects and are not po- tentially harmful to society. However, the rest of the tests measure the magnitude of social-group associations, such as gender and race stereotypes and attitude towards the elderly
or people with disabilities. Biased social-group associations in word embeddings can potentially be prejudiced and harm- ful to society. Especially, if downstream applications of NLP that use static or dynamic word embeddings to make conse- quential decisions about individuals, such as resume screen- ing for job candidate selection, perpetuate existing biases to eventually exacerbate historical injustices (De-Arteaga et al. 2019; Raghavan and Barocas). The formal deï¬nition of Caliskan, Bryson, and Narayanan (2017)âs WEAT, the test statistic, and the statistical signiï¬cance of biased associations are detailed in the appendices.
Intersectional Bias Detection (IBD) IBD identiï¬es words associated with intersectional group members, deï¬ned by two social categories simultaneously. Our method automatically detects the attributes that have high associations with the intersectional group from a set of SWE. Analogous to the Word Embedding Factual Associa- tion Test (WEFAT) (Caliskan, Bryson, and Narayanan 2017), we measure the standardized differential association of a sin- gle stimulus w â W with two social groups A and B using the following statistic.
im s(w, A, B) Meany acos(tw, @) â meanye Beos(w, b) std-devzc aupcos(w, )
We refer to the above statistic as the association score, which is used by WEFAT to verify that gender statistics are embedded in linguistic regularities. Targets A and B are words that represent males (e.g., he, him) and females (e.g., she, her) and W is a set of occupations. For example, nurse has an association score s(nurse, A, B) that measures effect size of gender associations. WEFAT has been shown to have high predictive validity (Ï = 0.90) in quantifying facts about the world (Caliskan, Bryson, and Narayanan 2017).
We extend WEFATâs gender association measurement to quantify the relative association to other social categories (e.g., race), by following an approach similar to lexicon in- duction that quantiï¬es certain associations without annotating large-scale ground truth training data (Hatzivassiloglou and McKeown 1997; Riloff and Wiebe 2003; Turney and Littman 2003). Let Pi = (Ai, Bi) (e.g., African American and Eu- ropean American) be a pair of social groups, and W be a set of attribute words. We calculate the association score s(w, Ai, Bi) for w â W . If s(w, Ai, Bi) is greater than the positive effect size threshold t, w is detected to be associated with group Ai. Let Wi = {w|s(w, Ai, Bi) > t, w â W } be the associated word list for each pair Pi.
We detect the biased attributes associated with an intersec- tional group Cmn deï¬ned by two social categories C1n, Cm1 with M and N subcategories (C11, . . . , Cmn) (e.g., African American females by race (C1n) and gender (Cm1)). We assume, there are three racial categories M = 3, and two gender categories N = 2 in our experiments because of the limited structure of representation for individuals in the validation dataset as well as the stimuli. We plan to extend these methods to non-binary individuals and non- categorical representations. However, precisely validating such an approach would require us to construct the corre- sponding validation sets, which currently donât exist. Gener- alizing the method to represent humans with continuous
values as opposed to categorical group labels is left to future work. There are in total M à N combinations of intersectional groups Cmn. We use all groups Cmn to build WEFAT pairs Pij = (C11, Cij), i = 1, ..., M, j = 1, ..., N . Then, we detect lists of words associated with each pair Wij, i = 1, ..., M, j = 1, ..., N based on threshold t de- termined by an ROC curve. We detect the attributes highly associated with the intersectional group, for example C11, from all (M à N ) WEFAT pairs. We deï¬ne the words asso- ciated with intersectional biases of group C11 as WIB and these words are identiï¬ed by
WIB = WIBij , 1â¤iâ¤M 1â¤jâ¤N
where WIBij = {w|s(w, C11, C ij) > tmn, w â WIBmn } where
WIBmn = {( Wij) ⪠Wrandom} 1â¤iâ¤M 1â¤jâ¤N
W11 contains validated words associated with C11. Each Wij contains validated words associated with one intersectional group (Ghavami and Peplau 2013). Wrandom contains ran- dom words, which are stimuli taken from WEAT that are not associated with any Cij, thus represent true negatives.
To identify the thresholds, we treat IBD as a one-vs-all ver- iï¬cation classiï¬er in machine learning to determine whether attributes belong to group C11. We select the threshold with the highest value of truepositiverateâf alsepositiverate (T P R â F P R). When multiple thresholds have the same values, we select the one with the highest T P to detect more attributes associated with C11. Detection accuracy is calcu- lated as true positives plus true negatives over true positives plus true negatives plus false positives plus false negatives ( T P +T N +F P +F N ). The attributes which are associated with C11 and are detected as C11 are T P . The attributes which are not associated with C11 and are not detected as C11 are T N . The attributes which are associated with C11 but are not detected as C11 are F N . The attributes which are not associated with C11 but are detected as C11 are F P .
Emergent Intersectional Bias Detection (EIBD) EIBD identiï¬es words that are uniquely associated with in- tersectional group members. These emergent biases are only associated with the intersectional group (e.g., African Amer- ican females C11) but not associated with its constituent category such as African Americans S1n or females Sm1. EIBD is a modiï¬ed and extended version of IBD. The formal deï¬nition is in the appendices.
Conceptually, to detect words uniquely associated with African American females in a set of attributes W , we as- sume there are two classes (females, males) of gender and two classes (African Americans, European Americans) of race. We measure the relative association of all words in W ï¬rst with African American females and African American males, second with African American females and European American females, third with African American females and European American males. (Fourth is the comparison of the
same groups, which leads to d = 0 effect size, which is al- ways below the detection threshold.) The union of attributes with an association score greater than the selected thresh- old represents intersectional biases associated with African American females. Then, we calculate the association scores of these IBD attributes ï¬rst with females and males, sec- ond with African Americans and European Americans. We remove the attributes with scores greater than the selected threshold from these IBD attributes, that are highly associ- ated with single social categories. The union of the remaining attributes are the emergent intersectional biases.
# Contextualized Embedding Association Test (CEAT)
CEAT quantiï¬es social biases in CWE by extending the WEAT methodology that measures human-like biases in SWE (Caliskan, Bryson, and Narayanan 2017). WEATâs bias metric is effect size (Cohenâs d). In CWE, since embeddings of the same word vary based on context, applying WEAT to a biased set of CWE will not measure bias comprehensively. To deal with a range of dynamic embeddings representing individual words, CEAT measures the distribution of effect sizes that are embedded in a neural language model.
In WEATâs formal definition (Caliskan, Bryson, and| [Narayanan]|2017), X and Y are two sets of target words of equal size; A and B are two sets of evaluative polar at- tribute words of equal size. Each word in these sets of words is referred to as a stimulus. Let cos(d, b) stand for the co- sine similarity between vectors @ and b. WEAT measures the magnitude of bias by computing the effect size (ES) which is the standardized differential association of the targets and attributes. The p-value (P,,.) of WEAT measures the proba- bility of observing the effect size in the hypothesis, in case biased associations did not exist. According to Cohenâs effect size metric, d >| 0.5 | and d >| 0.8 | are medium and large effect sizes, respectively (Rice and Harris|2005).
In a neural language model, each stimulus s from WEAT contained in n, input sentences has at most n, different CWE $1, +.) $n, depending on the context in which it appears. If we calculate effect size ES(X,Y, A, B) with all different ¥ for a stimulus s ⬠X and keep the CWE for other stimuli unchanged, there will be at most n, different values of ef- fect size. For example, if we assume each stimulus s occurs in 2 contexts and each set in X,Y, A, B has 5 stimuli, the total number of combinations for all the CWE of stimuli will be 25*4 = 1,048,576. The numerous possible values of ES(X,Y,A, B) construct a distribution of effect sizes, therefore we extend WEAT to CEAT.
For each CEAT, all the sentences, where a CEAT stimu- lus occurs, are retrieved from the Reddit corpus. Then, we generate the corresponding CWE from these sentences with randomly varying contexts. In this way, we generate ns CWE from ns extracted sentences for each stimulus s, where ns can vary according to the contextual variance of each stimulus. We sample random combinations of CWE for each stimulus N times. In the ith sample out of N , for each stimulus that appears in at least N sentences, we randomly sample one of its CWE vectors without replacement. If a stimulus occurs
in less than N sentences, especially when N is very large, we randomly sample from its CWE vectors with replacement so that they can be reused while preserving their distribution. We provide the analysis and extended results in the appen- dices for both N = 1, 000 and N = 10, 000, which result in similar bias magnitudes. Based on the sampled CWEs, we calculate each sampleâs effect size ESi(X, Y, A, B), sam- ple variance Vi(X, Y, A, B) and p-value Pwi(X, Y, A, B) in WEAT. Then, we generate N of these samples to approximate the distribution of effect sizes via CEAT.
The distribution of bias effects in CEAT represents random effects computed by WEAT where we do not expect to ob- serve the same effect size due to variance in context (Hedges 1983). As a result, in order to provide comprehensive sum- mary statistics, we applied a random-effects model from the validated meta-analysis literature to compute the weighted mean of the effect sizes and statistical signiï¬cance (Rosen- thal and DiMatteo 2002; Borenstein, Hedges, and Rothstein). The summary of the effect magnitude of a particular bias in a neural language model, namely combined effect size (CES), is the weighted mean of a distribution of random effects,
N 7 CES(X,Y, A,B) = = VES i=1 Vi
where vi is the inverse of the sum of in-sample variance Vi and between-sample variance in the distribution of random ef- fects Ï2 between. Methodological details are in the appendices.
Random-Effects Model Meta-analysis is the statistical procedure for combining data from multiple studies (Hedges and Vevea 1998). Meta- analysis describes the results of each separate study by a numerical index (e.g., effect size) and then summarizes the results into combined statistics. In bias measurements, we are dealing with effect size. Based on different assump- tions whether the effect size is ï¬xed or not, there are two kinds of methods: ï¬xed-effects model and random-effects model. Fixed-effects model expects results with ï¬xed-effect sizes from different intervention studies. On the other hand, random-effects model treats the effect size as they are sam- ples from a random distribution of all possible effect sizes (DerSimonian and Laird 1986; Hedges and Olkin 2014). The expected results of different intervention studies in the random-effects model donât have to match other studiesâ re- sults. In our case, since the effect sizes calculated with the CWE in different contexts are expected to vary, we cannot as- sume a ï¬xed-effects model. Instead, we use a random-effects model that is appropriate for the type of data we are studying. We apply a random-effects model from the validated meta- analysis literature using the methods of Hedges and Vevea (1998). Speciï¬cally, we describe the procedures for estimat- ing the comprehensive summary statistic, combined effect size (CES), which is the weighted mean of a distribution of random-effect sizes. Each effect size is weighted by the variance in calculating that particular effect size in addition to the overall variance among all the random-effect sizes.
We combine effect size estimates from N independent WEATs. The details of CES are in the appendices.
Results and Evaluation We measure ten types of social biases via WEAT (C1-C10) and construct our own intersectional bias tests in ELMo, BERT, GPT, and GPT-2. Accordingly, we present four novel intersectional bias tests via IBD and EIBD for studying African American, European American, and Mexican Ameri- can men and women.
We use the stimuli introduced in Section to represent the target groups. For intersectional and emergent bias tests, we use the attributes associated with the intersectional minority or disadvantaged group members vs the majority European American males as the two polar attribute sets. We sample N = 10, 000 combinations of CWE for each CEAT since according to various evaluation trials, the resulting CES and p-value remain consistent under this parameter.
# Evaluation of IBD and EIBD
We use IBD and EIBD to automatically detect and retrieve the intersectional and emergent biases associated with inter- sectional group members (e.g., African American females, Mexican American females) in GloVe SWE. To evaluate our methods IBD and EIBD, we use validated stimuli provided in prior work that represents each social group with frequent given names, as explained in Section . IBD and EIBD ex- periments use the same test set consisting of 98 attributes associated with 2 groups deï¬ned by gender (females, males), 3 groups deï¬ned by race (African American, European Amer- ican, Mexican American), 6 intersectional groups in total deï¬ned by race and gender, in addition to random words taken from WEAT not associated with any group (Ghavami and Peplau 2013). These random words represent the true negatives for evaluating the identiï¬cation task.
We draw the ROC curves of four bias detection tasks in Figure 2, then select the highest value of T P R â F P R as thresholds for each intersectional group. IBD achieves an accuracy of 81.6% and 82.7%, respectively, when detect- ing the intersectional biases of African American females and Mexican American females, where the random correct identiï¬cation rates are 14.3% and 13.3%. EIBD reaches an accuracy of 84.7% and 65.3%, respectively, when detecting the emergent intersectional biases unique to African Ameri- can females and Mexican American females. The probability of random correct attribute detection in EIBD tasks are 9.2% and 6.1%. Intersectional biases have the highest magnitude compared to other biases across all language models, po- tentially disadvantaging members that belong to multiple minority groups in downstream applications.
The current validation set with ground truth information about each word constrains our evaluation to a closed-world machine learning classiï¬cation task, where we know the cate- gory each stimulus belongs to. On the other hand, evaluating the entire semantic space resembles an open-world machine learning problem where millions of stimuli in the entire word embedding vocabulary belong to unknown categories, thus require human-subject annotation studies. In future work, a human subject study can further evaluate the threshold selec- tion criteria, which would require validating a large set of biases retrieved from the entire vocabulary.
0.12 Effect Size(d) of MF emergent bias in Elmo 3030 0.10 S oa 50.24 6 0.08 a 2) 06 3018 30.06 2 a Zo 04 © 0.4 Bo.12 2 § ir 0.06 50.02 a 9.000 -16 -12 -08 -04 0.0 04 08 1.2 1.6 20 90-16-12 -08-04 00 04 08 12 16 2.0 Effect Size(d) of MF emergent bias in GPT-2
Figure 1: Distributions of effect sizes with ELMo (CES d = 1.51) and GPT-2 (CES d = â0.32) for emergent intersectional bias CEAT test I4. Test I4, after identifying the emergent and intersectional biases associated with Mexican American females and European American males (MF/EM) via IBD and EIBD in word embeddings, CEAT measures the overall distribution of biased associations for the retrieved stimuli in the neural language models. This example is chosen to demonstrate how different models exhibit varying degrees of bias when using the same set of stimuli to measure bias. The height of each bar shows the frequency of observed effect sizes among 10,000 effect size samples of a particular bias type that fall in each bin. The color coded bars stand for the average p-value of all effect sizes corresponding to that bin.
10 ra 10 al 2 2 08 08 £ £ o . o Bo6 B06 2 2 3 3 Qos Qos o o Bo2 Bo2/ F |= ar intersectional bias FT ar emerg inter bias ook ok 00 02 04 06 O08 1.0 0.0 02 04 06 0.8 10 False positive rate False positive rate 1.0 True positive rate True positive rate ââ MF intersectional bias ââ MF emerg inter bias 002 04 06 08 10 False positive rate 0.9, 00 02 04 06 08 Lo False positive rate
Figure 2: ROC curves of IBD and EIBD for African American females (AF) and Mexican American females (MF). The value that maximizes the true positive rate â f alse positive rate is selected as the optimal threshold marked with a dot. âemerg inter biasâ stands for emergent intersectional bias.
Evaluation of CEAT Congruent with Caliskan, Bryson, and Narayanan (2017)âs WEAT ï¬ndings, Table 1 presents signiï¬cant effect sizes for all previously documented and validated biases. GPT-2 ex- hibited less bias than other neural language models. Our method CEAT, designed for CWEs, computes the combined bias score of a distribution of effect sizes present in neural language models. We ï¬nd that the effect magnitudes of biases reported by Tan and Celis (Tan and Celis 2019) are individ- ual samples in the distributions generated by CEAT. We can view their method as a special case of CEAT that calculates the individual bias scores of a few pre-selected samples. In order to comprehensively measure the overall bias score in a neural language model, we apply a random-effects model from the meta-analysis literature that computes combined effect size and combined statistical signiï¬cance from a distri- bution of bias measurements. As a result, when CEAT reports signiï¬cant results, some of the corresponding bias scores in prior work are not statistically signiï¬cant. Furthermore, our results indicate statistically signiï¬cant bias in the opposite direction in some cases. These negative results suggest that some WEAT stimuli tend to occur in stereotype-incongruent contexts more frequently.
We sampled combinations of CWE 10, 000 times for each CEAT test; nonetheless, we observed varying intensities of the same social bias in different contexts. Using a completely random set vs ï¬xed set of contexts derived from 10, 000 sentences lead to low variance in corresponding bias scores. Using a ï¬xed set of contexts for each model makes it possi- ble to evaluate the magnitude of bias across models for the same variables. Experiments conducted with 1, 000, 5, 000, 10, 000 samples of CWE lead to similar bias scores with low
variance. As a result, the number of samples can be adjusted according to computational resources. However, future work on evaluating the lower bound of sampling size with respect to model and corpus characteristics would optimize the sam- pling process. Accordingly, the computation of overall bias in the language model would become more efï¬cient.
IBD, EIBD, and CEAT Results We report the overall magnitude of bias (CES) and p-value in Table 1. We pick an example from Table 1 that reï¬ects the great disparity in bias magnitudes between the two models. We present the distribution histograms of effect sizes in Fig- ure 1, which show the overall biases that can be measured with a comprehensive contextualized bias test related to the emergent biases associated with occurrences of stimuli un- ambiguously regarding Mexican American females (See row I4 in Table 1) with ELMo and GPT-2. The distribution plots for other bias tests are provided in our project repository.
We ï¬nd that CEAT uncovers more evidence of intersec- tional bias than gender or racial biases. This ï¬ndings suggest that, members of multiple minority or disadvantaged groups are associated with the strongest levels of bias in neural lan- guage representations. To quantify the intersectional biases in CWEs, we construct tests I1-I4. Tests with Mexican Amer- ican females tend to have stronger bias with a higher CES than those with African American females. Speciï¬cally, 13 of 16 instances in intersection-related tests (I1-I4) have signiï¬- cant stereotype-congruent CES; 9 of 12 instances in gender- related tests (C6-C8) have signiï¬cant stereotype-congruent CES; 8 of 12 instances in race-related tests (C3-C5) have signiï¬cant stereotype-congruent CES. In gender bias tests, the gender associations with career and family are stronger
Table 1: CEAT measures of social and intersectional biases in language mod- els. We report the overall magnitude of bias in language models with CES (d, rounded down) and statistical sig- niï¬cance with combined p-values (p, rounded up). CES pools N = 10, 000 samples from a random-effects model. The ï¬rst row for each bias test uses com- pletely random samples, whereas the sec- ond row for the bias test uses the same sentences to generate CWE across all neural language models. Ci stands for the ith WEAT in Caliskan, Bryson, and Narayanan (2017)âs Table 1. Ii stands for our tests constructed for measuring inter- sectional biases. A_ stands for African Americans, E_ for European Americans, M _ for Mexican Americans, _F for fe- males, and _M for males. Light, medium, and dark gray shading of combined d val- ues (CES) indicates small, medium, and large effect size, respectively.
GPT BERT ELMo p Test d d p d p d random 1.40 < 10â30 1.35 < 10â30 random 1.56 < 10â30 1.59 < 10â30 random 0.49 < 10â30 0.47 < 10â30 random 0.15 < 10â30 0.23 < 10â30 random 0.11 < 10â30 0.17 < 10â30 random 1.27 < 10â30 1.31 < 10â30 random 0.64 < 10â30 0.71 < 10â30 random 0.33 < 10â30 0.51 < 10â30 random 1.00 < 10â30 1.01 < 10â30 random 0.11 < 10â30 0.24 < 10â30 random 1.24 < 10â30 1.25 < 10â30 random 1.25 < 10â30 1.27 < 10â30 random 1.31 < 10â30 1.29 < 10â30 random 1.51 < 10â30 1.43 < 10â30 1.04 < 10â30 < 10â30 1.01 1.12 < 10â30 1.09 < 10â30 -0.11 < 10â30 -0.10 < 10â30 < 10â2 0.01 0.20 0.00 0.07 < 10â30 0.04 < 10â27 0.19 < 10â30 0.11 < 10â30 0.24 < 10â30 0.23 < 10â30 0.26 < 10â30 0.35 < 10â30 0.08 < 10â29 -0.23 < 10â30 0.07 < 10â30 0.04 < 10â17 0.07 < 10â30 < 10â30 0.23 -0.09 < 10â30 < 10â30 0.23 -0.06 < 10â30 0.00 0.16 < 10â30 0.20 < 10â30 0.97 < 10â30 0.64 < 10â30 0.94 < 10â30 0.54 < 10â30 0.44 < 10â30 0.31 < 10â30 0.47 < 10â30 0.49 < 10â30 < 10â7 0.02 0.07 < 10â30 0.92 < 10â30 0.41 < 10â30 0.41 < 10â30 0.20 < 10â30 -0.07 < 10â30 0.17 < 10â30 0.53 < 10â30 0.40 < 10â30 -0.01 0.07 < 10â30 0.77 < 10â30 0.98 < 10â30 0.67 < 10â30 1.00 < 10â30 0.68 < 10â30 0.51 < 10â30 0.86 < 10â30 0.58 < 10â30 0.14 < 10â30 < 10â30 0.21 -0.27 < 10â30 -0.21 < 10â30 -0.19 < 10â30 0.09 < 10â30 -0.23 < 10â30 -0.13 < 10â30 -0.21 < 10â30 -0.01 0.36 < 10â30 0.34 < 10â30 -0.01 < 10â2 -0.14 < 10â30 -0.16 < 10â30 -0.05 < 10â30 0.10 < 10â30 -0.21 < 10â30 -0.16 < 10â30 -0.14 < 10â30 < 10â2 0.02 -0.19 < 10â30 < 10â2 0.02 -0.14 < 10â30 0.38 < 10â30 < 10â30 0.32 -0.32 < 10â30 -0.25 < 10â30 âPleasant and unpleasant attributes used to measure valence and attitudes towards targets from Greenwald, McGhee, and Schwartz (1998). Flowers/Insects Pleasant/Unpleasantâ Instruments/Weapons Pleasant/Unpleasantâ EA/AA names Pleasant/Unpleasantâ EA/AA names Pleasant/Unpleasantâ EA/AA names Pleasant/Unpleasantâ Males/Female names Career/Family Math/Arts Male/Female terms Science/Arts Male/Female terms Mental/Physical disease Temporary/Permanent Young/Old peopleâs names Pleasant/Unpleasantâ AF/EM names AF/EM intersectional AF/EM names AF emergent/EM intersectional MF/EM names MF/EM intersectional MF/EM names MF emergent/EM intersectional C1: ï¬xed C2: ï¬xed C3: ï¬xed C4: ï¬xed C5: ï¬xed C6: ï¬xed C7: ï¬xed C8: ï¬xed C9: ï¬xed 0.016 C10: ï¬xed I1: ï¬xed I2: ï¬xed I3: ï¬xed 0.81 GPT-2 p 0.11 I4: ï¬xed
than other biased gender associations. In all models, the sig- niï¬cantly biased intersectionality associations have larger effect sizes than racial biases.
According to CEAT results in Table 1, ELMo is the most biased whereas GPT-2 is the least biased with respect to the types of biases CEAT measures. We notice that signiï¬cant negative CES exist in BERT, GPT and GPT-2, which imply that stereotype-incongruent biases with small effect size exist.
Discussion According to our ï¬ndings, GPT-2 has the highest variance in bias magnitudes followed by GPT, BERT, and ELMo (see an example in Figure 1). The overall magnitude of bias de- creases in the same order for the types of biases we measured. The similar number of parameters in these models or the size of the training corpora do not explain the distribution of bias that we observe w.r.t. variance and overall magnitude. How- ever, Ethayarajh (2019) note the same descending pattern when measuring wordsâ self-similarity, after adjusting for anisotropy (non-uniform directionality), across their CWE in GPT-2, BERT, and ELMo. (ELMo is compared in three layers due to its architecture.) Ethayarajh (2019) also ï¬nd that upper layers of contextualizing models produce more context-speciï¬c representations. Quantifying how contextual- ized these dynamic embeddings are supports our ï¬ndings that the highest variance in bias magnitude, low overall bias, and low self-similarity correlate. This correlation may explain the results that we are observing. As more recent models are learning highly-contextualized CWE in upper layers, the representations in highly-contextualized layers are almost overï¬tting to their contexts. Since words appear in numer- ous contexts, the more contextualized and diverse a wordâs representation becomes, the less overall bias and general stereotypical associations.
We present and validate a bias detection method generaliz- able to identifying biases associated with any social group or intersectional group member. We detect and measure biases associated with Mexican American and African American females in SWE and CWE. Our emergent intersectional bias measurement results for African American females are in line with previous ï¬ndings (May et al. 2019; Tan and Celis 2019). IBD and EIBD can detect intersectional biases from SWE with high accuracy in an unsupervised manner by following a lexicon induction strategy (Hatzivassiloglou and McKeown 1997). This approach can be complementary to the stimuli list predeï¬ned by social psychologists. Our current intersectional bias detection validation approach can be used to identify as- sociation thresholds when generalizing this work to the entire word embedding dictionary. Exploring all the potential biases associated with targets is left to future work since it requires extensive human subject validation studies in collaboration with social psychologists. We list all the stimuli representing biased associations in the supplementary materials. To name a few, the superset of intersectional biases associated with African American females are: aggressive, assertive, athletic, bigbutt, conï¬dent, darkskinned, fried-chicken, ghetto, loud, overweight, promiscuous, unfeminine, unintelligent, unre- ï¬ned. Emergent intersectional biases associated with African American females are: aggressive, assertive, bigbutt, conï¬- dent, darkskinned, fried-chicken, overweight, promiscuous, unfeminine. The superset of intersectional biases associated with Mexican American females are: attractive, cook, curvy, darkskinned, feisty, hardworker, loud, maids, promiscuous, sexy, short, uneducated, unintelligent. Emergent intersec- tional biases associated with Mexican American females are: cook, curvy, feisty, maids, promiscuous, sexy.
We follow the conventional method of using the most fre- quent given names in a social group that signal group mem-
bership in order to accurately represent targets (Caliskan, Bryson, and Narayanan 2017; Greenwald, McGhee, and Schwartz 1998). Our results indicate that the conventional method that relies on stimuli selected by experts in social psychology works accurately. Prior work on lexicon induc- tion methods compensates for the lack of existing annotated data on valence (Hatzivassiloglou and McKeown 1997; Riloff and Wiebe 2003; Turney and Littman 2003). Nevertheless, principled and robust lexicon induction methods that can be validated in this domain, when measuring the representa- tion accuracy of target group lexica or any semantic concept. Developing these principled methods is left to future work.
Semantics of languages can be represented by the distribu- tional statistics of word co-occurrences (Firth 1957; Harris 1954). Consequently, our methods are language agnostic and can be applied to neural language models as well as word em- beddings in any language as long as the stimuli for accurately representing the semantics of concepts are available. Project Implicit (https://implicit.harvard.edu/implicit) has been host- ing IATs for human subjects all over the world in numerous languages for two decades. As a result, their IATs, that in- spired WEATs, provide stimuli for targets and attributes in numerous languages. We leave generalizing our methods to other languages to future work since state-of-the-art neural language models are not widely or freely available for lan- guages other than English as of 2021.
When simulating contexts for WEAT, we make an assump- tion that the Reddit corpus represents naturally occurring sentences. Nevertheless, we acknowledge that the Reddit cor- pus also reï¬ects the biases of the underlying population con- tributing to its corpus. Studying the accuracy of simulating the most common distribution of contexts and co-occurring stimuli is left to future work since we donât have validated ground truth data for evaluating the distribution parameters of contexts in large-scale corpora. Instead, for evaluation, val- idation, and comparison, we rely on validated ground truth information about biases documented by Caliskan, Bryson, and Narayanan (2017) in word embeddings as well as biases documented by millions of people over decades via the im- plicit association literature (Nosek, Banaji, and Greenwald 2002) and Ghavami and Peplau (2013)âs intersectional biases. Given the energy and funding considerations, we are not able to train these language models on the same large-scale corpora to compare how a neural language modelâs architec- ture learns biases, because the training processes for these models are computationally and ï¬nancially expensive (Ben- der et al. 2021). The size of state-of-the-art models increase by at least a factor of 10 every year. BERT-Large from 2018 has 355 million parameters, GPT-2 from early 2019 reaches 1.5 billion, and GPT-3 from mid-2020 ï¬nally gets to 175 bil- lion parameters. The GPT-2 model used 256 Google Cloud TPU v3 cores for training, which costs 256 US dollars per hour. GPT-2 requires approximately 168 hours or 1 week of training on 32 TPU v3 chips (Strubell, Ganesh, and Mc- Callum 2019). GPT-3 is estimated to cost â¼12 million US dollars (Floridi and Chiriatti 2020) and we are not able to get access to its embeddings or training corpora. Regardless, measuring the scope of biases with validated bias quantiï¬ca- tion and meta-analysis methods, we are able to compare the
biased associations learned by neural language models that are widely used. Being able to study neural language models comprehensively is critical since they are replacing SWE in many NLP applications due to their high accuracy in various machine learning tasks.
We would like to conclude the discussion with our ethical concerns regarding the dual use of IBD and EIBD, that can detect stereotypical associations for an intersectional group or disadvantaged individuals. Words retrieved by our methods may be used in the generation of offensive or stereotypical content that perpetuates or ampliï¬es existing biases. For ex- ample, information inï¬uence operations in the 1970s used Osgood (1964)âs semantic differential technique among hu- man subjects to retrieve the words that would most effectively induce a negative attitude in a South American population towards their administration (Landis et al. 1982). Similarly, biased neural language models may be exploited to automate large-scale information inï¬uence operations that intend to sow discord among social groups Toney et al. (2020); Toney and Caliskan (2020). The biased outputs of these language models, that get recycled in future model generationâs train- ing corpora, may lead to an AI bias feedback cycle.
Conclusion We introduce methods called IBD and EIBD to identify bi- ases associated with members of multiple minority groups. These methods automatically detect the intersectional biases and emergent intersectional biases captured by word embed- dings. Intersectional biases associated with African Ameri- can and Mexican American females have the highest effect size compared to other social biases. Complementary to pre- deï¬ned sets of attributes to measure widely known biases, our methods automatically discover biases. IBD reaches an ac- curacy of 81.6% and 82.7% in detection, respectively, when validating on the intersectional biases of African American females and Mexican American females. EIBD reaches an ac- curacy of 84.7% and 65.3% in detection, respectively, when validating on the emergent intersectional biases of African American females and Mexican American females.
We present CEAT to measure biases identiï¬ed by IBD and EIBD in language models. CEAT uses a random-effects model to comprehensively measure social biases embedded in neural language models that contain a distribution of context- dependent biases. CEAT simulates this distribution by sam- pling (N = 10, 000) combinations of CWEs without replace- ment from a large-scale natural language corpus. Unlike prior work that focuses on a limited number of contexts deï¬ned by templates to measure the magnitude of particular biases, CEAT provides a comprehensive measurement of overall bias in contextualizing language models. Our results indicate that ELMo is the most biased, followed by BERT, and GPT. GPT-2 is the least biased language model with respect to the social biases we investigate. The overall magnitude of bias negatively correlates with the level of contextualization in the language model. Understanding how the architecture of a language model contributes to biased and contextualized word representations can help mitigate the harmful effects to society in downstream applications.
# Appendices
Formal Definition of WEAT We present a formal definition of |Caliskan, Bryson, and (2017)âs WEAT. Let X and Y be two sets of target words of equal size, and A, B be two sets of attribute words. Let cos(@, b) stand for the cosine similarity between the embeddings of words a and b. Here, the vector @ is the embedding for word a. The test statistic is
s(X,Y, A,B) = )> s(x, A,B) â > s(y, A,B) wEX yeY
where s(w, A, B) = meange cos(, @) â meanye Bcos(w, b)
A permutation test calculates the statistical signiï¬cance of association s(X, Y, A, B). The one-sided p â value is
P = P ri[s(Xi, Yi, A, B) > s(X, Y, A, B))]
where {(Xi, Yi)}i represents all the partitions of X ⪠Y in two sets of equal size. Random permutations of these stimuli sets represent the null hypothesis as if the biased associations did not exist so that we can perform a statistical signiï¬cance test by measuring the unlikelihood of the null hypothesis, given the effect size of WEAT.
The effect size of bias is calculated as
meanzex (a, A,B) â meanyey s(y, A, B) ES 5 std_devyex Uy s(w, A, B)
Formal Deï¬nition of EIBD We ï¬rst detect C11âs intersectional biases WIB with IBD. Then, we detect the biased attributes associated with only one constituent category of the intersectional group C11 (e.g., associated only with race S1n - or only with gender Sm1). Each intersectional category C1n has M constituent subcate- gories Sin, i = 1, ...M and category Cm1 has N constituent subcategories Smj, j = 1, ..., N . S1n and Sm1 are the con- stituent subcategories of intersectional group C11.
There are in total M + N groups deï¬ned by all the sin- gle constituent subcategories. We use all M + N groups to build WEFAT pairs Pi = (S1n, Sin), i = 1, ..., M and Pj = (Sm1, Smj), j = 1, ...N . Then, we detect lists of words associated with each pair Wi, i = 1, ...M and Wj, j = 1, ..., N based on the same positive threshold tmn used in IBD. We detect the attributes highly associated with the constituent subcategories S1n and Sm1 of the target in- tersectional group C11 from all (M + N ) WEFAT pairs. We deï¬ne the words associated with emergent intersectional bi- ases of group C11 as WEIB and these words are identiï¬ed by the formula
M N Were = (U(Wre â Wi) WU re - 3) i=l j=l
where Wi = {w|s(w, S1n, Sin) > tmn, w â WIB}
# and Wj = {w|s(w, Sm1, Smj) > tmn, w â WIB}
# Random-Effects Model Details
Each effect size is calculated by
meanzex (a, A, B) â meanyey s(y, A, B) ES; Si std_devyex Uy s(w, A, B)
The estimation of in-sample variance is V;, which is the square of std_devyex (yy $(w, A, B). We use the same prin- ciple as estimation of the variance components in ANOVA to measure the between-sample variance OPotweent which is calculated as:
Ï2 between = Q â (N â 1) c 0 if Q ⥠N â 1 if Q < N â 1
# where
Wi = 1 Vi
DW? 2 (WiES;)? 7. . es Cc » W; x W, & Q WES; x W,
The weight vi assigned to each WEAT is the inverse of the sum of estimated in-sample variance Vi and esti- mated between-sample variance in the distribution of random- effects Ï2
vi = 1 Vi + Ï2 between
CES, which is the sum of the weighted effect sizes divided by the sum of all weights, is then computed as
Vin Yi
To derive the hypothesis test, we calculate the standard error (SE) of CES as the square root of the inverse of the sum of the weights.
Based on the central limit theorem, the limiting form of the distribution of SE(CES) is the standard normal distribution (Montgomery and Runger 2010). Since we notice that some CES are negative, we use a two-tailed p â value which can test the signiï¬cance of biased associations in two directions. The two-tailed p â value of the hypothesis that there is no difference between all the contextualized variations of the two sets of target words in terms of their relative similarity to two sets of attribute words is given by the following for- mula, where Φ is the standard normal cumulative distribution function and SE stands for the standard error.
Pcombined(X, Y, A, B) = 2 à [1 â Φ(| CES SE(CES) |)]
# Meta-Analysis Details for CEAT
In this section, we ï¬rst construct all CEAT in the main paper (C1-C10,I1-I4) with sample size N = 1, 000 to provide a comparison of results with different sample sizes. We report CES d and combined p â value p in Table 2. We replicate these results with N = 1, 000 instead of using the original N = 10, 000 to show that even with N = 1, 000, we get valid results. Accordingly, we proceed to calculate all types of biases associated with intersectional groups based on the attributes used in original WEAT. We notice that there are ï¬ve tests which are signiï¬cant with sample size N = 10, 000 but insigniï¬cant with sample size N = 1, 000. They are C10 with Bert, C4 with GPT, C7 with GPT-2, I3 with GPT-2 and I4 with GPT-2. We also notice that CES of same test can be different with different sample size but all differences are smaller than 0.1.
We also construct four types of supplementary CEAT for all pairwise combinations of six intersectional groups: African American females (AF), African American males (AM), Mexican American females (MF), Mexican American males (MM), European American females (EF), European American males (EM). We use two intersectional groups as two target social groups. For each pairwise combination, we build four CEAT : ï¬rst, measure attitudes with words rep- resenting pleasantness and unpleasantness as two attribute groups (as in C1); second, measure career and family associ- ations that are particularly important in gender stereotypes with the corresponding two attribute groups (as in C6); third, similar to the career-family stereotypes for gender, measure math and arts associations that are particularly important in gender stereotypes with the corresponding two attribute groups (as in C7); fourth, similar to the math-arts stereotypes for gender, measure science (STEM) and arts associations that are particularly important in gender stereotypes with the corresponding two attribute groups (as in C8). We report the CES (d) and combined p â values (p) in Table 2 with sample size N = 1, 000. All of these attributes are from the C1, C6, C7 and C8 WEAT of Caliskan et al. (Caliskan, Bryson, and Narayanan 2017).
Stimuli The stimuli used to represent targets and attributes in CEAT (C1-C10) are taken from Caliskan et al.(Caliskan, Bryson, and Narayanan 2017). We construct four intersection-related CEAT for African American females and Mexican American females.
When conducting intersection-related CEAT , we use the names from Caliskan et al. (Caliskan, Bryson, and Narayanan 2017) and Parada et al. (Parada 2016) to represent the tar- get intersectional groups. Caliskan et al.âs WEAT provides the female and male names of African Americans and Euro- pean Americans from the ï¬rst Implicit Association Test in 1998 (Greenwald, McGhee, and Schwartz 1998). Parada et al. provide the female and male names of Mexican Americans (Parada 2016). To determine and verify the gender of names, we use three gender checkers (Huang et al. 2019). We only use the name as a target word in our experiments, if the name is categorized to belong to the same gender by all of the three
checkers. Human subjects provide the validation set of inter- sectional attributes with ground truth information (Ghavami and Peplau 2013). We use this validation set for evaluating the intersection-related CEAT, IBD and EIBD experiments. To follow the order of stereotype-congruity, we use Euro- pean American males as the second target group and use the attributes associated with their intersectional biases as the second attribute set in intersection-related CEAT. There are only three emergent intersectional biases associated with Eu- ropean American males in the validation set, which doesnât provide a sufï¬cient number of stimuli. A small set of stimuli does not satisfy the requirements for generating statistically signiï¬cant concept representation and WEATs. Related stim- uli details are discussed in the dataset and stimuli sections of the main paper. In addition, if the size of the ï¬rst attribute set is smaller than that of the attributes of European American males, we randomly select an equal number of attributes as- sociated with the intersectional biases of European American males. WEAT requires equal-sized sets of attributes.
# CEAT I1
We use the frequent given names of African American fe- males and European American males as two target social groups and use the attributes associated with the intersec- tional biases of African American females and attributes as- sociated with the intersectional biases of European American males as the two attribute groups.
Since âassertiveâ is associated with both African American females and European American males, we do not include it in this test.
⢠African American females: Aisha, Keisha, Lakisha, Latisha, Latoya, Malika, Nichelle, Shereen, Tamika, Tan- isha, Yolanda, Yvette
⢠European American males: Andrew, Brad, Frank, Geof- frey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen
⢠Intersectional biases of African American females: ag- gressive, athletic, bigbutt, conï¬dent, darkskinned, fried- chicken, ghetto, loud, overweight, promiscuous, unfemi- nine, unintelligent, unreï¬ned
⢠Intersectional biases of European American males: all- American, arrogant, attractive, blond, high-status, intel- ligent, leader, privileged, racist, rich, sexist, successful, tall
# CEAT I2
We use the frequent given names of African American fe- males and European American males as two target groups. We use attributes associated with emergent intersectional bi- ases of African American females and attributes associated with intersectional biases of European American males as two attribute groups.
Since âassertiveâ is associated with emergent intersectional bias of African American females and intersectional bias of European American males, we do not include it in this test.
Table 2: CEAT from main paper (C1-C10,I1-I4) with sample size N = 1, 000 as opposed to the N = 10, 000 hyper- parameter in the main paper. We report the CES (d) and combined p â values of all CEAT (p) in the main paper with sample size N = 1, 000. We observe that all of the results are consistent with the CES and p â values reported in the main paper on Table 1. Light, medium, and dark gray shading of combined d values (CES) indicates small, medium, and large effect size, respectively. There are ï¬ve tests which are signiï¬cant with sample size N = 10, 000 but not signiï¬cant with sample size N = 1, 000. However, these have small effect sizes and as a result we donât expect statistical signiï¬cance. According to our experiments, the Spearman correlation between WEATâs effect size and p â value is Ï = 0.99. Smaller effect sizes are expected to have insigniï¬cant p-values. Accordingly, all of the results under N = 1, 000 are consistent with the main ï¬ndings. The notable yet consistent differences are C10 with Bert, C4 with GPT, C7 with GPT-2, I3 with GPT-2, and I4 with GPT-2. CES varies minimally with different sample size (N ), but the differences of the results are smaller than 0.1, suggesting the degree of effect size remains consistent. In edge cases, where statistical signiï¬cance or effect size is close to a signiï¬cance threshold, gradually increasing N , in increments of N = +500 would provide more reliable results. A_ stands for African Americans. E_ stands for European Americans. M _ stands for Mexican Americans. _F stands for females. _M stands for males.
p < 10â30 < 10â30 < 10â30 < 10â30 < 10â30 < 10â30 0.81 < 10â30 0.04 < 10â30 0.06 0.26 < 10â30 < 10â30 âUnpleasant and pleasant attributes used to measure valence and attitudes towards targets (Greenwald, McGhee, and Schwartz 1998).
⢠African American females: Aisha, Keisha, Lakisha, Latisha, Latoya, Malika, Nichelle, Shereen, Tamika, Tan- isha, Yolanda, Yvette
⢠European American males: Andrew, Brad, Frank, Geof- frey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen
⢠Emergent intersectional biases of African American females: aggressive, bigbutt, conï¬dent, darkskinned, fried- chicken, overweight, promiscuous, unfeminine
⢠Intersectional biases of European American males: ar- rogant, blond, high-status, intelligent, racist, rich, success- ful, tall
Brenda, Carolina, Iliana, Karina, Liset, Maria, Mayra, So- nia, Yesenia
⢠European American males: Andrew, Brad, Frank, Geof- frey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen
⢠Intersectional biases of Mexican American females: cook, curvy, darkskinned, feisty, hardworker, loud, maids, promiscuous, sexy, short, uneducated, unintelligent
⢠Intersectional biases of European American males: all- American, arrogant, blond, high-status, intelligent, leader, privileged, racist, rich, sexist, successful, tall
# CEAT I3
We use the frequent given names of Mexican American fe- males and European American males as the target groups and the words associated with their intersectional biases as the attribute groups.
Since âattractiveâ is associated with intersectional biases of both Mexican American females and European American males, we do not include it in this test.
⢠Mexican American females: Adriana, Alejandra, Alma,
# CEAT I4
We use the frequent given names of Mexican American fe- males and European American males as target groups. We use words associated with the emergent intersectional biases of Mexican American females and words associated with the intersectional biases of European American males as the two attribute groups.
⢠Mexican American females: Adriana, Alejandra, Alma, Brenda, Carolina, Iliana, Karina, Liset, Maria, Mayra, So- nia, Yesenia
Table 3: CEAT for intersectional groups with sample size N = 1, 000. We construct 4 types of new CEAT with all pairwise combinations of intersectional groups. We use two intersectional groups as two target social groups. We use 1) pleasant/unpleasant 2) career/family 3) math/arts 4) science/arts as two attribute groups. We report the CES d and combined p â value p. Light, medium, and dark gray shading of combined d values (CES) indicates small, medium, and large effect size respectively. A_ stands for African Americans. E_ stands for European Americans. M _ stands for Mexican Americans. _F stands for females. _M stands for males.
Test ELMo BERT GPT GPT-2 d p d p d p d p EM/EF, P/Uâ - Attitude EM/EF, Career/Family EM/EF, Math/Arts EM/EF, Science/Arts EM/AM, P/Uâ - Attitude EM/AM, Career/Family EM/AM, Math/Arts EM/AM, Science/Arts EM/AF, P/Uâ - Attitude EM/AF, Career/Family EM/AF, Math/Arts EM/AF, Science/Arts EM/MM, P/Uâ - Attitude EM/MM, Career/Family EM/MM, Math/Arts EM/MM, Science/Arts EM/MF, P/Uâ - Attitude EM/MF, Career/Family EM/MF,Math/Arts EM/MF, Science/Arts EF/AM, P/Uâ - Attitude EF/AM, Career/Family EF/AM, Math/Arts EF/AM, Science/Arts EF/AF, P/Uâ - Attitude EF/AF, Career/Family EF/AF, Math/Arts EF/AF, Science/Arts EF/MM, P/Uâ - Attitude EF/MM, Career/Family EF/MM, Math/Arts EF/MM, Science/Arts EF/MF, P/Uâ - Attitude EF/MF, Career/Family EF/MF, Math/Arts EF/MF, Science/Arts AM/AF, P/Uâ - Attitude AM/AF, Career/Family AM/AF, Math/Arts AM/AF, Science/Arts AM/MM, P/Uâ - Attitude AM/MM, Career/Family AM/MM, Math/Arts AM/MM, Science/Arts AM/MF, P/Uâ - Attitude AM/MF, Career/Family AM/MF, Math/Arts AM/MF, Science/Arts AF/MM, P/Uâ - Attitude AF/MM, Career/Family AF/MM, Math/Arts AF/MM, Science/Arts AF/MF, P/Uâ - Attitude AF/MF, Career/Family AF/MF, Math/Arts AF/MF, Science/Arts MM/MF, P/Uâ - Attitude MM/MF, Career/Family MM/MF, Math/Arts MM/MF, Science/Arts -0.49 < 10â30 1.15 < 10â30 0.44 < 10â30 0.37 < 10â30 0.57 < 10â30 0.32 < 10â30 -0.28 < 10â30 0.02 -0.35 < 10â30 1.10 < 10â30 0.11 < 10â19 0.56 < 10â30 -0.15 < 10â30 0.46 0.01 < 10â5 0.06 0.21 < 10â30 -0.82 < 10â30 1.14 < 10â30 0.69 < 10â30 0.33 < 10â30 0.95 < 10â30 -0.98 < 10â30 -0.66 < 10â30 -0.30 < 10â30 0.09 < 10â22 < 10â7 0.04 -0.33 < 10â30 0.23 < 10â30 0.38 < 10â30 -1.10 < 10â30 -0.34 < 10â30 -0.18 < 10â30 -0.42 < 10â30 -0.09 < 10â30 0.30 < 10â30 -0.01 -0.79 < 10â30 0.94 < 10â30 0.34 < 10â30 0.50 < 10â30 -0.72 < 10â30 -0.28 < 10â30 0.33 < 10â30 0.13 < 10â30 -1.15 < 10â30 0.96 < 10â30 0.87 < 10â30 0.30 < 10â30 0.26 < 10â30 -1.07 < 10â30 -0.03 -0.38 < 10â30 -0.43 < 10â30 -0.15 < 10â30 0.59 < 10â30 -0.20 < 10â30 -0.77 < 10â30 1.11 < 10â30 0.62 < 10â30 0.18 < 10â30 0.10 0.40 0.03 -0.33 < 10â30 0.73 < 10â30 0.34 < 10â30 -0.11 < 10â30 0.40 < 10â30 0.16 < 10â30 -0.04 < 10â2 -0.18 < 10â30 0.10 < 10â11 0.90 < 10â30 0.72 < 10â30 0.29 < 10â30 0.42 < 10â30 0.28 < 10â30 -0.22 < 10â30 -0.27 < 10â30 -0.19 < 10â30 0.68 < 10â30 0.27 < 10â30 0.11 < 10â13 0.70 < 10â30 -0.62 < 10â30 -0.41 < 10â30 -0.08 < 10â30 0.50 < 10â30 0.22 < 10â30 0.39 < 10â30 0.43 < 10â30 0.70 < 10â30 -0.45 < 10â30 -0.55 < 10â30 -0.21 < 10â30 0.19 < 10â30 -0.07 < 10â30 -0.05 < 10â30 0.25 < 10â30 -0.32 < 10â30 0.84 < 10â30 0.79 < 10â30 0.47 < 10â30 0.02 0.16 < 10â30 -0.16 < 10â30 -0.13 < 10â30 -0.57 < 10â30 0.56 < 10â30 0.36 < 10â30 0.30 < 10â30 0.33 < 10â30 -0.64 < 10â30 -0.90 < 10â30 -0.56 < 10â30 -0.33 < 10â30 -0.31 < 10â30 -0.42 < 10â30 -0.18 < 10â30 -0.59 < 10â30 0.40 < 10â30 0.50 < 10â30 0.41 < 10â30 0.10 -0.01 0.34 < 10â30 0.13 < 10â25 < 10â6 0.07 < 10â2 0.04 -0.36 < 10â30 -0.05 < 10â30 0.17 < 10â30 -0.12 < 10â30 0.20 < 10â30 0.14 < 10â23 0.24 < 10â30 -0.17 < 10â30 -0.32 < 10â30 0.45 < 10â30 0.62 < 10â30 -0.34 < 10â30 0.09 < 10â11 0.28 < 10â30 0.41 < 10â30 < 10â5 0.06 -0.63 < 10â30 -0.15 < 10â30 0.11 < 10â13 -0.15 < 10â30 -0.16 < 10â30 -0.01 0.18 < 10â30 -0.19 < 10â30 -0.65 < 10â30 0.37 < 10â30 0.54 < 10â30 -0.33 < 10â30 -0.23 < 10â30 0.17 < 10â30 0.37 < 10â30 -0.19 < 10â30 0.50 < 10â30 0.16 < 10â30 < 10â7 0.07 -0.20 < 10â30 < 10â7 0.07 0.51 < 10â30 0.45 < 10â30 -0.38 < 10â30 0.41 < 10â30 0.31 < 10â30 0.27 < 10â30 -0.04 < 10â30 -0.54 < 10â30 0.37 < 10â30 0.43 < 10â30 -0.19 < 10â30 -0.06 < 10â30 0.16 < 10â30 0.22 < 10â30 -0.15 < 10â30 0.44 < 10â30 -0.18 < 10â30 -0.19 < 10â30 0.60 0.44 -0.53 < 10â30 0.41 < 10â30 -0.41 < 10â30 -0.04 -0.34 < 10â30 0.42 < 10â30 -0.45 < 10â30 -0.20 < 10â30 -0.60 < 10â30 0.62 < 10â30 -0.62 < 10â30 -0.19 < 10â30 -0.20 < 10â30 0.33 < 10â30 -0.38 < 10â30 -0.37 < 10â30 -0.60 < 10â30 0.68 < 10â30 -0.78 < 10â30 -0.29 < 10â30 0.09 < 10â17 0.11 < 10â21 -0.10 < 10â30 -0.19 < 10â30 -0.20 < 10â30 0.33 < 10â30 -0.35 < 10â30 -0.20 < 10â30 0.32 < 10â30 -0.02 -0.02 -0.36 < 10â30 -0.15 < 10â30 0.43 < 10â30 -0.55 < 10â30 -0.30 < 10â30 -0.24 < 10â30 0.17 < 10â30 -0.17 < 10â30 -0.02 0.20 < 10â30 -0.12 < 10â30 < 10â9 0.08 -0.16 < 10â30 -0.22 < 10â30 0.27 < 10â30 -0.38 < 10â30 -0.14 < 10â30 0.46 < 10â30 -0.31 < 10â30 0.29 < 10â30 -0.18 < 10â30 -0.01 0.15 < 10â30 -0.25 < 10â30 -0.15 < 10â30 -0.44 < 10â30 0.42 < 10â30 -0.49 < 10â30 0.02 0.02 0.14 0.28 0.15 0.48 0.18
âUnpleasant and pleasant attributes used to measure valence and attitudes towards targets from Greenwald, McGhee, and Schwartz (1998).
Table 4: CEAT for intersectional groups with sample size N = 1, 000. We construct 4 types of new CEAT with all pairwise combinations of intersectional groups. We use two intersectional groups as two target social groups. We use 1) pleasant/unpleasant 2) career/family 3) math/arts 4) science/arts as two attribute groups. Each one of the four experiments with the neural language models is conducted using the same sample of sentences. We report the CES d and combined p â value p. Light, medium, and dark gray shading of combined d values (CES) indicates small, medium, and large effect size respectively. A_ stands for African Americans. E_ stands for European Americans. M _ stands for Mexican Americans. _F stands for females. _M stands for males.
Test ELMo BERT GPT GPT-2 d p d p d p d p EM/EF, P/Uâ - Attitude EM/EF, Career/Family EM/EF, Math/Arts EM/EF, Science/Arts EM/AM, P/U - Attitude EM/AM, Career/Family EM/AM, Math/Arts EM/AM, Science/Arts EM/AF, P/U - Attitude EM/AF, Career/Family EM/AF, Math/Arts EM/AF, Science/Arts EM/MM, P/U - Attitude EM/MM, Career/Family EM/MM, Math/Arts EM/MM, Science/Arts EM/MF, P/U - Attitude EM/MF, Career/Family EM/MF,Math/Arts EM/MF, Science/Arts EF/AM, P/U - Attitude EF/AM, Career/Family EF/AM, Math/Arts EF/AM, Science/Arts EF/AF, P/U - Attitude EF/AF, Career/Family EF/AF, Math/Arts EF/AF, Science/Arts EF/MM, P/U - Attitude EF/MM, Career/Family EF/MM, Math/Arts EF/MM, Science/Arts EF/MF, P/U - Attitude EF/MF, Career/Family EF/MF, Math/Arts EF/MF, Science/Arts AM/AF, P/U - Attitude AM/AF, Career/Family AM/AF, Math/Arts AM/AF, Science/Arts AM/MM, P/U - Attitude AM/MM, Career/Family AM/MM, Math/Arts AM/MM, Science/Arts AM/MF, P/U - Attitude AM/MF, Career/Family AM/MF, Math/Arts AM/MF, Science/Arts AF/MM, P/U - Attitude AF/MM, Career/Family AF/MM, Math/Arts AF/MM, Science/Arts AF/MF, P/U - Attitude AF/MF, Career/Family AF/MF, Math/Arts AF/MF, Science/Arts MM/MF, P/U - Attitude MM/MF, Career/Family MM/MF, Math/Arts MM/MF, Science/Arts -0.62 < 10â30 < 10â30 1.07 < 10â30 0.07 < 10â30 0.21 < 10â30 0.50 < 10â30 0.19 -0.47 < 10â30 -0.11 < 10â30 -0.24 < 10â30 < 10â30 1.12 < 10â30 0.07 < 10â30 0.55 -0.18 < 10â30 -0.08 < 10â30 -0.08 < 10â30 < 10â30 0.16 -0.73 < 10â30 1.06 < 10â30 < 10â30 0.58 < 10â30 0.24 < 10â30 0.96 -1.00 < 10â30 -0.53 < 10â30 -0.28 < 10â30 < 10â30 0.27 < 10â30 0.13 0.85 0.00 < 10â30 0.34 < 10â30 0.47 -1.10 < 10â30 -0.15 < 10â30 -0.05 < 10â30 -0.18 < 10â30 -0.13 < 10â30 < 10â30 0.52 < 10â30 0.04 -0.63 < 10â30 < 10â30 1.05 < 10â30 0.49 < 10â30 0.57 -0.67 < 10â30 -0.27 < 10â30 < 10â30 0.38 < 10â30 0.24 -1.03 < 10â30 < 10â30 0.98 < 10â30 0.91 < 10â30 0.31 < 10â30 0.09 -1.15 < 10â30 -0.14 < 10â30 -0.39 < 10â30 -0.41 < 10â30 -0.27 < 10â30 < 10â30 0.48 -0.29 < 10â30 -0.62 < 10â30 < 10â30 1.11 < 10â30 0.63 < 10â30 0.09 -0.17 < 10â30 < 10â30 0.40 < 10â30 0.12 -0.04 < 10â30 < 10â30 0.37 < 10â30 0.10 -0.18 < 10â30 -0.03 < 10â14 < 10â30 0.32 < 10â30 0.40 < 10â30 0.30 < 10â30 0.47 < 10â30 0.37 < 10â21 0.04 -0.22 < 10â30 -0.09 < 10â30 < 10â30 0.09 < 10â30 0.35 < 10â30 0.09 < 10â30 0.26 < 10â30 0.51 -0.31 < 10â30 -0.30 < 10â30 0.42 0.00 < 10â30 0.47 0.037 0.01 < 10â30 0.19 < 10â30 0.50 < 10â30 0.52 -0.35 < 10â30 -0.34 < 10â30 -0.06 < 10â30 < 10â30 0.26 -0.05 < 10â30 -0.03 < 10â12 < 10â30 0.30 -0.06 < 10â30 < 10â30 0.32 < 10â30 0.48 < 10â30 0.52 < 10â3 -0.02 -0.06 < 10â30 -0.04 < 10â30 -0.06 < 10â30 -0.28 < 10â30 < 10â30 0.25 < 10â30 0.26 < 10â30 0.30 < 10â11 0.04 -0.36 < 10â30 -0.52 < 10â30 -0.56 < 10â30 -0.23 < 10â30 -0.05 < 10â30 -0.22 < 10â30 -0.21 < 10â30 -0.27 < 10â30 < 10â30 0.31 < 10â30 0.30 < 10â30 0.35 -0.11 < 10â30 < 10â30 0.27 < 10â30 0.23 < 10â30 0.12 < 10â30 0.13 -0.35 < 10â30 < 10â30 0.13 < 10â30 0.14 -0.26 < 10â30 < 10â30 0.23 < 10â30 0.28 < 10â30 0.21 -0.36 < 10â30 -0.26 < 10â30 < 10â30 0.54 < 10â30 0.56 -0.24 < 10â30 < 10â30 0.06 < 10â30 0.49 < 10â30 0.48 < 10â30 0.24 -0.57 < 10â30 -0.10 < 10â30 < 10â13 0.03 -0.17 < 10â30 -0.05 < 10â30 < 10â30 0.06 < 10â30 0.11 -0.25 < 10â30 -0.50 < 10â30 < 10â30 0.37 < 10â30 0.47 -0.14 < 10â30 -0.19 < 10â30 < 10â30 0.32 < 10â30 0.38 -0.39 < 10â30 < 10â30 0.53 < 10â30 0.16 < 10â30 0.08 -0.48 < 10â30 < 10â30 0.13 < 10â30 0.45 < 10â30 0.44 -0.37 < 10â30 < 10â30 0.38 < 10â30 0.39 < 10â30 0.34 -0.09 < 10â30 -0.47 < 10â30 < 10â30 0.31 < 10â30 0.35 < 10â8 0.02 -0.15 < 10â30 < 10â30 0.26 < 10â30 0.26 < 10â30 0.11 < 10â30 0.30 -0.04 < 10â30 -0.08 < 10â30 -0.28 < 10â30 < 10â30 0.33 -0.22 < 10â30 < 10â2 0.01 -0.15 < 10â30 < 10â30 0.30 -0.30 < 10â30 -0.20 < 10â30 -0.26 < 10â30 < 10â30 0.66 -0.52 < 10â30 -0.35 < 10â30 -0.12 < 10â30 < 10â30 0.23 -0.47 < 10â30 -0.45 < 10â30 -0.24 < 10â30 < 10â30 0.66 -0.61 < 10â30 -0.43 < 10â30 < 10â30 0.12 0.86 0.00 -0.13 < 10â30 -0.24 < 10â30 0.27 0.01 < 10â30 0.45 -0.40 < 10â30 -0.44 < 10â30 < 10â30 0.17 -0.09 < 10â30 -0.26 < 10â30 -0.51 < 10â30 < 10â2 0.02 < 10â30 0.46 -0.52 < 10â30 -0.52 < 10â30 -0.11 < 10â30 < 10â30 0.41 -0.24 < 10â30 -0.18 < 10â30 < 10â21 0.04 -0.08 < 10â30 -0.15 < 10â30 -0.27 < 10â30 -0.09 < 10â30 < 10â30 0.42 -0.37 < 10â30 -0.28 < 10â30 < 10â30 0.16 -0.47 < 10â30 < 10â30 0.11 -0.10 < 10â30 < 10â3 0.02 < 10â30 0.06 -0.17 < 10â30 -0.13 < 10â30 -0.15 < 10â30 < 10â30 0.49 -0.25 < 10â30 -0.04 < 10â14
âUnpleasant and pleasant attributes used to measure valence and attitudes towards targets from Greenwald, McGhee, and Schwartz (1998).
⢠European American males: Andrew, Brad, Frank, Geof- frey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen
⢠Emergent intersectional biases of Mexican American females: cook, curvy, feisty, maids, promiscuous, sexy
⢠Intersectional biases of European American males: ar- rogant, assertive, intelligent, rich, successful, tall
IBD and EIBD We detect the attributes associated with the intersectional biases and emergent intersectional biases of African Ameri- can females and Mexican American females in GloVe SWE. We assume that there are three subcategories under the race category (African American, Mexican American, European American) and two subcategories under the gender category (female, male). We use the frequent given names to represent each intersectional group. Again, we note that, in future work weâd generalize this work to n subcategories under each cate- gory. Further, in future work, instead of categorizing people into social groups, weâd like to explore representing individ- uals in social data with continuous real-valued variables as opposed to associating them with category labels.
⢠African American females: Aisha, Keisha, Lakisha, Latisha, Latoya, Malika, Nichelle, Shereen, Tamika, Tan- isha, Yolanda, Yvette
⢠African American males: Alonzo, Alphonse, Hakim, Ja- mal, Jamel, Jerome, Leroy, Lionel, Marcellus, Terrence, Tyrone, Wardell
European American females: Carrie, Colleen, Ellen, Emily, Heather, Katie, Megan, Melanie, Nancy, Rachel, Sarah, Stephanie
⢠European American males: Andrew, Brad, Frank, Geof- frey, Jack, Jonathan, Josh, Matthew, Neil, Peter, Roger, Stephen
⢠Mexican American females: Adriana, Alejandra, Alma, Brenda, Carolina, Iliana, Karina, Liset, Maria, Mayra, So- nia, Yesenia
⢠Mexican American males: Alberto, Alejandro, Alfredo, Antonio, César, Jesús, José, Juan, Miguel, Pedro, Rigob- erto, Rogelio
We list all the attributes used in validation experiments. These are stimuli associated with different social groups and a set of random words that are not expected to be associated with social groups. These random attributes are borrowed from the insect target category of the âuniversally accepted stereotypesâ IAT, which is a baseline WEAT. We use 98 words in total in the validation experiments.
We also list the probability of random chance of correct classiï¬cation in parentheses next to each social group. The goal here is to present the success rate of the machine learn- ing detection method in a meaningful way. To calculate the probability of random chance of correct classiï¬cation, we treat the detection task as if this were a one-vs-all veriï¬cation task, which is one type of a classiï¬cation task, that aims to
verify the membership of a data point in the main class. All the other data points from the open world, that donât belong to the main class we are testing for, can belong to any other class. However, in reality, our detection method is a semi- supervised non-parametric prediction method which relies on the threshold t. Some of the attributes of some social groups overlap. For example, âassertiveâ is associated with emergent intersectional biases of African American females and also associated with emergent intersectional biases of European American males. As a result, the sum of all random chances of different groups is not 100%. ⢠Biases of Females: attractive, caring, dependent, emo- tional, feminine, jealous, manipulative, materialistic, moth- erly, petite, soft, submissive, talkative (random chance of correct classiï¬cation: 13.3%)
⢠Biases of Males: aggressive, ambitious, arrogant, ï¬xer- upper, high-status, intelligent, leader, messy, provider, re- spected, sexist, tall, unfaithful (random chance of correct classiï¬cation: 13.3%)
⢠Biases of African Americans: athletic, criminals, danger- ous, gangsters, ghetto, lazy, loud, poor, tall, uneducated, unreï¬ned, violent (random chance of correct classiï¬cation: 12.2%)
⢠Emergent Intersectional Biases of African American
Females: aggressive, assertive, bigbutt, conï¬dent, darkskinned, fried-chicken, overweight, promiscuous, un- feminine (random chance of correct classiï¬cation: 9.2%) ⢠Intersectional Biases of African American Females: ag- gressive, assertive, athletic, bigbutt, conï¬dent, darkskinned, fried-chicken, ghetto, loud, overweight, promiscuous, un- feminine, unintelligent, unreï¬ned (random chance of cor- rect classiï¬cation: 14.3%)
⢠Emergent Intersectional Biases of African American Males: darkskinned, hypersexual, rapper (random chance of correct classiï¬cation: 3.1%)
⢠Intersectional Biases of African American Males: ath- letic, criminals, dangerous, darkskinned, gangsters, hyper- sexual, lazy, loud, poor, rapper, tall, unintelligent, violent (random chance of correct classiï¬cation: 13.3%)
Biases of European Americans: all-American, arrogant, attractive, blond, blue-eyes, high-status, ignorant, intelli- gent, overweight, patronizing, privileged, racist, red-neck, rich, tall (random chance of correct classiï¬cation: 15.3%) ⢠Emergent Intersectional Biases of European Ameri- can Females: ditsy (random chance of correct classiï¬- cation: 1.0%)
Intersectional Biases of European American Females: arrogant, attractive, blond, ditsy, emotional, feminine, high- status, intelligent, materialistic, petite, racist, rich, submis- sive, tall (random chance of correct classiï¬cation: 14.3%) ⢠Emergent Intersectional Biases of European Ameri- can Males: assertive, educated, successful (random chance of correct classiï¬cation: 3.1%)
⢠Intersectional Biases of European American Males: all- American, arrogant, assertive, attractive, blond, educated,
high-status, intelligent, leader, privileged, racist, rich, sex- ist, successful, tall (random chance of correct classiï¬cation: 15.3%)
⢠Biases of Mexican Americans: darkskinned, day-laborer, family-oriented, gangster, hardworker, illegal-immigrant, lazy, loud, macho, overweight, poor, short, uneducated, unintelligent (random chance of correct classiï¬cation: 14.3%)
⢠Emergent Intersectional Biases of Mexican American Females: cook, curvy, feisty, maids, promiscuous, sexy (random chance of correct classiï¬cation: 6.1%)
Intersectional Biases of Mexican American Females: attractive, cook, curvy, darkskinned, feisty, hardworker, loud, maids, promiscuous, sexy, short, uneducated, unin- telligent (random chance of correct classiï¬cation: 13.3%) ⢠Emergent Intersectional Biases of Mexican American Males: drunks, jealous, promiscuous, violent (random chance of correct classiï¬cation: 4.1%)
⢠Intersectional Biases of Mexican American Males: ag- gressive, arrogant, darkskinned, day-laborer, drunks, hard- worker, illegal-immigrant, jealous, macho, poor, promis- cuous, short, uneducated, unintelligent, violent (random chance of correct classiï¬cation: 15.3%)
⢠Random (Insects): ant, bedbug, bee, beetle, blackï¬y, caterpillar, centipede, cockroach, cricket, dragonï¬y, ï¬ea, ï¬y, gnat, hornet, horseï¬y, locust, maggot, mosquito, moth, roach, spider, tarantula, termite, wasp, weevil (random chance of correct classiï¬cation: 25.5%)
Open Source Code, Data, and Documentation https://github.com/weiguowilliam/CEAT is the link to our open source git repository. Code and links to datasets are available in the project repository. In addition, answers to frequently asked questions about the details of extracting the contextualized word embeddings are documented. The extracted embeddings for the stimuli take up approximately â¼ 50GB memory.
References Arrington-Sanders, R.; Oidtman, J.; Morgan, A.; Harper, G.; Trent, M.; and Fortenberry, J. D. 2015. 13. Intersecting Identities in Black Gay and Bisexual Young Men: A Potential Framework for HIV Risk. Journal of Adolescent Health 56(2): S7âS8.
AÃenmacher, M.; and Heumann, C. 2020. On the com- parability of pre-trained language models. arXiv preprint arXiv:2001.00781 .
Bender, E. M.; Gebru, T.; McMillan-Major, A.; and Shmitchell, S. 2021. On the dangers of stochastic parrots: Can language models be too big. Proceedings of FAccT .
Blodgett, S. L.; Barocas, S.; Daumé III, H.; and Wallach, H. 2020. Language (Technology) is Power: A Critical Survey of" Bias" in NLP. arXiv preprint arXiv:2005.14050 .
Bohnet, B.; McDonald, R.; Simoes, G.; Andor, D.; Pitler, E.; and Maynez, J. 2018. Morphosyntactic tagging with a meta-bilstm model over context sensitive token encodings. arXiv preprint arXiv:1805.08237 .
Bolukbasi, T.; Chang, K.-W.; Zou, J. Y.; Saligrama, V.; and Kalai, A. T. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Advances in neural information processing systems, 4349â4357.
Borenstein, M.; Hedges, L.; and Rothstein, H. ???? Meta- analysis: Fixed effect vs. random effects .
Brown, T. B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language Models are Few-Shot Learners. arXiv preprint arXiv:2005.14165 .
Buolamwini, J.; and Gebru, T. 2018. Gender shades: In- tersectional accuracy disparities in commercial gender clas- In Conference on fairness, accountability and siï¬cation. transparency, 77â91.
Cabrera, A. A.; Kahng, M.; Hohman, F.; Morgenstern, J.; and Chau, D. H. ???? DISCOVERY OF INTERSECTIONAL BIAS IN MACHINE LEARNING USING AUTOMATIC SUBGROUP GENERATION .
Caliskan, A.; Bryson, J. J.; and Narayanan, A. 2017. Se- mantics derived automatically from language corpora contain human-like biases. Science 356(6334): 183â186.
Campolo, A.; Sanï¬lippo, M.; Whittaker, M.; and Crawford, K. 2017. AI now 2017 report. AI Now Institute at New York University .
Chelba, C.; Mikolov, T.; Schuster, M.; Ge, Q.; Brants, T.; Koehn, P.; and Robinson, T. 2013. One billion word bench- mark for measuring progress in statistical language modeling. arXiv preprint arXiv:1312.3005 .
Crenshaw, K. 1989. Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics. u. Chi. Legal f. 139.
De-Arteaga, M.; Romanov, A.; Wallach, H.; Chayes, J.; Borgs, C.; Chouldechova, A.; Geyik, S.; Kenthapadi, K.; and
Kalai, A. T. 2019. Bias in bios: A case study of semantic rep- resentation bias in a high-stakes setting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, 120â128.
DerSimonian, R.; and Kacker, R. 2007. Random-effects model for meta-analysis of clinical trials: an update. Contem- porary clinical trials 28(2): 105â114.
DerSimonian, R.; and Laird, N. 1986. Meta-analysis in clini- cal trials. Controlled clinical trials 7(3): 177â188.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for lan- guage understanding. arXiv preprint arXiv:1810.04805 .
Edunov, S.; Ott, M.; Auli, M.; and Grangier, D. 2018. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381 .
Ethayarajh, K. 2019. How contextual are contextualized word representations? comparing the geometry of BERT, ELMo, and GPT-2 embeddings. arXiv preprint arXiv:1909.00512 .
Ethayarajh, K.; Duvenaud, D.; and Hirst, G. 2019. Under- standing Undesirable Word Embedding Associations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 1696â1705.
Firth, J. R. 1957. A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis .
Floridi, L.; and Chiriatti, M. 2020. GPT-3: Its nature, scope, limits, and consequences. Minds and Machines 30(4): 681â 694.
Garg, N.; Schiebinger, L.; Jurafsky, D.; and Zou, J. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sci- ences 115(16): E3635âE3644.
Ghavami, N.; and Peplau, L. A. 2013. An intersectional analy- sis of gender and ethnic stereotypes: Testing three hypotheses. Psychology of Women Quarterly 37(1): 113â127.
Gonen, H.; and Goldberg, Y. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862 .
Greenwald, A. G.; and Banaji, M. R. 1995. Implicit social cognition: attitudes, self-esteem, and stereotypes. Psycholog- ical review 102(1): 4.
Greenwald, A. G.; McGhee, D. E.; and Schwartz, J. L. 1998. Measuring individual differences in implicit cognition: the implicit association test. Journal of personality and social psychology 74(6): 1464.
Greenwald, A. G.; Nosek, B. A.; and Banaji, M. R. 2003. Understanding and using the implicit association test: I. An improved scoring algorithm. Journal of personality and social psychology 85(2): 197.
Hancock, A.-M. 2007. When multiplication doesnât equal quick addition: Examining intersectionality as a research paradigm. Perspectives on politics 5(1): 63â79.
Hare-Mustin, R. T.; and Marecek, J. 1988. The meaning of difference: Gender theory, postmodernism, and psychology. American psychologist 43(6): 455. Harris, Z. S. 1954. Distributional structure. Word 10(2-3): 146â162. Hatzivassiloglou, V.; and McKeown, K. 1997. Predicting the semantic orientation of adjectives. In 35th annual meet- ing of the association for computational linguistics and 8th conference of the european chapter of the association for computational linguistics, 174â181. Hedges, L. V. 1983. A random effects model for effect sizes. Psychological Bulletin 93(2): 388. Hedges, L. V.; and Olkin, I. 2014. Statistical methods for meta-analysis. Academic press. Hedges, L. V.; and Vevea, J. L. 1998. Fixed-and random- effects models in meta-analysis. Psychological methods 3(4): 486. Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term memory. Neural computation 9(8): 1735â1780. Huang, M.; Naser-Tavakolian, K.; Clifton, M.; Franceschi, A. M.; Kim, D.; Zhang, J. Z.; and Schweitzer, M. 2019. Gen- der Differences in Article Citations by Authors from Ameri- can Institutions in Major Radiology Journals. Cureus 11(8). Hurtado, A.; and Sinha, M. 2008. More than men: Latino feminist masculinities and intersectionality. Sex Roles 59(5- 6): 337â349. Hutchinson, B.; Prabhakaran, V.; Denton, E.; Webster, K.; Zhong, Y.; and Denuyl, S. 2020. Social Biases in NLP Mod- els as Barriers for Persons with Disabilities. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, 5491â5501. Kahn, A. S.; and Yoder, J. D. 1989. The psychology of women and conservatism: Rediscovering social change. Psy- chology of Women Quarterly 13(4): 417â432. Kurita, K.; Vyas, N.; Pareek, A.; Black, A. W.; and Tsvetkov, Y. 2019. Quantifying Social Biases in Contextual Word Representations. In 1st ACL Workshop on Gender Bias for Natural Language Processing. Landis, F.; Lewontin, R.; Desnoyers, L.; Mergler, D.; and Weston, A. 1982. CIA psychological warfare operations. Case Studies in Chile, Jamaica and Nicaragua. Science for the People (14): 6â11. May, C.; Wang, A.; Bordia, S.; Bowman, S. R.; and Rudinger, R. 2019. On measuring social biases in sentence encoders. arXiv preprint arXiv:1903.10561 . Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G. S.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, 3111â3119. Montgomery, D. C.; and Runger, G. C. 2010. Applied statis- tics and probability for engineers. John Wiley & Sons. Nadeem, M.; Bethke, A.; and Reddy, S. 2020. StereoSet: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456 .
Nosek, B. A.; Banaji, M. R.; and Greenwald, A. G. 2002. Harvesting implicit group attitudes and beliefs from a demon- stration web site. Group Dynamics: Theory, Research, and Practice 6(1): 101. Osgood, C. E. 1964. Semantic differential technique in the comparative study of cultures. American Anthropologist 66(3): 171â200.
Parada, M. 2016. Ethnolinguistic and gender aspects of Latino naming in Chicago: Exploring regional variation. Names 64(1): 19â35. Pennington, J.; Socher, R.; and Manning, C. D. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 1532â1543. Peters, M. E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; and Zettlemoyer, L. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365 . Radford, A.; Narasimhan, K.; Salimans, T.; and Sutskever, I. 2018. Improving language understanding by generative pre- training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language un- derstanding paper. pdf . Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; and Sutskever, I. 2019. Language models are unsupervised multi- task learners. OpenAI Blog 1(8): 9. Raghavan, M.; and Barocas, S. ???? Challenges for miti- gating bias in algorithmic hiring. 2019. URL https://www. brookings. edu/research/challenges-for-mitigating-bias-in- algorithmic-hiring . Rice, M. E.; and Harris, G. T. 2005. Comparing effect sizes in follow-up studies: ROC Area, Cohenâs d, and r. Law and human behavior 29(5): 615â620. Riloff, E.; and Wiebe, J. 2003. Learning extraction patterns for subjective expressions. In Proceedings of the 2003 confer- ence on Empirical methods in natural language processing, 105â112.
Rosenthal, R.; and DiMatteo, M. R. 2002. Metaanalysis. Stevensâ handbook of experimental psychology . Strubell, E.; Ganesh, A.; and McCallum, A. 2019. Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243 . Swinger, N.; De-Arteaga, M.; Heffernan IV, N. T.; Leiserson, M. D.; and Kalai, A. T. 2019. What are the biases in my word embedding? In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 305â311. Tan, Y. C.; and Celis, L. E. 2019. Assessing social and inter- sectional biases in contextualized word representations. In Advances in Neural Information Processing Systems, 13209â 13220.
Tenney, I.; Xia, P.; Chen, B.; Wang, A.; Poliak, A.; McCoy, R. T.; Kim, N.; Van Durme, B.; Bowman, S. R.; Das, D.; et al. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. arXiv preprint arXiv:1905.06316 .
Thomas, V. G.; and Miles, S. E. 1995. Psychology of Black women: Past, present, and future. .
Toney, A.; and Caliskan, A. 2020. ValNorm: A New Word Embedding Intrinsic Evaluation Method Reveals Valence Biases are Consistent Across Languages and Over Decades. arXiv preprint arXiv:2006.03950 . Toney, A.; Pandey, A.; Guo, W.; Broniatowski, D.; and Caliskan, A. 2020. Pro-Russian Biases in Anti-Chinese arXiv preprint Tweets about arXiv:2004.08726 . Turney, P. D.; and Littman, M. L. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems (TOIS) 21(4): 315â346.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Å.; and Polosukhin, I. 2017. Attention is all you need. In Advances in neural information processing systems, 5998â6008. Voigt, R.; Jurgens, D.; Prabhakaran, V.; Jurafsky, D.; and Tsvetkov, Y. 2018. RtGender: A corpus for studying differ- ential responses to gender. In Proceedings of the Eleventh International Conference on Language Resources and Evalu- ation (LREC 2018). Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R. R.; and Le, Q. V. 2019. Xlnet: Generalized autoregres- sive pretraining for language understanding. In Advances in neural information processing systems, 5754â5764. Zhao, J.; Wang, T.; Yatskar, M.; Cotterell, R.; Ordonez, V.; and Chang, K.-W. 2019. Gender bias in contextualized word embeddings. arXiv preprint arXiv:1904.03310 . Zhao, J.; Zhou, Y.; Li, Z.; Wang, W.; and Chang, K.-W. 2018. Learning gender-neutral word embeddings. arXiv preprint arXiv:1809.01496 . Zhu, Y.; Kiros, R.; Zemel, R.; Salakhutdinov, R.; Urtasun, R.; Torralba, A.; and Fidler, S. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, 19â27. | {
"id": "1809.01496"
} |
2006.03274 | GMAT: Global Memory Augmentation for Transformers | Transformer-based models have become ubiquitous in natural language
processing thanks to their large capacity, innate parallelism and high
performance. The contextualizing component of a Transformer block is the
$\textit{pairwise dot-product}$ attention that has a large $\Omega(L^2)$ memory
requirement for length $L$ sequences, limiting its ability to process long
documents. This has been the subject of substantial interest recently, where
multiple approximations were proposed to reduce the quadratic memory
requirement using sparse attention matrices. In this work, we propose to
augment sparse Transformer blocks with a dense attention-based $\textit{global
memory}$ of length $M$ ($\ll L$) which provides an aggregate global view of the
entire input sequence to each position. Our augmentation has a manageable
$O(M\cdot(L+M))$ memory overhead, and can be seamlessly integrated with prior
sparse solutions. Moreover, global memory can also be used for sequence
compression, by representing a long input sequence with the memory
representations only. We empirically show that our method leads to substantial
improvement on a range of tasks, including (a) synthetic tasks that require
global reasoning, (b) masked language modeling, and (c) reading comprehension. | http://arxiv.org/pdf/2006.03274 | Ankit Gupta, Jonathan Berant | cs.LG, cs.CL, stat.ML | null | null | cs.LG | 20200605 | 20200605 | 0 2 0 2 n u J 5 ] G L . s c [
1 v 4 7 2 3 0 . 6 0 0 2 : v i X r a
# GMAT: Global Memory Augmentation for Transformers
# Ankit Gupta Tel Aviv University [email protected]
# Jonathan Berant Tel Aviv University, Allen Institute for AI [email protected]
# Abstract
Transformer-based models have become ubiquitous in natural language processing thanks to their large capacity, innate parallelism and high performance. The contex- tualizing component of a Transformer block is the pairwise dot-product attention that has a large Q(L?) memory requirement for length L sequences, limiting its ability to process long documents. This has been the subject of substantial interest recently, where multiple approximations were proposed to reduce the quadratic memory requirement using sparse attention matrices. In this work, we propose to augment sparse Transformer blocks with a dense attention-based global memory of length M (« L) which provides an aggregate global view of the entire input sequence to each position. Our augmentation has a manageable O(M - (L + M)) memory overhead, and can be seamlessly integrated with prior sparse solutions. Moreover, global memory can also be used for sequence compression, by represent- ing a long input sequence with the memory representations only. We empirically show that our method leads to substantial improvement on a range of tasks, includ- ing (a) synthetic tasks that require global reasoning, (b) masked language modeling, and (c) reading comprehension.
# Introduction
The Transformer architecture [26] has been widely successful in achieving state-of-the-art perfor- mance on a wide range of natural language processing (NLP) tasks, including machine translation [4], language modeling [25], question-answering [8], and many more. In particular, Transform- ers pre-trained on large amounts of text with a language modeling (LM) objective, have become the de-facto standard in NLP, exhibiting surprising amounts of linguistic and world knowledge [18, 3, 13, 12, 19, 6, 24]. Moreover, Transformers with tailored attention patterns have been success- fully used to replace convolutions in computer vision [16, 15], and have also been useful in music generation [7], symbolic mathematics [11] and other modalities.
One of the most powerful features in Transformers is (pairwise) self-attention, where all positions in an input sequence aggregate information from the entire sequence in parallel. However, this requires computing a similarity score for all pairs of positions simultaneously, leading to a â¦(L2) memory requirement for length L sequences, which is prohibitively expensive for long sequences. To alleviate this issue, several sparsiï¬cations of vanilla self-attention have been recently proposed; each restricting the number of positions that a given position can attend to [28, 2, 21, 30, 20, 9, 1]. For example, in BLOCKBERT [20], the sequence is split into L M chunks of length M and positions in chunk i only attend to positions in chunk Ï(i) for some pre-determined permutation Ï, thereby having a O(M · L) memory requirement. The REFORMER [9] uses locality-sensitive hashing (LSH) to arrange similar vectors close to one another and then chunks them. Each chunk then attends to only a couple of chunks leading to a O(M · L) memory requirement. While such sparsiï¬cations often lead to performance that is comparable to vanilla Transformers, they have some undesirable consequences:
Preprint. Under review.
â= (a) decompression Xa x compressed contextualization (x) =n =W+P W: word embeds P: positional embeds memory main input
Figure 1: (a) GMAT: For each attention head, tokens of the main sequence X (blue) use approximate/sparse attention for attending to X, and vanilla attention for attending to the global memory XM . Tokens of XM (red) attend to all tokens using vanilla attention. (b) chunked self-attention: tokens of the main sequence use vanilla attention for attending to their respective chunk and global memory, but do not attend to other chunks. (c) GMAT for sequence compression for a model with N = Nc + Nm + Nd layers.
⢠A position can require many layers to accumulate information from the entire input, and thus struggle when aggregating global information is necessary. For example, in §3.1, we show that a LSH Transformer (REFORMER without reversible connections) struggles on a simple tagging task that requires information from the entire sequence.
⢠Most sparsiï¬cation schemes pose an inductive bias based on the locality of natural language, by restricting a position to only attend to its nearby tokens. While this is often a reasonable assumption, it raises several concerns. First, it is trivial to construct examples where locality is violated. For example, vanilla attention is invariant to input permutation, and thus can handle tasks such as âword deshufï¬ingâ, where a randomly shufï¬ed sequence needs to be mapped to the original order. On the other hand, any locality-based inductive bias would be detrimental. Second, progress in natural language understanding has led to increasing interest in handling global dependencies in long documents and even entire books [10], where a locality-based inductive bias is sub-optimal.
In this work, we propose Global Memory Augmentation for Transformers (GMAT). We augment sparse variants of the Transformer with a small global memory which is read and updated by all the positions using vanilla attention. Specifically, we prefix every input sequence (of length L) with a list of M memory tokens. At each multi-head attention layer, for each head (Figure[Ih), the L tokens! of the main sequence attend to other tokens of the main sequence using any sparse variant of attention, whereas they attend to the Z memory tokens using vanilla dense attention. Moreover, the 14 memory tokens attend to all M + L tokens using vanilla attention. This results in a O(M - (L + M)) memory overhead which is manageable for / < L. Because the number of parameters in Transformers does not depend on the length of the input (modulo learned positional embeddings), the number of parameters grows by only a negligible MZ - E parameters, for an embedding size FE.
We propose also to use GMAT for sequence compression (Figure [Ip). After encoding an input sequence with NV, GMAT layers, we discard the vectors corresponding to the main sequence X, and keep only the global memory vectors, which are now a compressed representation of the entire input. The memory vectors are then processed and decompressed using Ny layers back to the original input length. The sequence can now be stored using only //(« L) vectors, and decompression is done with a small number (Vz) of GMAT layers.
We evaluate GMAT on a wide range of tasks and show: (a) large improvements on synthetic tasks where global reasoning is required, (b) it improves masked langauge modeling (MLM) accuracy (used in Transformer pre-training), (c) improvements on two reading comprehension (RC) tasks, and last (d) moderate reductions in MLM and RC performance when using GMAT for compression.
To summarize, GMAT is a simple extension of the Transformers architecture, that can be seamlessly combined with any sparse attention variant. We show GMAT is useful for reducing memory require- ments as well as for sequence compression, and demonstrate performance enhancements on a wide range of tasks. Our code and data can be downloaded from https://github.com/ag1988/gmat.
1For brevity, we use the word token to refer both to the input token and its contextualized representation interchangeably.
2
# 2 Global-Memory Augmented Transformers
A Transformer [26] is a stack of layers each consisting of sub-layers such as multi-head attention, feed-forward, etc. Its contextualizing component is the multi-head attention deï¬ned as follows.
Multi-head Attention Given a query Q â RLQÃd, key K â RLK Ãd and value V â RLK Ãd, the output of scaled dot-product attention is deï¬ned as:
âon(O.K QKt Attention(Q, K,V) = softmax V. qd) vd
In multi-head attention, instead of computing a single attention output with dmodel dimensional keys, queries, and values, these are linearly projected down in parallel h times to d = dmodel/h dimensions, using different learned projection matrices. Attention is applied to each of the h new queries, keys and values, yielding d dimensional outputs which are concatenated and again projected to obtain the dmodel-dimensional output. The attention function (Eq. 1) requires the computation of QK T containing LQ · LK entries and can be expensive for long sequences. To alleviate this issue, sparse attention variants [2, 30, 20, 9, 1] relax this requirement and compute only a few entries of QK T , masking out the rest. For a binary mask2 B â {0, ââ}LQÃLK ,
oT SparseAttention(Q, K,V, B) = softmax (c + B) V. (2)
Global Memory As explained in §1, sparse attention variants have some undesirable properties. To remedy this, we augment such models with a small global memory which is read and updated by all the positions using vanilla attention (Figure 1a). Speciï¬cally, we preï¬x every token sequence X (of length L) with a sequence of M memory tokens [m1],..., [mM ]. At each multi-head attention layer of the model, at each head, the L representations X corresponding to the tokens of the main sequence attend to the other positions in X using any sparse attention variant, but attend to the representations of all memory tokens XM normally (Eq. 3). Moreover, the memory tokens attend to all the M + L tokens normally.
oe. . Xx x B ] S . xX Xx X = SparseAttention (x [a ; [a ; 0 ) ; Xm = Attention (x0 [a ; a) (3)
(3) This results ina O(.M - (L + M)) memory overhead (manageable for M < L). Moreover, this does not add any parameters to the model, except for a negligible / - E parameters used to embed the MZ new memory tokens with an embedding size of E.
Chunked self-attention To explore the limits of GMAT and highlight its ability to contextualize over multiple fragments via a memory, we work with chunked self-attention (Figure 1b), a simple sparsiï¬cation method. In C à k chunked self-attention (Figure 1b), a sequence of length C · k is partitioned into k contiguous chunks of length C. Each token within a given chunk uses vanilla (multi-head) attention to attend to tokens in its chunk in addition to the global memory but does not attend to other chunks. Hence, chunks interact with each other only via the memory. Without memory, training with C à k attention is equivalent to training with vanilla Transformer over a length-C sequence. While more complex sparsiï¬cation schemes are possible, this setup focuses on the ability to contextualize disjoint segments through the global memory only. Note that a single model can be used with different values of C and k, as Transformer models are invariant to the length of their input, aside for positional embeddings, which we handle below. We use the notation (C à k, M ) to denote a chunked self-attention model where the input sequence X has length C · k with a global memory of size M .
Positional Embeddings As attention is invariant to order, it is important to supply the model with positional information corresponding to the individual tokens. Rather than have a distinct learnable vector for each position [26], we represent a position p as a tuple (q,7) where r = p (mod 512), and q = |p/512|. Each 0 < r < 512 and 0 < q < 64 has a distinct learnable vector. The positional
2The sparsity of B can be leveraged via customized implementations of matrix product [2, 1]. 3These particular values allow us to initialize the vectors for r with the learned 512 positional embeddings of pre-trained LMs such as BERT.
3
.
embedding of p is represented by the sum of the vectors corresponding to q and r and allows us to model positions up to 215. Memory tokens have a ï¬xed position, and thus positional embeddings are used only for the main sequence X.
# 2.1 Sequence Compression
Contextualized word representations [18, 3] improve performance compared to ï¬xed word embed- dings such as GloVe [17]. Unfortunately, some large models that compute contextualized repre- sentations do not ï¬t on popular GPUs and need specialized hardware [22]. Instead of leaving this computation to the users, an appealing option might be to release pre-computed contextualized representations, at least for popular benchmarks, similar to word embeddings [17]. However, storing a vector for each position in a large corpus is expensive. A second natural motivation for GMAT is for sequence compression, i.e. using a small memory to represent a long sequence. This can dramatically reduce the overhead in storing pre-computing contextualized representations for large corpora.
Consider a N -layer GMAT with a memory of size M . We apply the N model layers in 3 steps as shown in Figure 1c. Given a length L input sequence, let W and P denote its word and positional embeddings. First, the bottom Nc layers are applied for compressing the entire information of the input sequence into just M memory vectors X (c) M . The next Nm layers are then applied only on the M memory vectors resulting in richer representations X (m) M . This length M sequence is restored to the original length M + L by concatenating the positional embeddings P and, ï¬nally, the remaining Nd = N â Nc â Nm layers are applied for decompressing the information packed in X (m) M into the ï¬nal representations. Here, the positional embeddings P act as queries for restoring the original input information from X (m) M .
The M(< L) vectors X (rr) can be efficiently serialized for later use and are decompressed using minimal post-processing into representations of length L. In §4.3]and §4.4] we show that using the decompressed representations leads to only a small performance degradation on masked language modeling and reading comprehension. We use positional embeddings P in the decompression step and not the contextualized representation X (© (Figure|I}), since we want the output to depend only on M vectors instead of M + L.
Comparison to Longformer In contemporaneous work [1], the authors demonstrate the effective- ness of a sparse sliding window attention pattern combined with a global memory on multiple natural understanding tasks. In contrast to our work, the contribution of the global memory component is not evaluated, and is designed on a task-by-task basis. Moreover, attention scores for the memory and the main sequence are computed using separate sets of projection matrices, thereby introducing new parameters. In comparison, we demonstrate the utility of a global memory for various sparse attention patterns on synthetic, NLP and compression tasks, introducing a minimal set of new parameters.
# 3 Global Reasoning on Synthetic Data
We consider two synthetic datasets, where global reasoning over the entire input is required.
# 3.1 Majority Tagging
Recently proposed sparse attention schemes exploit the locality of natural language (§1). One exception is locality sensitive hashing (LSH) attention [9]. While in LSH attention, tokens are not bound to attend only within their proximity, it does limit the number of positions a token can attend to at each head. We examine the utility of adding GMAT to a LSH Transformer [9], that is, a Transformer that uses LSH attention, for a sequence tagging task.
MAJORITY(L, p): Let X = (x1, . . . , xL) be a sequence of L integers from the set {1, . . . , 2p}. Let Ni denote the number of occurrences of i in X. For every even integer i in the domain, deï¬ne
i â i> Ni mnaj(i~ 1) = maj(i) = { 1 N12 Mi i otherwise
.
Then the majority sequence is deï¬ned to be (maj(x1), . . . , maj(xL)). Note that for p = 1, the task requires tagging all tokens of X by their mode.
4
(a) 2-layers (b) 12-layers
i Sl 0 1 2 3 4 5 taining step ( x10")
â (11792, p=4, M=0) â (41792, p=4, M=8) 0 2 4 6 taining step ( x10")
Figure 2: Evaluation exact match (EM) of LSH Transformer on the MAJORITY(L, p) task of §3.1. M denotes the memory size. Models in the same ï¬gure use the same hyperparameters.
To create the data, we sampled the elements of X independently and uniformly from the domain. After training a 2-layer LSH Transformer on the above task (Figure 2a) we compared its performance with and without a global memory of size M = 8. Hyperparamaters for LSH attention (bucket size, rounds of hashing, etc.) were used as suggested by [9] and the other hyperparameters (hidden size, etc.) were taken from BERT-base (without dropout). As shown in Table 1, we found that LSH attention failed to do well on the MAJORITY(8192, 1) task. On much shorter inputs of length 512, it managed to do well on MAJORITY(512, 1) but again struggled on a slightly more complex task of MAJORITY(512, 3). Conversely, GMAT obtains near perfect performance on these tasks.
[
M = 0 M = 8 0.15 0.98 0.86 0.98
Table 1: Evaluation exact match (EM) of a 2-layer LSH Transformer on majority tagging. EM for an example is 1 iff every position is tagged correctly. M denotes the memory size.
To determine if GMAT also leads to lower sample complexity in deeper models, we trained (Figure 2b) a 12-layer model on MAJORITY(1792, 4) and found GMAT obtains better performance with lower sample complexity. This suggests that in LSH attention, while a certain level of sparsity works well when information is mostly local, it can be too restrictive for inherently global tasks.
# 3.2 Numerical Reasoning Over Text
Having shown the utility of GMAT on a combinatorial task, we now transition towards language tasks. We create a pseudo-textual task that requires global mathematical reasoning by generating examples from a rule-based generator from [5]. The generator generates examples that include a passage P , describing a sequence of related events, and a question Q that requires aggregating information from various parts of P and performing numerical reasoning to arrive at a numeric answer A (see Table 2).
P: The commander recruited 358 households and 9669 Italian troops. The commander lost 812 of the households. The commander recruited 542 households in France. The commander recruited 3075 households in France . The commander recruited 2843 households and 5413 Native American troops. Q: How many more Italian troops did the commander have than Native American troops?
Table 2: An example from the textual synthetic data used in §3.2.
Following [5], we train a generative encoder-decoder model, GENBERT, on the generated data after replacing the encoder self-attention [26] with chunked self-attention (§2), and compare the performance before and after GMAT augmentation (see training and data details in §A.1). We summarize the results in Table 3. Compared to vanilla attention over the entire input (i.e., (140Ã1, 0)), chunking the encoder input into 2 chunks signiï¬cantly reduced the performance (i.e., (70 à 2, 0)), in line with the global nature of the task. Surprisingly, adding a global memory of size 30 reduced accuracy even further. We hypothesize this is due to the strict weight-tying strategy employed by GENBERT, where the parameters of the Transformer encoder and decoder are tied, leading to underï¬tting. To handle that, we untie the parameters of the projection matrices that update the memory representations XM in all attention heads (Eq. 3, right), initializing them randomly. This separates the attention heads that update the main sequence from the heads that update the memory. In this setup, accuracy improved substantially, almost recovering the performance of vanilla attention.
5
(70 Ã 2, 0) 62.2 (70 Ã 2, 30) 51.4 (70 Ã 2, 30) (untied) 86.7 (140 Ã 1, 0) 93.9
Table 3: Evaluation exact match (EM) of GENBERT on the textual synthetic data. EM for a sample is 1 iff every digit of the desired number is predicted correctly. All models use same hyperparamters.
(a) Wikipedia (random init) (b) Wikipedia (BERT init) (c) PG19 (BERT init)
â (512 x 1,0) â (5124.64) (2024 x 1,0) 507 s 06 § os oa 03 ° 1 2 3 4 training step ( x105 )
0294 = 612%4,0) 0293 â (812x404) = (1024x2,0 (1024%2,0) mad â, 5 0291 $ 0290 0289 o2e8 24 26 «20 «30~=«C«S training step ( x105)
|g a04 555 â @12x4,0) â @i2x4,64) = 024 x2, 0) 5 0.320 $ oste 0316 24 «26 «+20 +90 a2 24 training step ( x10° )
Figure 3: Evaluation error for the MLM task in 3 different setups.
# 4 Masked Language Modeling
One of the biggest success stories of Transformers is as an architecture for pre-training LMs. We now investigate pre-training GMAT with a masked language modeling objective, as a memory-efï¬cient replacement for models such as BERT [3, 13]. Past work has shown strong correlation between performance on the MLM task and that on downstream applications [13, 12]. For our experiments, we use the BERT-base architecture [3] after making the modiï¬cations described in §2.
We form examples by sampling sequences of length L from English Wikipedia and the PG19 dataset, and replacing sub-words with the [MASK] token following the procedure in [3] (details in §A.2). The model is trained to maximize the log probability of the masked out tokens. We evaluate the error of the model as the fraction of tokens predicted incorrectly, and the MLM âperplexityâ as the reciprocal of the geometric mean of probabilities of all masked out tokens.4 PG19 contains 29K long books, and is thus likely to beneï¬t from modeling long context, while in Wikipedia most articles are short and can ï¬t into the 512 word pieces that are the input of BERT. We experiment with training a Transformer from scratch, as well as initializing with BERT-base.
# 4.1 Random Initialization
As shown in Figure 3a, we train 3 models on Wikipedia. The setting (512 Ã 1, 0) corresponds to standard MLM training on instances of length 512 without global memory. Similarly, (1024 Ã 1, 0) denotes training with vanilla attention over a context of size 1024, incurring a large memory penalty.5 Lastly, in (512 Ã 4, 64), a 2048-long context is chunked into four 512-chunks that have to interact via a global memory of size 64. Increasing the context size to 1024 improves MLM performance on Wikipedia (Table 4). Using global memory improves sample complexity and performance compared to training on 512-long instances, albeit only moderately. Thus, the chunks are able to leverage global memory to exchange information, alleviating the context fragmentation problem [28].
setting best evaluation error / perplexity (512 Ã 1, 0) 33.67 / 5.14 (512 Ã 4, 64) 33.25 / 5.11 (1024 Ã 1, 0) 31.53 / 4.64
Table 4: MLM training on Wikipedia from random initialization for 431K steps. Error denotes the percentage of masked tokens predicted incorrectly.
# 4.2 BERT Initialization
To show that GMAT can be easily integrated into existing pre-trained LMs, we take a pre-trained BERT-base model, and further train it using GMAT. Because BERT was pre-trained on Wikipedia, improving performance on Wikipedia itself could be difï¬cult, as it already has high conï¬dence on tokens from this corpus. Hence, we also train on PG19, which was not part of BERTâs training data.
4Equivalently, the natural exponential of the average loss over the development set. 5We do not train in the (2048 Ã 1, 0) setting due to memory constraints.
6
Table 5 summarizes the results. On Wikipedia, increasing the context size to 1024 provides a signiï¬cant improvement (Figure 3b), but global memory does not improve performance compared to standard MLM training on 512-long instances. However, on PG19 (Figure 3c) using global memory substantially improves perplexity from 4.4 â 4.35, closing roughly half the gap from using a context of size 1024, which obtains an MLM perplexity of 4.3. This hints that the lack of improvement on the Wikipedia data might be due to the fact that BERT was pre-trained on Wikipedia.
setting Wikipedia PG19 BERT (no training) 35.2 / 6.856 42.5 / 10.57 (512 Ã 4, 0) 29.163 / 3.953 31.90 / 4.40 (512 Ã 4, 64) 29.15 / 3.956 31.72 / 4.35 (1024 Ã 2, 0) 28.74 / 3.87 31.44 / 4.30 (8 Ã 64, 0) 53.11 / 20.68 - (8 Ã 64, 64) 32.98 / 4.94 -
Table 5: Evaluation error / perplexity for MLM training.. Models are initialized with BERT-base, except for (8 Ã 64, 0), (8 Ã 64, 64), which are initialized with the trained models (512 Ã 4, 0), (512 Ã 4, 64) respectively.
The above results indicate that disjoint text segments can exchange useful information via global memory. However, because natural language has a locality bias, the utility of memory diminishes as the chunk length C increases. To determine the efï¬cacy of GMAT when C is small, where contextualization should highly depend on the memory, we experiment with chunks of size 8. As expected, without access to a reasonably-large surrounding context, the model (8 à 64, 0) fails to predict masked tokens (Table 5). Interestingly, a small global memory of size 64 signiï¬cantly improves performance (53.11 â 32.98 error, 20.68 â 4.94 perplexity), reaching performance that is close to (512 à 4, 0). We further evaluate the pre-trained GMAT models on reading comprehension tasks in §4.4.
# 4.3 Sequence Compression
We turn to sequence compression, where our goal is to compress a sequence of length L into M vectors that can be saved and later decompressed back into a sequence of length L, with minimal drop in performance. Using the setup described in §2.1, we use Nc compression layers, followed by Nd = N â Nc decompression layers, and train with the same data and MLM objective as above on Wikipedia. As shown in Table 6, we found that Nc = 9 outperforms Nc = 3 (which also happens to be well-aligned with our need for a small number of decompression layers). Compared to a model without compression, we observe a moderate degradation in performance (29.163 â 32.98 error, and 3.953 â 5.017 MLM perplexity), showing that a global memory of size just 64 provides a compact and useful representation for the entire sequence of length 512.
setting initialization best evaluation error / perplexity (512 Ã 4, 0) BERT 29.163 / 3.953 (512 Ã 1, 64), Nc = 3 (512 Ã 4, 64) 33.44 / 5.112 (512 Ã 1, 64), Nc = 9 (512 Ã 4, 64) 32.98 / 5.017
Table 6: MLM training on Wikipedia with compression. Compressed models were initialized with the (512 à 4, 64) model trained in §4.2 and further trained for 440K steps.
# 4.4 Reading Comprehension Performance
While MLM performance is known to correlate well with downstream applications [13, 22], we take Wikipedia-based GMAT models trained with the MLM objective in §4.2 and §4.3 and further ï¬ne-tune them on reading comprehension (RC) tasks.
SQUAD We ï¬rst ï¬ne-tune on SQUAD v1 [23], using the simple sliding-window based approach of [3]. Similar to past work [12], we limit the input size to 384 tokens, as most paragraphs are relatively short. We train all models using identical hyperparameters. Summarized in Table 7, the model (512 à 4, 64) improves performance over BERT (88.6 â 89.2 F1), indicating global memory helps even with vanilla self-attention. The performance of (512 à 4, 0) is very similar to BERT, ruling out the possibility that the performance of (512 à 4, 64) was a result of extra pre-training on Wikipedia. Surprisingly, the model (8 à 64, 64) reported 84.2 F1, a moderate drop in performance given that, with chunks of size 8, the contextualization depends almost entirely on the memory. Interestingly, the compression model with Nc = 9 reported 87.1 F1 (compared to BERTâs 88.6) an impressive score given that, after 9 layers, the information of the entire input must pass through only 64 vectors.
7
BERT 81.09 / 88.60 (512 Ã 4, 0) 80.93 / 88.47 (512 Ã 4, 64) 81.77 / 89.16 (8 Ã 64, 64) 75.64 / 84.17 (8 Ã 64, 0) 9.82 / 14.60 (512 Ã 1, 64) Nc = 3 76.59 / 84.58 (512 Ã 1, 64) Nc = 9 79.55 / 87.05
Table 7: Evaluation EM/F1 on SQUAD v1.
HOTPOTQA To investigate the utility of GMAT for long-range reasoning, we ï¬ne-tune our models on HOTPOTQA [29], an RC dataset focusing on multi-paragraph reasoning. In HOTPOTQA, examples comprise of a question Q, 2 gold paragraphs G1, G2 required for answering Q, and 8 distractor paragraphs. Each gold paragraph contains supporting facts: the sentences relevant for answering Q.
As we are interested in analyzing if models can aggregate information dispersed in a long context, we order the paragraphs such that one of G1, G2 is among the ï¬rst 3 paragraphs, and the other is among the last 3. We refer to this setting as the âgold-seperatedâ setting, denoted by subscript GS. To reuse the sliding-window setup from SQUAD, we concatenate the 10 paragraphs (details in §A.3) into a single long context D. Following [3], each input instance consists of Q concatenated with a window P of contiguous text from D whose length is limited according to the maximum sequence length allowed by the model (512 for BERT, 2048 for (512 à 4, 0) and (512 à 4, 64)). We leverage the supporting facts supervision provided in the dataset, and include a binary tagging task (denoted by SF) of independently tagging each token of the input by 1/0 indicating if it is part of a supporting fact (§A.3). Moreover, for GMAT models we replaced the input memory tokens by the ï¬rst 64 tokens of Q, as this improved performance.
model SF task included all only-comparison no 66.3 61.6 BERT yes 66.3 58.2 (512 Ã 4, 64) yes no 68.3 67.6 65.6 65.3
Table 8: F1 scores on HOTPOTQAGS development set.
The results after ï¬netuning on HOTPOTQAGS are summarized in Table 8. With the ability to contextualize over a much larger context containing both G1, G2, the model (512 à 4, 64) reported an improved performance compared to BERT (66.3 â 68.3 F1). The improvement on the âcomparisonâ type of questions is even more pronounced (61.6 â 65.6) as such questions usually require comparing quantities from each G1, G2 and are hard if either of G1, G2 is missing from the input. We omit results for (512 à 4, 0), because chunked self-attention without memory means that the chunk that contains the question has no access to the entire context, leading to low performance.
To fairly compare (512 à 4, 0) and (512 à 4, 64) and study the effect of global memory, we create a new setting, âquestion-repeatedâ (QR), in which, while forming the context D, Q is appended after each paragraph thereby ensuring that all four 512-chunks of (512 à 4, 0) have access to Q. Moreover, we repeat this experiment after creating an adversarial version ADV-HOTPOTQA of HOTPOTQA by following [14]. The results after ï¬ne-tuning on these datasets are summarized in Table 9. Global memory improves model performance on both HOTPOTQAGS+QR (66.4 â 67.4 F1) and ADV-HOTPOTQAGS+QR (64 â 64.6 F1) reafï¬rming its utility for long-range reasoning.
model SF task included HOTPOTQAGS+QR ADV-HOTPOTQAGS+QR (512 Ã 4, 0) no 65.4 62.6 yes 66.4 64.0 (512 Ã 4, 64) yes no 67.4 66.0 64.6 64.0
# Table 9: F1 on HOTPOTQAGS+QR and ADV-HOTPOTQAGS+QR development sets.
# 5 Conclusion
In this work, we proposed GMAT, a simple extension to the Transformer architecture that allows a bet- ter trade-off between compute and performance and can be naturally used for sequence compression. Our approach can be seamlessly integrated with the increasingly-popular sparse Transformer variants. We show GMAT (a) leads to performance improvements on a wide range of tasks, and (b) can be used to compress long sequences by factor of 8Ã with only a small degradation in performance.
8
# Broader Impact
Transformers have become a popular architecture for sequence processing and generation in natural language processing and outside of it. The goal of this paper it to reduce the memory requirement and thereby allow for longer sequences to be processed. Moreover, our compression technique can facilitate the use of pre-computed contextualized representations, allowing users access to an approximation of these representations even if they cannot compute the representations from scratch themselves. As such, we consider a positive impact of this work to be the ability of more users with constraints on their computational resources to use the Transformer architecture and its pre- trained representations. Moreover, being able to process long documents can open the door to new applications in natural language processing, such as multiple-document understanding, and perhaps also processing of sequences outside of NLP, for example in Biology. As Transformers are becoming ubiquitous in machine learning, naturally any negative impact that can be attributed to Transformers (fake news generation, classiï¬ers in sensitive domains such as the justice system and healthcare) are also inherited by our approach, and perhaps enhanced when long sequences need to be processed.
# Acknowledgments and Disclosure of Funding
We thank Shimi Salant, Yoav Goldberg and Mor Geva for helpful discussions and constructive suggestions. This research was partially supported by The Israel Science Foundation grant 942/16, The Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800).
# References
[1] I. Beltagy, M. E. Peters, and A. Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
[2] R. Child, S. Gray, A. Radford, and I. Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
[3] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Association for Computational Linguistics (NAACL), pages 4171â4186, Minneapolis, Minnesota, June 2019.
[4] S. Edunov, M. Ott, M. Auli, and D. Grangier. Understanding back-translation at scale. In Empirical Methods in Natural Language Processing (EMNLP), 2018.
[5] M. Geva, A. Gupta, and J. Berant. Injecting numerical reasoning skills into language models. In ACL, 2020.
[6] J. Hewitt and C. D. Manning. A structural probe for ï¬nding syntax in word representations. In North American Association for Computational Linguistics (NAACL), pages 4129â4138, 2019.
[7] C.-Z. A. Huang, A. Vaswani, J. Uszkoreit, I. Simon, C. Hawthorne, N. Shazeer, A. M. Dai, M. D. Hoffman, M. Dinculescu, and D. Eck. Music transformer. In International Conference on Learning Representations, 2019.
[8] V. Karpukhin, B. OËguz, S. Min, L. Wu, S. Edunov, D. Chen, and W.-t. Yih. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906, 2020.
[9] N. Kitaev, L. Kaiser, and A. Levskaya. Reformer: The efï¬cient transformer. In International Conference on Learning Representations, 2020.
[10] T. KoËcisk`y, J. Schwarz, P. Blunsom, C. Dyer, K. M. Hermann, G. Melis, and E. Grefenstette. The narrativeqa reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317â 328, 2018.
[11] G. Lample and F. Charton. Deep learning for symbolic mathematics. In International Conference on Learning Representations, 2020.
[12] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations, 2020.
[13] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[14] S. Min, E. Wallace, S. Singh, M. Gardner, H. Hajishirzi, and L. Zettlemoyer. Compositional questions do not necessitate multi-hop reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4249â4257, 2019.
9
[15] N. Parmar, P. Ramachandran, A. Vaswani, I. Bello, A. Levskaya, and J. Shlens. Stand-alone self-attention in vision models. In Advances in Neural Information Processing Systems, pages 68â80, 2019.
[16] N. Parmar, A. Vaswani, J. Uszkoreit, L. Kaiser, N. Shazeer, and A. Ku. Image transformer. CoRR, abs/1802.05751, 2018.
[17] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532â1543, 2014.
[18] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. Deep contex- tualized word representations. In North American Association for Computational Linguistics (NAACL), 2018.
[19] F. Petroni, T. Rocktäschel, P. Lewis, A. Bakhtin, Y. Wu, A. Miller, and S. Riedel. Language models as knowledge bases? In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019.
[20] J. Qiu, H. Ma, O. Levy, S. W.-t. Yih, S. Wang, and J. Tang. Blockwise self-attention for long document understanding. arXiv preprint arXiv:1911.02972, 2019.
[21] J. W. Rae, A. Potapenko, S. M. Jayakumar, C. Hillier, and T. P. Lillicrap. Compressive transformers for long-range sequence modelling. In International Conference on Learning Representations, 2020.
[22] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a uniï¬ed text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
[23] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP), 2016.
[24] A. Roberts, C. Raffel, and N. Shazeer. How much knowledge can you pack into the parameters of a language model? ArXiv, abs/2002.08910, 2020.
[25] A. Roy, M. Saffar, A. Vaswani, and D. Grangier. Efï¬cient content-based sparse attention with routing transformers. arXiv preprint arXiv:2003.05997, 2020.
[26] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Å. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems (NIPS), pages 5998â6008, 2017.
[27] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Fun- towicz, and J. Brew. Huggingfaceâs transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771, 2019.
[28] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754â5764, 2019.
[29] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. R. Salakhutdinov, and C. D. Manning. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In EMNLP, 2018.
[30] Z. Ye, Q. Guo, Q. Gan, X. Qiu, and Z. Zhang. Bp-transformer: Modelling long-range context via binary partitioning. arXiv preprint arXiv:1911.04070, 2019.
10
# A Supplemental Material
# A.1 Details of the Numerical Reasoning Task
For creating the textual synthetic data for the generative QA task of §3.2, we used the data generation set-up of [5] (§4.2 of [5]). Default templates and vocabulary were used to create passages containing 5 sentences. While instantiating the templates, the probability of sampling from one of the previously used values was set to 0.999 to promote inter-sentence dependencies. This gave us 629906/15K train/dev passage-question-answer triples. Among these, we only kept the samples where the answer was a number not appearing in the passage, and discarded the rest. This gave us 223067 training and 5146 evaluation instances.
We only kept the decoder/generative head of GENBERT (§3 of [5]) and allowed the decoder to attend to all the encoder outputs in the cross-attention layers. As the weights of the encoder and decoder are tied, we used segment ids 0 for the encoder input sequence and 1 for the decoder inputs.
# A.2 Data for Masked LM task
The instances for the MLM task (§4) were formed separately using 5.2M pages from English Wikipedia (October 2017 dump) and the training set of PG19 dataset containing â¼29K books from Project Gutenberg [21]. For each dataset, after appending a special symbol at the end of each document, the documents were arranged in a random order and concatenated into a single long text which was then tokenized into a list of tokens. Depending upon the input length L of the experiment (512/1024/etc) this list was chunked into full length L â 2 sequences which were then masked randomly following [3] and enclosed within [CLS] and [SEP] tokens. For each dataset, the ï¬rst 2.55B tokens (i.e. 510 à 5M) were used to form the respective training set, next 10.20M tokens (510 à 20K) the dev set and the rest were discarded.
# A.3 Finetuning on HOTPOTQA
Given the question Q and 10 arranged paragraphs Piâs, each Pi is extended by preï¬xing it with its title. Moreover, to handle yes/no questions, a special string <a> yes no </a> is also preï¬xed. The context D is formed by simply concatenating the resulting paragraphs. Following [3], given a window/chunk P from tokenized D, the corresponding instance is formed as [CLS] Q [SEP] P [SEP].
Supporting Facts Tagging Task (SF): Besides the standard span extraction loss, we also include another task using the supporting facts supervision. Contextualized representations of the model are linearly projected to 2 scores (for 0/1) per token and normalized to obtain log-probabilities. For an input, loss is computed as negative log probability of the correct tag averaged over the positions. As supporting facts positions are fewer, log-probabilities are weighted according to the respective class (0/1) size.
Adversarial data generation: After training the single-paragraph model of [14] on HOTPOTQA, for each sample in the training and development sets, we retrieved top 50 introductory paragraphs from Wikipedia according their TF-IDF similarity with the question. The 50 paragraphs were then re- ranked using the "no-answer-logit" score predicted by the trained model and 8 adversarial distractors were chosen accordingly. When evaluated on the adversarial version of the development set the performance of the trained model reduced from 64.4 â 57.8 F1. Re-training on the adversarial data increased the performance to 61.3. In both cases, we trained for 10 epochs with batch size 36, maximum sequence length 300 and learning rate 5e-5 with linear warmup proportion 0.1.
# A.4 Hyperparameters
For all our experiments, we used an older version of Hugging Faceâs Transformers library [27]. For convenience, we denote the training hyperparameters using the following abbreviations, INS: number of training instances, BSZ: number of instances in a batch, ISZ: instance size, SQL: ï¬nal input sequence length, LR: learning rate, WRM: linear LR warm-up proportion, EP: number of epochs, STP: number of optimizer steps, GAC: gradient accumulation steps, POSq: whether (y/n) q part is included in positional embeddings deï¬ned in §2.
11
The hyperparameters for majority tagging are in Table 12, for GENBERT ï¬netuning in Table 13, for MLM trainings in Table 10, for SQUAD ï¬netuning in Table 11 and for HOTPOTQA ï¬netuning in Table 14.
section §4.1 §4.1 §4.1 §4.2 §4.2 §4.2 §4.3 model(s) (512 à 1, 0) (512 à 4, 64) (1024 à 1, 0) (512 à 4, 0) (512 à 4, 64) (1024 à 2, 0) Wikipedia , PG19 (8 à 64, 0) (8 à 64, 64) (512 à 1, 64), Nc = 3 (512 à 1, 64), Nc = 9 init random random random BERT (512 à 4, 0) (512 à 4, 64) (512 à 4, 64) BSZ 60 60 60 16 24 24 24 ISZ 512 512 512 512 512 512 512 SQL 512 2048 1024 2048 512 512 512 LR 5e-5 5e-5 5e-5 2e-5 5e-5 5e-5 5e-5 WRM EP 9 9 9 0.1 0.1 0.1 0.001 5 0.01 0.01 5 5 0.01 5 STP 431K 431K 431K 350K 270K 270K 440K GAC 1 1 1 1 1 1 1 POSq n y y y y y y
Table 10: Training hyperparameters for MLM training (§4). Common parameters: INS=5M, dropout-rate=0.1, optimizer=Bert-Adam, weight-decay=0.01, max-grad-norm=1.0, seed=42. If STP speciï¬ed, training is termi- nated after STP-many optimizer steps.
model(s) all BSZ 12 ISZ 384 SQL 384 LR 3e-5 WRM EP 3 0.1 GAC 1
Table 11: Training hyperparameters for SQUAD v1 ï¬netuning (§4.4). POSq for a model is same as during its pre-training. Common parameters: maximum query length=64, window-stride-length=128, dropout-rate=0.1, optimizer=Bert-Adam, weight-decay=0.01, max-grad-norm=1.0, seed=42.
task L = 8192 , p = 1 L = 512 , p = 1, 3 L = 1792 , p = 4 num. of layers 2 2 12 INS 200K 200K 300K BSZ 4 80 8 ISZ 8192 512 1792 SQL 8192 512 1792 LR 2e-6 2e-6 2e-6 WRM EP 1 0.01 2 0.01 2 0.01 GAC 1 1 1 POSq y n y
Table 12: Training hyperparameters for majority tagging task (§3.1). Common parameters: init=random, dropout-rate=0.0, optimizer=Bert-Adam, weight-decay=0.01, max-grad-norm=1.0, seed=42.
model(s) all BSZ 68 ISZ 140 SQL 140 LR 3e-5 WRM EP 15 0.1 GAC 1
Table 13: Training hyperparameters for GENBERT ï¬netuning (§3.2). Common parameters: init=BERT, INS=223067, POSq=n, dropout-rate=0.1, optimizer=Bert-Adam, weight-decay=0.01, max-grad-norm=1.0, seed=42.
model(s) BERT (512 Ã 4, 0) (512 Ã 4, 64) (512 Ã 4, 0) (512 Ã 4, 64) dataset-setting GS GS GS + QR window-stride-length 128 128 256 BSZ 32 12 8 ISZ 512 2048 2048 SQL 512 2048 2048 LR 2.5e-5 3.5e-5 3e-5 WRM EP 3 0.1 0.1 6 0.1 4 GAC 1 1 2 POSq n y y
Table 14: Training hyperparameters for ï¬netuning on HOTPOTQA variants (§4.4). Common parameters: maximum query length=64, dropout-rate=0.1, optimizer=Bert-Adam, weight-decay=0.01, max-grad-norm=1.0, seed=42.
12 | {
"id": "2003.05997"
} |
2006.03659 | DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations | Sentence embeddings are an important component of many natural language
processing (NLP) systems. Like word embeddings, sentence embeddings are
typically learned on large text corpora and then transferred to various
downstream tasks, such as clustering and retrieval. Unlike word embeddings, the
highest performing solutions for learning sentence embeddings require labelled
data, limiting their usefulness to languages and domains where labelled data is
abundant. In this paper, we present DeCLUTR: Deep Contrastive Learning for
Unsupervised Textual Representations. Inspired by recent advances in deep
metric learning (DML), we carefully design a self-supervised objective for
learning universal sentence embeddings that does not require labelled training
data. When used to extend the pretraining of transformer-based language models,
our approach closes the performance gap between unsupervised and supervised
pretraining for universal sentence encoders. Importantly, our experiments
suggest that the quality of the learned embeddings scale with both the number
of trainable parameters and the amount of unlabelled training data. Our code
and pretrained models are publicly available and can be easily adapted to new
domains or used to embed unseen text. | http://arxiv.org/pdf/2006.03659 | John Giorgi, Osvald Nitski, Bo Wang, Gary Bader | cs.CL, cs.LG | ACL2021 Camera Ready V2 | null | cs.CL | 20200605 | 20210527 | 1 2 0 2
y a M 7 2 ] L C . s c [
4 v 9 5 6 3 0 . 6 0 0 2 : v i X r a
# DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations
John Giorgi1,5,6 Osvald Nitski2,7 Bo Wang1,4,6,7,â Gary Bader1,3,5,â 1Department of Computer Science, University of Toronto 2Faculty of Applied Science and Engineering, University of Toronto 3Department of Molecular Genetics, University of Toronto 4Department of Laboratory Medicine and Pathobiology, University of Toronto 5Terrence Donnelly Centre for Cellular & Biomolecular Research 6Vector Institute for Artiï¬cial Intelligence 7Peter Munk Cardiac Center, University Health Network â Co-senior authors {john.giorgi, osvald.nitski, gary.bader}@mail.utoronto.ca [email protected]
# Abstract
Sentence embeddings are an important com- ponent of many natural language processing (NLP) systems. Like word embeddings, sen- tence embeddings are typically learned on large text corpora and then transferred to var- ious downstream tasks, such as clustering and retrieval. Unlike word embeddings, the highest performing solutions for learning sen- tence embeddings require labelled data, limit- ing their usefulness to languages and domains where labelled data is abundant. In this pa- per, we present DeCLUTR: Deep Contrastive Learning for Unsupervised Textual Represen- tations. Inspired by recent advances in deep metric learning (DML), we carefully design a self-supervised objective for learning univer- sal sentence embeddings that does not require labelled training data. When used to extend the pretraining of transformer-based language models, our approach closes the performance gap between unsupervised and supervised pre- training for universal sentence encoders. Im- portantly, our experiments suggest that the quality of the learned embeddings scale with both the number of trainable parameters and the amount of unlabelled training data. Our code and pretrained models are publicly avail- able and can be easily adapted to new domains or used to embed unseen text.1
# Introduction
Due to the limited amount of labelled training data available for many natural language processing (NLP) tasks, transfer learning has become ubiq- uitous (Ruder et al., 2019). For some time, transfer learning in NLP was limited to pretrained word em- beddings (Mikolov et al., 2013; Pennington et al.,
2014). Recent work has demonstrated strong trans- fer task performance using pretrained sentence em- beddings. These ï¬xed-length vectors, often re- ferred to as âuniversalâ sentence embeddings, are typically learned on large corpora and then trans- ferred to various downstream tasks, such as cluster- ing (e.g. topic modelling) and retrieval (e.g. seman- tic search). Indeed, sentence embeddings have be- come an area of focus, and many supervised (Con- neau et al., 2017), semi-supervised (Subramanian et al., 2018; Phang et al., 2018; Cer et al., 2018; Reimers and Gurevych, 2019) and unsupervised (Le and Mikolov, 2014; Jernite et al., 2017; Kiros et al., 2015; Hill et al., 2016; Logeswaran and Lee, 2018) approaches have been proposed. However, the highest performing solutions require labelled data, limiting their usefulness to languages and do- mains where labelled data is abundant. Therefore, closing the performance gap between unsupervised and supervised universal sentence embedding meth- ods is an important goal.
Pretraining transformer-based language models has become the primary method for learning textual representations from unlabelled corpora (Radford et al., 2018; Devlin et al., 2019; Dai et al., 2019; Yang et al., 2019; Liu et al., 2019; Clark et al., 2020). This success has primarily been driven by masked language modelling (MLM). This self- supervised token-level objective requires the model to predict the identity of some randomly masked to- kens from the input sequence. In addition to MLM, some of these models have mechanisms for learn- ing sentence-level embeddings via self-supervision. In BERT (Devlin et al., 2019), a special classiï¬ca- tion token is prepended to every input sequence, and its representation is used in a binary classiï¬-
1https://github.com/JohnGiorgi/DeCLUTR
cation task to predict whether one textual segment follows another in the training corpus, denoted Next Sentence Prediction (NSP). However, recent work has called into question the effectiveness of NSP (Conneau and Lample, 2019; You et al., 1904; Joshi et al., 2020). In RoBERTa (Liu et al., 2019), the authors demonstrated that removing NSP dur- ing pretraining leads to unchanged or even slightly improved performance on downstream sentence- level tasks (including semantic text similarity and natural language inference). In ALBERT (Lan et al., 2020), the authors hypothesize that NSP conï¬ates topic prediction and coherence predic- tion, and instead propose a Sentence-Order Pre- diction objective (SOP), suggesting that it better models inter-sentence coherence. In preliminary evaluations, we found that neither objective pro- duces good universal sentence embeddings (see Appendix A). Thus, we propose a simple but ef- fective self-supervised, sentence-level objective in- spired by recent advances in metric learning.
Metric learning is a type of representation learning that aims to learn an embedding space where the vector representations of similar data are mapped close together, and vice versa (Lowe, 1995; Mika et al., 1999; Xing et al., 2002). In computer vision (CV), deep metric learning (DML) has been widely used for learning visual represen- tations (Wohlhart and Lepetit, 2015; Wen et al., 2016; Zhang and Saligrama, 2016; Bucher et al., 2016; Leal-Taix´e et al., 2016; Tao et al., 2016; Yuan et al., 2020; He et al., 2018; Grabner et al., 2018; Yelamarthi et al., 2018; Yu et al., 2018). Gener- ally speaking, DML is approached as follows: a âpretextâ task (often self-supervised, e.g. colouriza- tion or inpainting) is carefully designed and used to train deep neural networks to generate useful feature representations. Here, âusefulâ means a rep- resentation that is easily adaptable to other down- stream tasks, unknown at training time. Down- stream tasks (e.g. object recognition) are then used to evaluate the quality of the learned features (inde- pendent of the model that produced them), often by training a linear classiï¬er on the task using these features as input. The most successful approach to date has been to design a pretext task for learning with a pair-based contrastive loss function. For a given anchor data point, contrastive losses attempt to make the distance between the anchor and some positive data points (those that are similar) smaller than the distance between the anchor and some neg-
ative data points (those that are dissimilar) (Had- sell et al., 2006). The highest-performing methods generate anchor-positive pairs by randomly aug- menting the same image (e.g. using crops, ï¬ips and colour distortions); anchor-negative pairs are randomly chosen, augmented views of different im- ages (Bachman et al., 2019; Tian et al., 2020; He et al., 2020; Chen et al., 2020). In fact, Kong et al., 2020 demonstrate that the MLM and NSP objec- tives are also instances of contrastive learning.
Inspired by this approach, we propose a self- supervised, contrastive objective that can be used to pretrain a sentence encoder. Our objective learns universal sentence embeddings by training an en- coder to minimize the distance between the embed- dings of textual segments randomly sampled from nearby in the same document. We demonstrate our objectiveâs effectiveness by using it to extend the pretraining of a transformer-based language model and obtain state-of-the-art results on SentE- val (Conneau and Kiela, 2018) â a benchmark of 28 tasks designed to evaluate universal sentence embeddings. Our primary contributions are:
⢠We propose a self-supervised sentence-level objective that can be used alongside MLM to pretrain transformer-based language mod- inducing generalized embeddings for els, sentence- and paragraph-length text without any labelled data (subsection 5.1).
⢠We perform extensive ablations to determine which factors are important for learning high- quality embeddings (subsection 5.2).
⢠We demonstrate that the quality of the learned embeddings scale with model and data size. Therefore, performance can likely be im- proved simply by collecting more unlabelled text or using a larger encoder (subsection 5.3).
⢠We open-source our solution and provide de- tailed instructions for training it on new data or embedding unseen text.2
# 2 Related Work
Previous works on universal sentence embeddings can be broadly grouped by whether or not they use labelled data in their pretraining step(s), which we refer to simply as supervised or semi-supervised and unsupervised, respectively.
2https://github.com/JohnGiorgi/DeCLUTR
Supervised or semi-supervised The highest per- forming universal sentence encoders are pretrained on the human-labelled natural language inference (NLI) datasets Stanford NLI (SNLI) (Bowman et al., 2015) and MultiNLI (Williams et al., 2018). NLI is the task of classifying a pair of sentences (de- noted the âhypothesisâ and the âpremiseâ) into one of three relationships: entailment, contradiction or neutral. The effectiveness of NLI for training universal sentence encoders was demonstrated by the supervised method InferSent (Conneau et al., 2017). Universal Sentence Encoder (USE) (Cer et al., 2018) is semi-supervised, augmenting an un- supervised, Skip-Thoughts-like task (Kiros et al. 2015, see section 2) with supervised training on the SNLI corpus. The recently published Sen- tence Transformers (Reimers and Gurevych, 2019) method ï¬ne-tunes pretrained, transformer-based language models like BERT (Devlin et al., 2019) using labelled NLI datasets.
Unsupervised Skip-Thoughts (Kiros et al., 2015) and FastSent (Hill et al., 2016) are popular unsupervised techniques that learn sentence em- beddings by using an encoding of a sentence to predict words in neighbouring sentences. However, in addition to being computationally expensive, this generative objective forces the model to reconstruct the surface form of a sentence, which may capture information irrelevant to the meaning of a sentence. QuickThoughts (Logeswaran and Lee, 2018) ad- dresses these shortcomings with a simple discrim- inative objective; given a sentence and its context (adjacent sentences), it learns sentence represen- tations by training a classiï¬er to distinguish con- text sentences from non-context sentences. The unifying theme of unsupervised approaches is that they exploit the âdistributional hypothesisâ, namely that the meaning of a word (and by extension, a sentence) is characterized by the word context in which it appears.
Our overall approach is most similar to Sen- tence Transformers â we extend the pretraining of a transformer-based language model to produce useful sentence embeddings â but our proposed objective is self-supervised. Removing the depen- dence on labelled data allows us to exploit the vast amount of unlabelled text on the web without being restricted to languages or domains where labelled data is plentiful (e.g. English Wikipedia). Our objective most closely resembles QuickThoughts; some distinctions include: we relax our sampling to
âSample anchor(s) Sr ra Share weights a Minimize distance Paes van , . , . âSample positive(s) ia ~ i ~ â9-0 B. c. = ele Overlapping view Adjacent view Subsumed view Positive Anchor LA Span length
Figure 1: Overview of the self-supervised contrastive (A) For each document d in a minibatch objective. of size N , we sample A anchor spans per document and P positive spans per anchor. For simplicity, we illustrate the case where A = P = 1 and denote the anchor-positive span pair as si, sj. Both spans are fed through the same encoder f (·) and pooler g(·) to produce the corresponding embeddings ei = g(f (si)), ej = g(f (sj)). The encoder and pooler are trained to minimize the distance between embeddings via a contrastive prediction task, where the other em- beddings in a minibatch are treated as negatives (omit- ted here for simplicity). (B) Positive spans can overlap with, be adjacent to or be subsumed by the sampled anchor span. (C) The length of anchors and positives are randomly sampled from beta distributions, skewed toward longer and shorter spans, respectively.
textual segments of up to paragraph length (rather than natural sentences), we sample one or more positive segments per anchor (rather than strictly one), and we allow these segments to be adjacent, overlapping or subsuming (rather than strictly adja- cent; see Figure 1, B).
# 3 Model
# 3.1 Self-supervised contrastive loss
Our method learns textual representations via a contrastive loss by maximizing agreement between textual segments (referred to as âspansâ in the rest of the paper) sampled from nearby in the same document. Illustrated in Figure 1, this approach comprises the following components:
⢠A data loading step randomly samples paired anchor-positive spans from each document in a minibatch of size N . Let A be the number of anchor spans sampled per document, P be the number of positive spans sampled per anchor and i â {1 . . . AN } be the index of an arbi- trary anchor span. We denote an anchor span
and its corresponding p â {1 . . . P } positive spans as si and si+pAN respectively. This pro- cedure is designed to maximize the chance of sampling semantically similar anchor-positive pairs (see subsection 3.2).
⢠An encoder f (·) maps each token in the input spans to an embedding. Although our method places no constraints on the choice of encoder, we chose f (·) to be a transformer-based lan- guage model, as this represents the state-of- the-art for text encoders (see subsection 3.3).
⢠A pooler g(·) maps the encoded spans f (si) and f (si+pAN ) to ï¬xed-length embeddings ei = g(f (si)) and its corresponding mean positive embedding
P Ej AN = P Ball (SitpANn))
Similar to Reimers and Gurevych 2019, we found that choosing g(·) to be the mean of the token-level embeddings (referred to as âmean poolingâ in the rest of the paper) per- forms well (see Appendix, Table 4). We pair each anchor embedding with the mean of multiple positive embeddings. This strategy was proposed by Saunshi et al. 2019, who demonstrated theoretical and empirical im- provements compared to using a single posi- tive example for each anchor.
A contrastive loss function defined for a con- trastive prediction task. Given a set of embed- ded spans {e;} including a positive pair of ex- amples e; and e;4 4, the contrastive predic- tion task aims to identify e;, 4 in {ex}2: for a given e;
exp(sim(e;, e;)/T) © ayizu) « exp(sim(e:, ex)/T) (i, 7) = â log
where sim(u, v) = u2'v/||u||2||v||2 denotes the cosine similarity of two vectors u and v, 1pzn ⬠{0,1} is an indicator function evaluating to 1 if i £4 k, and 7 > 0 denotes the temperature hyperparameter.
During training, we randomly sample mini- batches of N documents from the train set and deï¬ne the contrastive prediction task on anchor- positive pairs ei, ei+AN derived from the N docu- ments, resulting in 2AN data points. As proposed in (Sohn, 2016), we treat the other 2(AN â 1) in- stances within a minibatch as negative examples. The cost function takes the following form
AN Leontrastive = oes + AN) + Li + AN, i) i=l
This is the InfoNCE loss used in previous works (Sohn, 2016; Wu et al., 2018; Oord et al., 2018) and denoted normalized temperature-scale cross- entropy loss or âNT-Xentâ in (Chen et al., 2020). To embed text with a trained model, we simply pass batches of tokenized text through the model, without sampling spans. Therefore, the computa- tional cost of our method at test time is the cost of the encoder, f (·), plus the cost of the pooler, g(·), which is negligible when using mean pooling.
# 3.2 Span sampling
We start by choosing a minimum and maxi- mum span length; in this paper, â¬min = 32 and lmax = 512, the maximum input size for many pretrained transformers. Next, a document d is to- kenized to produce a sequence of n tokens x? = (#1, @2...2n). To sample an anchor span s; from x, we first sample its length Canchor from a beta distribution and then randomly (uniformly) sample its starting position s$*"
anchor = | Panchor X x (Cmax â Emin) + Lmin st {0,..., n- âend , _ start Ss = Sj + Canchor Lanchor + d s= © cstart, send
|
We then sample p â {1 . . . P } corresponding posi- tive spans si+pAN independently following a simi- lar procedure
Lpositive = | Ppositive x (Cmax _ min) + Emin start start end SitpAN ~ {sp â Lpositives-- +, $7" } end __ start SitpAN = Sit+pAN + Cpositive SitpAN = wx PAN = Pose 59H y
|
where Panchor ~ Beta(a = 4,8 = 2), which skews anchor sampling towards longer spans, and Ppositive ~ Beta(a = 2,8 = 4), which skews positive sampling towards shorter spans (Figure 1, C). In practice, we restrict the sampling of anchor spans from the same document such that they are a minimum of 2 * â¬max tokens apart. In Appendix B, we show examples of text that has been sampled by our method. We note several carefully considered decisions in the design of our sampling procedure:
¢ Sampling span lengths from a distribution clipped at (min = 32 and max = 512 encour- ages the model to produce good embeddings for text ranging from sentence- to paragraph- length. At test time, we expect our model to be able to embed up-to paragraph-length texts.
⢠We found that sampling longer lengths for the anchor span than the positive spans improves performance in downstream tasks (we did not ï¬nd performance to be sensitive to the speciï¬c choice of α and β). The rationale for this is twofold. First, it enables the model to learn global-to-local view prediction as in (Hjelm et al., 2019; Bachman et al., 2019; Chen et al., 2020) (referred to as âsubsumed viewâ in Fig- ure 1, B). Second, when P > 1, it encourages diversity among positives spans by lowering the amount of repeated text.
⢠Sampling positives nearby to the anchor ex- ploits the distributional hypothesis and in- creases the chances of sampling valid (i.e. se- mantically similar) anchor-positive pairs.
⢠By sampling multiple anchors per document, each anchor-positive pair is contrasted against both easy negatives (anchors and positives sampled from other documents in a mini- batch) and hard negatives (anchors and posi- tives sampled from the same document).
In conclusion, the sampling procedure produces three types of positives: positives that partially overlap with the anchor, positives adjacent to the anchor, and positives subsumed by the anchor (Fig- ure 1, B) and two types of negatives: easy nega- tives sampled from a different document than the anchor, and hard negatives sampled from the same document as the anchor. Thus, our stochastically generated training set and contrastive loss implic- itly deï¬ne a family of predictive tasks which can be
used to train a model, independent of any speciï¬c encoder architecture.
# 3.3 Continued MLM pretraining
We use our objective to extend the pretraining of a transformer-based language model (Vaswani et al., 2017), as this represents the state-of-the-art encoder in NLP. We implement the MLM objective as de- scribed in (Devlin et al., 2019) on each anchor span in a minibatch and sum the losses from the MLM and contrastive objectives before backpropagating
L = Lcontrastive + LMLM
This is similar to existing pretraining strategies, where an MLM loss is paired with a sentence-level loss such as NSP (Devlin et al., 2019) or SOP (Lan et al., 2020). To make the computational require- ments feasible, we do not train from scratch, but rather we continue training a model that has been pretrained with the MLM objective. Speciï¬cally, we use both RoBERTa-base (Liu et al., 2019) and DistilRoBERTa (Sanh et al., 2019) (a distilled ver- In sion of RoBERTa-base) in our experiments. the rest of the paper, we refer to our method as DeCLUTR-small (when extending DistilRoBERTa pretraining) and DeCLUTR-base (when extending RoBERTa-base pretraining).
# 4 Experimental setup
# 4.1 Dataset, training, and implementation
Dataset We collected all documents with a min- imum token length of 2048 from OpenWebText (Gokaslan and Cohen, 2019) an open-access sub- set of the WebText corpus (Radford et al., 2019), yielding 497,868 documents in total. For refer- ence, Googleâs USE was trained on 570,000 human- labelled sentence pairs from the SNLI dataset (among other unlabelled datasets). InferSent and Sentence Transformer models were trained on both SNLI and MultiNLI, a total of 1 million human- labelled sentence pairs.
Implementation We implemented our model in PyTorch (Paszke et al., 2017) using AllenNLP (Gardner et al., 2018). We used the NT-Xent loss function implemented by the PyTorch Met- ric Learning library (Musgrave et al., 2019) and the pretrained transformer architecture and weights from the Transformers library (Wolf et al., 2020). All models were trained on up to four NVIDIA Tesla V100 16 or 32GB GPUs.
Table 1: Trainable model parameter counts and sen- tence embedding dimensions. DeCLUTR-small and DeCLUTR-base are pretrained DistilRoBERTa and RoBERTa-base models respectively after continued pretraining with our method.
Model Parameters Embedding dim. Bag-of-words (BoW) baselines GloVe fastText â â 300 300 Supervised and semi-supervised InferSent Universal Sentence Encoder Sentence Transformers 38M 147M 125M 4096 512 768 Unsupervised QuickThoughts DeCLUTR-small DeCLUTR-base 73M 82M 125M 4800 768 768
Training Unless speciï¬ed otherwise, we train for one to three epochs over the 497,868 docu- ments with a minibatch size of 16 and a temper- ature Ï = 5 à 10â2 using the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate (LR) of 5 à 10â5 and a weight decay of 0.1. For every document in a minibatch, we sample two anchor spans (A = 2) and two positive spans per anchor (P = 2). We use the Slanted Triangular LR scheduler (Howard and Ruder, 2018) with a num- ber of train steps equal to training instances and a cut fraction of 0.1. The remaining hyperparame- ters of the underlying pretrained transformer (i.e. DistilRoBERTa or RoBERTa-base) are left at their defaults. All gradients are scaled to a vector norm of 1.0 before backpropagating. Hyperparameters were tuned on the SentEval validation sets.
# 4.2 Evaluation
We evaluate all methods on the SentEval bench- mark, a widely-used toolkit for evaluating general- purpose, ï¬xed-length sentence representations. SentEval is divided into 18 downstream tasks â rep- resentative NLP tasks such as sentiment analysis, natural language inference, paraphrase detection and image-caption retrieval â and ten probing tasks, which are designed to evaluate what linguistic prop- erties are encoded in a sentence representation. We report scores obtained by our model and the rel- evant baselines on the downstream and probing tasks using the SentEval toolkit3 with default pa- rameters (see Appendix C for details). Note that
3https://github.com/facebookresearch/ SentEval
all the supervised approaches we compare to are trained on the SNLI corpus, which is included as a downstream task in SentEval. To avoid train-test contamination, we compute average downstream scores without considering SNLI when comparing to these approaches in Table 2.
4.2.1 Baselines We compare to the highest performing, most popular sentence embedding methods: InferSent, Googleâs USE and Sentence Transformers. For In- ferSent, we compare to the latest model.4 We use the latest âlargeâ USE model5, as it is most similar in terms of architecture and number of parameters to DeCLUTR-base. For Sentence Transformers, we compare to âroberta-base-nli-mean-tokensâ6, which, like DeCLUTR-base, uses the RoBERTa- base architecture and pretrained weights. The only difference is each methodâs extended pretraining strategy. We include the performance of averaged GloVe7 and fastText8 word vectors as weak base- lines. Trainable model parameter counts and sen- tence embedding dimensions are listed in Table 1. Despite our best efforts, we could not evaluate the pretrained QuickThought models against the full SentEval benchmark. We cite the scores from the paper directly. Finally, we evaluate the pretrained transformer modelâs performance before it is sub- jected to training with our contrastive objective, denoted âTransformer-*â. We use mean pooling on the pretrained transformers token-level output to produce sentence embeddings â the same pooling strategy used in our method.
# 5 Results
In subsection 5.1, we compare the performance of our model against the relevant baselines. In the remaining sections, we explore which components contribute to the quality of the learned embeddings.
# 5.1 Comparison to baselines
# Downstream task performance Compared to the underlying pretrained models DistilRoBERTa
4https://dl.fbaipublicfiles.com/ infersent/infersent2.pkl
# 5https://tfhub.dev/google/
# universal-sentence-encoder-large/5
# 6https://www.sbert.net/docs/
# pretrained_models.html
7http://nlp.stanford.edu/data/glove. 840B.300d.zip
8https://dl.fbaipublicfiles.com/ fasttext/vectors-english/crawl-300d-2M. vec.zip
Table 2: Results on the downstream tasks from the test set of SentEval. QuickThoughts scores are taken di- rectly from (Logeswaran and Lee, 2018). USE: Googleâs Universal Sentence Encoder. Transformer-small and Transformer-base are pretrained DistilRoBERTa and RoBERTa-base models respectively, using mean pooling. DeCLUTR-small and DeCLUTR-base are pretrained DistilRoBERTa and RoBERTa-base models respectively af- ter continued pretraining with our method. Average scores across all tasks, excluding SNLI, are shown in the top half of the table. Bold: best scores. â: difference to DeCLUTR-base average score. â and â denote increased or decreased performance with respect to the underlying pretrained model. *: Unsupervised evaluations.
CR MR MPQA SUBJ SST2 SST5 TREC MRPC SNLI Avg. â Bag-of-words (BoW) weak baselines 78.78 79.18 77.70 78.45 87.76 87.88 91.25 91.53 80.29 82.15 44.48 45.16 83.00 83.60 73.39/81.45 74.49/82.44 65.85 68.79 65.47 68.56 -13.63 -10.54 Supervised and semi-supervised 84.37 85.70 90.78 79.42 79.38 84.98 89.04 88.89 88.72 93.03 93.11 92.67 84.24 84.90 90.55 45.34 46.11 52.76 90.80 95.00 87.40 76.35/83.48 72.41/82.01 76.64/82.99 84.16 83.25 84.18 76.00 78.89 77.19 -3.10 -0.21 -1.91 Unsupervised 86.00 86.60 88.19 87.52 â 90.68 â 82.40 82.12 84.35 82.79 â 85.16 â 90.20 87.04 86.49 87.87 â 88.52 â 94.80 94.77 95.28 94.96 â 95.78 â 87.60 88.03 89.46 87.64 â 90.01 â â 49.50 51.27 48.42 â 51.18 â 92.40 91.60 93.20 90.80 â 93.20 â 76.90/84.00 74.55/81.75 74.20/81.44 75.36/82.70 â 74.61/82.65 â â 71.88 72.19 73.59 â 74.74 â â 72.58 72.70 77.50 â 79.10 â â -6.52 -6.40 -1.60 â SICK-E SICK-R STS-B COCO STS12* STS13* STS14* STS15* STS16* 78.89 79.01 86.30 85.37 82.97 â 81.96 80.29 83.46 â 83.84 â 72.30 72.98 83.06 81.53 79.17 â 77.51 76.84 77.66 â 78.62 â 62.86 68.26 78.48 81.50 74.28 â 70.31 69.62 77.51 â 79.39 â 0.40 0.40 65.84 62.42 60.96 60.55 60.48 60.14 60.85 â 62.35 â 53.44 58.85 62.90 68.87 64.10 â 53.99 53.28 63.66 â 63.56 â 51.24 58.83 56.08 71.70 65.63 â 45.53 46.10 68.93 â 72.58 â 55.71 63.42 66.36 72.76 69.80 â 57.23 56.17 70.40 â 71.70 â 59.62 69.05 74.01 83.88 74.71 â 65.57 64.69 78.25 â 79.95 â 57.93 68.24 72.89 82.78 72.85 â 63.51 62.79 77.74 â 79.59 â â â â â â â â â â â â â â â â â â â â â
Table 3: Results on the probing tasks from the test set of SentEval. USE: Googleâs Universal Sentence Encoder. Transformer-small and Transformer-base are pretrained DistilRoBERTa and RoBERTa-base models respectively, using mean pooling. DeCLUTR-small and DeCLUTR-base are pretrained DistilRoBERTa and RoBERTa-base models respectively after continued pretraining with our method. Bold: best scores. â and â denote increased or decreased performance with respect to the underlying pretrained model.
Model SentLen WC TreeDepth TopConst BShift Tense SubjNum ObjNum SOMO CoordInv Avg. Bag-of-words (BoW) weak baselines GloVe fastText 57.82 55.46 81.10 82.10 31.41 32.74 62.70 63.32 49.74 50.16 83.58 86.68 78.39 79.75 76.31 79.81 49.55 50.21 53.62 51.41 62.42 63.16 Supervised and semi-supervised InferSent USE Sent. Transformers 78.76 73.14 69.21 89.50 69.44 51.79 37.72 30.87 30.08 80.16 73.27 50.38 61.41 58.88 69.70 88.56 83.81 83.02 86.83 80.34 79.74 83.91 79.14 77.85 52.11 56.97 60.10 66.88 61.13 60.33 72.58 66.70 63.22 Unsupervised Transformer-small Transformer-base DeCLUTR-small (ours) DeCLUTR-base (ours) 88.62 81.96 88.85 â 84.62 â 65.00 59.67 74.87 â 68.98 â 40.87 38.84 38.48 â 38.35 â 75.38 74.02 75.17 â 74.78 â 88.63 90.08 86.12 â 87.85 â 87.84 88.59 88.71 â 88.82 â 86.68 85.51 86.31 â 86.56 â 84.17 83.33 84.30 â 83.88 â 63.75 68.54 61.27 â 65.08 â 64.78 71.32 62.98 â 67.54 â 74.57 74.19 74.71 â 74.65 â
and RoBERTa-base, DeCLUTR-small and DeCLUTR-base obtain large boosts in average downstream performance, +4% and +6% respec- tively (Table 2). DeCLUTR-base leads to improved or equivalent performance for every downstream task but one (SST5) and DeCLUTR-small for all but three (SST2, SST5 and TREC). Compared
_
to existing methods, DeCLUTR-base matches or even outperforms average performance without using any hand-labelled training data. Surprisingly, we also ï¬nd that DeCLUTR-small outperforms Sentence Transformers while using â¼34% less trainable parameters.
Probing task performance With the exception of InferSent, existing methods perform poorly on the probing tasks of SentEval (Table 3). Sentence Transformers, which begins with a pretrained trans- former model and ï¬ne-tunes it on NLI datasets, scores approximately 10% lower on the probing tasks than the model it ï¬ne-tunes. In contrast, both DeCLUTR-small and DeCLUTR-base perform comparably to the underlying pretrained model in terms of average performance. We note that the purpose of the probing tasks is not the development of ad-hoc models that attain top performance on them (Conneau et al., 2018). However, it is still in- teresting to note that high downstream task perfor- mance can be obtained without sacriï¬cing probing task performance. Furthermore, these results sug- gest that ï¬ne-tuning transformer-based language models on NLI datasets may discard some of the linguistic information captured by the pretrained modelâs weights. We suspect that the inclusion of MLM in our training objective is responsible for DeCLUTRâs relatively high performance on the probing tasks.
Supervised vs. unsupervised downstream tasks The downstream evaluation of SentEval includes supervised and unsupervised tasks. In the unsu- pervised tasks, the embeddings of the method to evaluate are used as-is without any further train- ing (see Appendix C for details). Interestingly, we ï¬nd that USE performs particularly well across the unsupervised evaluations in SentEval (tasks marked with a * in Table 2). Given the similarity of the USE architecture to Sentence Transformers and DeCLUTR and the similarity of its supervised NLI training objective to InferSent and Sentence Transformers, we suspect the most likely cause is one or more of its additional training objectives. These include a conversational response prediction task (Henderson et al., 2017) and a Skip-Thoughts (Kiros et al., 2015) like task.
# 5.2 Ablation of the sampling procedure
We ablate several components of the sampling pro- cedure, including the number of anchors sampled per document A, the number of positives sampled per anchor P , and the sampling strategy for those positives (Figure 2). We note that when A = 2, the model is trained on twice the number of spans and twice the effective batch size (2AN , where N is the number of documents in a minibatch) as compared to when A = 1. To control for this, all experi-
(a) Anchors per document (b) Positives per anchor (c) Sampling strategy DeCLUTR-base I I I | I | £ ⬠5
Figure 2: Effect of the number of anchor spans sampled per document (a), the number of positive spans sampled per anchor (b), and the sampling strategy (c). Averaged downstream task scores are reported from the valida- tion set of SentEval. Performance is computed over a grid of hyperparameters and plotted as a distribution. The grid is deï¬ned by all permutations of number of an- chors A = {1, 2}, number of positives P = {1, 2, 4}, temperatures Ï = {5 à 10â3, 1 à 10â2, 5 à 10â2} and learning rates α = {5 à 10â5, 1 à 10â4}. P = 4 is omitted for DeCLUTR-base as these experiments did not ï¬t into GPU memory.
ments where A = 1 are trained for two epochs (twice the number of epochs as when A = 2) and for two times the minibatch size (2N ). Thus, both sets of experiments are trained on the same number of spans and the same effective batch size (4N ), and the only difference is the number of anchors sampled per document (A).
We ï¬nd that sampling multiple anchors per doc- ument has a large positive impact on the qual- ity of learned embeddings. We hypothesize this is because the difï¬culty of the contrastive objec- tive increases when A > 1. Recall that a mini- batch is composed of random documents, and each anchor-positive pair sampled from a document is contrasted against all other anchor-positive pairs in the minibatch. When A > 1, anchor-positive pairs will be contrasted against other anchors and positives from the same document, increasing the difï¬culty of the contrastive objective, thus lead- ing to better representations. We also ï¬nd that a positive sampling strategy that allows positives to be adjacent to and subsumed by the anchor out- performs a strategy that only allows adjacent or subsuming views, suggesting that the information captured by these views is complementary. Finally, we note that sampling multiple positives per anchor (P > 1) has minimal impact on performance. This is in contrast to (Saunshi et al., 2019), who found both theoretical and empirical improvements when
(a) DeCLUTR-small (b) DeCLUTR-base a t I 1 t 1 t t I 3 â Both 4 --- Contrastive Mum Avg. downstream performance (%) 3 © s s ep ££ © SF SF s © ⬠e ⬠S Ss x Fraction of train set (%)
Figure 3: Effect of training objective, train set size and model capacity on SentEval performance. DeCLUTR- small has 6 layers and â¼82M parameters. DeCLUTR- base has 12 layers and â¼125M parameters. Averaged downstream task scores are reported from the valida- tion set of SentEval. 100% corresponds to 1 epoch of training with all 497,868 documents from our Open- WebText subset.
multiple positives are averaged and paired with a given anchor.
# 5.3 Training objective, train set size and model capacity
To determine the importance of the training objec- tives, train set size, and model capacity, we trained two sizes of the model with 10% to 100% (1 full epoch) of the train set (Figure 3). Pretraining the model with both the MLM and contrastive objec- tives improves performance over training with ei- ther objective alone. Including MLM alongside the contrastive objective leads to monotonic im- provement as the train set size is increased. We hypothesize that including the MLM loss acts as a form of regularization, preventing the weights of the pretrained model (which itself was trained with an MLM loss) from diverging too dramati- cally, a phenomenon known as âcatastrophic for- gettingâ (McCloskey and Cohen, 1989; Ratcliff, 1990). These results suggest that the quality of em- beddings learned by our approach scale in terms of model capacity and train set size; because the train- ing method is completely self-supervised, scaling the train set would simply involve collecting more unlabelled text.
# 6 Discussion and conclusion
In this paper, we proposed a self-supervised ob- jective for learning universal sentence embeddings. Our objective does not require labelled training data and is applicable to any text encoder. We demonstrated the effectiveness of our objective by evaluating the learned embeddings on the SentE-
val benchmark, which contains a total of 28 tasks designed to evaluate the transferability and linguis- tic properties of sentence representations. When used to extend the pretraining of a transformer- based language model, our self-supervised objec- tive closes the performance gap with existing meth- ods that require human-labelled training data. Our experiments suggest that the learned embeddingsâ quality can be further improved by increasing the model and train set size. Together, these results demonstrate the effectiveness and feasibility of re- placing hand-labelled data with carefully designed self-supervised objectives for learning universal sentence embeddings. We release our model and code publicly in the hopes that it will be extended to new domains and non-English languages.
# Acknowledgments
This by Compute Ontario by support (https://computeontario.ca/), Compute Canada (www.computecanada.ca) and the CIFAR AI Chairs Program and partially funded by the US National Institutes of Health (NIH) [U41 HG006623, U41 HG003751).
# References
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, IËnigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, German Rigau, Larraitz Uria, and Janyce Wiebe. 2015. SemEval-2015 task 2: Semantic tex- tual similarity, English, Spanish and pilot on inter- pretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252â263, Denver, Colorado. Association for Computational Linguistics.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual In Proceedings of the semantic textual similarity. 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81â91, Dublin, Ireland. As- sociation for Computational Linguistics.
Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 497â511, San Diego, California. Association for Computational Linguis- tics.
Eneko Agirre, Daniel Cer, Mona Diab, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Compu- tational Semantics â Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 385â 393, Montr´eal, Canada. Association for Computa- tional Linguistics.
Eneko Agirre, Daniel Cer, Mona Diab, Aitor Gonzalez- Agirre, and Weiwei Guo. 2013. *sem 2013 shared In Second Joint task: Semantic textual similarity. Conference on Lexical and Computational Seman- tics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 32â43.
Philip Bachman, R. Devon Hjelm, and William Buch- walter. 2019. Learning representations by maximiz- ing mutual information across views. In Advances in Neural Information Processing Systems 32: An- nual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 15509â15519.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632â642, Lisbon, Portugal. Association for Compu- tational Linguistics.
Maxime Bucher, St´ephane Herbin, and Fr´ed´eric Jurie. Improving semantic embedding consistency 2016. by metric learning for zero-shot classifï¬cation. In European Conference on Computer Vision, pages 730â746. Springer.
Daniel Cer, Mona Diab, Eneko Agirre, IËnigo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and In Proceedings crosslingual focused evaluation. of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1â14, Vancouver, Canada. Association for Computational Linguistics.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal In Proceedings of sentence encoder for English. the 2018 Conference on Empirical Methods in Nat- ural Language Processing: System Demonstrations, pages 169â174, Brussels, Belgium. Association for Computational Linguistics.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. 2020. A simple framework for contrastive learning of visual representations. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Ma- chine Learning Research, pages 1597â1607. PMLR.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: pre- training text encoders as discriminators rather than In 8th International Conference on generators. Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Alexis Conneau and Douwe Kiela. 2018. SentEval: An evaluation toolkit for universal sentence representa- tions. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670â680, Copen- hagen, Denmark. Association for Computational Linguistics.
Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Lo¨ıc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126â2136, Melbourne, Aus- tralia. Association for Computational Linguistics.
Alexis Conneau and Guillaume Lample. 2019. Cross- In Advances lingual language model pretraining. in Neural Information Processing Systems 32: An- nual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 7057â7067.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond In Proceedings of the 57th a ï¬xed-length context. Annual Meeting of the Association for Computa- tional Linguistics, pages 2978â2988, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase cor- pora: Exploiting massively parallel news sources. In COLING 2004: Proceedings of the 20th Inter- national Conference on Computational Linguistics, pages 350â356, Geneva, Switzerland. COLING.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language pro- In Proceedings of Workshop for cessing platform. NLP Open Source Software (NLP-OSS), pages 1â 6, Melbourne, Australia. Association for Computa- tional Linguistics.
Aaron Gokaslan and Vanya Cohen. 2019. Openweb- text corpus. http://Skylion007.github.io/ OpenWebTextCorpus.
Alexander Grabner, Peter M. Roth, and Vincent Lep- etit. 2018. 3d pose estimation and 3d model retrieval In 2018 IEEE Conference for objects in the wild. on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 3022â3031. IEEE Computer Society.
Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Confer- ence on Computer Vision and Pattern Recognition (CVPRâ06), volume 2, pages 1735â1742. Ieee.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. 2020. Momentum contrast for un- In 2020 supervised visual representation learning. IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 9726â9735. IEEE.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In 2016 IEEE Conference on Computer Vi- sion and Pattern Recognition, CVPR 2016, Las Ve- gas, NV, USA, June 27-30, 2016, pages 770â778. IEEE Computer Society.
Xinwei He, Yang Zhou, Zhichao Zhou, Song Bai, and Xiang Bai. 2018. Triplet-center loss for multi-view In 2018 IEEE Conference on 3d object retrieval. Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 1945â1954. IEEE Computer Society.
Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun- Hsuan Sung, L´aszl´o Luk´acs, Ruiqi Guo, Sanjiv Ku- mar, Balint Miklos, and Ray Kurzweil. 2017. Efï¬- cient natural language response suggestion for smart reply. arXiv preprint arXiv:1705.00652.
Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sen- tences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1367â1377, San Diego, California. Association for Computational Linguistics.
R. Devon Hjelm, Alex Fedorov, Samuel Lavoie- Marchildon, Karan Grewal, Philip Bachman, Adam Trischler, and Yoshua Bengio. 2019. Learning deep
representations by mutual information estimation and maximization. In 7th International Conference on Learning Representations, ICLR 2019, New Or- leans, LA, USA, May 6-9, 2019. OpenReview.net.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model ï¬ne-tuning for text classiï¬cation. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 328â339, Melbourne, Australia. Association for Computational Linguistics.
Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 168â177.
Yacine Jernite, Samuel R. Bowman, and David A. Son- tag. 2017. Discourse-based objectives for fast un- supervised sentence representation learning. CoRR, abs/1705.00557.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Associa- tion for Computational Linguistics, 8:64â77.
Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Tor- ralba, and Sanja Fidler. 2015. Skip-thought vec- tors. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Informa- tion Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 3294â3302.
Lingpeng Kong, Cyprien de Masson dâAutume, Lei Yu, Wang Ling, Zihang Dai, and Dani Yogatama. 2020. A mutual information maximization perspec- In 8th tive of language representation learning. International Conference on Learning Representa- tions, ICLR 2020, Addis Ababa, Ethiopia, April 26- 30, 2020. OpenReview.net.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised In 8th Inter- learning of language representations. national Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Quoc V. Le and Tom´as Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, volume 32 of JMLR Workshop and Conference Proceedings, pages 1188â1196. JMLR.org.
Laura Leal-Taix´e, Cristian Canton-Ferrer, and Konrad Schindler. 2016. Learning by tracking: Siamese cnn for robust target association. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 33â40.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 2014. Microsoft coco: In European confer- Common objects in context. ence on computer vision, pages 740â755. Springer.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Lajanugen Logeswaran and Honglak Lee. 2018. An efï¬cient framework for learning sentence represen- tations. In 6th International Conference on Learn- ing Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Ilya Loshchilov and Frank Hutter. 2019. Decou- In 7th Inter- pled weight decay regularization. national Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
David G Lowe. 1995. Similarity metric learning for a variable-kernel classiï¬er. Neural computation, 7(1):72â85.
Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zampar- elli. 2014. A SICK cure for the evaluation of compo- sitional distributional semantic models. In Proceed- ings of the Ninth International Conference on Lan- guage Resources and Evaluation (LRECâ14), pages 216â223, Reykjavik, Iceland. European Language Resources Association (ELRA).
Michael McCloskey and Neal J Cohen. 1989. Catas- trophic interference in connectionist networks: The sequential learning problem. In Psychology of learn- ing and motivation, volume 24, pages 109â165. El- sevier.
Sebastian Mika, Gunnar Ratsch, Jason Weston, Bern- hard Scholkopf, and Klaus-Robert Mullers. 1999. Fisher discriminant analysis with kernels. In Neural networks for signal processing IX: Proceedings of the 1999 IEEE signal processing society workshop (cat. no. 98th8468), pages 41â48. Ieee.
Tom´as Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed rep- resentations of words and phrases and their com- In Advances in Neural Information positionality. Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Pro- ceedings of a meeting held December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 3111â 3119.
Kevin Musgrave, Belongie. 2019. https://github.com/KevinMusgrave/ pytorch-metric-learning.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive pre- dictive coding. arXiv preprint arXiv:1807.03748.
Bo Pang and Lillian Lee. 2004. A sentimental edu- cation: Sentiment analysis using subjectivity sum- In Proceed- marization based on minimum cuts. ings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04), pages 271â 278, Barcelona, Spain.
Bo Pang and Lillian Lee. 2005. Seeing stars: Ex- ploiting class relationships for sentiment categoriza- In Proceed- tion with respect to rating scales. ings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACLâ05), pages 115â 124, Ann Arbor, Michigan. Association for Compu- tational Linguistics.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 1532â1543, Doha, Qatar. Association for Computational Linguistics.
Jason Phang, Thibault F´evry, and Samuel R. Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. CoRR, abs/1811.01088.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9.
Roger Ratcliff. 1990. Connectionist models of recog- nition memory: constraints imposed by learning Psychological review, and forgetting functions. 97(2):285.
Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence embeddings using Siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3982â3992, Hong Kong, China. Association for Computational Linguistics.
Sebastian Ruder, Matthew E. Peters, Swayamdipta, and Thomas Wolf. 2019. fer learning in natural language processing. Swabha Trans- In
Proceedings of the 2019 Conference of the North the Association for Com- American Chapter of putational Linguistics: Tutorials, pages 15â18, Minneapolis, Minnesota. Association for Computa- tional Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar. 2019. A theoretical analysis of contrastive unsuper- vised representation learning. In Proceedings of the 36th International Conference on Machine Learn- ing, ICML 2019, 9-15 June 2019, Long Beach, Cali- fornia, USA, volume 97 of Proceedings of Machine Learning Research, pages 5628â5637. PMLR.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- In Proceedings of the 2013 Conference on bank. Empirical Methods in Natural Language Processing, pages 1631â1642, Seattle, Washington, USA. Asso- ciation for Computational Linguistics.
Improved deep metric learning with multi-class n-pair loss objective. In Advances in Neural Information Processing Systems 29: An- nual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 1849â1857.
Sandeep Subramanian, Adam Trischler, Yoshua Ben- gio, and Christopher J. Pal. 2018. Learning gen- eral purpose distributed sentence representations In 6th Inter- via large scale multi-task learning. national Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenRe- view.net.
Ran Tao, Efstratios Gavves, and Arnold W. M. Smeul- ders. 2016. Siamese instance search for tracking. In 2016 IEEE Conference on Computer Vision and Pat- tern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 1420â1429. IEEE Com- puter Society.
Yonglong Tian, Dilip Krishnan, and Phillip Isola. 2020. Contrastive multiview coding. In ECCV.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4- 9, 2017, Long Beach, CA, USA, pages 5998â6008.
Ellen M Voorhees and Dawn M Tice. 2000. Building a question answering test collection. In Proceedings
of the 23rd annual international ACM SIGIR confer- ence on Research and development in information retrieval, pages 200â207.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis plat- In 7th form for natural language understanding. International Conference on Learning Representa- tions, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. 2016. A discriminative feature learn- In Euro- ing approach for deep face recognition. pean conference on computer vision, pages 499â515. Springer.
Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language. Language resources and evalua- tion, 39(2-3):165â210.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112â1122, New Orleans, Louisiana. Association for Computational Linguis- tics.
Paul Wohlhart and Vincent Lepetit. 2015. Learning de- scriptors for object recognition and 3d pose estima- tion. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pages 3109â3118. IEEE Computer Society.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38â45, Online. Asso- ciation for Computational Linguistics.
Zhirong Wu, Yuanjun Xiong, Stella X. Yu, and Dahua Lin. 2018. Unsupervised feature learning via non- In 2018 IEEE parametric instance discrimination. Conference on Computer Vision and Pattern Recog- nition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pages 3733â3742. IEEE Computer So- ciety.
Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stu- art J. Russell. 2002. Distance metric learning with application to clustering with side-information. In Advances in Neural Information Processing Systems
15 [Neural Information Processing Systems, NIPS 2002, December 9-14, 2002, Vancouver, British Columbia, Canada], pages 505â512. MIT Press.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for In Advances in Neural language understanding. Information Processing Systems 32: Annual Con- ference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancou- ver, BC, Canada, pages 5754â5764.
Sasi Kiran Yelamarthi, Shiva Krishna Reddy, Ashish Mishra, and Anurag Mittal. 2018. A zero-shot framework for sketch based image retrieval. In Eu- ropean Conference on Computer Vision, pages 316â 333. Springer.
Yang You, Jing Li, Jonathan Hseu, Xiaodan Song, James Demmel, and C Hsieh. 1904. Reducing bert pre-training time from 3 days to 76 minutes. corr abs/1904.00962 (2019).
Rui Yu, Zhiyong Dou, Song Bai, Zhaoxiang Zhang, Yongchao Xu, and Xiang Bai. 2018. Hard-aware point-to-set deep metric for person re-identiï¬cation. In Proceedings of the European Conference on Com- puter Vision (ECCV), pages 188â204.
Ye Yuan, Wuyang Chen, Yang Yang, and Zhangyang Wang. 2020. In defense of the triplet loss again: Learning robust person re-identiï¬cation with fast ap- proximated triplet loss and label distillation. In Pro- ceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
Ziming Zhang and Venkatesh Saligrama. 2016. Zero- shot learning via joint latent similarity embedding. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 6034â6042. IEEE Computer Society.
# A Pretrained transformers make poor universal sentence encoders
Certain pretrained transformers, such as BERT and ALBERT, have mechanisms for learning sequence- level embeddings via self-supervision. These mod- els prepend every input sequence with a special classiï¬cation token (e.g. â[CLS]â), and its repre- sentation is learned using a simple classiï¬cation task, such as Next Sentence Prediction (NSP) or Sentence-Order Prediction (SOP) (see Devlin et al. 2019 and Lan et al. 2020 respectively for details on these tasks). However, during preliminary experi- ments, we noticed that these models are not good universal sentence encoders, as measured by their performance on the SentEval benchmark (Conneau and Kiela, 2018). As a simple experiment, we
evaluated three pretrained transformer models on SentEval: one trained with the NSP loss (BERT), one trained with the SOP loss (ALBERT) and one trained with neither, RoBERTa (Liu et al., 2019). We did not ï¬nd that the CLS embeddings produced by models trained against the NSP or SOP losses to outperform that of a model trained without either loss and sometimes failed to outperform a bag-of- words (BoW) baseline (Table 4). Furthermore, we ï¬nd that pooling token embeddings via averaging (referred to as âmean poolingâ in our paper) out- performs pooling via the CLS classiï¬cation token. Our results are corroborated by Liu et al. 2019, who ï¬nd that removing NSP loss leads to the same or better results on downstream tasks and Reimers and Gurevych 2019, who ï¬nd that directly using the output of BERT as sentence embeddings leads to poor performances on the semantic similarity tasks of SentEval.
# B Examples of sampled spans
In Table 5, we present examples of anchor-positive and anchor-negative pairs generated by our sam- pling procedure. We show one example for each possible view of a sampled positive, e.g. posi- tives adjacent to, overlapping with, or subsumed by the anchor. For each anchor-positive pair, we show examples of both a hard negative (derived from the same document) and an easy negative (derived from another document). Recall that a minibatch is composed of random documents, and each anchor-positive pair sampled from a docu- ment is contrasted against all other anchor-positive pairs in the minibatch. Thus, hard negatives, as we have described them here, are generated only when sampling multiple anchors per document (A > 1).
# C SentEval evaluation details
SentEval is a benchmark for evaluating the qual- ity of ï¬xed-length sentence embeddings. It is di- vided into 18 downstream tasks, and 10 probing tasks. Sentence embedding methods are evaluated on these tasks via a simple interface9, which stan- dardizes training, evaluation and hyperparameters. For most tasks, the method to evaluate is used to produce ï¬x-length sentence embeddings, and a simple logistic regression (LR) or multi-layer per- ception (MLP) model is trained on the task using these embeddings as input. For other tasks (namely
9https://github.com/facebookresearch/ SentEval
Table 4: Results on the downstream and probing tasks from the validation set of SentEval. We compare models trained with the Next Sentence Prediction (NSP) and Sentence-Order Prediction (SOP) losses to a model trained with neither, using two different pooling strategies: â*-CLSâ, where the special classiï¬cation token is used as its sentence representation, and â*-meanâ, where each sentence is represented by the mean of its token embeddings.
SentEval Model Parameters Embed. dim. Downstream Probing Bag-of-Words (BoW) weak baselines GloVe fastText â â 300 300 66.05 68.75 62.93 63.46 Trained with Next Sentence Prediction (NSP) loss BERT-base-CLS BERT-base-mean 110M 110M 768 768 63.53 71.98 69.57 73.37 Trained with Sentence-Order Prediction (SOP) loss ALBERT-base-V2-CLS ALBERT-base-V2-mean 11M 11M 768 768 58.75 69.39 69.88 74.83 Trained with neither NSP or SOP losses RoBERTa-base-CLS RoBERTa-base-mean 125M 125M 768 768 68.53 72.84 66.92 74.59
several semantic text similarity tasks), the embed- dings are used as-is without any further training. Note that this setup is different from evaluations on the popular GLUE benchmark (Wang et al., 2019), which typically use the task data to ï¬ne-tune the parameters of the sentence embedding model.
In subsection C.1, we present the individual tasks of the SentEval benchmark. In subsection C.2, we explain our method for computing the average downstream and average probing scores presented in our paper.
# C.1 SentEval tasks
The downstream tasks of SentEval are representa- tive NLP tasks used to evaluate the transferability of ï¬xed-length sentence embeddings. We give a brief overview of the broad categories that divide the tasks below (see Conneau and Kiela 2018 for more details):
classiï¬cation: These tasks cover various types of sentence including sentiment analy- classiï¬cation, sis (MR Pang and Lee 2005, SST2 and SST5 Socher et al. 2013), question-type (TREC) (Voorhees and Tice, 2000), product reviews (CR) (Hu and Liu, 2004), subjectiv- ity/objectivity (SUBJ) (Pang and Lee, 2004) and opinion polarity (MPQA) (Wiebe et al., 2005).
datasets (also known as natural language inference or NLI), including SICK-E (Marelli et al., 2014) and the Stanford NLI dataset (SNLI) (Bowman et al., 2015) as well as multiple semantic relatedness datasets including SICK-R and STS-B (Cer et al., 2017).
⢠Semantic textual similarity These tasks (STS12 Agirre et al. 2012, STS13 Agirre et al. 2013, STS14 Agirre et al. 2014, STS15 Agirre et al. 2015 and STS16 Agirre et al. 2016) are similar to the semantic relatedness tasks, ex- cept the embeddings produced by the encoder are used as-is in a cosine similarity to deter- mine the semantic similarity of two sentences. No additional model is trained on top of the encoderâs output.
⢠Paraphrase detection Evaluated on the Mi- crosoft Research Paraphrase Corpus (MRPC) (Dolan et al., 2004), this binary classiï¬cation task is comprised of human-labelled sentence pairs, annotated according to whether they capture a paraphrase/semantic equivalence re- lationship.
⢠Entailment and semantic relatedness: cover multiple entailment These tasks
e w , r o h c n a h c a e r o F . h c t a b i n i m a n i t n e m u c o d y r e v e m o r f s r o h c n a e r o m r o e n o e l p m a s y l m o d n a r e w , g n i n i a r t g n i r u D . d o h t e m r i a p e v i t i s o p - r o h c n a r e h t o y r e v e h t i w d e t s a r t n o c e r a s r i a p e v i t i s o p - r o h c n a l l A . r o h c n a e h t y b d e m u s b u s r o , h t i w g n i p p a l r e v o , o t t n e c a j d a e h t m o r f d e l p m a s s e v i t i s o p d n a s r o h c n a ( s e v i t a g e n d r a h d n a ) h c t a b i n i m a n i s t n e m u c o d r e h t o m o r f d e l p m a s s e v i t i s o p d n a s r o h c n a ( . s n e k o t 2 1 5 f o h t g n e l a o t p u s n a p s e l p m a s e w , g n i n i a r t g n i r u D . s n e k o t 4 6 f o h t g n e l m u m i x a m a t a d e p p a c e r a e v i t a g e n y s a E e v i t a g e n d r a H e v i t i s o P w e i v g n i p p a l r e v O - o l p x e r o f e l b a l i a v a w o n s i n o i t a c o l w e n a t a h t - i m m i e g r a l h t i w y r t n u o c e h t f o s t r a p l a r e b i l e r e w e m o S . e c i l o p o t g n i k l a t e l b a t r o f m o c l e e f - e c r o f n e w a l a e k i l s l e e f , w e i v y m n i , a e r a d o o g A . n o i t a r n i y t n u o C a r a l C a t n a S e k i l , s n o i t a l u p o p t n a r g o t y l l a e r e r e w s n o i t n e t n i s â E C I t a h t l a c i t p e k s w e n e h t t â n s e o d t i d l r o w e m a g a f o n o i s s e r g o r p l a r u t a n d e e r g a , s i o n i l l I n i y t n u o C k o o C d n a a i n r o f i l a C - e d o t y l p m i s n a h t r e h t a r , y t e f a s c i l b u p t c e t o r p e c r o f n e o t s d e e n n r u t n i t a h T . y r a r t i b r a r o n o d e k c a t m e e s y e h T . s e i t i n u m m o C e r u c e S f o s c i t i r c e h t h t i w . y l i s a e e r o m s t n a r g i m m i d e z i r o h t u a n u t r o p e t a l e r o t t i d l u o w m a r g o r p e h t g n i t n e m e l p m i t a h t d e i r r o w s m i t c i v - i s e r t n a r g i m m i h t i w s p i h s n o i t a l e r r i e h t n i a r t s - t r o f m o c . s t n e d w e i v t n e c a j d A s i h t n e h W . r e d r o O F I F n i d e s s e c o r p e r a s t n e v e n i d e d n u o r g s a w t e j o b m u j s y a w r i A h s i t i r B A e b n a c c i l b u p e h T â . o g l l i w h s a e h t e r e h w o n a c l o v e h t - n o c p o o l t n e v e e h t , d e i t p m e s i e u e u Q k c i T t x e n s e n i g n e e h t s r a e f g n i w o l l o f y a d n u S n o a d a n a C e l b a y l n o e r a s e n i l r i a t a h t t n e d ï¬ n o c y l e t u l o s b a e v a h l l i d e t e l p m o c n e e b e v a h o t s n o i t a r e p o l l a s r e d i s h s a c i n a c l o v h t i w d e t a n i m a t n o c n e e b d a h r i a n a y R â . o s o d o t e f a s s i t i n e h w e t a r e p o o t r e h t o e h T â t x e n e h t o t s n o i t i s n a r t d n a e s a h p t n e r r u c e h t r o f d u o l c h s a y n a e e s t o n d l u o c t i d i a s - o m e h t t . e s a h p e l i t a l o v - n u , t l u c ï¬ f i d w e i v d e m u s b u S w e ï¬ d n a t u o d e i r c s e k w a F . s g n i l e e f h c u s o t n a m s e b i r t a j n e W a f o e l o r e h t n o e k a t s r e y a l P r o f e d i w d l r o w d e s a e l e r s a w t I . t f o s i b U y b o e d i v - r u F . d e w o l l o f e r o d e l b m u D s u b l A d n a , d a e h a o n h t i w s o r O n i d e d n a r t s s i o h w , r a k k a T d e m a n , 3 2 y r a u r b e F n o e n O x o b X d n a 4 n o i t a t S y a l P - b u p d n a e r e w e l p o e p , h t a p â s r o t n e m e D e h t g n o l a r e h t d e h s u b m a s i y t r a p g n i t n u h s i h r e t f a s n o p a e w , 1 h c r a M n o s w o d n i W t f o s o r c i M r o f d n a , 6 1 0 2 w o h r e t t a m o n d n A . r o f t h g u o f e b o t e v i l a l l i t s . r e g i T h t o o t - r e b a S a y b r a F n i a m e h t f o f f o - n i p s a s i e m a g e h T . 6 1 0 2 - u r b e F n o e r e w e r e h t e l i h w , g n i t r u h s a w f l e s m i h e h h c u m . s e i r e s y r C . n o o g d l u o w e h m i h d e d e e n o h w e l p o e p l l i t s e h t r o F e m a g y r C
r u o y b d e t a r e n e g s n a p s
s e v i t a g e n y s a e o t
e d i w d l r o w d e s a e l e r
r o f d a b e b d l u o w
e r u t n e v d a - n o i t c a
# w m e l b o r p
t n a r g i m m
f o l a c i t p e k s
# l a e r t n o M
A â . n o i t c e r i d
s e l p m a x e
# s p o c
# e n O
. s r o t c a f
# f o
e t i u q
d n a
# t â n d l u o w e m
# e r o m
e r a
# t u o
# x o b X
l a c o l y b t r o f f e y n A
s d a e l
s e t a c o v d a
# i
# , t l e f y e h t
s n r e t t a p
t f o s i b U y b
# t x e t
# t i
e c n i s
# e h t
# e h t
g n i h c l e b
# r o
s a w
e r e w s l a n o i s s e f o r p t n e m
g n i k a m s i
# , e r e H
t c i d e r p
f o e n o s â t a h t o s
s i h T
d n a
, s y a d w e f
d n a
e n o
f o s e l p m a x E
. e c i l o p o t
# n a
, g n i c i l o p
# t I
i r c
, s w a l n o i t a r g i m m
r e h t a e w
. t f o s i b U y b d e h s i l
# s i
4
# d e e p s
e l p m a s
s t h g i r - t n a r g i m m
# . ) t n e m u c o d
.
h c t a b i n i m e h t n i
# d e p o l e v e d
# o t
# s p o t s
n o i t a t S y a l P
# l a m
# [RULIg
# f o
t a h w s i
, r a e y
# s e s s e n t i
# a
g n i k l a t
i r P
d n i w e h t
y t i n u m m o c
r e t f a
h s a
.
# e h t
# y l m o d n a r
# m a r g o r p
# y r C
t s a l
, d e r a e l c
r o h c n A
: 5 e l b a T
h c i h w
# e h t
e m a g
# t n e m
# , n e h t
# w
e l b a
e m a s
# e k i l
r a F
# r o f
# r o
# s i
# f i
# i
# i
# t f o s o r c i
# M
r o f d n a
,
6 1 0 2 , 3 2 y r a
f o f f o - n i p s
# a
# s i
e m a g e h T
. 6 1 0 2 , 1 h c r a
# M
# t s r ï¬ e h t
# s i
# Stay
# t I
. s e i r e s y r C
r a F n i a m
. e g A c i h t i l o s e
# M
# e h t
# n i
# t e s
⢠Caption-Image retrieval This task is com- prised of two sub-tasks: ranking a large col- lection of images by their relevance for some given query text (Image Retrieval) and rank- ing captions by their relevance for some given query image (Caption Retrieval). Both tasks are evaluated on data from the COCO dataset (Lin et al., 2014). Each image is represented by a pretrained, 2048-dimensional embedding produced by a ResNet-101 (He et al., 2016).
The probing tasks are designed to evaluate what linguistic properties are encoded in a sentence rep- resentation. All tasks are binary or multi-class clas- siï¬cation. We give a brief overview of each task below (see Conneau et al. 2018 for more details):
⢠Sentence length (SentLen): A multi-class classiï¬cation task where a model is trained to predict the length of a given input sen- tence, which is binned into six possible length ranges.
⢠Word content (WC): A multi-class classiï¬ca- tion task where, given 1000 words as targets, the goal is to predict which of the target words appears in a given input sentence. Each sen- tence contains a single target word, and the word occurs exactly once in the sentence.
⢠Tree depth (TreeDepth): A multi-class clas- siï¬cation task where the goal is to predict the maximum depth (with values ranging from 5 to 12) of a given input sentenceâs syntactic tree.
⢠Bigram Shift (BShift): A multi-class clas- siï¬cation task where the goal is to predict whether two consecutive tokens within a given sentence have been inverted.
⢠Top Constituents (TopConst): A multi-class classiï¬cation task where the goal is to predict the top constituents (from a choice of 19) im- mediately below the sentence (S) node of the sentenceâs syntactic tree.
⢠Tense: A binary classiï¬cation task where the goal is to predict the tense (past or present) of the main verb in a sentence.
⢠Subject number (SubjNum): A binary clas- siï¬cation task where the goal is to predict the number (singular or plural) of the subject of the main clause.
⢠Object number (ObjNum): A binary classi- ï¬cation task, analogous to SubjNum, where the goal is to predict the number (singular or plural) of the direct object of the main clause.
⢠Semantic odd man out (SOMO): A binary classiï¬cation task where the goal is to predict whether a sentence has had a single randomly picked noun or verb replaced with another word with the same part-of-speech.
⢠Coordinate inversion (CoordInv): A binary classiï¬cation task where the goal is to predict whether the order of two coordinate clauses in a sentence has been inverted.
# C.2 Computing an average score
In our paper, we present averaged downstream and probing scores. Computing averaged probing scores was straightforward; each of the ten probing tasks reports a simple accuracy, which we averaged. To compute an averaged downstream score, we do the following:
⢠If a task reports Spearman correlation (i.e. SICK-R, STS-B), we use this score when com- puting the average downstream task score. If the task reports a mean Spearman correlation for multiple subtasks (i.e. STS12, STS13, STS14, STS15, STS16), we use this score.
⢠If a task reports both an accuracy and an F1- score (i.e. MRPC), we use the average of these two scores.
⢠For the Caption-Image Retrieval task, we re- port the average of the Recall@K, where K â {1, 5, 10} for the Image and Caption retrieval tasks (a total of six scores). This is the default behaviour of SentEval.
⢠Otherwise, we use the reported accuracy. | {
"id": "1807.03748"
} |
2006.03654 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | Recent progress in pre-trained neural language models has significantly
improved the performance of many natural language processing (NLP) tasks. In
this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT
with disentangled attention) that improves the BERT and RoBERTa models using
two novel techniques. The first is the disentangled attention mechanism, where
each word is represented using two vectors that encode its content and
position, respectively, and the attention weights among words are computed
using disentangled matrices on their contents and relative positions,
respectively. Second, an enhanced mask decoder is used to incorporate absolute
positions in the decoding layer to predict the masked tokens in model
pre-training. In addition, a new virtual adversarial training method is used
for fine-tuning to improve models' generalization. We show that these
techniques significantly improve the efficiency of model pre-training and the
performance of both natural language understanding (NLU) and natural langauge
generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model
trained on half of the training data performs consistently better on a wide
range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%),
on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%).
Notably, we scale up DeBERTa by training a larger version that consists of 48
Transform layers with 1.5 billion parameters. The significant performance boost
makes the single DeBERTa model surpass the human performance on the SuperGLUE
benchmark (Wang et al., 2019a) for the first time in terms of macro-average
score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the
SuperGLUE leaderboard as of January 6, 2021, out performing the human baseline
by a decent margin (90.3 versus 89.8). | http://arxiv.org/pdf/2006.03654 | Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen | cs.CL, cs.LG, cs.CL, cs.GL, I.2; I.7 | 20 pages,5 figures, 13 tables. In v2, we scale up DeBERTa to 1.5B
parameters and it surpasses the human performance on SuperGLUE leaderboard
for the first time as of December 29, 2020. In v3, we replace MLM with RTD
objective which significantly improves the model performance | null | cs.CL | 20200605 | 20211006 | 1 2 0 2
t c O 6 ] L C . s c [
6 v 4 5 6 3 0 . 6 0 0 2 : v i X r a
Published as a conference paper at ICLR 2021
# DEBERTA: DECODING-ENHANCED BERT WITH DIS- ENTANGLED ATTENTION
Pengcheng He1, Xiaodong Liu2, Jianfeng Gao2, Weizhu Chen1 1 Microsoft Dynamics 365 AI {penhe,xiaodl,jfgao,wzchen}@microsoft.com
# ABSTRACT
Recent progress in pre-trained neural language models has signiï¬cantly improved the performance of many natural language processing (NLP) tasks. In this pa- per we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The ï¬rst is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disen- tangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the de- coding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for ï¬ne-tuning to improve modelsâ generalization. We show that these techniques signiï¬cantly improve the efï¬ciency of model pre-training and the performance of both natural language understand (NLU) and natural langauge generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data per- forms consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). Notably, we scale up DeBERTa by training a larger version that consists of 48 Transform layers with 1.5 bil- lion parameters. The signiï¬cant performance boost makes the single DeBERTa model surpass the human performance on the SuperGLUE benchmark (Wang et al., 2019a) for the ï¬rst time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of Jan- uary 6, 2021, outperforming the human baseline by a decent margin (90.3 versus 89.8). The pre-trained DeBERTa models and the source code were released at: https://github.com/microsoft/DeBERTa1.
# INTRODUCTION
The Transformer has become the most effective neural network architecture for neural language modeling. Unlike recurrent neural networks (RNNs) that process text in sequence, Transformers apply self-attention to compute in parallel every word from the input text an attention weight that gauges the inï¬uence each word has on another, thus allowing for much more parallelization than RNNs for large-scale model training (Vaswani et al., 2017). Since 2018, we have seen the rise of a set of large-scale Transformer-based Pre-trained Language Models (PLMs), such as GPT (Radford et al., 2019; Brown et al., 2020), BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019c), XLNet (Yang et al., 2019), UniLM (Dong et al., 2019), ELECTRA (Clark et al., 2020), T5 (Raffel et al., 2020), ALUM (Liu et al., 2020), StructBERT (Wang et al., 2019c) and ERINE (Sun et al., 2019) . These PLMs have been ï¬ne-tuned using task-speciï¬c labels and created new state of the art in many downstream natural language processing (NLP) tasks (Liu et al., 2019b; Minaee et al., 2020; Jiang et al., 2020; He et al., 2019a;b; Shen et al., 2020).
1Our code and models are also available at HuggingFace Transformers: https://github.com/ huggingface/transformers, https://huggingface.co/models?filter=deberta
1
Published as a conference paper at ICLR 2021
In this paper, we propose a new Transformer-based neural language model DeBERTa (Decoding- enhanced BERT with disentangled attention), which improves previous state-of-the-art PLMs using two novel techniques: a disentangled attention mechanism, and an enhanced mask decoder.
Disentangled attention. Unlike BERT where each word in the input layer is represented using a vector which is the sum of its word (content) embedding and position embedding, each word in DeBERTa is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices based on their contents and relative positions, respectively. This is motivated by the observation that the attention weight of a word pair depends on not only their contents but their relative positions. For example, the dependency between the words âdeepâ and âlearningâ is much stronger when they occur next to each other than when they occur in different sentences.
Enhanced mask decoder. Like BERT, DeBERTa is pre-trained using masked language modeling (MLM). MLM is a ï¬ll-in-the-blank task, where a model is taught to use the words surrounding a mask token to predict what the masked word should be. DeBERTa uses the content and position information of the context words for MLM. The disentangled attention mechanism already considers the contents and relative positions of the context words, but not the absolute positions of these words, which in many cases are crucial for the prediction. Consider the sentence âa new store opened beside the new mallâ with the italicized words âstoreâ and âmallâ masked for prediction. Although the local contexts of the two words are similar, they play different syntactic roles in the sentence. (Here, the subject of the sentence is âstoreâ not âmall,â for example.) These syntactical nuances depend, to a large degree, upon the wordsâ absolute positions in the sentence, and so it is important to account for a wordâs absolute position in the language modeling process. DeBERTa incorporates absolute word position embeddings right before the softmax layer where the model decodes the masked words based on the aggregated contextual embeddings of word contents and positions.
In addition, we propose a new virtual adversarial training method for ï¬ne-tuning PLMs to downstream NLP tasks. The method is effective in improving modelsâ generalization.
We show through a comprehensive empirical study that these techniques substantially improve the efï¬ciency of pre-training and the performance of downstream tasks. In the NLU tasks, compared to RoBERTa-Large, a DeBERTa model trained on half the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3%(88.4% vs. 90.7%), and RACE by +3.6% (83.2% vs. 86.8%). In the NLG tasks, DeBERTa reduces the perplexity from 21.6 to 19.5 on the Wikitext-103 dataset. We further scale up DeBERTa by pre-training a larger model that consists of 48 Transformer layers with 1.5 billion parameters. The single 1.5B-parameter DeBERTa model substantially outperforms T5 with 11 billion parameters on the SuperGLUE benchmark (Wang et al., 2019a) by 0.6%(89.3% vs. 89.9%), and surpasses the human baseline (89.9 vs. 89.8) for the ï¬rst time. The ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of January 6, 2021, outperforming the human baseline by a decent margin (90.3 versus 89.8).
# 2 BACKGROUND
2.1 TRANSFORMER
A Transformer-based language model is composed of stacked Transformer blocks (Vaswani et al., 2017). Each block contains a multi-head self-attention layer followed by a fully connected positional feed-forward network. The standard self-attention mechanism lacks a natural way to encode word position information. Thus, existing approaches add a positional bias to each input word embedding so that each input word is represented by a vector whose value depends on its content and position. The positional bias can be implemented using absolute position embedding (Vaswani et al., 2017; Radford et al., 2019; Devlin et al., 2019) or relative position embedding (Huang et al., 2018; Yang et al., 2019). It has been shown that relative position representations are more effective for natural language understanding and generation tasks (Dai et al., 2019; Shaw et al., 2018). The proposed disentangled attention mechanism differs from all existing approaches in that we represent each input word using two separate vectors that encode a wordâs content and position, respectively, and
2
Published as a conference paper at ICLR 2021
attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively.
2.2 MASKED LANGUAGE MODEL
Large-scale Transformer-based PLMs are typically pre-trained on large amounts of text to learn contextual word representations using a self-supervision objective, known as Masked Language Model (MLM) (Devlin et al., 2019). Speciï¬cally, given a sequence X â txiu, we corrupt it into ËX by masking 15% of its tokens at random and then train a language model parameterized by θ to reconstruct X by predicting the masked tokens Ëx conditioned on ËX:
# ÿ
max θ log pθpX| ËXq â max θ iPC log pθpËxi â xi| ËXq (1)
where C is the index set of the masked tokens in the sequence. The authors of BERT propose to keep 10% of the masked tokens unchanged, another 10% replaced with randomly picked tokens and the rest replaced with the [MASK] token.
# 3 THE DEBERTA ARCHITECTURE
3.1 DISENTANGLED ATTENTION: A TWO-VECTOR APPROACH TO CONTENT AND POSITION EMBEDDING
For a token at position i in a sequence, we represent it using two vectors, tHiu and tPi|ju, which represent its content and relative position with the token at position j, respectively. The calculation of the cross attention score between tokens i and j can be decomposed into four components as
Aij = (Hi, Pig} x (Hy » Pj}t = HH} + HiP}y, Pay +P; pt (2) |9* gla
That is, the attention weight of a word pair can be computed as a sum of four attention scores using disentangled matrices on their contents and positions as content-to-content, content-to-position, position-to-content, and position-to-position 2.
Existing approaches to relative position encoding use a separate embedding matrix to compute the relative position bias in computing attention weights (Shaw et al., 2018; Huang et al., 2018). This is equivalent to computing the attention weights using only the content-to-content and content-to- position terms in equation 2. We argue that the position-to-content term is also important since the attention weight of a word pair depends not only on their contents but on their relative positions, which can only be fully modeled using both the content-to-position and position-to-content terms. Since we use relative position embedding, the position-to-position term does not provide much additional information and is removed from equation 2 in our implementation.
Taking single-head attention as an example, the standard self-attention operation (Vaswani et al., 2017) can be formulated as:
Qk" vd Q = HW,, K = HW,,V = HW,,A H, = softmax(A)V
where H P RN Ëd represents the input hidden vectors, Ho P RN Ëd the output of self-attention, Wq, Wk, Wv P RdËd the projection matrices, A P RN ËN the attention matrix, N the length of the input sequence, and d the dimension of hidden states.
Denote k as the maximum relative distance, δpi, jq P r0, 2kq as the relative distance from token i to token j, which is deï¬ned as:
#
0 2k ´ 1 for for i ´ j Ä Â´k i ´ j Ä k δpi, jq â i ´ j ` k others. (3)
2In this sense, our model shares some similarity to Tensor Product Representation (Smolensky, 1990; Schlag et al., 2019; Chen et al., 2019) where a word is represented using a tensor product of its ï¬ller (content) vector and its role (position) vector.
3
Published as a conference paper at ICLR 2021
We can represent the disentangled self-attention with relative position bias as equation 4, where Qc, Kc and Vc are the projected content vectors generated using projection matrices Wq,c, Wk,c, Wv,c P RdËd respectively, P P R2kËd represents the relative position embedding vectors shared across all layers (i.e., staying ï¬xed during forward propagation), and Qr and Kr are projected relative position vectors generated using projection matrices Wq,r, Wk,r P RdËd, respectively.
Qe = HWac, Ke = HWr,e, Ve = HWe,e, Qr = PWar, Kr = PWie A..â cEget chur T egqr T Aig= QUEST + QIK G5)) + KGQ5G,i) (a) content-to-content â_(b) content-to-position _(c) position-to-content A H, = softmax( â=)V. V3d
ËAi,j is the element of attention matrix ËA, representing the attention score from token i to token j. Qc i is the i-th row of Qc. Kc δpi,jq is the δpi, jq-th row of Kr with regarding to relative distance δpi, jq. Qr δpj,iq is the δpj, iq-th row of Qr with regarding to relative distance δpj, iq. Note that we use δpj, iq rather than δpi, jq here. This is because for a given position i, position-to-content computes the attention weight of the key content at j with respect to the query position at i, thus the relative distance is δpj, iq. The position-to-content term is calculated as Kc
on ËA. The factor is important for stabilizing model 1? Finally, we apply a scaling factor of training (Vaswani et al., 2017), especially for large-scale PLMs. 3d
# Algorithm 1 Disentangled Attention
Input: Hidden state H, relative distance embedding P, relative distance matrix 6. Content projec- tion matrix Wx,c, Woe Mae c, position projection matrix Wx,r, Wa,r- bi Ke = HWy,e: Qe = HW, Ve = HWo,c, Kr = PWrr, Qn = PWar 2: Acse = QeK} 3: fori = 0,...,Nâ1do 4: Ac spli, = = Q.li, â|KT 5: end for 6: fori = 0 7: for j ; 8: Acplt, j] = Acopli, 6[t, J] 9: end for 10: end for 11: for 7 = 0,...,N â1do 12: Ap-el:, j] = Kelj,:JQ7 13: end for 14: for j = 0 15: fori ; 16: Ap-eli, i] = Apcl6[i, 4), /] 17: end for 18: end for 19: A= Ace + Acop + Apoe 20: Ho = softmax (45) Ve Output: H,
3.1.1 EFFICIENT IMPLEMENTATION
For an input sequence of length N , it requires a space complexity of OpN 2dq (Shaw et al., 2018; Huang et al., 2018; Dai et al., 2019) to store the relative position embedding for each token. However, taking content-to-position as an example, we note that since δpi, jq P r0, 2kq and the embeddings
4
(4)
Published as a conference paper at ICLR 2021
of all possible relative positions are always a subset of Kr P R2kËd, then we can reuse Kr in the attention calculation for all the queries.
In our experiments, we set the maximum relative distance k to 512 for pre-training. The disentangled attention weights can be computed efficiently using Algorithm[I] Let 6 be the relative position matrix according to equation [3| ie, d[2, 7] = 6(i,7). Instead of allocating a different relative position embedding matrix for each query, we multiply each query vector Q,[i, :] by KJ ¢ R?*?*, as in line 3 â 5. Then, we extract the attention weight using the relative position matrix 6 as the index, as in line 6 â 10. To compute the position-to-content attention score, we calculate Apel, jj, ie., the column vector of the attention matrix Ap_,., by multiplying each key vector K¢[j, :] by QT, as in line 11 â 13. Finally, we extract the corresponding attention score via the relative position matrix 6 as the index, as in line 14 â 18. In this way, we do not need to allocate memory to store a relative position embedding for each query and thus reduce the space complexity to O(kd) (for storing K.,. and Q,.).
3.2 ENHANCED MASK DECODER ACCOUNTS FOR ABSOLUTE WORD POSITIONS
DeBERTa is pretrained using MLM, where a model is trained to use the words surrounding a mask token to predict what the masked word should be. DeBERTa uses the content and position information of the context words for MLM. The disentangled attention mechanism already considers the contents and relative positions of the context words, but not the absolute positions of these words, which in many cases are crucial for the prediction.
Given a sentence âa new store opened beside the new mallâ with the words âstoreâ and âmallâ masked for prediction. Using only the local context (e.g., relative positions and surrounding words) is insufï¬cient for the model to distinguish store and mall in this sentence, since both follow the word new with the same relative positions. To address this limitation, the model needs to take into account absolute positions, as complement information to the relative positions. For example, the subject of the sentence is âstoreâ not âmallâ. These syntactical nuances depend, to a large degree, upon the wordsâ absolute positions in the sentence.
There are two methods of incorporating absolute positions. The BERT model incorporates absolute positions in the input layer. In DeBERTa, we incorporate them right after all the Transformer layers but before the softmax layer for masked token prediction, as shown in Figure 2. In this way, DeBERTa captures the relative positions in all the Transformer layers and only uses absolute positions as complementary information when decoding the masked words. Thus, we call DeBERTaâs decoding component an Enhanced Mask Decoder (EMD). In the empirical study, we compare these two methods of incorporating absolute positions and observe that EMD works much better. We conjecture that the early incorporation of absolute positions used by BERT might undesirably hamper the model from learning sufï¬cient information of relative positions. In addition, EMD also enables us to introduce other useful information, in addition to positions, for pre-training. We leave it to future work.
# 4 SCALE INVARIANT FINE-TUNING
This section presents a new virtual adversarial training algorithm, Scale-invariant-Fine-Tuning (SiFT), a variant to the algorithm described in Miyato et al. (2018); Jiang et al. (2020), for ï¬ne-tuning.
Virtual adversarial training is a regularization method for improving modelsâ generalization. It does so by improving a modelâs robustness to adversarial examples, which are created by making small perturbations to the input. The model is regularized so that when given a task-speciï¬c example, the model produces the same output distribution as it produces on an adversarial perturbation of that example.
For NLP tasks, the perturbation is applied to the word embedding instead of the original word sequence. However, the value ranges (norms) of the embedding vectors vary among different words and models. The variance gets larger for bigger models with billions of parameters, leading to some instability of adversarial training.
5
Published as a conference paper at ICLR 2021
Inspired by layer normalization (Ba et al., 2016), we propose the SiFT algorithm that improves the training stability by applying the perturbations to the normalized word embeddings. Speciï¬cally, when ï¬ne-tuning DeBERTa to a downstream NLP task in our experiments, SiFT ï¬rst normalizes the word embedding vectors into stochastic vectors, and then applies the perturbation to the normalized embedding vectors. We ï¬nd that the normalization substantially improves the performance of the ï¬ne-tuned models. The improvement is more prominent for larger DeBERTa models. Note that we only apply SiFT to DeBERTa1.5B on SuperGLUE tasks in our experiments and we will provide a more comprehensive study of SiFT in our future work.
# 5 EXPERIMENT
This section reports DeBERTa results on various NLU tasks.
5.1 MAIN RESULTS ON NLU TASKS
Following previous studies of PLMs, we report results using large and base models.
5.1.1 PERFORMANCE ON LARGE MODELS
Model CoLA QQP MNLI-m/mm SST-2 STS-B QNLI RTE MRPC Avg. Mcc Acc Acc Acc Corr Acc Acc Acc BERTlarge 60.6 91.3 RoBERTalarge 68.0 92.2 XLNetlarge 69.0 92.3 ELECTRAlarge 69.1 92.4 DeBERTalarge 70.5 92.3 86.6/- 90.2/90.2 90.8/90.8 90.9/- 91.1/91.1 93.2 96.4 97.0 96.9 96.8 90.0 92.4 92.5 92.6 92.8 92.3 93.9 94.9 95.0 95.3 70.4 86.6 85.9 88.0 88.3 88.0 90.9 90.8 90.8 91.9 84.05 88.82 89.15 89.46 90.00
Table 1: Comparison results on the GLUE development set.
We pre-train our large models following the setting of BERT (Devlin et al., 2019), except that we use the BPE vocabulary of Radford et al. (2019); Liu et al. (2019c). For training data, we use Wikipedia (English Wikipedia dump3; 12GB), BookCorpus (Zhu et al., 2015) (6GB), OPENWEBTEXT (public Reddit content (Gokaslan & Cohen, 2019); 38GB), and STORIES (a subset of CommonCrawl (Trinh & Le, 2018); 31GB). The total data size after data deduplication (Shoeybi et al., 2019) is about 78G. Refer to Appendix A.2 for a detailed description of the pre-training dataset.
We use 6 DGX-2 machines (96 V100 GPUs) to train the models. A single model trained with 2K batch size and 1M steps takes about 20 days. Refer to Appendix A for the detailed hyperparamters.
We summarize the results on eight NLU tasks of GLUE (Wang et al., 2019b) in Table 1, where DeBERTa is compared DeBERTa with previous Transform-based PLMs of similar structures (i.e. 24 layers with hidden size of 1024) including BERT, RoBERTa, XLNet, ALBERT and ELECTRA. Note that RoBERTa, XLNet and ELECTRA are pre-trained on 160G training data while DeBERTa is pre- trained on 78G training data. RoBERTa and XLNet are pre-trained for 500K steps with 8K samples in a step, which amounts to four billion training samples. DeBERTa is pre-trained for one million steps with 2K samples in each step. This amounts to two billion training samples, approximately half of either RoBERTa or XLNet. Table 1 shows that compared to BERT and RoBERTa, DeBERTa performs consistently better across all the tasks. Meanwhile, DeBERTa outperforms XLNet in six out of eight tasks. Particularly, the improvements on MRPC (1.1% over XLNet and 1.0% over RoBERTa), RTE (2.4% over XLNet and 1.7% over RoBERTa) and CoLA (1.5% over XLNet and 2.5% over RoBERTa) are signiï¬cant. DeBERTa also outperforms other SOTA PLMs, i.e., ELECTRAlarge and XLNetlarge, in terms of average GLUE score.
Among all GLUE tasks, MNLI is most often used as an indicative task to monitor the research progress of PLMs. DeBERTa signiï¬cantly outperforms all existing PLMs of similar size on MNLI and creates a new state of the art.
# 3https://dumps.wikimedia.org/enwiki/
6
Published as a conference paper at ICLR 2021
Model MNLI-m/mm SQuAD v1.1 SQuAD v2.0 RACE ReCoRD SWAG NER F1 Acc F1/EM F1/EM Acc F1/EM Acc BERTlarge ALBERTlarge RoBERTalarge XLNetlarge Megatron336M DeBERTalarge ALBERTxxlarge Megatron1.3B Megatron3.9B 86.6/- 86.5/- 90.2/90.2 90.8/90.8 89.7/90.0 91.1/91.1 90.8/- 90.9/91.0 91.4/91.4 90.9/84.1 91.8/85.2 94.6/88.9 95.1/89.7 94.2/88.0 95.5/90.1 94.8/89.3 94.9/89.1 95.5/90.0 81.8/79.0 84.9/81.8 89.4/86.5 90.6/87.9 88.1/84.8 90.7/88.0 90.2/87.4 90.2/87.1 91.2/88.5 72.0 75.2 83.2 90.6/90.0 85.4 83.0 86.8 91.4/91.0 86.5 87.3 89.5 - - - - - - - 86.6 - 89.9 - - 90.8 - - - 92.8 - 93.4 - - 93.8 - - -
Table 2: Results on MNLI in/out-domain, SQuAD v1.1, SQuAD v2.0, RACE, ReCoRD, SWAG, CoNLL 2003 NER development set. Note that missing results in literature are signiï¬ed by â-â.
In addition to GLUE, DeBERTa is evaluated on three categories of NLU benchmarks: (1) Question Answering: SQuAD v1.1 (Rajpurkar et al., 2016), SQuAD v2.0 (Rajpurkar et al., 2018), RACE (Lai et al., 2017), ReCoRD (Zhang et al., 2018) and SWAG (Zellers et al., 2018); (2) Natural Language Inference: MNLI (Williams et al., 2018); and (3) NER: CoNLL-2003. For comparison, we include ALBERTxxlarge (Lan et al., 2019) 4 and Megatron (Shoeybi et al., 2019) with three different model sizes, denoted as Megatron336M, Megatron1.3B and Megatron3.9B, respectively, which are trained using the same dataset as RoBERTa. Note that Megatron336M has a similar model size as other models mentioned above5.
We summarize the results in Table 2. Compared to the previous SOTA PLMs with a similar model size (i.e., BERT, RoBERTa, XLNet, ALBERTlarge, and Megatron336M), DeBERTa shows superior performance in all seven tasks. Taking the RACE benchmark as an example, DeBERTa signiï¬cantly outperforms XLNet by +1.4% (86.8% vs. 85.4%). Although Megatron1.3B is three times larger than DeBERTa, DeBERTa outperforms it in three of the four benchmarks. We further report DeBERTa on text generation tasks in Appendix A.4.
5.1.2 PERFORMANCE ON BASE MODELS
Our setting for base model pre-training is similar to that for large models. The base model structure follows that of the BERT base model, i.e., L â 12, H â 768, A â 12. We use 4 DGX-2 with 64 V100 GPUs to train the base model. It takes 10 days to ï¬nish a single pre-training of 1M training steps with batch size 2048. We train DeBERTa using the same 78G dataset, and compare it to RoBERTa and XLNet trained on 160G text data.
We summarize the base model results in Table 3. Across all three tasks, DeBERTa consistently outperforms RoBERTa and XLNet by a larger margin than that in large models. For example, on MNLI-m, DeBERTabase obtains +1.2% (88.8% vs. 87.6%) over RoBERTabase, and +2% (88.8% vs. 86.8%) over XLNetbase.
Model MNLI-m/mm (Acc) SQuAD v1.1 (F1/EM) SQuAD v2.0 (F1/EM) RoBERTabase XLNetbase DeBERTabase 87.6/- 86.8/- 88.8/88.5 91.5/84.6 -/- 93.1/87.2 83.7/80.5 -/80.2 86.2/83.1
Table 3: Results on MNLI in/out-domain (m/mm), SQuAD v1.1 and v2.0 development set.
4The hidden dimension of ALBERTxxlarge is 4 times of DeBERTa and the computation cost is about 4 times of DeBERTa.
5T5 (Raffel et al., 2020) has more parameters (11B). Raffel et al. (2020) only report the test results of T5 which are not comparable with other models.
7
Published as a conference paper at ICLR 2021
5.2 MODEL ANALYSIS
In this section, we ï¬rst present an ablation study to quantify the relative contributions of different components introduced in DeBERTa. Then, we study the convergence property to characterize the model training efï¬ciency. We run experiments for analysis using the base model setting: a model is pre-trained using the Wikipedia + Bookcorpus dataset for 1M steps with batch size 256 in 7 days on a DGX-2 machine with 16 V-100 GPUs. Due to space limit, we visualize the different attention patterns of DeBERTa and RoBERTa in Appendix A.7.
5.2.1 ABLATION STUDY
To verify our experimental setting, we pre-train the RoBERTa base model from scratch. The re-pre- trained RoBERTa model is denoted as RoBERTa-ReImpbase. To investigate the relative contributions of different components in DeBERTa, we develop three variations:
-EMD is the DeBERTa base model without EMD. ⢠-C2P is the DeBERTa base model without the content-to-position term ((c) in Eq. 4). ⢠-P2C is the DeBERTa base model without the position-to-content term ((b) in Eq. 4). As
XLNet also uses the relative position bias, this model is close to XLNet plus EMD.
Model MNLI-m/mm SQuAD v1.1 Acc F1/EM SQuAD v2.0 RACE F1/EM Acc BERTbase Devlin et al. (2019) RoBERTabase Liu et al. (2019c) XLNetbase Yang et al. (2019) RoBERTa-ReImpbase DeBERTabase -EMD -C2P -P2C -(EMD+C2P) -(EMD+P2C) 84.3/84.7 84.7/- 85.8/85.4 84.9/85.1 86.3/86.2 86.1/86.1 85.9/85.7 86.0/85.8 85.8/85.9 85.8/85.8 88.5/81.0 90.6/- -/- 91.1/84.8 92.1/86.1 91.8/85.8 91.6/85.8 91.7/85.7 91.5/85.3 91.3/85.1 76.3/73.7 79.7/- 81.3/78.5 79.5/76.0 82.5/79.3 81.3/78.0 81.3/78.3 80.8/77.6 80.3/77.2 80.2/77.1 65.0 65.6 66.7 66.8 71.7 70.3 69.3 69.6 68.1 68.5
Table 4: Ablation study of the DeBERTa base model.
Table 4 summarizes the results on four benchmark datasets. First, RoBERTa-ReImp performs similarly to RoBERTa across all benchmark datasets, verï¬ying that our setting is reasonable. Second, we see that removing any one component in DeBERTa results in a sheer performance drop. For instance, removing EMD (-EMD) results in a loss of 1.4% (71.7% vs. 70.3%) on RACE, 0.3% (92.1% vs. 91.8%) on SQuAD v1.1, 1.2% (82.5% vs. 81.3%) on SQuAD v2.0, 0.2% (86.3% vs. 86.1%) and 0.1% (86.2% vs. 86.1%) on MNLI-m/mm, respectively. Similarly, removing either content-to-position or position-to-content leads to inferior performance in all the benchmarks. As expected, removing two components results in even more substantial loss in performance.
5.3 SCALE UP TO 1.5 BILLION PARAMETERS
Larger pre-trained models have shown better generalization results (Raffel et al., 2020; Brown et al., 2020; Shoeybi et al., 2019). Thus, we have built a larger version of DeBERTa with 1.5 billion parameters, denoted as DeBERTa1.5B. The model consists of 48 layers with a hidden size of 1,536 and 24 attention heads 6. DeBERTa1.5B is trained on a pre-training dataset amounting to 160G, similar to that in Liu et al. (2019c), with a new vocabulary of size 128K constructed using the dataset.
To train DeBERTa1.5B, we optimize the model architecture as follows. First, we share the projection matrices of relative position embedding Wk,r, Wq,r with Wk,c, Wq,c, respectively, in all attention layers to reduce the number of model parameters. Our ablation study in Table 13 on base models shows that the projection matrix sharing reduces the model size while retaining the model performance.
# 6See Table 8 in Appendix for the model hyperparameters.
8
Published as a conference paper at ICLR 2021
Second, a convolution layer is added aside the ï¬rst Transformer layer to induce n-gram knowledge of sub-word encodings and their outputs are summed up before feeding to the next Transformer layer 7.
Table 5 reports the test results of SuperGLUE (Wang et al., 2019a) which is one of the most popular NLU benchmarks. SuperGLUE consists of a wide of NLU tasks, including Question Answering (Clark et al., 2019; Khashabi et al., 2018; Zhang et al., 2018), Natural Language Inference (Dagan et al., 2006; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009), Word Sense Disambiguation (Pilehvar & Camacho-Collados, 2019), and Reasoning (Levesque et al., 2011; Roemmele et al., 2011). Since its release in 2019, top research teams around the world have been developing large-scale PLMs that have driven striking performance improvement on SuperGLUE.
The signiï¬cant performance boost due to scaling DeBERTa to a larger model makes the single DeBERTa1.5B surpass the human performance on SuperGLUE for the ï¬rst time in terms of macro- average score (89.9 versus 89.8) as of December 29, 2020, and the ensemble DeBERTa model (DeBERTaEnsemble) sits atop the SuperGLUE benchmark rankings as of January 6, 2021, outper- forming the human baseline by a decent margin (90.3 versus 89.8). Compared to T5, which consists of 11 billion parameters, the 1.5-billion-parameter DeBERTa is much more energy efï¬cient to train and maintain, and it is easier to compress and deploy to apps of various settings.
Model BoolQ Acc CB COPA MultiRC ReCoRD RTE WiC WSC Average F1a/EM F1/EM Acc Acc Acc F1/Acc Acc Score RoBERTalarge NEXHA-Plus T511B T511B+Meena Human DeBERTa1.5B+SiFT 90.4 94.9/97.2 96.8 90.4 95.7/97.6 98.4 DeBERTaEnsemble 89.0 87.1 90.5/95.2 90.6 93.2 87.8 94.4/96.0 93.6 93.8 91.2 93.9/96.8 94.8 91.3 95.8/97.6 97.4 95.9 89.0 95.8/98.9 100.0 81.8/51.9 91.7/91.3 93.6 80.0 100.0 95.9 95.9 84.4/52.5 90.6/90.0 88.2 69.9 84.6/55.1 90.1/89.6 89.1 74.6 88.1/63.3 94.1/93.4 92.5 76.9 88.3/63.0 94.2/93.5 92.7 77.9 88.2/63.7 94.5/94.1 93.2 76.4 88.2/63.7 94.5/94.1 93.2 77.5 84.6 86.7 89.3 90.2 89.8 89.9 90.3
Table 5: SuperGLUE test set results scored using the SuperGLUE evaluation server. All the results are obtained from https://super.gluebenchmark.com on January 6, 2021.
# 6 CONCLUSIONS
This paper presents a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The ï¬rst is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. The second is an enhanced mask decoder which incorporates absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for ï¬ne-tuning to improve modelâs generalization on downstream tasks.
We show through a comprehensive empirical study that these techniques signiï¬cantly improve the efï¬ciency of model pre-training and the performance of downstream tasks. The DeBERTa model with 1.5 billion parameters surpasses the human performance on the SuperGLUE benchmark for the ï¬rst time in terms of macro-average score.
DeBERTa surpassing human performance on SuperGLUE marks an important milestone toward general AI. Despite its promising results on SuperGLUE, the model is by no means reaching the human-level intelligence of NLU. Humans are extremely good at leveraging the knowledge learned from different tasks to solve a new task with no or little task-speciï¬c demonstration. This is referred to as compositional generalization, the ability to generalize to novel compositions (new tasks) of familiar constituents (subtasks or basic problem-solving skills). Moving forward, it is worth exploring how to make DeBERTa incorporate compositional structures in a more explicit manner, which could allow combining neural and symbolic computation of natural language similar to what humans do.
7Please refer to Table 12 in Appendix A.6 for the ablation study of different model sizes, and Table 13 in Appendix A.6 for the ablation study of new modiï¬cations.
9
Published as a conference paper at ICLR 2021
# 7 ACKNOWLEDGMENTS
We thank Jade Huang and Nikos Karampatziakis for proofreading the paper and providing insightful comments. We thank Yoyo Liang, Saksham Singhal, Xia Song, and Saurabh Tiwary for their help with large-scale model training. We also thank the anonymous reviewers for valuable discussions.
# REFERENCES
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. The second PASCAL recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, 01 2006.
Iz Beltagy, Matthew E Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. The ï¬fth pascal recognizing textual entailment challenge. In In Proc Text Analysis Conference (TACâ09, 2009.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. Semeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055, 2017.
Kezhen Chen, Qiuyuan Huang, Hamid Palangi, Paul Smolensky, Kenneth D Forbus, and Jianfeng Gao. Natural-to formal-language generation using tensor product representations. arXiv preprint arXiv:1910.02339, 2019.
Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difï¬culty of natural yes/no questions. In Proceedings of NAACL-HLT 2019, 2019.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. ELECTRA: Pre-training text encoders as discriminators rather than generators. In ICLR, 2020.
Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment In Proceedings of the First International Conference on Machine Learning Chal- challenge. lenges: Evaluating Predictive Uncertainty Visual Object Classiï¬cation, and Recognizing Textual Entailment, MLCWâ05, Berlin, Heidelberg, 2006.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a ï¬xed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 2978â2988, 2019.
Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. The commitmentbank: In- vestigating projection in naturally occurring discourse. In proceedings of Sinn und Bedeutung, volume 23, pp. 107â124, 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171â4186, 2019.
10
Published as a conference paper at ICLR 2021
William B Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. Uniï¬ed language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, pp. 13042â13054, 2019.
William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efï¬cient sparsity. arXiv preprint arXiv:2101.03961, 2021.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 1â9, Prague, June 2007. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W07-1401.
Aaron Gokaslan and Vanya Cohen. Openwebtext corpus. http://Skylion007.github.io, 2019.
Pengcheng He, Xiaodong Liu, Weizhu Chen, and Jianfeng Gao. A hybrid neural network model for commonsense reasoning. arXiv preprint arXiv:1907.11983, 2019a.
Pengcheng He, Yi Mao, Kaushik Chakrabarti, and Weizhu Chen. X-sql: reinforce schema representa- tion with context. arXiv preprint arXiv:1908.08113, 2019b.
Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Ian Simon, Curtis Hawthorne, Noam Shazeer, Andrew M Dai, Matthew D Hoffman, Monica Dinculescu, and Douglas Eck. Music transformer: Generating music with long-term structure. 2018.
Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. SMART: Robust and efï¬cient ï¬ne-tuning for pre-trained natural language models through principled regu- larized optimization. In ACL, July 2020. doi: 10.18653/v1/2020.acl-main.197.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64â77, 2020.
Kamal Raj Kanakarajan, Bhuvana Kundumani, and Malaikannan Sankarasubbu. Small-bench nlp: Benchmark for small single gpu trained models in natural language processing. ArXiv, abs/2109.10847, 2021.
Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 252â262, 2018.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efï¬cient transformer. International Conference on Learning Representations, 2019. In
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pp. 785â794, 2017.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. In International Albert: A lite bert for self-supervised learning of language representations. Conference on Learning Representations, 2019.
In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012.
11
Published as a conference paper at ICLR 2021
Hector J Levesque, Ernest Davis, and Leora Morgenstern. The Winograd schema challenge. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, volume 46, pp. 47, 2011.
Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. On the variance of the adaptive learning rate and beyond. In International Conference on Learning Representations, 2019a.
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics, pp. 4487â4496, Florence, Italy, July 2019b. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P19-1441.
Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994, 2020.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019c.
Ilya Loshchilov and Frank Hutter. Fixing weight decay regularization in adam. 2018.
Yu Meng, Chenyan Xiong, Payal Bajaj, Saurabh Tiwary, Paul Bennett, Jiawei Han, and Xia Song. Coco-lm: Correcting and contrasting text sequences for language model pretraining. 2021.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. arXiv, pp. arXivâ1609, 2016.
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jian- feng Gao. Deep learning based text classiï¬cation: A comprehensive review. arXiv preprint arXiv:2004.03705, 2020.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979â1993, 2018.
Mohammad Taher Pilehvar and Jose Camacho-Collados. Wic: the word-in-context dataset for evaluating context-sensitive meaning representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 1267â1273, 2019.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a uniï¬ed text-to-text transformer. Journal of Machine Learning Research, 21(140):1â67, 2020. URL http://jmlr.org/papers/v21/20-074.html.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, November 2016.
Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you donât know: Unanswerable questions for squad. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 784â789, 2018.
Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S. Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series, 2011.
Imanol Schlag, Paul Smolensky, Roland Fernandez, Nebojsa Jojic, Jürgen Schmidhuber, and Jianfeng Gao. Enhancing the transformer with explicit relational encoding for math problem solving. arXiv preprint arXiv:1910.06611, 2019.
12
Published as a conference paper at ICLR 2021
Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pp. 464â468, 2018.
Tao Shen, Yi Mao, Pengcheng He, Guodong Long, Adam Trischler, and Weizhu Chen. Ex- ploiting structured knowledge in text via graph-guided representation learning. arXiv preprint arXiv:2004.14224, 2020.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using gpu model parallelism. arXiv preprint arXiv:1909.08053, 2019.
Paul Smolensky. Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artiï¬cial intelligence, 46(1-2):159â216, 1990.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631â1642, 2013.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223, 2019.
Trieu H Trinh and Quoc V Le. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847, 2018.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz In Advances in neural information Kaiser, and Illia Polosukhin. Attention is all you need. processing systems, pp. 5998â6008, 2017.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in neural information processing systems, pp. 3266â3280, 2019a.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. In 7th Interna- tional Conference on Learning Representations, ICLR 2019, 2019b.
Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. Structbert: Incor- porating language structures into pre-training for deep language understanding. arXiv preprint arXiv:1908.04577, 2019c.
Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471, 2018.
Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technolo- gies, Volume 1 (Long Papers), pp. 1112â1122. Association for Computational Linguistics, 2018. URL http://aclweb.org/anthology/N18-1101.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pp. 5754â5764, 2019.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. Swag: A large-scale adversarial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 93â104, 2018.
13
Published as a conference paper at ICLR 2021
Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint 1810.12885, 2018.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE international conference on computer vision, pp. 19â27, 2015.
14
Published as a conference paper at ICLR 2021
# A APPENDIX
A.1 DATASET
Corpus Task #Train #Dev #Test #Label Metrics General Language Understanding Evaluation (GLUE) CoLA SST MNLI RTE WNLI QQP MRPC QNLI STS-B Acceptability Sentiment NLI NLI NLI Paraphrase Paraphrase QA/NLI Similarity 8.5k 67k 393k 2.5k 634 364k 3.7k 108k 7k 1k 872 20k 276 71 40k 408 5.7k 1.5k 1k 1.8k 20k 3k 146 391k 1.7k 5.7k 1.4k 2 2 3 2 2 2 2 2 1 Matthews corr Accuracy Accuracy Accuracy Accuracy Accuracy/F1 Accuracy/F1 Accuracy Pearson/Spearman corr WSC BoolQ COPA CB RTE WiC ReCoRD MultiRC SuperGLUE 554k 9,427 400k 250 2.5k 2.5k 101k 5,100 Coreference QA QA NLI NLI WSD MRC Multiple choice 104 3,270 100 57 276 276 10k 953 146 3,245 500 250 3k 3k 10k 1,800 2 2 2 3 2 2 - - Accuracy Accuracy Accuracy Accuracy/F1 Accuracy Accuracy Exact Match (EM)/F1 Exact Match (EM)/F1 Question Answering SQuAD v1.1 MRC SQuAD v2.0 MRC MRC RACE Multiple choice SWAG 87.6k 130.3k 87,866 73.5k 10.5k 11.9k 4,887 20k 9.5k 8.9k 4,934 20k - - 4 4 Exact Match (EM)/F1 Exact Match (EM)/F1 Accuracy Accuracy Token Classiï¬cation CoNLL 2003 NER 14,987 3,466 3,684 8 F1
Table 6: Summary information of the NLP application benchmarks.
â GLUE. The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding (NLU) tasks. As shown in Table 6, it includes question answer- ing (Rajpurkar et al., 2016), linguistic acceptability (Warstadt et al., 2018), sentiment analysis (Socher et al., 2013), text similarity (Cer et al., 2017), paraphrase detection (Dolan & Brockett, 2005), and natural language inference (NLI) (Dagan et al., 2006; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009; Levesque et al., 2012; Williams et al., 2018). The diversity of the tasks makes GLUE very suitable for evaluating the generalization and robustness of NLU models.
â SuperGLUE. SuperGLUE is an extension of the GLUE benchmark, but more difï¬cult, which is a collection of eight NLU tasks. It covers a various of tasks including question answering (Zhang et al., 2018; Clark et al., 2019; Khashabi et al., 2018), natural language inference (Dagan et al., 2006; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Bentivogli et al., 2009; De Marneffe et al., 2019), coreference resolution (Levesque et al., 2012) and word sense disambiguation (Pilehvar & Camacho-Collados, 2019).
â RACE is a large-scale machine reading comprehension dataset, collected from English examinations in China, which are designed for middle school and high school students (Lai et al., 2017).
â SQuAD v1.1/v2.0 is the Stanford Question Answering Dataset (SQuAD) v1.1 and v2.0 (Rajpurkar et al., 2016; 2018) are popular machine reading comprehension benchmarks. Their passages come from approximately 500 Wikipedia articles and the questions and answers are obtained by crowd- sourcing. The SQuAD v2.0 dataset includes unanswerable questions about the same paragraphs.
15
Published as a conference paper at ICLR 2021
â SWAG is a large-scale adversarial dataset for the task of grounded commonsense inference, which uniï¬es natural language inference and physically grounded reasoning (Zellers et al., 2018). SWAG consists of 113k multiple choice questions about grounded situations.
â CoNLL 2003 is an English dataset consisting of text from a wide variety of sources. It has 4 types of named entity.
A.2 PRE-TRAINING DATASET
For DeBERTa pre-training, we use Wikipedia (English Wikipedia dump8; 12GB), BookCorpus (Zhu et al., 2015) 9 (6GB), OPENWEBTEXT (public Reddit content (Gokaslan & Cohen, 2019); 38GB) and STORIES10 (a subset of CommonCrawl (Trinh & Le, 2018); 31GB). The total data size after data deduplication(Shoeybi et al., 2019) is about 78GB. For pre-training, we also sample 5% training data as the validation set to monitor the training process. Table 7 compares datasets used in different pre-trained models.
Model Wiki+Book | OpenWebText | Stories | CC-News | Giga5 | ClueWeb | Common Crawl 16GB 38GB 31GB | 76GB |16GB| 19GB 110GB BERT v XLNet v v v v RoBERTa v v v v DeBERTa v v v DeBERTay 52 v v v v
Table 7: Comparison of the pre-training data.
IMPLEMENTATION DETAILS
Following RoBERTa (Liu et al., 2019c), we adopt dynamic data batching. We also include span masking (Joshi et al., 2020) as an additional masking strategy with the span size up to three. We list the detailed hyperparameters of pre-training in Table 8. For pre-training, we use Adam (Kingma & Ba, 2014) as the optimizer with weight decay (Loshchilov & Hutter, 2018). For ï¬ne-tuning, even though we can get better and robust results with RAdam(Liu et al., 2019a) on some tasks, e.g. CoLA, RTE and RACE, we use Adam(Kingma & Ba, 2014) as the optimizer for a fair comparison. For ï¬ne-tuning, we train each task with a hyper-parameter search procedure, each run takes about 1-2 hours on a DGX-2 node. All the hyper-parameters are presented in Table 9. The model selection is based on the performance on the task-speciï¬c development sets.
Our code is implemented based on Huggingface Transformers11, FairSeq12 and Megatron (Shoeybi et al., 2019)13.
A.3.1 PRE-TRAINING EFFICIENCY
To investigate the efï¬ciency of model pre-training, we plot the performance of the ï¬ne-tuned model on downstream tasks as a function of the number of pre-training steps. As shown in Figure 1, for RoBERTa-ReImpbase and DeBERTabase, we dump a checkpoint every 150K pre-training steps, and then ï¬ne-tune the checkpoint on two representative downstream tasks, MNLI and SQuAD v2.0, and then report the accuracy and F1 score, respectively. As a reference, we also report the ï¬nal model performance of both the original RoBERTabase (Liu et al., 2019c) and XLNetbase (Yang et al., 2019). The results show that DeBERTabase consistently outperforms RoBERTa-ReImpbase during the course of pre-training.
# 8https://dumps.wikimedia.org/enwiki/ 9https://github.com/butsugiri/homemade_bookcorpus 10https://github.com/tensorï¬ow/models/tree/master/research/lm_commonsense 11https://github.com/huggingface/transformers 12https://github.com/pytorch/fairseq 13https://github.com/NVIDIA/Megatron-LM
16
Published as a conference paper at ICLR 2021
Hyper-parameter DeBERTay_5 3 | DeBERTaâ¬jay-ge | DEBERTapase | DeBERT Aba seâabiation Number of Layers 48 24 12 12 Hidden size 1536 1024 768 768 FNN inner hidden size 6144 4096 3072 3072 Attention Heads 24 16 12 12 Attention Head size 64 64 64 64 Dropout 0.1 0.1 0.1 0.1 Warmup Steps 10k 10k 10k 10k Learning Rates 1.5e-4 2e-4 2e-4 le-4 Batch Size 2k 2k 2k 256 Weight Decay 0.01 0.01 0.01 0.01 Max Steps 1M 1M 1M 1M Learning Rate Decay Linear Linear Linear Linear Adam ⬠le-6 le-6 le-6 le-6 Adam (3; 0.9 0.9 0.9 0.9 Adam (2 0.999 0.999 0.999 0.999 Gradient Clipping 1.0 1.0 1.0 1.0 Number of DGX-2 nodes 16 6 4 1 Training Time 30 days 20 days 10 days 7 days
# DeBERTa1.5B DeBERTalarge DeBERTabase DeBERTabase´ablation
Table 8: Hyper-parameters for pre-training DeBERTa.
Hyper-parameter DeBERTa,.58 DeBERTajarge DeBERTapase Dropout of task layer Warmup Steps Learning Rates Batch Size Weight Decay Maximun Training Epochs Learning Rate Decay Adam ⬠Adam 3; Adam {5 Gradient Clipping {0,0.15,0.3} {50,100,500, 1000} {le-6, 3e-6, 5e-6} {16,32,64} 0.01 10 Linear le-6 0.9 0.999 1.0 {0,0.1,0.15} {50,100,500,1000} {5e-6, 8e-6, 9e-6, le-5} {16,32,48,64} 0.01 10 Linear le-6 0.9 0.999 1.0 {0,0.1,0.15} {50,100,500, 1000} {1.5e-5,2e-5, 3e-5, 4e-5} {16,32,48,64} 10 Linear le-6 0.9 0.999 1.0
Table 9: Hyper-parameters for ï¬ne-tuning DeBERTa on down-streaming tasks.
(a) Results on MNLI development (b) Results on SQuAD v2.0 development
82 S w 80 > a g78 3 â#â RoBERTa-Relmppase Qo, â*â DeBERTAbase 4 a ~@- ROBERTAbase 74 Â¥ XLN@tpase Number of 150k 250k 350k 450k 550k 650k 750k 850k 1M pre-training steps
5 z2 = > 84 g > Fe} â < 82 soe: a RoBERTa-RelmPpase DeBERTAapase RoBERTAabpase XLNetpase 150k 250k 350k 450k 550k 650k 750k 850k 1M Number of pre-training steps
Figure 1: Pre-training performance curve between DeBERTa and its counterparts on the MNLI and SQuAD v2.0 development set.
17
Published as a conference paper at ICLR 2021
A.4 MAIN RESULTS ON GENERATION TASKS
In addition to NLU tasks, DeBERTa can also be extended to handle NLG tasks. To allow DeBERTa operating like an auto-regressive model for text generation, we use a triangular matrix for self- attention and set the upper triangular part of the self-attention mask to ´8, following Dong et al. (2019).
We evaluate DeBERTa on the task of auto-regressive language model (ARLM) using Wikitext- 103 (Merity et al., 2016). To do so, we train a new version of DeBERTa, denoted as DeBERTa-MT. It is jointly pre-trained using the MLM and ARLM tasks as in UniLM (Dong et al., 2019). The pre-training hyper-parameters follows that of DeBERTabase except that we use fewer training steps (200k). For comparison, we use RoBERTa as baseline, and include GPT-2 and Transformer-XL as additional references. DeBERTa-AP is a variant of DeBERTa where absolute position embeddings are incorporated in the input layer as RoBERTa. For a fair comparison, all these models are base models pre-trained in a similar setting.
Model Dev PPL Test PPL 21.6 21.6 20.7 20.0 20.5 19.9 19.5 19.5 - 37.50 23.1 24
Table 10: Language model results in perplexity (lower is better) on Wikitext-103 .
Table 10 summarizes the results on Wikitext-103. We see that DeBERTabase obtains lower perplexities on both dev and test data, and joint training using MLM and ARLM reduces perplexity further. That DeBERTa-AP is inferior to DeBERTa indicates that it is more effective to incorporate absolute position embeddings of words in the decoding layer as the EMD in DeBERTa than in the input layer as RoBERTa.
A.5 HANDLING LONG SEQUENCE INPUT
With relative position bias, we choose to truncate the maximum relative distance to k as in equation 3. Thus in each layer, each token can attend directly to at most 2pk ´ 1q tokens and itself. By stacking Transformer layers, each token in the l´th layer can attend to at most p2k ´ 1ql tokens implicitly. Taking DeBERTalarge as an example, where k â 512, L â 24, in theory, the maximum sequence length that can be handled is 24,528. This is a byproduct beneï¬t of our design choice and we ï¬nd it beneï¬cial for the RACE task. A comparison of long sequence effect on the RACE task is shown in Table 11.
512 768 88.8 88.7 85.0 86.3 86.3 86.8
# Sequence length Middle High Accuracy
Table 11: The effect of handling long sequence input for RACE task with DeBERTa
Long sequence handling is an active research area. There have been a lot of studies where the Transformer architecture is extended for long sequence handling(Beltagy et al., 2020; Kitaev et al., 2019; Child et al., 2019; Dai et al., 2019). One of our future research directions is to extend DeBERTa to deal with extremely long sequences.
# A.6 PERFORMANCE IMPROVEMENTS OF DIFFERENT MODEL SCALES
In this subsection, we study the effect of different model sizes applied to large models on GLUE. Table 12 summarizes the results, showing that larger models can obtain a better result and SiFT also improves the model performance consistently.
18
Published as a conference paper at ICLR 2021
Model CoLA QQP MNLI-m/mm SST-2 STS-B QNLI RTE MRPC Avg. Mcc Acc Acc Acc Corr Acc Acc Acc 70.5 92.3 DeBERTalarge 71.1 92.3 DeBERTa900M DeBERTa1.5B 72.0 92.7 DeBERTa1.5B+SiFT 73.5 93.0 91.1/91.1 91.7/91.6 91.7/91.9 92.0/92.1 96.8 97.5 97.2 97.5 92.8 92.0 92.9 93.2 95.3 95.8 96.0 96.5 88.3 93.5 93.9 96.5 91.9 93.1 92.0 93.2 90.00 90.86 91.17 91.93
Table 12: Comparison results of DeBERTa models with different sizes on the GLUE development set.
Model RoBERTa-ReImpbase DeBERTabase + ShareProjection + Conv + 128k Vocab Parameters MNLI-m/mm SQuAD v1.1 Acc 84.9/85.1 86.3/86.2 86.3/86.3 86.3/86.5 86.7/86.9 F1/EM 91.1/84.8 92.1/86.1 92.2/86.2 92.5/86.4 93.1/86.8 120M 134M 120M 122M 190M SQuAD v2.0 F1/EM 79.5/76.0 82.5/79.3 82.3/79.5 82.5/79.7 83.0/80.1
Table 13: Ablation study of the additional modiï¬cations in DeBERTa1.5B and DeBERTa900M models. Note that we progressively add each component on the top of DeBERTabase.
A.7 MODEL COMPLEXITY
With the disentangled attention mechanism, we introduce three additional sets of parameters Wq,r, Wk,r P RdËd and P P R2kËd. The total increase in model parameters is 2L Ë d2 ` 2k Ë d. For the large model pd â 1024, L â 24, k â 512q, this amounts to about 49M additional parameters, an increase of 13%. For the base modelpd â 768, L â 12, k â 512q, this amounts to 14M additional parameters, an increase of 12%. However, by sharing the projection matrix between content and position embedding, i.e. Wq,r â Wq,c, Wk,r â Wk,c, the number of parameters of DeBERTa is the same as RoBERTa. Our experiment on base model shows that the results are almost the same, as in Table 13.
The additional computational complexity is OpN kdq due to the calculation of the additional position- to-content and content-to-position attention scores. Compared with BERT or RoBERTa, this increases the computational cost by 30%. Compared with XLNet which also uses relative position embedding, the increase of computational cost is about 15%. A further optimization by fusing the attention computation kernel can signiï¬cantly reduce this additional cost. For EM D, since the decoder in pre-training only reconstructs the masked tokens, it does not introduce additional computational cost for unmasked tokens. In the situation where 15% tokens are masked and we use only two decoder layers, the additional cost is 0.15 Ë 2{L which results in an additional computational cost of only 3% for base model(L â 12) and 2% for large model(L â 24) in EMD.
A.8 ADDITIONAL DETAILS OF ENHANCED MASK DECODER
The structure of EMD is shown in Figure 2b. There are two inputs for EMD, (i.e., I, H). H denotes the hidden states from the previous Transformer layer, and I can be any necessary information for decoding, e.g., H, absolute position embedding or output from previous EMD layer. n denotes n stacked layers of EMD where the output of each EMD layer will be the input I for next EMD layer and the output of last EMD layer will be fed to the language model head directly. The n layers can share the same weight. In our experiment we share the same weight for n â 2 layers to reduce the number of parameters and use absolute position embedding as I of the ï¬rst EMD layer. When I â H and n â 1, EMD is the same as the BERT decoder layer. However, EMD is more general and ï¬exible as it can take various types of input information for decoding.
A.9 ATTENTION PATTERNS
To visualize how DeBERTa operates differently from RoBERTa, we present in Figure 3 the attention patterns (taken in the last self-attention layers) of RoBERTa, DeBERTa and three DeBERTa variants.
19
Published as a conference paper at ICLR 2021
(a) BERT decoding layer (b) Enhanced Mask Decoder
Figure 2: Comparison of the decoding layer.
DeBERTa RoBERTa DeBERTa-EMD DeBERTa-C2P DeBERTa-P2C -1.0 -0.8 0.6 0.4 0.2 024 68101214 0 2 46 8101214 0 2 4 6 8101214 0 24 6 8101214 0 2 4 6 8101214 ° 141210 86420
Figure 3: Comparison of attention patterns of the last layer among DeBERTa, RoBERTa and DeBERTa variants (i.e., DeBERTa without EMD, C2P and P2C respectively).
We observe two differences. First, RoBERTa has a clear diagonal line effect for a token attending to itself. But this effect is not very visible in DeBERTa. This can be attributed to the use of EMD, in which the absolute position embedding is added to the hidden state of content as the query vector, as veriï¬ed by the attention pattern of DeBERTa-EMD where the diagonal line effect is more visible than that of the original DeBERTa. Second, we observe vertical strips in the attention patterns of RoBERTa, which are mainly caused by high-frequent functional words or tokens (e.g., âaâ, âtheâ, and punctuation). For DeBERTa, the strip only appears in the ï¬rst column, which represents the [CLS] token. We conjecture that a dominant emphasis on [CLS] is desirable since the feature vector of [CLS] is often used as a contextual representation of the entire input sequence in downstream tasks. We also observe that the vertical strip effect is quite obvious in the patterns of the three DeBERTa variants.
We present three additional examples to illustrate the different attention patterns of DeBERTa and RoBERTa in Figures 4 and 5.
20
Published as a conference paper at ICLR 2021
DeBERTa RoBERTa 12210 8 6 4 2 0 4 1.0 0.8 0.6 0.4 0.2 CunmTnenaag Pq CHAMeNeNBoOonAMED 4 easgs9
(a)
DeBERTa RoBERTa
(b)
Figure 4: Comparison on attention patterns of the last layer between DeBERTa and RoBERTa.
21
Published as a conference paper at ICLR 2021
(a)
DeBERTa DeBERTa-EMD DeBERTa-C2P DeBERTa-P2C SANMTMOR@AGIAMSN CANMTNONO®QIAMEN CAHNMIHONOMOMAMTN CANMTMONOBOGAMSN 14 12 10
DeBERTa DeBERTa-EMD DeBERTa-C2P. DeBERTa-P2C 10 08 0.6 0.4 0.2
(b)
DeBERTa-EMD DeBERTa-C2P_ DeBERTa-P2C 1.0 08 0.6 0.4 0.2
(c)
Figure 5: Comparison on attention patterns of last layer between DeBERTa and its variants (i.e. DeBERTa without EMD, C2P and P2C respectively).
A.10 ACCOUNT FOR THE VARIANCE IN FINE-TUNING
Accounting for the variance of different runs of ï¬ne-tuning, in our experiments, we always follow (Liu et al., 2019c) to report the results on downstream tasks by averaging over ï¬ve runs with different random initialization seeds, and perform signiï¬cance test when comparing results. As the examples shown in Table 14, DeBERTabase signiï¬cantly outperforms RoBERTabase (p-value < 0.05).
Model RoBERTabase DeBERTabase MNLI-matched (Min/Max/Avg) 84.7/85.0/84.9 86.1/86.5/86.3 SQuAD v1.1 (Min/Max/Avg) 90.8/91.3/91.1 91.8/92.2/92.1 p-value 0.02 0.01
Table 14: Comparison of DeBERTa and RoBERTa on MNLI-matched and SQuAD v1.1.
# A.11 FURTHER IMPROVE THE MODEL EFFICIENCY
In addition to scaling up transformer models with billions or trillions of parameters (Raffel et al., 2020; Brown et al., 2020; Fedus et al., 2021), it is important to improve modelâs parameter efï¬ciency (Kanakarajan et al., 2021). In A.3.1 we have shown that DeBERTa is more parameter efï¬cient than BERT and RoBERTa. In this section, we show further improvements in terms of parameter efï¬ciency.
22
Published as a conference paper at ICLR 2021
Replaced token detection (RTD) is a new pre-training objective introduced by ELECTRA (Clark et al., 2020). It has been shown to be more effective than masked language model (MLM) (Devlin et al., 2019; Liu et al., 2019c). In DeBERTa, we replace the MLM objective with the RTD objective, and denote the new variant as DeBERTaRT D. We pre-train DeBERTaRT D using small, base and large settings with the same 160GB data as DeBERTa1.5B. Following (Meng et al., 2021), we set the width of the generator the same as that of the discriminator, but set its depth only half of the discriminatorâs depth. Other hyper-parameters remain the same as DeBERTabase or DeBERTalarge. For the new member DeBERTaRT Dsmall , it has 6 layers with the same width as DeBERTaRT Dbase . We evaluate our models on MNLI and SQuAD v2 datasets. Table 15 summarizes the results. We observe that both DeBERTaRT Dbase and DeBERTaRT Dlarge signiï¬cantly outperform other models. For example, DeBERTaRT Dlarge obtains 0.9 absolute improvement over DeBERTaLarge (the previous SoTA model) on MNLI and SQuAD v2.0, respectively. It is worth noting that DeBERTaRT Dlarge is on-par with DeBERTa1.5B while has only 1/3 parameters of DeBERTa1.5B. Furthermore, DeBERTaRT Dsmall even outperforms BERTlarge by a large margin. All these demonstrate the efï¬ciency of DeBERTaRT D models and clearly show a huge potential to further improve modelâs parameter efï¬ciency. Our work lays the base for future studies on far more parameter-efï¬cient pre-trained language models.
Model BERTRT Dsmall BERTbase RoBERTabase ELECTRAbase DeBERTabase DeBERTaRT Dbase BERTlarge RoBERTalarge ELECTRAlarge DeBERTalarge DeBERTaRT Dlarge DeBERTa1.5B MNLI(m/mm Acc) 88.2/87.9 84.3/84.7 87.6/- 88.8/- 88.8/88.5 90.6/90.8 86.6/- 90.2/90.2 90.9/- 91.1/91.1 92.0/91.9 91.7/91.9 SQuAD v2.0 (F1/EM) 82.9/80.4 76.3/73.7 83.7/80.5 83.3/80.5 86.2/83.1 88.4/85.4 81.8/79.0 89.4/86.5 90.6/88.0 90.7/88.0 91.5/89.0 92.2/89.7
Table 15: Comparison of different variants of DeBERTa models on MNLI and SQuAD 2.0.
23 | {
"id": "1908.04577"
} |
2006.03511 | Unsupervised Translation of Programming Languages | A transcompiler, also known as source-to-source translator, is a system that
converts source code from a high-level programming language (such as C++ or
Python) to another. Transcompilers are primarily used for interoperability, and
to port codebases written in an obsolete or deprecated language (e.g. COBOL,
Python 2) to a modern one. They typically rely on handcrafted rewrite rules,
applied to the source code abstract syntax tree. Unfortunately, the resulting
translations often lack readability, fail to respect the target language
conventions, and require manual modifications in order to work properly. The
overall translation process is timeconsuming and requires expertise in both the
source and target languages, making code-translation projects expensive.
Although neural models significantly outperform their rule-based counterparts
in the context of natural language translation, their applications to
transcompilation have been limited due to the scarcity of parallel data in this
domain. In this paper, we propose to leverage recent approaches in unsupervised
machine translation to train a fully unsupervised neural transcompiler. We
train our model on source code from open source GitHub projects, and show that
it can translate functions between C++, Java, and Python with high accuracy.
Our method relies exclusively on monolingual source code, requires no expertise
in the source or target languages, and can easily be generalized to other
programming languages. We also build and release a test set composed of 852
parallel functions, along with unit tests to check the correctness of
translations. We show that our model outperforms rule-based commercial
baselines by a significant margin. | http://arxiv.org/pdf/2006.03511 | Marie-Anne Lachaux, Baptiste Roziere, Lowik Chanussot, Guillaume Lample | cs.CL, cs.PL | null | null | cs.CL | 20200605 | 20200922 | 0 2 0 2
p e S 2 2 ] L C . s c [
3 v 1 1 5 3 0 . 6 0 0 2 : v i X r a
# Unsupervised Translation of Programming Languages
Marie-Anne Lachauxâ Facebook AI Research [email protected]
Baptiste Roziere* Facebook AI Research Paris-Dauphine University [email protected]
Lowik Chanussot Facebook AI Research [email protected]
Guillaume Lample Facebook AI Research [email protected]
# Abstract
A transcompiler, also known as source-to-source translator, is a system that converts source code from a high-level programming language (such as C++ or Python) to another. Transcompilers are primarily used for interoperability, and to port codebases written in an obsolete or deprecated language (e.g. COBOL, Python 2) to a modern one. They typically rely on handcrafted rewrite rules, applied to the source code abstract syntax tree. Unfortunately, the resulting translations often lack readability, fail to respect the target language conventions, and require manual modiï¬cations in order to work properly. The overall translation process is time- consuming and requires expertise in both the source and target languages, making code-translation projects expensive. Although neural models signiï¬cantly outper- form their rule-based counterparts in the context of natural language translation, their applications to transcompilation have been limited due to the scarcity of paral- lel data in this domain. In this paper, we propose to leverage recent approaches in unsupervised machine translation to train a fully unsupervised neural transcompiler. We train our model on source code from open source GitHub projects, and show that it can translate functions between C++, Java, and Python with high accuracy. Our method relies exclusively on monolingual source code, requires no expertise in the source or target languages, and can easily be generalized to other programming languages. We also build and release a test set composed of 852 parallel functions, along with unit tests to check the correctness of translations. We show that our model outperforms rule-based commercial baselines by a signiï¬cant margin.
# Introduction
A transcompiler, transpiler, or source-to-source compiler, is a translator which converts between programming languages that operate at a similar level of abstraction. Transcompilers differ from traditional compilers that translate source code from a high-level to a lower-level programming language (e.g. assembly language) to create an executable. Initially, transcompilers were developed to port source code between different platforms (e.g. convert source code designed for the Intel 8080 processor to make it compatible with the Intel 8086). More recently, new languages have been developed (e.g. CoffeeScript, TypeScript, Dart, Haxe) along with dedicated transcompilers that convert them into a popular or omnipresent language (e.g. JavaScript). These new languages address some shortcomings of the target language by providing new features such as list comprehension (CoffeeScript), object-oriented programming and type checking (TypeScript), while detecting errors and providing optimizations. Unlike traditional programming languages, these new languages are
âEqual contribution. The order was determined randomly.
Preprint. Under review.
the compiled language does not require designed to be translated with a perfect accuracy (i.e. manual adjustments to work properly). In this paper, we are more interested in the traditional type of transcompilers, where typical use cases are to translate an existing codebase written in an obsolete or deprecated language (e.g. COBOL, Python 2) to a recent one, or to integrate code written in a different language to an existing codebase.
Migrating an existing codebase to a modern or more efï¬cient language like Java or C++ requires expertise in both the source and target languages, and is often costly. For instance, the Commonwealth Bank of Australia spent around $750 million and 5 years of work to convert its platform from COBOL to Java. Using a transcompiler and manually adjusting the output source code may be a faster and cheaper solution than rewriting the entire codebase from scratch. In natural language, recent advances in neural machine translation have been widely accepted, even among professional translators, who rely more and more on automated machine translation systems. A similar phenomenon could be observed in programming language translation in the future.
Translating source code from one Turing-complete language to another is always possible in theory. Unfortunately, building a translator is difï¬cult in practice: different languages can have a different syntax and rely on different platform APIs and standard-library functions. Currently, the majority of transcompilation tools are rule-based; they essentially tokenize the input source code and convert it into an Abstract Syntax Tree (AST) on which they apply handcrafted rewrite rules. Creating them requires a lot of time, and advanced knowledge in both the source and target languages. Moreover, translating from a dynamically-typed language (e.g. Python) to a statically-typed language (e.g. Java) requires to infer the variable types which is difï¬cult (and not always possible) in itself.
The applications of neural machine translation (NMT) to programming languages have been limited so far, mainly because of the lack of parallel resources available in this domain. In this paper, we propose to apply recent approaches in unsupervised machine translation, by leveraging large amount of monolingual source code from GitHub to train a model, TransCoder, to translate between three popular languages: C++, Java and Python. To evaluate our model, we create a test set of 852 parallel functions, along with associated unit tests. Although never provided with parallel data, the model manages to translate functions with a high accuracy, and to properly align functions from the standard library across the three languages, outperforming rule-based and commercial baselines by a signiï¬cant margin. Our approach is simple, does not require any expertise in the source or target languages, and can easily be extended to most programming languages. Although not perfect, the model could help to reduce the amount of work and the level of expertise required to successfully translate a codebase. The main contributions of the paper are the following:
⢠We introduce a new approach to translate functions from a programming language to another, that is purely based on monolingual source code.
⢠We show that TransCoder successfully manages to grasp complex patterns speciï¬c to each language, and to translate them to other languages.
⢠We show that a fully unsupervised method can outperform commercial systems that leverage rule-based methods and advanced programming knowledge.
⢠We build and release a validation and a test set composed of 852 parallel functions in 3 languages, along with unit tests to evaluate the correctness of generated translations.
We will make our code and pretrained models publicly available.
# 2 Related work
Source-to-source translation. Several studies have investigated the possibility to translate pro- gramming languages with machine translation. For instance, Nguyen et al. [36] trained a Phrase-Based Statistical Machine Translation (PBSMT) model, Moses [27], on a Java-C# parallel corpus. They cre- ated their dataset using the implementations of two open source projects, Lucene and db4o, developed in Java and ported to C#. Similarly, Karaivanov et al. [22] developed a tool to mine parallel datasets from ported open source projects. Aggarwal et al. [1] trained Moses on a Python 2 to Python 3 parallel corpus created with 2to3, a Python library 2 developed to port Python 2 code to Python 3. Chen et al. [12] used the Java-C# dataset of Nguyen et al. [36] to translate code with tree-to-tree neural networks.
# 2https://docs.python.org/2/library/2to3.html
2
They also use a transcompiler to create a parallel dataset CoffeeScript-Javascript. Unfortunately, all these approaches are supervised, and rely either on the existence of open source projects available in multiple languages, or on existing transcompilers, to create parallel data. Moreover, they essentially rely on BLEU score [38] to evaluate their translations [1, 10, 22, 36], which is not a reliable metric, as a generation can be a valid translation while being very different from the reference.
Translating from source code. Other studies have investigated the use of machine translation from source code. For instance, Oda et al. [37] trained a PBSMT model to generate pseudo-code. To create a training set, they hired programmers to write the pseudo-code of existing Python functions. Barone and Sennrich [10] built a corpus of Python functions with their docstrings from open source GitHub repositories. They showed that a neural machine translation model could be used to map functions to their associated docstrings, and vice versa. Similarly, Hu et al. [21] proposed a neural approach, DeepCom, to automatically generate code comments for Java methods.
Other applications. Another line of work studied the applications of neural networks to code suggestion [2, 11, 34], or error detection [13, 18, 47]. Recent approaches have also investigated the use of neural approaches for code decompilation [16, 24]. For instance, Katz et al. [23] propose a sequence-to-sequence model to predict the C code of binary programs. A common issue with standard seq2seq models, is that the generated functions are not guaranteed to compile, and even to be syntactically correct. To address this issue, several approaches proposed to use additional constraints on the decoder, to ensure that the generated functions respect the syntax of the target language [3, 4, 5, 40, 48]. Recently, Feng et al. [15] introduced Codebert, a transformer pretrained with a BERT-like objective [14] on open source GitHub repositories. They showed that pretraining improves the performance on several downstream tasks such as code documentation generation and code completion.
Unsupervised Machine Translation. The quality of NMT systems highly depends on the quality of the available parallel data. However, for the majority of languages, parallel resources are rare or nonexistent. Since creating parallel corpora for training is not realistic (creating a small parallel corpus for evaluation is already challenging [19]), some approaches have investigated the use of monolingual data to improve existing machine translation systems [17, 20, 41, 49]. More recently, several methods were proposed to train a machine translation system exclusively from monolingual corpora, using either neural models [30, 8] and statistical models [32, 7]. We describe now some of these methods and how they can be instantiated in the setting of unsupervised transcompilation.
# 3 Model
For TransCoder, we consider a sequence-to-sequence (seq2seq) model with attention [44, 9], com- posed of an encoder and a decoder with a transformer architecture [45]. We use a single shared model for all programming languages. We train it using the three principles of unsupervised ma- chine translation identiï¬ed in Lample et al. [32], namely initialization, language modeling, and back-translation. In this section, we summarize these principles and detail how we instantiate them to translate programming languages. An illustration of our approach is given in Figure 1.
# 3.1 Cross Programming Language Model pretraining
Pretraining is a key ingredient of unsupervised machine translation Lample et al. [32]. It ensures that sequences with a similar meaning are mapped to the same latent representation, regardless of their languages. Originally, pretraining was done by initializing the model with cross-lingual word representations [30, 8]. In the context of unsupervised English-French translation, the embedding of the word âcatâ will be close to the embedding of its French translation âchatâ. Cross-lingual word embeddings can be obtained by training monolingual word embeddings and aligning them in an unsupervised manner [31, 6].
Subsequent work showed that pretraining the entire model (and not only word representations) in a cross-lingual way could lead to signiï¬cant improvements in unsupervised machine translation [29, 33, 43]. In particular, we follow the pretraining strategy of Lample and Conneau [29], where a Cross-lingual Language Model (XLM) is pretrained with a masked language modeling objective [14] on monolingual source code datasets.
3
Cross-lingual Masked Language Model pretraining
Input code Masked code Recovered code (prime [p]) ( isp*p; issn prime [il ; (prime [p]) Cross-Lingual _. (primetp]) ( i=p*p. Masked LM (int ispxp; i<snj itp) primel J prime [i] ; â> Mask tokens â> Denoising auto-encoding Input code Corrupted code Recovered code (a, » high); > MT Model piv (a, low, high) ; (a, low, 1 piv -) Java - Java (a, low, piv piv (a, low, high) ; â> Corrupt code â> (a, low, piv-1); (a, pivel, high); a, piv+, high); (a, pivel, high); Back-translation Python code C++ translation Python reconstruction MT Model Python - C++ MT Model C++ Python
Figure 1: Illustration of the three principles of unsupervised machine translation used by our approach. The ï¬rst principle initializes the model with cross-lingual masked language model pretraining. As a result, pieces of code that express the same instructions are mapped to the same representation, regardless of the programming language. Denoising auto-encoding, the second principle, trains the decoder to always generate valid sequences, even when fed with noisy data, and increases the encoder robustness to input noise. Back-translation, the last principle, allows the model to generate parallel data which can be used for training. Whenever the Python â C++ model becomes better, it generates more accurate data for the C++ â Python model, and vice versa. Figure 5 in the appendix provides a representation of the cross-lingual embeddings we obtain after training.
The cross-lingual nature of the resulting model comes from the signiï¬cant number of common tokens (anchor points) that exist across languages. In the context of English-French translation, the anchor points consists essentially of digits and city and people names. In programming languages, these anchor points come from common keywords (e.g. for, while, if, try), and also digits, mathematical operators, and English strings that appear in the source code. 3
For the masked language modeling (MLM) objective, at each iteration we consider an input stream of source code sequences, randomly mask out some of the tokens, and train TransCoder to predict the tokens that have been masked out based on their contexts. We alternate between streams of batches of different languages. This allows the model to create high quality, cross-lingual sequence representations. An example of XLM pretraining is given on top of Figure 1.
# 3.2 Denoising auto-encoding
We initialize the encoder and decoder of the seq2seq model with the XLM model pretrained in Section 3.1. The initialization is straightforward for the encoder, as it has the same architecture as the XLM model. The transformer decoder, however, has extra parameters related to the source attention mechanism [45]. Following Lample and Conneau [29], we initialize these parameters randomly.
XLM pretraining allows the seq2seq model to generate high quality representations of input sequences. However, the decoder lacks the capacity to translate, as it has never been trained to decode a sequence based on a source representation. To address this issue, we train the model to encode and decode sequences with a Denoising Auto-Encoding (DAE) objective [46]. The DAE objective operates like a supervised machine translation algorithm, where the model is trained to predict a sequence of tokens given a corrupted version of that sequence. To corrupt a sequence, we use the same noise model as the one described in Lample et al. [30]. Namely, we randomly mask, remove and shufï¬e input tokens.
3In practice, the âcross-lingualityâ of the model highly depends on the amount of anchor points across languages. As a result, a XLM model trained on English-French will provide better cross-lingual representations than a model trained on English-Chinese, because of the different alphabet which reduces the number of anchor points. In programming languages, the majority of strings are composed of English words, which results in a fairly high number of anchor points, and the model naturally becomes cross-lingual.
4
The ï¬rst symbol given as input to the decoder is a special token indicating the output programming language. At test time, a Python sequence can be encoded by the model, and decoded using the C++ start symbol to generate a C++ translation. The quality of the C++ translation will depend on the âcross-lingualityâ of the model: if the Python function and a valid C++ translation are mapped to the same latent representation by the encoder, the decoder will successfully generate this C++ translation.
The DAE objective also trains the âlanguage modelingâ aspect of the model, i.e. the decoder is always trained to generate a valid function, even when the encoder output is noisy. Moreover it also trains the encoder to be robust to input noise, which is helpful in the context of back-translation where the model is trained with noisy input sequences. DAE is illustrated in the middle of Figure 1.
# 3.3 Back-translation
In practice, XLM pretraining and denoising auto-encoding alone are enough to generate translations. However, the quality of these translations tends to be low, as the model is never trained to do what it is expected to do at test time, i.e. to translate functions from one language to another. To address this issue, we use back-translation, which is one of the most effective methods to leverage monolingual data in a weakly-supervised scenario. Initially introduced to improve the performance of machine translation in the supervised setting [41], back-translation turned out to be an important component of unsupervised machine translation [30, 32, 8].
In the unsupervised setting, a source-to-target model is coupled with a backward target-to-source model trained in parallel. The target-to-source model is used to translate target sequences into the source language, producing noisy source sequences corresponding to the ground truth target sequences. The source-to-target model is then trained in a weakly supervised manner to reconstruct the target sequences from the noisy source sequences generated by the target-to-source model, and vice versa. The two models are trained in parallel until convergence. An example of back-translation is illustrated in Figure 1.
# 4 Experiments
# 4.1 Training details
We use a transformer with 6 layers, 8 attention heads, and set the dimensionality of the model to 1024. We use a single encoder and a single decoder for all programming languages. During XLM pretraining, we alternate between batches of C++, Java, and Python, composed of 32 sequences of source code of 512 tokens. At training time, we alternate between the denoising auto-encoding and back-translation objectives, and use batches of around 6000 tokens. We optimize TransCoder with the Adam optimizer [25], a learning rate of 10â4, and use the same learning rate scheduler as Vaswani et al. [45]. We implement our models in PyTorch [39] and train them on 32 V100 GPUs. We use ï¬oat16 operations to speed up training and to reduce the memory usage of our models.
# 4.2 Training data
We download the GitHub public dataset available on Google BigQuery4. It contains more than 2.8 million open source GitHub repositories. We ï¬lter projects whose license explicitly permits the re-distribution of parts of the project, and select the C++, Java, and Python ï¬les within those projects. Ideally, a transcompiler should be able to translate whole projects. In this work, we decide to translate at function level. Unlike ï¬les or classes, functions are short enough to ï¬t into a single batch, and working at function level allows for a simpler evaluation of the model with unit tests (c.f. Section 4.4). We pretrain TransCoder on all source code available, and train the denoising auto-encoding and back-translation objectives on functions only. Please refer to Section A.3 and Table 3 in the appendix for more details on how the functions are extracted, and for statistics about our training set. We carry out an ablation study to determine whether it is better to keep or remove comments from source code. Keeping comments in the source code increases the number of anchor points across languages, which results in a better overall performance (c.f. Table 6 in the appendix). Therefore, we keep them in our ï¬nal datasets and experiments.
# 4https://console.cloud.google.com/marketplace/details/github/github-repos
5
# 4.3 Preprocessing
Recent approaches in multilingual natural language processing tend to use a common tokenizer [28], and a shared vocabulary for all languages. This reduces the overall vocabulary size, and maximizes the token overlap between languages, improving the cross-linguality of the model [14, 29]. In our case, a universal tokenizer would be suboptimal, as different languages use different patterns and keywords. The logical operators && and || exist in C++ where they should be tokenized as a single token, but not in Python. The indentations are critical in Python as they deï¬ne the code structure, but have no meaning in languages like C++ or Java. We use the javalang5 tokenizer for Java, the tokenizer of the standard library for Python6, and the clang7 tokenizer for C++. These tokenizers ensure that meaningless modiï¬cations in the code (e.g. adding extra new lines or spaces) do not have any impact on the tokenized sequence. An example of tokenized code is given in Figure 3 in the appendix. We learn BPE codes [42] on extracted tokens, and split tokens into subword units. The BPE codes are learned with fastBPE8 on the concatenation of tokenized C++, Java, and Python ï¬les.
# 4.4 Evaluation
GeeksforGeeks is an online platform9 with computer science and programming articles. It gathers many coding problems and presents solutions in several programming languages. From these solutions, we extract a set of parallel functions in C++, Java, and Python, to create our validation and test sets. These functions not only return the same output, but also compute the result with similar algorithm. In Figure 4 in the appendix, we show an example of C++-Java-Python parallel function that determines whether an integer represented by a string is divisible by 13.
The majority of studies in source code translation use the BLEU score to evaluate the quality of generated functions [1, 10, 22, 36], or other metrics based on the relative overlap between the tokens in the translation and in the reference. A simple metric is to compute the reference match, i.e. the percentage of translations that perfectly match the ground truth reference [12]. A limitation of these metrics is that they do not take into account the syntactic correctness of the generations. Two programs with small syntactic discrepancies will have a high BLEU score while they could lead to very different compilation and computation outputs. Conversely, semantically equivalent programs with different implementations will have low BLEU scores. Instead, we introduce a new metric, the computational accuracy, that evaluates whether the hypothesis function generates the same outputs as the reference when given the same inputs. We consider that the hypothesis is correct if it gives the same output as the reference for every input. Section B and Table 4 in the appendix present more details on how we create these unit tests, and give statistics about our validation and test sets.
At inference, TransCoder can generate multiple translations using beam search decoding [26]. In machine translation, the considered hypotheses are typically the ones with the highest log-probabilities in the beam. In our case, we have access to unit tests to verify the correctness of the generated hypotheses, so we report two sets of results for our computational accuracy metric: Beam N, the percentage of functions with at least one correct translation in the beam, and Beam N - Top 1 the percentage of functions where the hypothesis in the beam with the highest log-probability is a correct translation. We select our best model using greedy decoding (Beam 1) for speed efï¬ciency.
# 4.5 Results
We report the results on our test set in Table 1, using greedy decoding (beam size 1), for the three metrics presented in Section 4.4. In Table 2, we report our results with beam search decoding, and compare TransCoder to existing baselines. We give an example of unsupervised translation from Python to C++ in Figure 2.
Evaluation metric differences. In Table 1, we observe that a very large fraction of translations differ from the reference, and are considered as invalid by the reference match metric although they
5https://github.com/c2nes/javalang 6https://docs.python.org/3/library/tokenize.html 7https://pypi.org/project/clang 8https://github.com/glample/fastBPE 9https://practice.geeksforgeeks.org
6
# Python input
Unsupervised C++ translation
def SumOfKsubArray(arr, n, k): Sum = 0 S = deque() G = deque() for i in range(k): while (len(S) > 0 and arr[S[-1]] >= arr[i]): S.pop() while (len(G) > 0 and arr[G[-1]] <= arr[i]): G.pop() G.append(i) S.append(i) for i in range(k, n): Sum += arr[S[0]] + arr[G[0]] while (len(S) > 0 and S[0] <= i - k): S.popleft() while (len(G) > 0 and G[0] <= i - k): G.popleft() while (len(S) > 0 and arr[S[-1]] >= arr[i]): S.pop() while (len(G) > 0 and arr[G[-1]] <= arr[i]): G.pop() G.append(i) S.append(i) Sum += arr[S[0]] + arr[G[0]] return Sum
int SumOfKsubArray(int arr[], int n, int k){ int Sum = 0; deque <int> S; deque <int> G; for(int i = 0; i < k; i ++){ while((int) S.size() > 0 && arr[S.back()] >= arr[i]) S.pop_back(); while((int) G.size() > 0 && arr[G.back()] <= arr[i]) G.pop_back(); G.push_back(i); S.push_back(i); } for(int i = k; i < n; i ++){ Sum += arr[S.front()] + arr[G.front()]; while((int) S.size() > 0 && S.front() <= i - k) S.pop_front(); while((int) G.size() > 0 && G.front() <= i - k) G.pop_front(); while((int) S.size() > 0 && arr[S.back()] >= arr[i]) S.pop_back(); while((int) G.size() > 0 && arr[G.back()] <= arr[i]) G.pop_back(); G.push_back(i); S.push_back(i); } Sum += arr[S.front()] + arr[G.front()]; return Sum;
}
Figure 2: Example of unsupervised Python to C++ translation. TransCoder successfully translates the Python input function SumOfKsubArray into C++. TransCoder infers the types of the arguments, of the variables, and the return type of the function. The model maps the Python deque() container, to the C++ implementation deque<>, and uses the associated front, back, pop_back and push_back methods to retrieve and insert elements into the deque, instead of the Python square brackets [], pop and append methods. Moreover, it converts the Python for loop and range function properly.
successfully pass the unit tests. For instance, when translating from C++ to Java, only 3.1% of the generations are strictly identical to the ground truth reference, although 60.9% of them return the expected outputs. Moreover, the performance in terms of BLEU is relatively ï¬at and does not correlate well with the computational accuracy. These results highlight the issues with the traditional reference match and BLEU metrics commonly used in the ï¬eld.
Beam search decoding. In Table 2, we study the impact of beam search, either by considering all hypotheses in the beam that pass the unit tests (Beam N) or by only considering the ones with the highest log-probabilities (Beam N - Top 1). Compared to greedy decoding (Beam 1), beam search signiï¬cantly improves the computational accuracy, by up to 33.7% in Java â Python with Beam 25. When the model only returns the hypothesis with the highest log-probability, the performance drops, indicating that TransCoder often ï¬nds a valid translation, although it sometimes gives a higher log-probability to incorrect hypotheses. More generally, beam search allows minor variations of the translations which can make the unit tests succeed, such as changing the return or variable types in Java and C++, or ï¬xing small errors such as the use of / instead of the // operator in Python. More examples of errors corrected by beam search are presented in Figure 9 in the appendix.
In a real use-case, checking whether the generated functions are syntactically correct and compile, or creating unit tests from the input function would be better approaches than comparing log-probabilities in order to select an hypothesis from the beam. Table 5 in the appendix shows that many failures
Table 1: Results of TransCoder on our test set with greedy decoding. We evaluate TransCoder with different metrics: reference match, BLEU score, and computational accuracy. Only 3.1% of C++ to Java translations match the ground truth reference, although 60.9% of them successfully pass the unit tests, suggesting that reference match is not an accurate metric to evaluate the quality of translations. Similarly, the BLEU score does not correlate well with the computational accuracy.
C++ â Java C++ â Python Java â C++ Java â Python Python â C++ Python â Java Reference Match BLEU Computational Accuracy 3.1 85.4 60.9 6.7 70.1 44.5 24.7 97.0 80.9 3.7 68.1 35.0 4.9 65.4 32.2 0.8 64.6 24.7
7
Table 2: Computational accuracy with beam search decoding and comparison to baselines. Increasing the beam size improves the performance by up to 33.7% in Java â Python. When the model only returns the hypothesis with the highest log-probability (Beam 10 - Top 1), the performance drops, indicating that the model often ï¬nds a correct translation, although it does not necessarily assign it with the highest probability. TransCoder signiï¬cantly outperforms the Java â Python baseline (+30.4%) and the commercial C++ â Java baseline (+13.9%), although it is trained in a fully unsupervised manner and does not leverage human knowledge.
C++ â Java C++ â Python Java â C++ Java â Python Python â C++ Python â Java
61.0 - - 38.3 - - 60.9 70.7 73.4 65.1 74.8 44.5 58.3 62.0 46.9 67.2 80.9 86.9 89.3 79.8 91.6 35.0 60.0 64.4 49.0 68.7 32.2 44.4 49.6 32.4 57.3 24.7 44.3 51.1 36.6 56.1
come from compilation errors when the target language is Java or C++. It suggests that the âBeam N - Top 1â metric could easily be improved. We leave this to future work.
Comparison to existing baselines. We compare TransCoder with two existing approaches: j2py10, a framework that translates from Java to Python, and a commercial solution from Tangible Software Solutions11, that translates from C++ to Java. Both systems rely on rewrite rules manually built using expert knowledge. The latter handles the conversion of many elements, including core types, arrays, some collections (Vectors and Maps), and lambdas. In Table 2, we observe that TransCoder signiï¬cantly outperforms both baselines in terms of computational accuracy, with 74.8% and 68.7% in the C++ â Java and Java â Python directions, compared to 61% and 38.3% for the baselines. TransCoder particularly shines when translating functions from the standard library. In rule-based transcompilers, rewrite rules need to be manually encoded for each standard library function, while TransCoder learns them in an unsupervised way. In Figure 10 of the appendix, we present several examples where TransCoder succeeds, while the baselines fail to generate correct translations.
# 4.6 Discussion - Analysis
In Figure 2, we give an example of TransCoder unsupervised translation from C++ to Java. Additional examples can be found in Figure 6 and Figure 7 of the appendix. We observe that TransCoder successfully understands the syntax speciï¬c to each language, learns data structures and their methods, and correctly aligns libraries across programming languages. For instance, it learns to translate the ternary operator âX ? A : Bâ in C++ or Java to âif X then A else Bâ in Python, in an unsupervised way. In Figure 5 of the appendix, we present a t-SNE [35] visualization of cross-lingual token embeddings learned by the model. TransCoder successfully map tokens with similar meaning to the same latent representation, regardless of their languages. Figure 8 of the appendix shows that TransCoder can adapt to small modiï¬cations. For instance, renaming a variable in the input may result in different translated types, still with valid translations. In Figure 11, we present some typical failure cases where TransCoder fails to account for the variable type during generation. For instance, it copies the C++ NOT operator ! applied to an integer in Java, while it should be translated to ~. It also translates the Python min function on lists to Math.min in Java, which is incorrect when applied to Java arrays. Finally, Table 5 gives detailed results on failure cases.
# 5 Conclusion
In this paper, we show that approaches of unsupervised machine translation can be applied to source code to create a transcompiler in a fully unsupervised way. TransCoder can easily be generalized to any programming language, does not require any expert knowledge, and outperforms commercial solutions by a large margin. Our results suggest that a lot of mistakes made by the model could easily be ï¬xed by adding simple constraints to the decoder to ensure that the generated functions are syntactically correct, or by using dedicated architectures [12]. Leveraging the compiler output or other approaches such as iterative error correction [16] could also boost the performance.
# 10https://github.com/natural/java2python 11https://www.tangiblesoftwaresolutions.com/
8
# Broader Impact
Automatic transcompilation has the potential to make programmers working in companies or on open source projects more efï¬cient, by allowing them to integrate various codes from other teams within the company or other open source projects more easily. It can also lower the cost of updating an old codebase written in an obsolete language to a more recent language. Many large banks, insurance companies and utilities companies still run code written in COBOL. Advances in transcompilation could incite them to update to more recent languages and facilitate future innovation. Transcom- pilation being a tool facilitating innovation, its applications could have both positive and negative societal impacts. However, we believe that the impact of more innovation within companies and in open source projects would be positive overall. In the long-term, updates of old codebases could put the experts in obsolete languages at a disadvantage as the need for their expertise will decrease. We believe it would be offset by an increased need for programmers caused by more innovative projects, beneï¬ting all programmers.
# References
[1] Karan Aggarwal, Mohammad Salameh, and Abram Hindle. Using machine translation for converting python 2 to python 3 code. Technical report, PeerJ PrePrints, 2015.
[2] Miltiadis Allamanis, Earl T Barr, Christian Bird, and Charles Sutton. Learning natural cod- ing conventions. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, pages 281â293, 2014.
[3] Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. code2seq: Generating sequences from structured representations of code. ICLR, 2019.
[4] Uri Alon, Roy Sadaka, Omer Levy, and Eran Yahav. Structural language models for any-code generation. arXiv preprint arXiv:1910.00577, 2019.
[5] Matthew Amodio, Swarat Chaudhuri, and Thomas Reps. Neural attribute machines for program generation. arXiv preprint arXiv:1705.09231, 2017.
[6] Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 451â462, 2017.
[7] Mikel Artetxe, Gorka Labaka, and Eneko Agirre. Unsupervised statistical machine translation. arXiv preprint arXiv:1809.01272, 2018.
[8] Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. Unsupervised neural machine translation. In International Conference on Learning Representations (ICLR), 2018.
[9] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[10] Antonio Valerio Miceli Barone and Rico Sennrich. A parallel corpus of python functions and documentation strings for automated code documentation and code generation. arXiv preprint arXiv:1707.02275, 2017.
[11] Avishkar Bhoopchand, Tim Rocktäschel, Earl Barr, and Sebastian Riedel. Learning python code suggestion with a sparse pointer network. arXiv preprint arXiv:1611.08307, 2016.
[12] Xinyun Chen, Chang Liu, and Dawn Song. Tree-to-tree neural networks for program translation. In Advances in neural information processing systems, pages 2547â2557, 2018.
[13] Zimin Chen, Steve James Kommrusch, Michele Tufano, Louis-Noël Pouchet, Denys Poshy- vanyk, and Martin Monperrus. Sequencer: Sequence-to-sequence learning for end-to-end program repair. IEEE Transactions on Software Engineering, 2019.
[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018.
[15] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. Codebert: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155, 2020.
9
[16] Cheng Fu, Huili Chen, Haolan Liu, Xinyun Chen, Yuandong Tian, Farinaz Koushanfar, and In Advances in Neural Jishen Zhao. Coda: An end-to-end neural program decompiler. Information Processing Systems, pages 3703â3714, 2019.
[17] Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535, 2015.
[18] Rahul Gupta, Soham Pal, Aditya Kanade, and Shirish Shevade. Deepï¬x: Fixing common c language errors by deep learning. In Thirty-First AAAI Conference on Artiï¬cial Intelligence, 2017.
[19] Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and MarcâAurelio Ranzato. Two new evaluation datasets for low-resource machine translation: Nepali-english and sinhala-english. arXiv preprint arXiv:1902.01382, 2019.
[20] Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, and Wei-Ying Ma. Dual learning for machine translation. In Advances in neural information processing systems, pages 820â828, 2016.
[21] Xing Hu, Ge Li, Xin Xia, David Lo, and Zhi Jin. Deep code comment generation. In Proceedings of the 26th Conference on Program Comprehension, pages 200â210, 2018.
[22] Svetoslav Karaivanov, Veselin Raychev, and Martin Vechev. Phrase-based statistical translation of programming languages. In Proceedings of the 2014 ACM International Symposium on New Ideas, New Paradigms, and Reï¬ections on Programming & Software, pages 173â184, 2014.
[23] Deborah S Katz, Jason Ruchti, and Eric Schulte. Using recurrent neural networks for decom- pilation. In 2018 IEEE 25th International Conference on Software Analysis, Evolution and Reengineering (SANER), pages 346â356. IEEE, 2018.
[24] Omer Katz, Yuval Olshaker, Yoav Goldberg, and Eran Yahav. Towards neural decompilation. arXiv preprint arXiv:1905.08325, 2019.
[25] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[26] Philipp Koehn. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In Conference of the Association for Machine Translation in the Americas, pages 115â124. Springer, 2004.
[27] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Ondrej Bojar Chris Dyer, Alexandra Constantin, and Evan Herbst. Moses: Open source toolkit for statistical machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL), demo session, 2007.
[28] Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018.
[29] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
[30] Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and MarcâAurelio Ranzato. Unsuper- vised machine translation using monolingual corpora only. ICLR, 2018.
[31] Guillaume Lample, Alexis Conneau, MarcâAurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data. In ICLR, 2018.
[32] Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and MarcâAurelio Ranzato. Phrase-based & neural unsupervised machine translation. In EMNLP, 2018.
[33] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
[34] Jian Li, Yue Wang, Michael R Lyu, and Irwin King. Code completion with neural attention and pointer networks. IJCAI, 2018.
10
[35] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov):2579â2605, 2008.
[36] Anh Tuan Nguyen, Tung Thanh Nguyen, and Tien N Nguyen. Lexical statistical machine translation for language migration. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, pages 651â654, 2013.
[37] Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. Learning to generate pseudo-code from source code using statistical machine translation (t). In 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), pages 574â584. IEEE, 2015.
[38] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311â318. Association for Computational Linguistics, 2002.
[39] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. NIPS 2017 Autodiff Workshop, 2017.
[40] Maxim Rabinovich, Mitchell Stern, and Dan Klein. Abstract syntax networks for code genera- tion and semantic parsing. arXiv preprint arXiv:1704.07535, 2017.
[41] Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 86â96, 2015.
[42] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 1715â1725, 2015.
[43] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. Mass: Masked sequence to sequence pre-training for language generation. In International Conference on Machine Learning, pages 5926â5936, 2019.
[44] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104â3112, 2014. [45] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998â6008, 2017.
[46] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096â1103, 2008.
[47] Ke Wang, Rishabh Singh, and Zhendong Su. Dynamic neural program embedding for program repair. arXiv preprint arXiv:1711.07163, 2017.
[48] Pengcheng Yin and Graham Neubig. A syntactic neural model for general-purpose code generation. arXiv preprint arXiv:1704.01696, 2017.
[49] Hao Zheng, Yong Cheng, and Yang Liu. Maximum expected likelihood estimation for zero- resource neural machine translation. In IJCAI, 2017.
11
# A Data and preprocessing
# A.1 Training dataset
We tried removing and keeping the comments in the code from our training data. As shown in Table 6, keeping the comments gives better results overall. Thus, we decided to keep them in our ï¬nal training data. Detailed statistics of the resulting dataset can be found in Table 3.
Table 3: Statistics of our GitHub dataset. We show the statistic for our entire github dataset (All) and for the extracted functions. We give the size in GigaBytes, the number of ï¬les and functions, and the number of tokens.
C++ Java Python All - Size All - Nb of ï¬les All - Nb of tokens Functions - Size Functions - Nb of functions 168 GB 352 GB 224 GB 56 M 18 M 50 B 75 B 93 GB 185 GB 152 GB 120 M 402 M 217 M 15 M 38 B
# A.2 Tokenization
Python function v1 Python function v2 def rm_file(path): def rm_file(path): try: os.remove(path) print("Deleted") except: try: os.remove( path ) print( "Deleted" ) print("Error while deleting file", path) except :
print("Error while deleting file", path)
def rm_file ( path ) : NEWLINE try : NEWLINE INDENT os . remove (path) NEWLINE print ( " Deleted " ) DEDENT except : NEWLINE INDENT print ( " Error _ while _ deleting _ file " , path ) DEDENT
Figure 3: Example of function tokenization. We show two versions of the same Python function and their common tokenization. These function versions differ by extra spaces and one extra new line. Our Python tokenizer is robust to extra spaces and extra new lines except in strings. In strings, spaces are tokenized as (U+2581). Indentation is meaningful in Python: indented blocks are surrounded by INDENT DEDENT tokens.
# A.3 Function extraction
We train and evaluate our translation model on functions only. We differentiate class functions and standalone functions. By standalone functions, we refer to functions that can be used without instantiating a class. In C++ and Python, this corresponds to static methods of classes, and functions outside classes. In Java, it only corresponds to static methods. In GeeksforGeeks, solutions are implemented with standalone functions, and our evaluation protocol only involves these functions. In Table 3, the functions statistics are given for all kind of functions. In C++ and Python, 50% of functions are standalone functions. In Java, standalone functions only represent 15% of the dataset. We tried to train our model on standalone functions only, and observed better results than when training on all functions. Thus, all the results in this work are given for models pretrained on all available data and trained on standalone functions only.
12
# B Evaluation
GeeksforGeeks is an online platform with computer science and programming articles. It contains many coding problems and presents solutions in several programming languages. We gather all the problems for which some solutions are implemented in C++, Java, and Python. The parallel data for these problems is already enough to test a model using the BLEU score or the Reference Match score. However, we need to generate some unit tests to check that the function are semantically correct and to compute the Computational Accuracy.
These unit tests are contained in a script, which contains a reference function â named f_gold â from the parallel dataset, a commented TOFILL marker which is to be replaced with a generated function, and a main which runs both functions on a series of inputs and compares the behaviors of the two functions. We have one script per function and per programming language.
In order to generate these scripts, we extract the parameters and their types from the Java implemen- tation of the solution. Then, we generate 10 random inputs for these types, which are hardcoded in the test script and used to test the function. We test the generated scripts by injecting the reference function a second time with the name f_filled instead of the TOFILL comment and running it. We keep only the scripts that return a perfect score in less than 10 seconds. As Python is dynamically typed, we need to infer the Python parameters types from the Java types, and to assume that the order and types of the parameters is the same in Java and Python. When this assumption happens to be wrong, the generated script fails the tests and is discarded. As this approach is quite effective, we generated the C++ scripts in a similar manner and barely use the C++ parameter types which can be extracted from the function deï¬nition.
Equality tests. We adapt the tests checking that the reference and gold function behave in the same way based on the output type of the function (extracted from its Java implementation). For instance, we test the equality of int outputs with ==, while we use equals for String outputs and relative tests for double outputs. If the function is inplace (the output type is void), we check the side effects on all its mutable arguments instead.
Special cases for random input generation. The goal of our scripts is to decide whether a function is semantically equivalent to from the reference function, and the way we generate the random inputs is critical to how discriminative the script will be. For instance, if the input of the reference function is a string, a naive solution may be to generate strings of random length and with characters sampled randomly from the set of all characters. However, our dataset contains several functions such as checkDivisibility in Figure 4 which considers the string to be a representation of a long integer. This type of function could always return the same result (e.g. False) on inputs strings that do not contain only digits. As many functions in our dataset assume the input strings or characters to be representations of long integers or representations of integers in base 2, we alternate between sampling the characters from (i) the set of all lowercase and uppercase letters plus the space character, (ii) the set of all digits, and (iii) the set containing 0 and 1. For similar reasons, when there is an integer array in the function parameters, we alternate between the sets {0 . . . 100}, {â100 . . . 100} and {0, 1} to sample the integers inside the array. When the function takes no argument, we do not generate any input for it and only check that the output is the same for the reference function and the generated function.
Manual veriï¬cations. In order to ensure that our unit tests are appropriate, we manually check and modify the scripts when the output of the function is the same on all 10 inputs, when the function is inplace, or when the function contains prints. As we only check the side effects affecting the mutable arguments, we remove all the functions which mainly print or write to a ï¬le.
13
C++
# Java
# Python
bool checkDivisibility(string num){ static boolean checkDivisibility( int length = num.size(); if(length == 1 && num[0] == '0') return true; String num){ int length = num.length(); if(length == 1 && num.charAt(0) == '0') if(length % 3 == 1){ return true; num += "00"; length += 2; } else if(length % 3 == 2){ num += '0'; length += 1; } int sum = 0, p = 1; for(int i = length - 1; i >= 0; i--){ int group = 0; group += num[i--] - '0'; group += (num[i--] - '0') * 10; group += (num[i] - '0') * 100; sum = sum + group * p; p *= (-1); if(length % 3 == 1){ num += "00"; length += 2; } else if(length % 3 == 2){ num += "0"; length += 1; } int sum = 0, p = 1; for(int i = length - 1; i >= 0; i--){ int group = 0; group += num.charAt(i--) - '0'; group += (num.charAt(i--) - '0') * 10; group += (num.charAt(i) - '0') * 100; sum = sum + group * p; p *= (-1); } } sum = abs(sum); return (sum % 13 == 0); sum = Math.abs(sum); return (sum % 13 == 0); } }
# def checkDivisibility(num):
length = len(num) if(length == 1 and num[0] == '0'):
# return True
# if(length % 3 == 1):
num = str(num) + "00" length += 2
elif(length % 3 == 2): num = str(num) + "0" length += 1
sum = 0 p = 1 for i in range(length - 1, -1, -1):
group = 0 group += ord(num[i]) - ord('0') i -= 1 group += (ord(num[i]) - ord('0')) * 10 i -= 1 group += (ord(num[i]) - ord('0')) * 100 sum = sum + group * p p *= (-1)
# sum = abs(sum) return (sum % 13 == 0)
Figure 4: Example of parallel function from our test set. We extracted parallel functions from GeeksforGeeks to create validation and test sets. Here, we have the parallel implementations in C++, Java, and Python of the checkDivisibility function, which determines whether a long integer represented as a string is divisible by 13.
Table 4: Number of functions with unit tests for our validation and test sets. We report the number of function with unit tests for C++, Java, and Python, for the validation and test sets. We also show the average number of tokens per function. A unit test checks whether a generated function is semantically equivalent to its reference. For each function, we have 10 unit tests, each testing it on a different input. As a result, the number of functions with unit tests per language gives the size of the validation and test sets of each pair of languages. For instance, we have 231 C++ functions with unit tests for the validation set, which means that we have a validation set of 231 functions for Java â C++ and Python â C++.
C++ Java Python Nb of functions with unit tests - valid set Nb of functions with unit tests - test set 231 466 234 481 237 463 Average #tokens per function 105.8 112.0 103.1
14
# C Results
# C.1 Detailed results
Table 5: Detailed results for greedy decoding. Many failures come from compilation errors when the target language is Java or C++. It suggests that our method could be improved by constraining the decoder to generate compilable code. Runtime errors mainly occur when translating from Java or C++ into Python. Since Python code is interpreted and not compiled, this category also includes syntax errors in Python. The majority of remaining errors are due to the program returning the wrong output on one or several of the unit tests. Timeout errors are generally caused by inï¬nite loops and mainly occur in the Java â Python pair.
#tests Success Compilation Runtime Wrong Output Timeout C++ â Java C++ â Python Java â C++ Java â Python Python â C++ Python â Java 481 463 466 463 466 481 60.9% 44.5% 80.9% 35.0% 32.2% 24.7% 27.2% 0.0% 10.3% 0.0% 29.0% 23.5% 4.4% 36.5% 1.1% 31.8% 4.9% 12.5% 5.4% 18.1% 7.5% 15.6% 32.6% 24.3% 2.1% 0.9% 0.2% 17.7% 1.3% 15.0%
# C.2 Ablation study
Table 6: Training data ablation study - with and without code comments. We compare the computational accuracy of TransCoder for different training sets, where we either keep or remove comments from source code training data. We give results for different beam sizes. When translating from C++ to Python, from Java to C++ and from Java to Python, keeping comments in the training set gives better results. In the other directions, keeping or removing comments does not have a signiï¬cant impact on the performance.
With Comments C++ â Java C++ â Python No Yes No Yes Java â C++ Yes No Java â Python No Yes Python â C++ No Yes Python â Java No Yes Beam 1 Beam 5 Beam 10 Beam 25 62.2 71.6 73.6 75.3 60.9 70.7 73.4 74.8 40.8 54.0 57.9 64.6 44.5 58.3 62.0 67.2 76.8 85.6 88.4 89.1 80.9 86.9 89.3 91.6 46.4 58.5 62.9 66.7 35.0 60.0 64.4 68.7 34.1 46.4 50.9 56.7 32.2 44.4 49.6 57.3 33.9 46.0 50.3 56.3 24.7 44.3 51.1 56.1
15
C.3 Cross-lingual token embedding space
e Python e Java 5.0 worn c_str, ,toCharArray IOException e CH , cout *loError N - . ene listdir, âListFiles Boolean 4.5 boolâcharacter string char string str y gap AP âdict 4.8 size âLen length . . Lon ounteger 3.5 long int oet set catch â* exceptâ -4 ~2 9 5 a
Figure 5: Cross-lingual token embedding space. We show a t-SNE visualization of our cross-lingual token embeddings. These embeddings are obtained by encoding programming language tokens into TransCoderâs lookup table. We show the embeddings of C++, Java, and Python keywords. Keywords of different programming languages that are used in similar contexts are very close in the embedding space. For instance, except in Python and catch in Java and C++, which are both used to catch exceptions, are mapped to very similar embeddings. The same phenomenon is observed for implementations of maps (Map, map and dict), for c_str and toCharArray which are used to transform a string into a char array, and for similar primitive types (e.g. Long, long, Integer, and int).
16
# C.4 Additional examples of translations
# C++ Input
int maxLen(int arr[], int n){ unordered_map<int, int> hM; int sum = 0; int max_len = 0; int ending_index = - 1; for(int i = 0; i < n; i++) arr[i] = (arr[i] == 0) ? -1 : 1; for(int i = 0; i < n; i++){ sum += arr[i]; if(sum == 0){ max_len = i + 1; ending_index = i; } if(hM.find(sum + n) != hM.end()){ if(max_len < i - hM[sum + n]){ max_len = i - hM[sum + n]; ending_index = i; } } else hM[sum + n] = i; } for(int i = 0; i < n; i++) arr[i] = (arr[i] == -1) ? 0 : 1; printf("%d to %d
", ending_index - max_len + 1, ending_index); return max_len;
}
std::unordered_set<int> uniqueElements(
const std::vector<int>& l){
return std::unordered_set<int>(l.begin(), l.end());
}
std::vector<int> squareList(const std::vector<int>& l){
std::vector<int> result = {}; for(int elmt : l){
# Unsupervised Java Translation
public static int maxLen(int arr[], int n){ HashMap<Integer, Integer> hM = new HashMap<Integer, Integer>(); int sum = 0; int maxLen = 0; int endingIndex = - 1; for(int i = 0; i < n; i++){ arr[i] = (arr[i] == 0) ? - 1 : 1; } for(int i = 0; i < n; i++){ sum += arr[i]; if(sum == 0){ maxLen = i + 1; endingIndex = i; } if(hM.containsKey(sum + n)){ if(maxLen < i - hM.get(sum + n)){ maxLen = i - hM.get(sum + n); endingIndex = i; } }else{ hM.put(sum + n, i); } } for(int i = 0; i < n; i ++){ arr[i] = (arr[i] == -1) ? 0 : 1; } System.out.println(endingIndex - maxLen + 1 + " to " + endingIndex); return maxLen; } public static Set<Integer> uniqueElements( Collection<Integer> l){ return new HashSet<Integer>(l); } public static List<Integer> squareList( } List<Integer> l){ List<Integer> result = new ArrayList<>(); for(Integer elmt : l) result.add(elmt*elmt); return result;
# result.push_back(elmt * elmt);
# } return result;
}
Figure 6: Examples of correct translations using TransCoder. In all these examples, TransCoder converts properly the primitive types, ï¬nd the equivalent data structures and the corresponding methods, and libraries between languages. For instance, in the maxLen function, TransCoder converts the C++ unordered_map container into the Java implementation of the Map interface, HashMap, and properly uses the associated get and put methods to retrieve and insert keys and values into the map, instead of the C++ square brackets [].
17
# Input
# Input
# TransCoder Translation
C++ void worstFit(int blockSize[], int m, int processSize[], int n){ int allocation[n]; memset(allocation, -1, sizeof(allocation)); for(int i = 0; i < n; i ++){ int wstIdx = - 1; for(int j = 0; j < m; j++){ if(blockSize[j] >= processSize[i]){ if(wstIdx == - 1) wstIdx = j; else if(blockSize[wstIdx] < blockSize[j]) wstIdx = j; } } if(wstIdx != -1){ allocation[i] = wstIdx; blockSize[wstIdx] -= processSize[i]; } } cout << "
Process No. Process Size Block no.
"; for(int i = 0; i < n; i++){ cout << " if(allocation[i] != -1) cout << allocation[i] + 1; else " << i+1 << " " << processSize[i] << " "; cout << "Not Allocated"; cout << endl; }
Python def worstFit(blockSize, m, processSize, n): allocation = [-1] * n for i in range(n): wstIdx = -1 for j in range(m): if blockSize[j] >= processSize[i]: if wstIdx == -1: wstIdx = j elif blockSize[wstIdx] < blockSize[j]: wstIdx = j if wstIdx != -1: allocation[i] = wstIdx blockSize[wstIdx] -= processSize[i] print("
Process No. Process Size Block no.
") for i in range(n): print(" " + str(i+1) + " " + str(processSize[i]) + " ") if allocation[i] != -1: print(allocation[i] + 1) else: print("Not Allocated") print()
}
# Java
# Python
public static int max(int a, int b){
return a > b ? a : b;
def max(a, b): return a if a > b else b
}
# public static void createDirectory(Path path)
# throws IOException{
# if(!Files.exists(path)){
def create_directory(path): if not os.path.exists(path): os.makedirs(path)
# Files.createDirectories(path);
}
}
Python
# C++
def sum_elements(values): return sum(values)
template <typename T> T sum_elements(const std::vector<T> &values){ return std::accumulate(values.begin(), values.end(), 0);
}
def no_letters(s): return s.lower() == s.upper()
static bool noLetters(const QString& str){ return str.toLower() == str.toUpper(); }
def get_env_variable(name): return os.environ[name]
const char* GetEnvVariable(const char* name){ return getenv(name); }
# Python
# Java
def calcMaxValue(str): res = ord(str[0]) - 48 for i in range(1, len(str)): if(str[i] == '0' or str[i] == '1' or res < 2): res += ord(str[i]) - 48 else: res *= ord(str[i]) - 48 return res
# def foo(x):
return bar(x) + 1
# def area(r):
public static int calcMaxValue(String str){ int res = (int)(str.charAt(0) - 48); for(int i = 1; i < str.length(); i++){ if(str.charAt(i) == '0' || str.charAt(i) == '1' || res < 2){ res += (int)(str.charAt(i) - 48); } else{ res *= (int)(str.charAt(i) - 48); } } return res; } public static int foo(int x){ return bar(x) + 1; } public static double area(double r){ return 2 * PI * r * r; }
return 2 * PI * r ** 2
Figure 7: Examples of correct translations from or to Python using TransCoder. When translating from Python, TransCoder successfully infers types. Here, TransCoder infers the Python list type and translates it into its C++ equivalent std::vector. The last two examples show that TransCoder does not modify the call to the non-standard function bar or the global variable PI. 18
# C++ input
int minPalPartion(char* str){ int n = strlen(str); int C[n][n]; bool P[n][n]; int i, j, k, L; for(i = 0; i < n; i ++){ P[i][i] = true; C[i][i] = 0;} for(L = 2; L <= n; L ++){ for(i = 0; i < n - L + 1; i ++){ j = i + L - 1; if(L == 2) P[i][j] = (str[i] == str[j]); else P[i][j] = (str[i] == str[j]) && P[i+1][j-1]; if(P[i][j] == true) C[i][j] = 0; else{ C[i][j] = INT_MAX; for(k = i; k <= j - 1; k ++) C[i][j] = min(C[i][j], } } C[i][k]+C[k+1][j]+1); } return C[0][n - 1]; } int minPalPartion(char* arr){ int n = strlen(arr); int C[n][n]; bool P[n][n]; int i, j, k, L; for(i = 0; i < n; i ++){ P[i][i] = true; C[i][i] = 0;} for(L = 2; L <= n; L ++){ for(i = 0; i < n - L + 1; i ++){ j = i + L - 1; if(L == 2) P[i][j] = (arr[i] == arr[j]); else P[i][j]= (arr[i] == arr[j]) && P[i+1][j-1]; if(P[i][j] == true) C[i][j] = 0; else{ C[i][j] = INT_MAX; for(k = i; k <= j - 1; k ++) C[i][j] = min(C[i][j], } } C[i][k]+C[k+1][j]+1); } return C[0][n - 1]; } int minPalPartion(char* input){ int n = strlen(input); int C[n][n]; bool P[n][n]; int i, j, k, L; for(i = 0; i < n; i ++){ P[i][i] = true; C[i][i] = 0;} for(L = 2; L <= n; L ++){ for(i = 0; i < n - L + 1; i ++){ j = i + L - 1; if(L == 2) P[i][j] = (input[i] == input[j]); else P[i][j] = (input[i] == input[j]) && P[i+1][j-1]; if(P[i][j] == true) C[i][j] = 0; else{ C[i][j] = INT_MAX; for(k = i; k <= j - 1; k ++) C[i][j] = min(C[i][j],
# TransCoder Java translation
public static int minPalPartion(String str){ int n = str.length(); int[][] C = new int[n][n]; boolean[][] P = new boolean[n][n]; int i , j , k , L; for(i = 0; i < n; i ++){ P[i][i] = true; C[i][i] = 0;} for(L = 2; L <= n; L ++){ for(i = 0; i < n - L + 1; i ++){ j = i + L - 1; if(L == 2){ P[i][j] = (str.charAt(i) == str.charAt(j));} else{ P[i][j] = (str.charAt(i) == str.charAt(j)) && P[i + 1][j - 1];} if(P[i][j] == true){ C[i][j] = 0;} else{ C[i][j] = Integer.MAX_VALUE; for(k = i; k <= j - 1; k ++){ C[i][j] = Math.min(C[i][j], return C[0][n - 1]; C[i][k] + C[k + 1][j] + 1);}}}} } public static int minPalPartion(char[] arr){ int n = arr.length; int[][] C = new int[n][n]; boolean[][] P = new boolean[n][n]; int i , j , k , L; for(i = 0; i < n; i ++){ P[i][i] = true; C[i][i] = 0;} for(L = 2; L <= n; L ++){ for(i = 0; i < n - L + 1; i ++){ j = i + L - 1; if(L == 2){ P[i][j] = (arr[i] == arr[j]);} else{ P[i][j] = (arr[i] == arr[j]) && P[i + 1][j - 1];} if(P[i][j] == true){ C[i][j] = 0;} else{ C[i][j] = Integer.MAX_VALUE; for(k = i; k <= j - 1; k ++){ C[i][j] = Math.min(C[i][j], return C[0][n - 1]; C[i][k] + C[k + 1][j] + 1);}}}} } public static int minPalPartion(String input){ int n = input.length(); int[][] C = new int[n][n]; boolean[][] P = new boolean[n][n]; int i , j , k , L; for(i = 0; i < n; i ++){ P[i][i] = true; C[i][i] = 0;} for(L = 2; L <= n; L ++){ for(i = 0; i < n - L + 1; i ++){ j = i + L - 1; if(L == 2){ P[i][j] = (input.charAt(i) == input.charAt(j)); } else{ P[i][j] = (input.charAt(i) == input.charAt(j)) && P[i + 1][j - 1];} if(P[i][j] == true){ C[i][j] = 0;}
C[i][k]+C[k+1][j]+1);
# else{
}
}
} return C[0][n - 1];
C[i][j] = Integer.MAX_VALUE; for(k = i; k <= j - 1; k ++){ C[i][j] = Math.min(C[i][j],
C[i][k] + C[k + 1][j] + 1);}}}}
}
return C[0][n - 1];
}
Figure 8: TransCoder robustness to variable names. We take the C++ function minPalPartion, change the parameter name from str to arr and input and show the three corresponding TransCoder Java translations. All translations are correct. In the ï¬rst and third cases, TransCoder translates char* str and char* input into Java String and uses the charAt method to retrieve elements. This shows that TransCoder is robust to variable name changes and that it remembers variable types along the whole translation to apply the appropriate methods. In the second case, TransCoder translates char* arr into Java char[] and uses [] to retrieve elements, showing that TransCoder can adjust its translations to parameter names while remaining accurate.
19
C++ input TransCoder Python translation Greedy decoding Beam search int compute_average(int a, int b){ def compute_average(a, b): def compute_average(a, b): return (a + b) / 2; return (a + b) / 2 return (a + b) // 2 } bool isPalindrome(string str){ def isPalindrome(str): def isPalindrome(str): int l = 0; int h = str.length() - 1; while(h > l) l = 0 h = len(str) - 1 while h > l: l = 0 h = len(str) - 1 while h > l: if(str[l++] != str[h--]) if str[l] != str[h--]: if str[l] != str[h]: return false; return False return False } return true; return True l += 1 h -= 1 return True
Figure 9: Examples of incorrect greedy decoding translations versus correct beam translations. We take C++ inputs and translate them into Python with TransCoder. In the second column, we use greedy decoding and the translations are incorrect. In the third column, we use beam search and obtain accurate translations. A common error corrected by beam search for C++ â Python is the usage of the double instead of the integer division operator (ï¬rst example). Beam search is also able to correct errors such as the ++ and ââ operators that do not exist in Python (second example).
Input Baseline translation TransCoder translation Java Python Python static String getEnvVariable( String name){ def getEnvVariable(name): return System.getenv(name) def getenvh(name): return System.getenv(name); } static List<Integer> truncate( List<Integer> l, Integer e){ def truncate(l, e): if l.contains(e): return l.subList(0, def truncate(l, e): if l.count(e) > 0: return l[ : l.index(e)] if(l.contains(e)){ l.indexOf(e)) else: return l.subList(0, else: return l l.indexOf(e)); return l } else{ return l; } } static void deleteFile(Path path){ def deleteFile(path): def delete_file(path): try{ try: try: Files.delete(path); } catch(Exception e){ Files.delete(path) except Exception as e: System.err.println( os.remove(path) except: sys.stderr.write( System.err.println( "Error deleting " + path); "Error deleting " + path) "Error deleting %s
" % path } ) } C++ Java Java memset(prime, 0, sizeof(prime)); memset(prime, 0, Arrays.fill(prime, 0); (Integer.SIZE/Byte.SIZE)); sort(a, a + n); sort(a, a + n); Arrays.sort(a);
# return os.environ.get(name, '')
Figure 10: Examples of incorrect baseline translations versus correct TransCoder translations. When translating from Java to Python, the baseline fails to translate the System.getenv, System.err.println, and Files.delete functions from the standard library, and the contains, subList, and IndexOf methods of the Java List interface. Instead, it simply copies them, showing the limitations of a rule-based system. On the other hand, TransCoder converts properly all of these functions into their Python equivalents. In the C++ â Java direction, baseline translations are made at token-level, and are incorrect. For instance, the ï¬rst example shows that the baseline tries to translate the sizeof function, and leaves memset unchanged although it does not exist in Java. Instead, TransCoder correctly uses Arrays.fill to ï¬ll the array prime with zeros.
20
Input Java failed translations Description bool isEven (int n){ return (!(n & 1)); } static boolean isEven(int n){ return (!(n & 1)); } The ! operator works on boolean and integers in C++ (it returns true if the integer is positive) but it only works on boolean in Java. int summingSeries(long n){ return pow(n, 2); } static int summingSeries(long n){ return Math.pow(n, 2); } In Java, Math.pow(n, 2) returns a double which should be cast to int to match the function return type. def minSum(A): min_val = min(A) return min_val * (len(A) - 1) static double minSum(double[] A){ double minVal = Math.min(A); return minVal*(A.length - 1); } Math.min is a Java function but does not take as input a double[] array but a pair of double.
Figure 11: Examples of failed TransCoder translations. TransCoder fails to translate these C++ and Python functions into Java, showing its limitations. In these examples, it fails to account for the variable types when using a method or an operator. In particular, the NOT operator ! in C++ should have been translated to ~ in Java, because it is applied to an integer. Similarly, the Math.min function in Java cannot be applied to arrays.
21 | {
"id": "1611.08307"
} |
2006.03509 | Triple descent and the two kinds of overfitting: Where & why do they appear? | A recent line of research has highlighted the existence of a "double descent"
phenomenon in deep learning, whereby increasing the number of training examples
$N$ causes the generalization error of neural networks to peak when $N$ is of
the same order as the number of parameters $P$. In earlier works, a similar
phenomenon was shown to exist in simpler models such as linear regression,
where the peak instead occurs when $N$ is equal to the input dimension $D$.
Since both peaks coincide with the interpolation threshold, they are often
conflated in the litterature. In this paper, we show that despite their
apparent similarity, these two scenarios are inherently different. In fact,
both peaks can co-exist when neural networks are applied to noisy regression
tasks. The relative size of the peaks is then governed by the degree of
nonlinearity of the activation function. Building on recent developments in the
analysis of random feature models, we provide a theoretical ground for this
sample-wise triple descent. As shown previously, the nonlinear peak at
$N\!=\!P$ is a true divergence caused by the extreme sensitivity of the output
function to both the noise corrupting the labels and the initialization of the
random features (or the weights in neural networks). This peak survives in the
absence of noise, but can be suppressed by regularization. In contrast, the
linear peak at $N\!=\!D$ is solely due to overfitting the noise in the labels,
and forms earlier during training. We show that this peak is implicitly
regularized by the nonlinearity, which is why it only becomes salient at high
noise and is weakly affected by explicit regularization. Throughout the paper,
we compare analytical results obtained in the random feature model with the
outcomes of numerical experiments involving deep neural networks. | http://arxiv.org/pdf/2006.03509 | Stéphane d'Ascoli, Levent Sagun, Giulio Biroli | cs.LG, cond-mat.dis-nn, stat.ML | null | null | cs.LG | 20200605 | 20201013 | 0 2 0 2
t c O 3 1 ] G L . s c [
2 v 9 0 5 3 0 . 6 0 0 2 : v i X r a
# Triple descent and the Two Kinds of Overï¬tting: Where & Why do they Appear?
Stéphane dâAscoli Department of Physics Ãcole Normale Supérieure Paris, France [email protected]
Levent Sagun Facebook AI Research Paris, France [email protected]
Giulio Biroli Department of Physics Ãcole Normale Supérieure Paris, France [email protected]
# Abstract
A recent line of research has highlighted the existence of a âdouble descentâ phenomenon in deep learning, whereby increasing the number of training examples N causes the generalization error of neural networks to peak when N is of the same order as the number of parameters P . In earlier works, a similar phenomenon was shown to exist in simpler models such as linear regression, where the peak instead occurs when N is equal to the input dimension D. Since both peaks coincide with the interpolation threshold, they are often conï¬ated in the litterature. In this paper, we show that despite their apparent similarity, these two scenarios are inherently different. In fact, both peaks can co-exist when neural networks are applied to noisy regression tasks. The relative size of the peaks is then governed by the degree of nonlinearity of the activation function. Building on recent developments in the analysis of random feature models, we provide a theoretical ground for this sample-wise triple descent. As shown previously, the nonlinear peak at N = P is a true divergence caused by the extreme sensitivity of the output function to both the noise corrupting the labels and the initialization of the random features (or the weights in neural networks). This peak survives in the absence of noise, but can be suppressed by regularization. In contrast, the linear peak at N = D is solely due to overï¬tting the noise in the labels, and forms earlier during training. We show that this peak is implicitly regularized by the nonlinearity, which is why it only becomes salient at high noise and is weakly affected by explicit regularization. Throughout the paper, we compare analytical results obtained in the random feature model with the outcomes of numerical experiments involving deep neural networks.
# Introduction
A few years ago, deep neural networks achieved breakthroughs in a variety of contexts [1, 2, 3, 4]. However, their remarkable generalization abilities have puzzled rigorous understanding [5, 6, 7]: classical learning theory predicts that generalization error should follow a U-shaped curve as the number of parameters P increases, and a monotonous decrease as the number of training examples N increases. Instead, recent developments show that deep neural networks, as well as other machine learning models, exhibit a starkly different behaviour. In the absence of regularization, increasing P and N respectively yields parameter-wise and sample-wise double descent curves [8, 9, 10, 11, 12, 13], whereby the generalization error ï¬rst decreases, then peaks at the interpolation threshold (at which point training error vanishes), then decreases monotonically again. This peak1 was shown to be related
1Also called the jamming peak due to similarities with a well-studied phenomenon in the Statistical Physics literature [14, 15, 16, 17, 18].
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Test Nonlinear Test [Linear | [Nonlinear > loss peak loss peak || peak 2 Nonlinear peak Activation function â Strongly nonlinear â Weakly nonlinear â Linear I I i] I 1 N oe = = v > Pp D
Figure 1: Left: The parameter-wise proï¬le of the test loss exhibits double descent, with a peak at P = N . Middle: The sample-wise proï¬le can, at high noise, exhibit a single peak at N = P , a single peak at N = D, or a combination of the two (triple descent2) depending on the degree of nonlinearity of the activation function. Right: Color-coded location of the peaks in the (P, N ) phase space.
to a sharp increase in the variance of the estimator [18, 9], and can be suppressed by regularization or ensembling procedures [9, 19, 20].
Although double descent has only recently gained interest in the context of deep learning, a seemingly similar phenomenon has been well-known for several decades for simpler models such as least squares regression [21, 22, 23, 14, 15], and has recently been studied in more detail in an attempt to shed light on the double descent curve observed in deep learning [24, 25, 26, 27]. However, in the context of linear models, the number of parameters P is not a free parameter: it is necessarily equal to the input dimension D. The interpolation threshold occurs at N = D, and coincides with a peak in the test loss which we refer to as the linear peak. For neural networks with nonlinear activations, the interpolation threshold surprisingly becomes independent of D and is instead observed when the number of training examples is of the same order as the total number of training parameters, i.e. N â¼ P : we refer to the corresponding peak as the nonlinear peak.
Somewhere in between these two scenarios lies the case of neural networks with linear activations. They have P > D parameters, but only D of them are independent: the interpolation threshold occurs at N = D. However, their dynamical behaviour shares some similarities with that of deep nonlinear networks, and their analytical tractability has given them signiï¬cant attention [28, 29, 6]. A natural question is the following: what would happen for a âquasi-linearâ network, e.g. one that uses a sigmoidal activation function with a high saturation plateau? Would the overï¬tting peak be observed both at N = D and N = P , or would it somehow lie in between?
In this work, we unveil the similarities and the differences between the linear and nonlinear peaks. In particular, we address the following questions:
Are the linear and nonlinear peaks two different phenomena? ⢠If so, can both be observed simultaneously, and can we differentiate their sources? ⢠How are they affected by the activation function? Can they both be suppressed by regulariz-
ing or ensembling? Do they appear at the same time during training?
Contribution In modern neural networks, the double descent phenomenon is mostly studied by increasing the number of parameters P (Fig. 1, left), and more rarely, by increasing the number of training examples N (Fig. 1, middle) [13]. The analysis of linear models is instead performed by varying the ratio P/N . By studying the full (P, N ) phase space (Fig. 1, right), we disentangle the role of the linear and the nonlinear peaks in modern neural networks, and elucidate the role of the input dimension D.
In Sec.1, we demonstrate that the linear and nonlinear peaks are two different phenomena by showing that they can co-exist in the (P, N ) phase space in noisy regression tasks. This leads to a sample-wise triple descent, as sketched in Fig. 1. We consider both an analytically tractable model of random features [30] and a more realistic model of neural networks.
In Sec. 2, we provide a theoretical analysis of this phenomenon in the random feature model. We examine the eigenspectrum of random feature Gram matrices and show that whereas the nonlinear
2The name âtriple descentâ refers to the presence of two peaks instead of just one in the famous âdouble descentâ curve, but in most cases the test error does not actually descend before the ï¬rst peak.
2
peak is caused by the presence of small eigenvalues [6], the small eigenvalues causing the linear peak gradually disappear when the activation function becomes nonlinear: the linear peak is implicitly regularized by the nonlinearity. Through a bias-variance decomposition of the test loss, we reveal that the linear peak is solely caused by overï¬tting the noise corrupting the labels, whereas the nonlinear peak is also caused by the variance due to the initialization of the random feature vectors (which plays the role of the initialization of the weights in neural networks).
Finally, in Sec. 3, we present the phenomenological differences which follow from the theoretical analysis. Increasing the degree of nonlinearity of the activation function weakens the linear peak and strengthens the nonlinear peak. We also ï¬nd that the nonlinear peak can be suppressed by regularizing or ensembling, whereas the linear peak cannot since it is already implicitly regularized. Finally, we note that the nonlinear peak appears much later under gradient descent dynamics than the linear peak, since it is caused by small eigenmodes which are slow to learn.
Related work Various sources of sample-wise non-monotonicity have been observed since the 1990s, from linear regression [14] to simple classiï¬cation tasks [31, 32]. In the context of adversarial training, [33] shows that increasing N can help or hurt generalization depending on the strength of the adversary. In the non-parametric setting of [34], an upper bound on the test loss is shown to exhibit multiple descent, with peaks at each N = Di, i â N.
Two concurrent papers also discuss the existence of a triple descent curve, albeit of different nature to ours. On one hand, [19] observes a sample-wise triple descent in a non-isotropic linear regression task. In their setup, the two peaks stem from the block structure of the covariance of the input data, which presents two eigenspaces of different variance; both peaks boil down to what we call âlinear peaksâ. [35] pushed this idea to the extreme by designing the covariance matrix in such a way to make an arbitrary number of linear peaks appear.
On the other hand, [36] presents a parameter-wise triple descent curve in a regression task using the Neural Tangent Kernel of a two-layer network. Here the two peaks stem from the block structure of the covariance of the random feature Gram matrix, which contains a block of linear size in input dimension (features of the second layer, i.e. the ones studied here), and a block of quadratic size (features of the ï¬rst layer). In this case, both peaks are ânonlinear peaksâ.
The triple descent curve presented here is of different nature: it stems from the general properties of nonlinear projections, rather than the particular structure chosen for the data [19] or regression kernel [36]. To the best of our knowledge, the disentanglement of linear and nonlinear peaks presented here is novel, and its importance is highlighted by the abundance of papers discussing both kinds of peaks.
On the analytical side, our work directly uses the results for high-dimensional random features models derived in [11, 37] (for the test loss), [38] (for the spectral analysis) and [20] (for the bias-variance decomposition).
Reproducibility We release the code necessary to reproduce the data and ï¬gures in this paper publicly at https://github.com/sdascoli/triple-descent-paper.
# 1 Triple descent in the test loss phase space
We compute the (P, N ) phase space of the test loss in noisy regression tasks to demonstrate the triple descent phenomenon. We start by introducing the two models which we will study throughout the paper: on the analytical side, the random feature model, and on the numerical side, a teacher-student task involving neural networks trained with gradient descent.
Dataset For both models, the input data X ⬠R*â consists of N vectors in D dimensions whose elements are drawn i.i.d. from \V/(0, 1). For each model, there is an associated label generator f* corrupted by additive Gaussian noise: y = f*(a) + â¬, where the noise variance is inversely related to the signal to noise ratio (SNR), ⬠~ Nâ(0,1/SNR).
# 1.1 Random features regression (RF model)
3
OER? aeR?
Figure 2: Illustration of an RF network.
Model We consider the random features (RF) model introduced in [30]. It can be viewed as a two-layer neural network whose ï¬rst layer is a ï¬xed random matrix Î â RP ÃD containing the P random feature vectors (see Fig. 2)3:
f(a) = Se (âS=) . a)
go is a pointwise activation function, the choice of which will be of prime importance in the study. The ground truth is a linear model i +(p) = Sie? by f*(x) = (G,a)/VD. Elements of © and G are drawn iid from V(0, 1).
Training The second layer weights, i.e. the elements of a, are calculated via ridge regression with a regularization parameter γ:
. fl arg min li ©;,X,,) ( t) âhw
7
. . fl ty? Pry 2 1+ _ Py 7 @ = arg min li (yâaZ') 4 D llal|5 wy Z(zx4 pe (2)
©;,X,,) 1 Vine ( t) âhw RYxP y= _Z'Z RPxP 3 i o( VD ) ⬠, WV ⬠(3)
# 1.2 Teacher-student regression with neural networks (NN model)
Model We consider a teacher-student neural network (NN) framework where a student network learns to reproduce the labels of a teacher network. The teacher f* is taken to be an untrained ReLU fully-connected network with 3 layers of weights and 100 nodes per layer. The student f is a fully-connected network with 3 layers of weights and nonlinearity a. Both are initialized with the default PyTorch initialization.
Training We train the student with mean-square loss using full-batch gradient descent for 1000 epochs with a learning rate of 0.01 and momentum 0.94. We examine the effect of regularization by adding weight decay with parameter 0.05, and the effect of ensembling by averaging over 10 initialization seeds for the weights. All results are averaged over these 10 runs.
# 1.3 Test loss phase space
In both models, the key quantity of interest is the test loss, deï¬ned as the mean-square loss evaluated on fresh samples x â¼ N (0, 1): Lg = Ex
In the RF model, this quantity was ï¬rst derived rigorously in [11], in the high-dimensional limit where N, P, D are sent to inï¬nity with their ratios ï¬nite. More recently, a different approach based on the Replica Method from Statistic Physics was proposed in [37]; we use this method to compute the analytical phase space. As for the NN model, which operates at ï¬nite size D = 196, the test loss is computed over a test set of 104 examples.
In Fig.3, we plot the test loss as a function of two intensive ratios of interest: the number of parameters per dimension P/D and the number of training examples per dimension N/D. In the left panel, at high SNR, we observe an overï¬tting line at N = P , yielding a parameter-wise and sample-wise double descent. However when the SNR becomes smaller than unity (middle panel), the sample-wise proï¬le undergoes triple descent, with a second overï¬tting line appearing at N = D. A qualitatively identical situation is shown for the NN model in the right panel5.
3This model, shown to undergo double descent in [11], has become a cornerstone to study the so-called lazy learning regime of neural networks where the weights stay close to their initial value [39]: assuming fθ0 = 0, we have fθ(x) â âθfθ(x)|θ=θ0 · (θ â θ0) [40]. In other words, lazy learning amounts to a linear ï¬tting problem with a random feature vector âθfθ(x)|θ=θ0
4We use full batch gradient descent with small learning rate to reduce the noise coming from the optimization
as much as possible. After 1000 epochs, all observables appear to have converged.
5Note that for NNs, we necessarily have P/D > 1.
4
(a) RF, SN R = 2 (b) RF, SN R = 0.2 (c) NN, SN R = 0.2
Figure 3: Logarithmic plot of the test loss in the (P, N ) phase space. (a): RF model with SNR = 2, γ = 10â1. (b): RF model with SNR = 0.2, γ = 10â1. The solid arrows emphasize the sample-wise proï¬le, and the dashed lines emphasize the parameter-wise proï¬le. (c): NN model. In all cases, Ï = Tanh. Analogous results for different activation functions and values of the SNR are shown in Sec. A of the SM.
The case of structured data The case of structured datasets such as CIFAR10 is discussed in Sec. C of the SM. The main differences are (i) the presence of multiple linear peaks at N < D due to the complex covariance structure of the data, as observed in [19, 35], and (ii) the fact that the nonlinear peak is located sligthly above the line N = P since the data is easier to ï¬t, as observed in [18].
# 2 Theory for the RF model
The qualitative similarity between the central and right panels of Fig. 3 indicates that a full un- derstanding can be gained by a theoretical analysis of the RF model, which we present in this section.
# 2.1 High-dimensional setup
As is usual for the study of RF models, we consider the following high-dimensional limit:
N, D, P â â, D P = Ï = O(1), D N = Ï = O(1) (4)
Then the key quantities governing the behavior of the system are related to the properties of the nonlinearity around the origin:
2 2 2 â2?/ â2°/ n= ao (2), C= cee] and r=§ (5)
As explained in [41], the Gaussian Equivalence Theorem [11, 42, 41] which applies in this high dimensional setting establishes an equivalence to a Gaussian covariate model where the nonlinear activation function is replaced by a linear term and a nonlinear term acting as noise:
xe" xe" Z o(*5-) Va + /nâ-CW, W~N(0,1) (6)
Of prime importance is the degree of linearity r = ζ/η â [0, 1], which indicates the relative magnitudes of the linear and the nonlinear terms6.
6Note from Eq. 5 that for non-homogeneous functions such as Tanh, r also depends on the variance of the inputs and ï¬xed weights, both set to unity here: intuitively, smaller variance will yield smaller preactivations which will lie in the linear region of the Tanh, increasing the effective value of r.
5
# 2.2 Spectral analysis
As expressed by Eq. B} RF regression is equivalent to linear regression on a structured dataset Z ⬠RN*?, which is projected from the original i.id dataset X ⬠Râ*?. In [6], it was shown that the peak which occurs in unregularized linear regression on i.i.d. data is linked to vanishingly small (but non-zero) eigenvalues in the covariance of the input data. Indeed, the norm of the interpolator needs to become very large to fit small eigenvalues according to Eq3} yielding high variance. Following this line, we examine the eigenspectrum of © = + nZ TZ, which was derived in a series of recent papers. The spectral density p(A) can be obtained from the resolvent G(z) (B8)/43) (441/45):
p(A) == Jim, ImG(A â ie), G(z) = vA (=) + inv A(t) = 1+ (7 OtAo(t) v(t) + es )
where AÏ(t) = 1 + (A(t) â 1)Ï and AÏ(t) = 1 + (A(t) â 1)Ï. We solve the implicit equation for A(t) numerically, see for example Eq. 11 of [38].
@ 102 & l ~ 1 1 3 10 I J 10-1 10° 101 10? N/D N/D = 0.10 N/D = 1.00 102 N/D = 10.00 N/D = 100.00 4x10-4| â= analytical 2 ~ 7 10 0 3x10) pp smallest 10-1 v0 @ 2x10) ___ p Jargest 10-3 10-2 a 10-4 10-4 10-4 10-4 10-3 1077 101 10-3 1071 10 10-3 1077 101 10-3107 10! A A
Figure 4: Empirical eigenspectrum of the covariance of the projected features © = x Z! Z at various values of N/D, with the corresponding test loss curve shown above. Analytics match the numerics even at D = 100. We color the top D eigenvalues in gray, which allows to separate the linear and nonlinear components at NV > D. We set o=Tanh, P/D = 10, SNR=0.2, y= 10-5.
107 10! ââ N/D=100.0 == N/D=1.0 Ts. â â N/D=10.0 = N/D=0.1 = > = â Np-50 S102 m th S ya) ° realy a rule 10-5 teak to 10-2 10>? 10° 107 10? 10% 10? 10 10 10° 10% 10? A A (a) Absolute value (r =0) (b) Tanh (r ~ 0.92) (c) Linear (r= 1)
Figure 5: Analytical eigenspectrum of Σ for η = 1, P/D = 10, and ζ = 0, 0.92, 1 (a,b,c). We distinguish linear and nonlinear components by using respectively solid and dashed lines. (a) (purely nonlinear): the spectral gap vanishes at N = P (i.e. N/D = 10). (c) (purely linear): the spectral gap vanishes at N = D. (b) (intermediate): the spectral gap of the nonlinear component vanishes at N = P , but the gap of the linear component does not vanish at N = D.
In the bottom row of Fig. 4 (see also middle panel of Fig. 5), we show the numerical spectrum obtained for various values of N/D with Ï = Tanh, and we superimpose the analytical prediction obtained from Eq. 7. At N > D, the spectrum separates into two components: one with D large eigenvalues, and the other with P âD smaller eigenvalues. The spectral gap (distance of the left edge of the spectrum to zero) closes at N = P , causing the nonlinear peak [46], but remains ï¬nite at N = D.
Fig. 5 shows the effect of varying r on the spectrum. We can interpret the results from Eq. 6:
6
(7)
¢ âPurely nonlinearâ (7 = 0): this is the case of even activation functions such as «++ ||, which verify ¢ = 0 according to Fab The spectrum of ©; = qWw'w follows a Marcenko-Pastur distribution of parameter c = P/N, concentrating around A= 1 at N/D â oo. The spectral gap closes at N=P.
¢ âPurely linearâ (7 = 1): this is the maximal value for r, and is achieved only for linear ne The spectrum of X; = wo(x9')'xoe! follows a product Wishart distribution [ concentrating around \= P/D=10 at N/D â oo. The spectral gap closes at N = D.
⢠Intermediate (0 < r < 1): this case encompasses all commonly used activation functions such as ReLU and Tanh. We recognize the linear and nonlinear components, which behave almost independently (they are simply shifted to the left by a factor of r and 1âr respectively), except at N = D where they interact nontrivially, leading to implicit regularization (see below).
The linear peak is implicitly regularized As stated previously, one can expect to observe over- ï¬tting peaks when Σ is badly conditioned, i.e. its when its spectral gap vanishes. This is indeed observed in the purely linear setup at N = D, and in the purely nonlinear setup at N = P . However, in the everyday case where 0 < r < 1, the spectral gap only vanishes at N = P , and not at N = D. The reason for this is that a vanishing gap is symptomatic of a random matrix reaching its maximal rank. Since rk (Σnl) = min(N, P ) and rk (Σl) = min(N, P, D), we have rk (Σnl) ⥠rk (Σl) at P > D. Therefore, the rank of Σ is imposed by the nonlinear component, which only reaches its maximal rank at N = P . At N = D, the nonlinear component acts as an implicit regularization, by compensating the small eigenvalues of the linear component. This causes the linear peak to be implicitly regularized be the presence of the nonlinearity.
What is the linear peak caused by? At 0<r<1, the spectral gap vanishes at N = P, causing the norm of the estimator ||a|| to peak, but it does not vanish at N = D due to the implicit regularization; in fact, the lowest eigenvalue of the full spectrum does not even reach a local minimum at NV = D. Nonetheless, a soft /inear peak remains as a vestige of what happens at r = 1. What is this peak caused by? A closer look at the spectrum of Fig.[5]b clarifies this question. Although the left edge of the full spectrum is not minimal at NV = D, the left edge of the linear component, in solid lines, reaches a minimum at N = D. This causes a peak in ||@a)|, the norm of the âlinearized networkâ, as shown in Sec. [Blof the SM. This, in turn, entails a different kind of overfitting as we explain in the next section.
# 2.3 Bias-variance decomposition
The previous spectral analysis suggests that both peaks are related to some kind of overfitting. To address this issue, we make use of the bias-variance decomposition presented in [20]. The test loss is broken down into four contributions: a bias term and three variance terms stemming from the randomness of (i) the random feature vectors © (which plays the role of initialization variance in realistic networks), (ii) the noise ⬠corrupting the labels of the training set (noise variance) and (iii) the inputs X (sampling variance). We defer to for further details.
Only the nonlinear peak is affected by initialization variance In Fig. 6.a, we show such a decomposition. As observed in [20], the nonlinear peak is caused by an interplay between initialization and noise variance. This peak appears starkly at N = P in the high noise setup, where noise variance dominates the test loss, but also in the noiseless setup (Fig. 6.b), where the residual initialization variance dominates: nonlinear networks can overï¬t even in absence of noise.
The linear peak vanishes in the noiseless setup In stark contrast, the linear peak which appears clearly at N = D in Fig. 6.a is caused solely by a peak in noise variance, in agreement with [6]. Therefore, it vanishes in the noiseless setup of Fig. 6.b. This is expected, as for linear networks the solution to the minimization problem is independent of the initialization of the weights.
7
(a) Vanilla (b) Noiseless (c) Ensembling (d) Regularizing
Figure 6: Bias-variance decompostion of the test loss in the RF model for Ï = ReLU and P/D = 100. Regularizing (increasing γ) and ensembling (increasing the number K of initialization seeds we average over) mitigates the nonlinear peak but does not affect the linear peak. (a) K = 1, γ = 10â5, SN R = 0.2. (b) Same but SN R = â. (c) Same but K = 10. (d) Same but γ = 10â3.
# 3 Phenomenology of triple descent
# 3.1 The nonlinearity determines the relative height of the peaks
108 ââ linear â tanh B 10? â re & I â abs 3 10! I â N=D = â- N=P 10° H 1 10-! 10° 108 102 N/D
Figure 7: Numerical test loss of RF models at ï¬nite size (D = 100), averaged over 10 runs. We set P/D = 10, SNR = 0.2 and γ = 10â3.
In Fig. 7, we consider RF models with four different activation functions: absolute value (r = 0), ReLU (r = 0.5), Tanh (r â¼ 0.92) and linear (r = 1). We see that increasing the degree of nonlinearity strength- ens the nonlinear peak (by increasing initialization variance) and weakens the linear peak (by increasing the implicit regularization). In Sec. A of the SM, we present additional results where the degree of linear- ity r is varied systematically in the RF model, and show that replacing Tanh by ReLU in the NN setup produces a similar effect. Note that the behavior changes abruptly near r = 1, marking the transition to the linear regime.
(a) Vanilla (b) Ensembling (c) Regularizing
Figure 8: Test loss phase space for the NN model with Ï = Tanh. Weight decay with parameter γ and ensembling over K seeds weakens the nonlinear peak but leaves the linear peak untouched. (a) K = 1, γ = 0, SN R = 0.2. (b) Same but K = 10. (c) Same but γ = 0.05.
8
# 3.2 Ensembling and regularization only affects the nonlinear peak
It is a well-known fact that regularization [19] and ensembling [9, 20, 49] can mitigate the nonlinear peak. This is shown in panel (c) and (d) of Fig. 6 for the RF model, where ensembling is performed by averaging the predictions of 10 RF models with independently sampled random feature vectors. However, we see that these procedures only weakly affect the linear peak. This can be understood by the fact that the linear peak is already implicitly regularized by the nonlinearity for r < 1, as explained in Sec. 2.
In the NN model, we perform a similar experiment by using weight decay as a proxy for the regularization procedure, see Fig. 8. Similarly as in the RF model, both ensembling and regularizing attenuates the nonlinear peak much more than the linear peak.
# 3.3 The nonlinear peak forms later during training
To study the evolution of the phase space during training dynamics, we focus on the NN model (there are no dynamics involved in the RF model we considered, where the second layer weights were learnt via ridge regression). In Fig. 9, we see that the linear peak appears early during training and maintains throughout, whereas the nonlinear peak only forms at late times. This can be understood qualitatively as follows [6]: for linear regression the time required to learn a mode of eigenvalue λ in the covariance matrix is proportional to λâ1. Since the nonlinear peak is due to vanishingly small eigenvalues, which is not the case of the linear peak, the nonlinear peak takes more time to form completely.
t=26 epochs t=78 epochs t=233 epochs t=695 epochs 7 log (P10) log (P10) log (P10) BRESSERSES 1 0 1 2 1 oOo TF log(M0) login) log (n/0) log(M0) 2 1 o ft 2
Figure 9: Test loss phase space for the NN model with Ï = Tanh, plotted at various times during training. The linear peak grows ï¬rst, followed by the nonlinear peak.
# 4 Conclusion
One of the key challenges in solving tasks with network-like architectures lies in choosing an appropriate number of parameters P given the properties of the training dataset, namely its size N and dimension D. By elucidating the structure of the (P, N ) phase space, its dependency on D, and distinguishing the two different types of overï¬tting which it can exhibit, we believe our results can be of interest to practitioners.
Our results leave room for several interesting follow-up questions, among which the impact of (1) various architectural choices, (2) the optimization algorithm, and (3) the structure of the dataset. For future work, we will consider extensions along those lines with particular attention to the structure of the dataset. We believe it will provide a deeper insight into data-model matching.
9
Acknowledgements We thank Federica Gerace, Armand Joulin, Florent Krzakala, Bruno Loureiro, Franco Pellegrini, Maria Reï¬netti, Matthieu Wyart and Lenka Zdeborova for insightful discussions. GB acknowledges funding from the French government under management of Agence Nationale de la Recherche as part of the "Investissements dâavenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute) and from the Simons Foundation collaboration Cracking the Glass Problem (No. 454935 to G. Biroli).
Broader Impact Due to the theoretical nature of this paper, a Broader Impact discussion is not easily applicable. However, given the tight interaction of data & model and their impact on overï¬tting regimes, we believe that our ï¬ndings and this line of research, in general, may potentially impact how practitioners deal with data.
# References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
[2] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521(7553):436â444, 2015.
[3] Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal processing magazine, 29(6):82â97, 2012.
[4] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104â3112, 2014. [5] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016. [6] Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in
neural networks. arXiv preprint arXiv:1710.03667, 2017.
[7] Behnam Neyshabur, Zhiyuan Li, Srinadh Bhojanapalli, Yann LeCun, and Nathan Srebro. Towards understanding the role of over-parametrization in generalization of neural networks. arXiv preprint arXiv:1805.12076, 2018.
[8] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine learning and the bias-variance trade-off. arXiv preprint arXiv:1812.11118, 2018.
[9] Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane dâAscoli, Giulio Biroli, Clément Hongler, and Matthieu Wyart. Scaling description of generalization with number of parameters in deep learning. Journal of Statistical Mechanics: Theory and Experiment, 2020(2):023401, 2020.
[10] S Spigler, M Geiger, S dâAscoli, L Sagun, G Biroli, and M Wyart. A jamming transition from under-to over-parametrization affects generalization in deep learning. Journal of Physics A: Mathematical and Theoretical, 52(47):474001, 2019.
[11] Song Mei and Andrea Montanari. The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355, 2019. [12] Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep double descent: Where bigger models and more data hurt. arXiv preprint arXiv:1912.02292, 2019.
[13] Preetum Nakkiran. More data can hurt for linear regression: Sample-wise double descent. arXiv preprint arXiv:1912.07242, 2019.
[14] Manfred Opper and Wolfgang Kinzel. Statistical mechanics of generalization. In Models of neural networks III, pages 151â209. Springer, 1996.
[15] Andreas Engel and Christian Van den Broeck. Statistical mechanics of learning. Cambridge University Press, 2001.
[16] Florent Krzakala and Jorge Kurchan. Landscape analysis of constraint satisfaction problems. Physical Review E, 76(2):021122, 2007.
10
[17] Silvio Franz and Giorgio Parisi. The simplest model of jamming. Journal of Physics A: Mathematical and Theoretical, 49(14):145001, 2016.
[18] Mario Geiger, Stefano Spigler, Stéphane dâAscoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, and Matthieu Wyart. Jamming transition as a paradigm to understand the loss landscape of deep neural networks. Physical Review E, 100(1):012115, 2019.
[19] Preetum Nakkiran, Prayaag Venkat, Sham Kakade, and Tengyu Ma. Optimal regularization can mitigate double descent. arXiv preprint arXiv:2003.01897, 2020.
[20] Stéphane dâAscoli, Maria Reï¬netti, Giulio Biroli, and Florent Krzakala. Double trouble in double descent: Bias and variance (s) in the lazy regime. arXiv preprint arXiv:2003.01054, 2020.
[21] Yann Le Cun, Ido Kanter, and Sara A Solla. Eigenvalues of covariance matrices: Application to neural-network learning. Physical Review Letters, 66(18):2396, 1991.
[22] Anders Krogh and John A Hertz. Generalization in a linear perceptron in the presence of noise. Journal of Physics A: Mathematical and General, 25(5):1135, 1992.
In Proceedings of the Scandinavian Conference on Image Analysis, volume 2, pages 957â964. PROCEEDINGS PUBLISHED BY VARIOUS PUBLISHERS, 1995.
[24] Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani. Surprises in high- dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560, 2019.
[25] Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, and Alyson K Fletcher. Generalization error of generalized linear models in high dimensions. arXiv preprint arXiv:2005.00180, 2020.
[26] Benjamin Aubin, Florent Krzakala, Yue M Lu, and Lenka Zdeborová. Generalization error in high-dimensional perceptrons: Approaching bayes error with convex optimization. arXiv preprint arXiv:2006.06560, 2020.
[27] Zeng Li, Chuanlong Xie, and Qinwen Wang. Provable more data hurt in high dimensional least squares estimator. arXiv preprint arXiv:2008.06296, 2020.
[28] Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120, 2013.
[29] Andrew K Lampinen and Surya Ganguli. An analytic theory of generalization dynamics and transfer learning in deep linear networks. arXiv preprint arXiv:1809.10374, 2018.
[30] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in neural information processing systems, pages 1177â1184, 2008.
[31] Marco Loog and Robert PW Duin. The dipping phenomenon. In Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pages 310â317. Springer, 2012.
[32] Marco Loog, Tom Viering, and Alexander Mey. Minimizers of the empirical risk and risk monotonicity. In Advances in Neural Information Processing Systems, pages 7478â7487, 2019.
[33] Yifei Min, Lin Chen, and Amin Karbasi. The curious case of adversarially robust models: More data can help, double descend, or hurt generalization. arXiv preprint arXiv:2002.11080, 2020.
[34] Tengyuan Liang, Alexander Rakhlin, and Xiyu Zhai. On the risk of minimum-norm interpolants and restricted lower isometry of kernels. arXiv preprint arXiv:1908.10292, 2019.
[35] Lin Chen, Yifei Min, Mikhail Belkin, and Amin Karbasi. Multiple descent: Design your own generalization curve. arXiv preprint arXiv:2008.01036, 2020.
[36] Ben Adlam and Jeffrey Pennington. The neural tangent kernel in high dimensions: Triple descent and a multi-scale theory of generalization. arXiv preprint arXiv:2008.06786, 2020.
[37] Federica Gerace, Bruno Loureiro, Florent Krzakala, Marc Mézard, and Lenka Zdeborová. Generalisation error in learning with random features and the hidden manifold model. arXiv preprint arXiv:2002.09339, 2020.
[38] Jeffrey Pennington and Pratik Worah. Nonlinear random matrix theory for deep learning. In Advances in Neural Information Processing Systems, pages 2637â2646, 2017.
11
[39] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pages 8571â8580, 2018.
[40] Lénaïc Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable program- ming. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dâAlché Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 2933â2943. Curran Associates, Inc., 2019.
[41] S Péché et al. A note on the pennington-worah distribution. Electronic Communications in Probability, 24, 2019.
[42] Sebastian Goldt, Galen Reeves, Marc Mézard, Florent Krzakala, and Lenka Zdeborová. The gaussian equivalence of generative models for learning with two-layer neural networks. arXiv preprint arXiv:2006.14709, 2020.
[43] Lucas Benigni and Sandrine Péché. Eigenvalue distribution of nonlinear models of random matrices. arXiv preprint arXiv:1904.03090, 2019.
[44] Ben Adlam, Jake Levinson, and Jeffrey Pennington. A random matrix perspective on mixtures of nonlinearities for deep learning. arXiv preprint arXiv:1912.00827, 2019.
[45] Zhenyu Liao and Romain Couillet. On the spectrum of random features maps of high dimen- sional data. arXiv preprint arXiv:1805.11916, 2018.
[46] Madhu Advani, Subhaneil Lahiri, and Surya Ganguli. Statistical mechanics of complex neural systems and high dimensional data. Journal of Statistical Mechanics: Theory and Experiment, 2013(03):P03014, 2013.
[47] Thomas Dupic and Isaac Pérez Castillo. Spectral density of products of wishart dilute random matrices. part i: the dense case. arXiv preprint arXiv:1401.7802, 2014.
[48] Gernot Akemann, Jesper R Ipsen, and Mario Kieburg. Products of rectangular random matrices: singular values and progressive scattering. Physical Review E, 88(5):052118, 2013.
[49] Andrew Gordon Wilson and Pavel Izmailov. Bayesian deep learning and a probabilistic perspective of generalization. arXiv preprint arXiv:2002.08791, 2020.
12
# A Effect of signal-to-noise ratio and nonlinearity
# A.1 RF model
In the RF model, varying r can easily be achieved analytically and yields interesting results, as shown in Fig. 107.
In the top panel, we see that the parameter-wise proï¬le exhibits double descent for all degrees of linearity r and signal-to-noise ratio SNR, except in the linear case r = 1 which is monotonously deceasing. Increasing the degree of nonlinearity (decreasing r) and the noise (decreasing the SNR) simply makes the nonlinear peak stronger.
In the bottom panel, we see that the sample-wise proï¬le is more complex. In the linear case r = 1, only the linear peak appears (except in the noiseless case). In the nonlinear case r < 1, the nonlinear peak appears is always visible; as for the linear peak, it is regularized away, except in the strong noise regime SNR > 1 when the degree of nonlinearity is small (r > 0.8), where we observe the triple descent.
Notice that both in the parameter-wise and sample-wise proï¬les, the test loss proï¬les change smoothly with r, except near r = 1 where the behavior abruptly changes, particularly at low SNR.
1.0 SNR=o SNR=2 SNR=0.2 T 2 0.8â o 10 10! 8 10° 0.68 10-1 I om fo} l I 0.4 9 10-2 l I I 2 10-1 10° iy I I I 02 8 10-3 J a toâ! 10° 104 102 to! 10° 104 102 to! 10° 108102 0.0 P/D P/D P/D :
SNR=o SNR=2 SNR=0.2 T 0.8â o 10 10! 10° 0.68 10-1 I l I 0.4 10-2 l I I 10-1 10° I I I 02 10-3 J toâ! 10° 104 102 to! 10° 104 102 to! 10° 108102 0.0 P/D P/D P/D : 1.0 SNR=o SNR=2 SNR=0.2 T T 0.8â 1 10° 10 0.68 10° I 10-1 I 0.4 I I 0 I 10 l 023 10-2 10-1 10-100 108 102 10-1 100101102 10-1 10° 108-102 0.0 N/D N/D N/D ,
1.0 SNR=o SNR=2 SNR=0.2 T T B 0.8â 1 8 10° 10 0.68 10° I uo fo} 10-1 I 0.4 » I I ° 0 D I 10 l 023 10-2 10-1 a 10-100 108 102 10-1 100101102 10-1 10° 108-102 0.0 N/D N/D N/D ,
Figure 10: Analytical parameter-wise (top, N/D = 10) and sample-wise (bottom, P/D = 10) test loss proï¬les of the RF model. Left: noiseless case, SNR = â. Center: low noise, SNR = 2. Right: high noise, SNR = 0.2. We set γ = 10â1.
One can also mimick these results numerically by considering, as in [38], the following family of piecewise linear functions:
tj, +a[-a], â 2 Cala [ ]+ [ }+ Jan (8) 3 (1+a? â t(1+a)?
for which
rα = (1 â α)2 2 (1 + α2) â 2 Ï (1 + α)2 . (9)
7We focus here on the practically relevant setup N/D >> 1. Note from the (P, N) phase-space that things can be more complex at N/D < 1).
13
Here, α parametrizes the ratio of the slope of the negative part to the positive part and allows to adjust the value of r continuously. α = â1 (r = 1) will correspond to a (shifted) absolute value, α = 1 (r = 0) will correspond to a linear function, α = 0 will correspond to a (shifted) ReLU. In Fig. 11, we show the effect of sweeping α uniformly from 1 to -1 (which causes r to range from 0 to 1). As expected, we see the linear peak become stronger and the nonlinear peak become weaker.
(a) Nonlinearities used (b) Corresponding test loss
# ze 5
Figure 11: Moving from a purely nonlinear function to a purely linear function (dark to light colors) strengthens the linear peak and weakens the nonlinear peak.
# A.2 NN model
In Fig. 12, we show the effect of replacing Tanh (r â¼ 0.92) by ReLU (r = 0.5). We still distinguish the two peaks of triple descent, but the linear peak is much weaker in the case of ReLU, as expected from the stronger degree of nonlinearity.
(a) Tanh
(b) ReLU
Figure 12: Dynamics of the test loss phase space, with weight decay γ = 0.05. Top: Tanh. Bottom: ReLU.
# B Origin of the linear peak
In this section, we follow the lines of [37], where the test loss is decomposed in the following way (Eq. D.6):
Lg = Ï + Q â 2M (10)
â
_i1 2 vo Cpa, MHS yo _ p= tia, w=4o.a, Q=Sjoj@+2=Sl?, v=ea any
As before, β denotes the linear teacher vector and Î, a respectively denote the (ï¬xed) ï¬rst and (learnt) second layer of the student. This insightful expression shows that the loss only depends on
14
the norm of the second layer ||a||, the norm of the linearized network ||b||, and its overlap with the teacher b - 3.
We plot these three terms in Fig. focusing on the triple descent scenario SNR < 1. In the left panel, we see that the overlap of the student with the teacher is monotically increasing, and reaches its maximal value at a certain point which increases from D to P as we decrease r from | to 0. In the central panel, we see that ||a|| peaks at N = P, causing the nonlinear peak as expected, but nothing special happens at N = D (except for r = 1). However, in the right panel, we see that the norm of the linearized network peaks at N = D, where we know from the spectral analysis that the gap of the linear part of the spectrum is minimal. This is the origin of the linear peak.
1.0 1 1 zlall pllbll > 08 10! 1 [S) I 10 I 0.6 I I % 1 10° 1 10° I 049 l I I % ! 10-1 l 022 J | 10-1 J a toâ! 10° 108 102 toâ! 10° 104 102 to! 10° 108102 0.0 N/D N/D N/D ,
Figure 13: Terms entering Eq. 11, plotted at SNR = 0.2, γ = 10â1.
# C Structured datasets
In this section, we examine how our results are affected by considering the realistic case of correlated data. To do so, we replace the Gaussian i.i.d. data by MNIST data, downsampled to 10 Ã 10 images for the RF model (D = 100) and 14 Ã 14 images for the NN model (D = 196).
# C.1 RF model: data structure does not matter in the lazy regime
We refer to the results in Fig [14] Interestingly, the triple descent profile is weakly affected by the correlated structure of this realistic dataset: the linear peak and nonlinear peaks still appear, respectively at N = D and N = P. However, the spectral properties of © = + nZl Z are changed in an interesting manner: the two parts of the spectrum are now contiguous, there i is no gap between the linear part and the nonlinear part.
# C.2 NN model: the effect of feature learning
As shown in Fig. 15, the NN model behaves very differently on structured data like CIFAR10. At late times, three main differences with respect to the case of random data can be observed in the low SNR setup:
⢠Instead of a clearly deï¬ned linear peak at N = D, we observe a large overï¬tting regions at N < D.
⢠The nonlinear peak shifts to higher values of N during time, but always scales sublinearly with P , i.e. it is located at N ⼠P α with α < 1
Behavior of the linear peaks As emphasized by [35], a non-trivial structure of the covariance of the input data can strongly affect the number and location of linear peaks. For example, in the context of linear regression, [19] considers gaussian data drawn from a diagonal covariance matrix Σ â R30Ã30 with two blocks of different strengths: Σi, i = 10 for i ⤠15 and Σi, i = 1 for i ⥠15. In this situation, two linear peaks are observed: one at N1 = 15, and one at N2 = 30. In the setup of structured data such as CIFAR10, where the covariance is evidently much more complicated, one can expect to see a multiple peaks, all located at N ⤠D.
15
=
# =x
# to?
# rr ND=010 ND=022 tot we
© ND =045 wn
10 © ND = 1.00 tot
© ND=215
10°
# ND
# ww
ie! ND =A4 = NID= 10,00 . a
10!
ND =2154 © ND = 46.42 10
a N/D~= 100.00 10!
10!
# an
# ia
10
10°
# Sw
# tot
(a) Random data
10°
(b) MNIST
Figure 14: Spectrum of the covariance of the projected features Σ = 1 N/D, with the corresponding loss curve shown above. We set Ï = Tanh, γ = 10â5.
# various values of
This is indeed what we observe: in the case of Tanh, where the linear peaks are strong, two linear peaks are particularly at late times, at N1 = 10â1.5D, and the other at N2 = 10â2D. In the case of ReLU, they are obscured at late times by the strength of the nonlinear peak.
Behavior of the nonlinear peak We observe that during training, the nonlinear peak shifts towards higher values of N . This phenomenon was already observed in [12], where it is explained through the notion of effective model complexity. The intuition goes as follows: increasing the training time increases the expressivity of a given model, and therefore increases the number of training examples it can overï¬t.
We argue that this sublinear scaling is a consequence of the fact that structured data is easier to memorize than random data [18]: as the dataset grows large, each new example becomes easier to classify since the network has learnt the underlying rule (in contrast to the case of random data where each new example makes the task harder by the same amount).
16
(a) CIFAR10, Tanh
(b) CIFAR10, ReLU
Figure 15: Dynamics of test loss phase space on CIFAR10 with SNR = 0.2, for different activation functions.
17 | {
"id": "2006.14709"
} |
2006.03463 | Sponge Examples: Energy-Latency Attacks on Neural Networks | The high energy costs of neural network training and inference led to the use
of acceleration hardware such as GPUs and TPUs. While this enabled us to train
large-scale neural networks in datacenters and deploy them on edge devices, the
focus so far is on average-case performance. In this work, we introduce a novel
threat vector against neural networks whose energy consumption or decision
latency are critical. We show how adversaries can exploit carefully crafted
$\boldsymbol{sponge}~\boldsymbol{examples}$, which are inputs designed to
maximise energy consumption and latency.
We mount two variants of this attack on established vision and language
models, increasing energy consumption by a factor of 10 to 200. Our attacks can
also be used to delay decisions where a network has critical real-time
performance, such as in perception for autonomous vehicles. We demonstrate the
portability of our malicious inputs across CPUs and a variety of hardware
accelerator chips including GPUs, and an ASIC simulator. We conclude by
proposing a defense strategy which mitigates our attack by shifting the
analysis of energy consumption in hardware from an average-case to a worst-case
perspective. | http://arxiv.org/pdf/2006.03463 | Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert Mullins, Ross Anderson | cs.LG, cs.CL, cs.CR, stat.ML | Accepted at 6th IEEE European Symposium on Security and Privacy
(EuroS&P) | null | cs.LG | 20200605 | 20210512 | 1 2 0 2
y a M 2 1 ] G L . s c [
2 v 3 6 4 3 0 . 6 0 0 2 : v i X r a
# SPONGE EXAMPLES: ENERGY-LATENCY ATTACKS ON NEURAL NETWORKS
A PREPRINT
# Ilia Shumailov University of Cambridge [email protected]
Yiren Zhao University of Cambridge [email protected]
# Daniel Bates University of Cambridge [email protected]
# Nicolas Papernot University of Toronto and Vector Institute [email protected]
# Robert Mullins University of Cambridge [email protected]
# Ross Anderson University of Cambridge [email protected]
December 14, 2021
# ABSTRACT
The high energy costs of neural network training and inference led to the use of acceleration hardware such as GPUs and TPUs. While such devices enable us to train large-scale neural networks in datacenters and deploy them on edge devices, their designersâ focus so far is on average-case performance. In this work, we introduce a novel threat vector against neural networks whose energy consumption or decision latency are critical. We show how adversaries can exploit carefully-crafted sponge examples, which are inputs designed to maximise energy consumption and latency, to drive machine learning (ML) systems towards their worst-case performance. Sponge examples are, to our knowledge, the ï¬rst denial-of-service attack against the ML components of such systems. We mount two variants of our sponge attack on a wide range of state-of-the-art neural network models, and ï¬nd that language models are surprisingly vulnerable. Sponge examples frequently increase both latency and energy consumption of these models by a factor of 30Ã. Extensive experiments show that our new attack is effective across different hardware platforms (CPU, GPU and an ASIC simulator) on a wide range of different language tasks. On vision tasks, we show that sponge examples can be produced and a latency degradation observed, but the effect is less pronounced. To demonstrate the effectiveness of sponge examples in the real world, we mount an attack against Microsoft Azureâs translator and show an increase of response time from 1ms to 6s (6000Ã). We conclude by proposing a defense strategy: shifting the analysis of energy consumption in hardware from an average-case to a worst-case perspective.
Keywords availability attacks · adversarial machine learning · adversarial examples · sponge examples · latency attacks · denial of service
# Introduction
The wide adoption of machine learning has led to serious study of its security vulnerabilities. Threat vectors such as adversarial examples [1, 2], data poisoning [3, 4], membership inference [5â7] and fault injection attacks [8] have been extensively explored. These attacks either target the conï¬dentiality or integrity of machine learning systems [9, 10]. So what about the third leg of the security triad: their availability? In this paper, we introduce an attack that increases the power drawn by neural networks and the time they take to make decisions. An adversary may mount our attack
# A PREPRINT - DECEMBER 14, 2021
on a datacenter providing ML-as-a-Service to cause disruption, i.e. denial-of-service [11]. Increasing the energy consumption of edge devices such as smartphones can drain their batteries and make them unavailable [12]. Perhaps even more seriously, an attack that slows down decisions can subvert safety-critical or mission-critical systems.
Our key observation is that different inputs of the same size can cause a deep neural network (DNN) to use very different amounts of time and energy: this energy-latency gap is the vulnerability we exploit. The gap exists because of speciï¬c optimisations in hardware (e.g. leveraging input sparsity) and algorithms (e.g. a variable number of passes through the network for an input of the same size).
Our attack can be even more effective against the growing number of systems that use GPUs or custom hardware. Machine learning in general, and neural networks in particular, command workloads heavy in matrix algebra. GPUs were fundamental to the AlexNet breakthrough in 2012 [13]; in response to increasing demand, Google introduced TPUs to facilitate inference â and training â in its datacenters [14], while Apple introduced the Neural Engine to make its smartphones more energy-efï¬cient for on-device deep learning [15]. Hardware engineers explicitly target the Operations per Watt (OPs/W) performance of DNN processing. But by increasing complexity, optimisations tend to increase the attack surface. There is ample precedent elsewhere in computer engineering for optimisations widening the gap between average-case and worst-case performance in ways that a capable attacker can exploit: a recent example is Spectre [16], which exploits hardware speculation to launch a powerful timing side-channel attack. Security engineers therefore need to pay close attention to worst-case performance. In this paper, we start this process for the optimisations used to speed up modern machine learning in both hardware and algorithms.
Sponge examples are designed to soak up energy consumed by a given neural network, forcing the underlying hardware system running DNN inference towards its worst-case performance. We present two ways of generating sponge examples, one gradient-based and one using genetic algorithms. The gradient-based approach requires access to DNN model parameters, while the genetic algorithm only sends queries to the model and evolves inputs based on energy or latency measurements. These two attack methods cover both White-box and Black-box attacks, and our extensive experiments demonstrate the effectiveness of sponge examples under both scenarios. When considering modern ML-as-a-Service in a Black-box setting, our genetic algorithm successfully produces sponge examples that consistently increase the service response time and thus the energy consumption of the remote server.
In this paper we make the following contributions:
⢠We introduce a novel threat against the availability of ML systems based on energy and latency. Our sponge examples are designed to cause inference to take as long as possible and consume as much energy as possible.
⢠We show that sponge examples cause increased energy consumption and longer run-time for a wide range of vision and language models. Sponge examples are particularly powerful against language models.
⢠We demonstrate the portability of sponge examples, by showing they are not only transferable across hardware platforms (CPUs, GPUs, and an ASIC simulator) but also across model architectures.
⢠We show that modern ML-as-a-Service is vulnerable to sponge attacks. With a 50-character input to Microsoft Azure translator, we can increase the latency from 1ms up to 6s: a 6000à degradation.
⢠We present a simple defense against sponge examples in the form of a worst-case performance bound for some models. This can also prevent unexpected increases in energy consumption in the absence of adversaries, potentially reducing the carbon footprint of models deployed for inference at scale.
# 2 Motivation
Artiï¬cial Intelligence and Machine Learning have enabled real progress in automation, unleashing greater productivity and potential economic growth. Yet modern machine learning has become extremely power-hungry. It is estimated that the energy consumed in training a single transformer model is equivalent to 60% of a carâs lifetime carbon emissions [17]. And the energy cost doesnât stop there â each inference consumes signiï¬cant energy and happens ever more often1. Energy is the lifeblood of modern machine learning, as energy-efï¬cient inference and training are needed to scale machine learning to more use cases. In this paper we explore the adversarial manipulation of energy and latency in ML models, services and applications.
Modern hardware exploits many different optimisation techniques to maintain a high ratio of useful work to energy consumed. This often involves predicting future workloads and scheduling resources according to dynamic needs. Prediction and speculation occur at a number of levels in the stack, widening the gap between average-case and worst-case scenarios. This is particularly an issue in time or energy sensitive tasks, such as time series forecasting
1For example, OpenAI greatly limits number of queries one can do to the GPT3 model.
2
# A PREPRINT - DECEMBER 14, 2021
for automatic trading [18] and activity recognition on wearable devices [19]. In such applications, hitting worst-case performance could cause failures in decision making or deplete the batteries of user devices. In safety-critical and real-time systems, such as autonomous vehicles which depend on scene understanding with tight latency constraints, service-denial attacks can pose a threat to life.
In this paper, we show that a capable attacker can exploit the performance dependency on hardware and model optimisations to launch a number of different attacks. We ï¬nd that they can negate the effects of hardware optimisations, increase computation latency, increase hardware temperature and massively increase the amount of energy consumed. To make things worse, they can do this with very few assumptions, making attacks scalable in the real world. We further highlight the realism of sponge examples in a case study with Microsoft Azure translator, where we degraded latency up to a factor of 6000Ã.
On a number of occasions, despite the hardware protection provided by GPU engineers, we were able to increase temperature so that it passed the throttling point and sometimes even crashed the GPU drivers. The energy consumed by sponge examples on a machine learning model can therefore affect the underlying hardware if its power management software or hardware is not designed with adversaries in mind.
# 3 Background
# 3.1 Hardware Acceleration for Deep Learning
Deep Neural Network (DNN) inference is both compute-intensive and memory-intensive. Common hardware products such as CPUs and GPUs are now being adapted to this workload, and provide features for accelerating it. Intelâs Knights Mill CPU provides a set of SIMD instructions [20], while NVIDIAâs Volta GPU introduces Tensor Cores to facilitate the low-precision multiplications that underpin much of deep learning [21].
Hardware dedicated to deep learning is now pervasive in data centers, with examples including Big Basin at Facebook [22], BrainWave at Microsoft [23], and racks of TPUs at Google [14, 24]; the underlying hardware on these systems are either commodity hardware (Big Basin), re-conï¬gurable hardware (FPGAs for BrainWave), or custom silicon (TPUs). The latter two are speciï¬cally designed to improve the number of Operations per Watt of DNN inference. Careful modeling of average hardware efï¬ciency allows ML-as-a-Service providers to price their services per query, rather then per energy used. As we discuss later, custom and semi-custom hardware will typically exploit sparsity in data and the adequacy of low-precision computations for DNN inference, reducing both arithmetic complexity and the amount of DRAM trafï¬c, to achieve signiï¬cantly better power efï¬ciency [25â27]. Our attack targets these optimisations among others.
# 3.2 Attacks on Energy
Operations per Watt are an important indicator of the efï¬ciency of cloud infrastructure [28]. Power oversubscription is a popular method for cloud services to handle provisioning, but it leaves datacenters vulnerable to power attacks [29â32]. If malicious users can remotely generate power spikes on multiple hosts in the data center at the same time, they might overload the system and cause disruption of service [11, 29]. Energy attacks against mobile devices usually aim to drain the battery more quickly [12, 33], although energy management in mobile devices can also be used to perform deterministic fault injection [34]. The possible victims of energy attacks on mobile systems range from autonomous vehicles to sensors with constrained computing abilities [35]. Higher energy consumption also increases hardware temperature, which in turn increases the failure rate. For example, Anderson et al. note that an increase of 15â¦C causes component failure rates to go up by 2à [36]. Modern hardware throttles to avoid overheating; while short-term power savings may be possible through such voltage scaling, the overall energy consumption increases [37]. This creates nonlinear dependencies between energy and latency.
# 3.3 Security of Machine Learning
Machine learning has been shown to be vulnerable to a number of different attack vectors [38]. Adversarial examples can cause a system to classify inputs incorrectly [1, 2]. Adversarial examples can be found in the White-box setting through gradient-based optimization [1, 2] while in the Black-box setting, the adversary can transfer adversarial examples from another model [39] or approximate gradients with ï¬nite differences [40] when they can observe the modelâs conï¬dence as well as the output label. Data can be poisoned to manipulate future performance [3, 4]. Run-time bit-errors can be introduced to greatly reduce performance [8]. These attacks either target the conï¬dentiality or integrity of machine learning systems [9, 10].
3
# A PREPRINT - DECEMBER 14, 2021
Here, we explore availability, i.e. timely and reliable access to information [41], and introduce a new form of service denial with samples that act as a sponge for time or energy. Service-denial attacks are well known in the context of computer networking [42, 43], but have been overlooked so far in ML. The current NIST draft on adversarial machine learning touches upon availability, but does not provide any examples of attacks [38].
Poisoning can perhaps be seen as an availability attack. If an attacker can poison data so that the machine learning model stops training or does so with reduced accuracy, this may be seen in some contexts as reducing availability. For example, Erba et al. presented such an attack against Industrial Control Systems [44]. However, the attacks presented in this paper do not poison data, but target either the hardware or the algorithmic complexity of the model.
# 4 Methodology
# 4.1 Threat Model
In this paper we assume an adversary with the ability to supply an input sample to a target system, which then processes the sample using a single CPU, GPU or ASIC. We assume no rate limiting, apart from on-device dynamic power control or thermal throttling.2 We assume no physical access to the systems i.e. an attacker cannot reprogram the hardware or change the conï¬guration.
We consider three threat models. The ï¬rst is a White-box setup: we assume the attackers know the model architecture and parameters. The second considers an interactive Black-box threat: we assume attackers have no knowledge of the architecture and parameters, but are able to query the target as many times as they want and to time operations or measure energy consumption remotely. The third is the blind adversary: we assume no knowledge of the target architecture and parameters, and assume no ability to take direct measurements. In this setting, the adversary has to transfer previously-discovered sponge examples directly to a new target â without prior interaction.
Our adversary models loosely capture ML-as-a-Service deployments and on-device data processing. A simple example could be a dialogue or a translation system. Users interact continuously by sending queries and can measure energy consumption, or when that is not possible by the response time (see Section 5). Indeed, in Section 5.5 we show on an example of a Microsoft Azure translator that modern ML-as-a-Service is vulnerable to sponge attacks which only rely on the adversaryâs ability to observe response latencyâeven in presence of networking delay.
# 4.2 The Energy Gap
The Energy Gap is the performance gap between average-case and worst-case performance, and is the target for our sponge attacks. To better understand the cause of this gap, we tested three hardware platforms: a CPU, a GPU and an ASIC simulator. The amount of energy consumed by one inference pass (i.e. a forward pass in a neural network) depends primarily on [45]:
the overall number of arithmetic operations required to process the inputs; and ⢠the number of memory accesses e.g. to the GPU DRAM.
The intriguing question now is:
is there a signiï¬cant gap in energy consumption for different model inputs of the same dimension?
As well as ï¬xing the dimension of inputs, i.e. not increasing the number of characters in a text sample or the pixel dimension of an image, we also do not consider inputs that would exceed the pre-deï¬ned numerical range of each input dimension. If models do have a large energy gap between different inputs, we describe two hypotheses that we think attackers can exploit to create sponge examples, that is, inputs trigger the worst-case performance and have abnormally high energy consumption.
# 4.2.1 Hypothesis 1: Computation Dimensions
Aside from data sparsity, modern neural networks also have a computational dimension. Along with variable input and output shapes, the internal representation size often changes as well â for example, in the Transformer-based architectures for machine translation [46]. The model is autoregressive in this case; both the input and output are sequences of words and internal computation depends on both of them. Before text gets to the model it has to go through
2Thermal throttling refers here to the deliberate slow-down of device performance when cooling is no longer able to dissipate the heat generated by a workload.
4
A PREPRINT - DECEMBER 14, 2021
a number of stages. First, individual components are separated within the sentence, removing punctuation and keeping useful words. Next, each word is represented as a number of tokens whose shape depends on the richness of input and output dictionaries. Because we cannot represent words mathematically, we need to map them to some numerical form. Yet we cannot build a mapping with all possible words, because that greatly increases model complexity, so in practice dictionaries with most-popular sub-words are used. Once tokenized, individual tokens are then projected into the embedding space (e.g. word2vec [47]), a high-dimensional space where knowledge about individual tokens is encoded. As computation progresses, each inference step depends on the embeddings of all of input tokens and output tokens produced so far. For example, imagine encoding the word âAthazagoraphobiaâ. With commonly used English dictionaries, it will get assigned 4 tokens for its input size of 16: âathâ, âazâ, âagorâ, âaphobiaâ. If a user makes a typing mistake, say âAthazagoraphpbiaâ, then suddenly its representation turns into 7 tokens for the same size of 16: âathâ, âazâ, âagorâ, âaphâ, âpâ, âbiâ, âaâ. An adversary can exploit it and construct large token representations. For example, âA/h/z/g/r/p/p/i/â will be 16 separate tokens. Ultimately, unknown words both in the input and output spaces will lead to a much larger sentence representation and many more inference runs.
Consider an input sequence x and an output sequence y. We denote the input and output token sizes (i.e. the number of individual tokens extracted from an input sentence and produced for the output sentence) with ltin and ltout. Each of the words in a sequence is embedded in a space of dimensionality lein, for the input, and leout, for the output. Algorithm 1 contains the pseudocode for a Transformerâs principal steps. In red, we annotate the computational complexity of the following instruction. As can be seen, several quantities can be manipulated by an adversary to increase the algorithmâs run time: 1) token size of the input sentence ltin; 2) token size of the output sentence ltout; and 3) size of the input and output embedding spaces (lein and leout). All of the above can cause a non-linear increase in algorithmic complexity and thus heavily increase the amount of energy consumed. Note that perturbing these quantities does not require that the adversary modify the dimension of input sequence x; that is, with no changes to the input length, the adversary can increase energy consumption non-linearly.
Algorithm 1: Translation Transformer NLP pipeline Input: Text sentence x Result: y â O(ltin)
1 xtin = Tokenize(x); 2 ytouts = â
; â O(lein) 3 xein = Encode (xtin); â O(ltin à lein à ltout à leout) 4 while ytout has no end of sentence token do 5 6 7 8 9 end â O(leout) yeout = Encode (ytout); â O(lein à leout) yeout = model.Inference(xein, yeout, ytouts); â O(leout); ytout = Decode(yeout); ytouts.add(ytout); â O(ltout); 10 y = Detokenize(ytouts)
# 4.2.2 Hypothesis 2: Data Sparsity
The rectified linear unit (ReLU), which computes z ++ max(0, 2), is the de facto choice of activation function in neural network architectures. This design introduces sparsity in the activations of hidden layers when the weighted sum of inputs to a neuron is negative. A large number of ASIC neural network accelerators consequently exploit runtime data sparsity to increase efficiency [48-50]. For instance, ASIC accelerators may employ zero-skipping multiplications or encode DRAM traffic to reduce the off-chip bandwidth requirement. The latest Xilinx AI compiler provides optimisations [51] for automatically deploying sparse models to their FPGA devices, promoting the use of model sparsity in production systems. On the algorithmic level, there is a recent surge of interest in using dynamic coarse-grained sparsity for accelerating GPU inference [52,53]. Hence, inputs that lead to less sparse activations will increase the number of operations and the number of memory accesses, and thus energy consumption.
5
# A PREPRINT - DECEMBER 14, 2021
# 4.3 The Laws of Physics
Before getting to the details of the attack, we need to understand what affects energy consumption and what the attacker can reliably inï¬uence. Energy E is the total consumed static power Pstatic and dynamic power Pdynamic for an interval of time t. This energy formulation can be analysed in more detail; we show how to do this in Appendix B.
E =(Petatic + Paynamic) xt overheat or increase overall consumption at (IDE x (eB) x =([S° 1. x (e ) x Veore] (1) throttle or exploit load predictor 2 -ââ + Lal xC x Vie x f J) x 4) more activity of the board run for longer or exploit the predictor
The salient elements from Equation (1) are that an attacker can affect energy use through four parameters: T (tem- perature), α (activity ratio), f (frequency) and t (time). Our sponge examples directly exploit the activity ratio α and execution time t, since these two parameters are tightly linked to the number of operations and memory accesses performed by model inference. Although frequency f and temperature T will be inï¬uenced indirectly through optimisa- tions performed by the underlying hardware, these are not our direct targets. We hypothesise these parameters (f and T ) can also be exploited to create hardware level availability attacks on ML systems, e.g. forced throttling or heating of devices, but they are beyond the scope of this paper.
# 4.4 Attack Methods and Setups
Having presented the intuition behind our attacks, we now introduce strategies for ï¬nding sponge examples correspond- ing to the threat models described in Section 4.1.
# 4.4.1 Genetic Algorithms in White-box and Black-box Settings
Genetic algorithms (GA) are a powerful tool for adversaries [54]. They can optimise a diverse set of objectives, and require no local gradient information. They are a particularly good ï¬t for adversaries who only have access to the modelâs prediction in a Black-box setting. The general pipeline of a GA is presented in Algorithm 2 in Appendix. We start with a pool of randomly generated samples S. These are images for computer vision models, or sentences for NLP tasks. We then iteratively evolve the population pool as is depicted in Figure 1.
⢠For computer vision tasks, we sample two parents A and B from the population pool, and crossover the inputs using a random mask A â mask + (1 â mask) â B.
⢠For NLP tasks, we sample two parents A and B, and crossover by concatenating the left part of parent A with the right part of parent B. We then probabilistically invert the two parts.
We explain the reasons for these choices in Appendix C. Next, we randomly mutate (i.e. perturb) a proportion of the input features (i.e. pixels in vision, words in NLP) of the children. To maintain enough diversity in the pool, where applicable we preserve the best per-class samples in the pool. We obtain a ï¬tness score P for all pool members, namely their energy consumption. We then select the winning top 10% of samples ËS,3, and use them as parents for the next iteration. This genetic algorithm is simple but effective in ï¬nding sponge examples. Parameter choice is explained in Appendix A. In Appendix C, we further explain the domain-speciï¬c optimisations of the GA algorithm on NLP and CV tasks for achieving a better attack performance.
Although following the same algorithm described above, we form two variants of GA for Black-box and White-box attacks respectively, each differing the way we measure ï¬tness:
⢠White-box GA: We access the parameters of the neural networks and provide an estimated energy cost based on the run-time sparsity, i.e. number of operations based on the structure and parameters of the neural networks.
⢠Black-box GA: We do not access any of the neural network internals, and use purely the measured hardware cost as the ï¬tness, i.e. latency or energy consumption.
3As the sample pool is large, selecting the top 10% makes the process more tractable.
6
A PREPRINT - DECEMBER 14, 2021
Interactive Sponge construction Evolve a pool of best Measure energy or sponges over time latency of a response ° vo oN Overheating underlying hardware Overconsuming energy Random mutation availa bility Combine randomly avail tation <â< | exploi tation avail nation
Figure 1: Availability adversary constructs sponge examples using a genetic algorithm. The adversary tries samples against the model and measures either latency or energy consumed, mixing the best performing samples in the pool. Eventually, the attacker identiï¬es potent sponge examples.
# 4.4.2 L-BFGS in the White-box Setting
We now consider an adversary with access to the modelâs parameters. Rather than a genetic algorithm, we use L-BFGS [55] to optimise the following objective:
- SS lla, (2) alecA
# alâA
where A is the set of all activation values and al the activations of layer l. This generates inputs that increase activation values of the model across all of the layers simultaneously. Following Objective 1 outlined above, the increase in density prevents hardware from skipping some of the operations, which in turn increases energy consumption. We only evaluate the performance of sponge examples found by L-BFGS on computer vision tasks because of the discrete nature of the NLP tasks, which prevents differentiating the objective in Equation (2)4.
# 4.4.3 Cross-model and Cross-hardware Transferability for Blind Adversaries
When adversaries are unable to query the model, they cannot directly solve an optimisation problem to ï¬nd sponge examples, even using the interactive Black-box approach, i.e. the GA. In this blind-adversary setting, we exploit transferability across both models and hardware. Indeed, in Section 5.4 and Section 6.3 in the Appendix, we show that sponge examples transfer across models. We examine three hardware platforms in our evaluation:
⢠CPU: The platform is an Intel(R) Xeon(R) CPU E5-2620 v4 with 2.10GHz clock frequency. We use the Running Average Power Limit (RAPL) to measure energy consumption of the CPU. RAPL has been thoroughly evaluated and found to reï¬ect actual energy consumption, as long as the counters are not sampled too quickly [56, 57].
4It is worth noting that for NLP tasks given knowledge of the dictionary an attacker can design the worst possible input and output token sequences. In this paper we make no assumptions about the dictionary or the model deployed, instead we optimise directly over energy or time.
7
# A PREPRINT - DECEMBER 14, 2021
⢠GPU: We use a GeForce 1080 Ti GPU with a 250.0 Watts power limit, a 96â¦C slowdown temperature and a 84â¦C throttling temperature. We use the NVIDIA Management Library (NVML) to measure energy consumption. NVML was previously found to capture energy quite accurately, with occasional instability for high-low patterns and high sampling rates [58].
⢠ASIC: We also developed a deterministic ASIC simulator, which monitors and records the runtime operations and number of DRAM accesses assuming a conservative memory ï¬ushing strategy. We then use measurements by Horowitz to approximate energy consumption [45]: at 45nm technology and 0.9V, we assume 1950 pJ to access a 32 bit value in DRAM and 3.7 pJ for a ï¬oating-point multiplication.
We show in Section 5.4 that sponge examples transfer across these types of hardware.
# 5 Sponge Examples on Language Models
# 5.1 Models and Datasets
We ï¬rst evaluate our sponge example attack on a range of NLP models provided by the FairSeq framework [59]. The models we consider have achieved top performance at their respective tasks and are used heavily in the real world to analyse data at scale. We report the performance of the RoBERTa [60] model, an optimised BERT [61], on three GLUE benchmarks designed to assess language understanding [62]. The datasets we considered include tasks in the SuperGLUE benchmark plus a number of machine-translation tasks. The SuperGLUE benchmark follows the style of GLUE but includes a wider range of language-understanding tasks including question answering and conference resolution [62, 63]. Further, we evaluate the attack on a number of translation tasks (WMT) using Transformer-based models [64â66]. Both translation and language comprehension are fundamental to human society and form a bridge between computers and humans. They are built into virtual assistants and many other applications in the real world that are used on a day-to-day basis.
Consider the pipeline for handling text. Before getting to the models, the text goes through several preprocessing steps. First, words get tokenized in a manner meaningful for the language. We used the tokenizer from the Moses toolkit [67], which separates punctuation from words and normalises characters. Next, tokenized blocks get encoded. Until recently, unknown words were simply replaced with an unknown token. Modern encoders improve performance by exploiting the idea that many words are a combination of other words. BPE is a popular approach that breaks unknown words into subwords it knows and uses those as individual tokens [68]. In that way, known sentences get encoded very efï¬ciently, mapping every word to a single token, and the number of computations is greatly reduced.
# 5.2 White-box Sponge Examples
In this section, we look at the White-box GA attack (as explained in Section 4.4.1) for generating sponge examples. In this setup, we access to the parameters and run-time information of the neural networks. The GA optimisation relies on the estimated number of operations of the neural network inference.
Table 1 shows the energy consumption of different models in the presence of our generated White-box sponge examples. For different input sequence sizes and a wide range of NLP tasks, we show the energy costs of sponge examples on both GPUs (GPU Energy) and the ASIC simulator (ASIC Energy). We use natural, random and sponge to represent the energy measured on data from the evaluation dataset, randomly formed strings and sponge examples. In addition, we also report the latency of running these samples on GPUs. Due to the limitation of the ASIC simulator, we cannot have faithful time measurements and these numbers are not reported.
We have made several important observations:
⢠The energy cost of sponge examples is always the highest on both GPUs and ASICs. In the best-case scenario for the attacker, sponge examples increase energy consumption by 26Ã.
Randomly generated samples are more energy-consuming than natural samples. ⢠When the task is quick to execute, sponge examples do not show big performance degradation in terms of
GPU Time, but they increase latency signiï¬cantly when the task takes more time (up to 30Ã).
The main reason for performance degradation appears to be the increased dimension of the computation, as described in Algorithm 1. First, for a given input sequence size, the attack maximises the size of the post-tokenisation representation (xtin), exploiting the tokeniser and sub-word processing. Words with which the model is less familiar are represented inefï¬ciently, forcing the network to do more work. Imagine holding an email conversation with an academic from another ï¬eld, who uses speciï¬c technical terms. Every time an unfamiliar term appears you have to look it up in a
8
A PREPRINT - DECEMBER 14, 2021
Input size GPU Energy [mJ] Natural Random Sponge ASIC Energy [mJ] Natural Random Sponge GPU Time [mS] Natural Random Sponge SuperGLUE Benchmark with [60] CoLA 15 30 50 2865.68 3023.705 3170.38 1.00Ã 1.06Ã 1.11Ã 3299.07 4204.121 4228.22 1.00Ã 1.27Ã 1.28Ã 3384.62 6310.504 6988.57 2.06Ã 1.00Ã 1.86Ã 583.56 504.93 566.58 1.00Ã 1.12Ã 1.16Ã 508.73 634.24 669.20 1.00Ã 1.25Ã 1.32Ã 780.57 511.43 724.48 1.00Ã 1.42Ã 1.53Ã 0.02 0.02 0.02 1.00Ã 0.92Ã 0.92Ã 0.02 0.03 0.03 1.00Ã 0.93Ã 0.82Ã 0.04 0.04 0.03 1.00Ã 1.23Ã 1.27Ã MNLI 15 30 50 3597.3 3203.01 3573.93 1.12Ã 1.00Ã 1.12Ã 3330.22 4752.84 5045.25 1.00Ã 1.43Ã 1.51Ã 3269.34 6373.507 7051.68 2.16Ã 1.00Ã 1.95Ã 509.19 570.10 586.43 1.00Ã 1.12Ã 1.15Ã 514.00 638.78 672.07 1.00Ã 1.24Ã 1.31Ã 519.51 728.82 783.18 1.00Ã 1.40Ã 1.51Ã 0.03 0.03 0.03 1.00Ã 1.01Ã 0.95Ã 0.03 0.03 0.03 1.00Ã 1.06Ã 1.03Ã 0.04 0.03 0.04 1.00Ã 1.28Ã 1.30Ã WSC 15 30 50 4287.24 13485.4938106.98 1.00Ã 3.15Ã 8.89Ã 4945.47 36984.4479786.57 1.00Ã 7.48Ã 16.13Ã 6002.68 81017.01159925.23 1.00Ã 13.50Ã 26.64Ã 0.20 0.46 0.93 0.46
WMT14/16 with [64]
15 15 9492.30 25772.8940975.78 4.32Ã 1.00Ã 2.72Ã 8573.59 13293.51238677.16 1.00Ã 1.55Ã 27.84Ã 0.37 2.09
1793.84 4961.56 8494.36 1.00Ã 2.77Ã 4.74Ã 1571.59 2476.18 48446.29 1.00Ã 1.58Ã 30.83Ã 1.00Ã 1.46Ã 24.18Ã
# WMT18 with [65]
# EnâDe
15
28393.9738493.96874862.97 1.00Ã 1.36Ã 30.81Ã
| |
1624.05 2318.50 49617.68 1.00Ã 1.43Ã 30.55Ã 1.00Ã 1.20Ã 26.49Ã
0.27
0.33
7.25
# WMT19 with [69]
# EnâRu
15
33181.4391513.13876941.24 1.00Ã 2.76Ã 26.43Ã
| |
1897.19 5380.20 47931.11 1.00Ã 2.84Ã 25.26Ã 1.00Ã 2.46Ã 22.85Ã
0.31
0.77
7.19
Table 1: Energy is reported in milli joules. We use the White-box GA attack to produce sponge examples and measure the performance on different platforms. The GPU readings are from NVML. GA was run for 1000 epochs with a pool size of 1000. A detailed explanation of the results is in Section 5.2. Standard deviation for ASIC measurements are shown in Table 5.
dictionary or search engine. Second, the attack learns to maximise output sequence length, since this links directly to the computation cost. Third, internal computation coupled with output sequence length and post-tokenisation length give a quadratic increase in energy consumption. These reasons explain why sponge examples can signiï¬cantly increase both energy and latency of language models. Do note that in this paper we use relatively small input sizes and in
9
# A PREPRINT - DECEMBER 14, 2021
practice the effect will be a lot more pronounced for larger texts. Indeed as we later show in Section 5.5, in a Black-box setup with 50-character long text inputs an attack on Azure Language Translator caused 6000Ã degradation.
Interestingly, we observe that randomly generated samples signiï¬cantly reduce the performance of NLP tasks. This can be attributed to the fact that natural samples are efï¬ciently encoded, whereas random and attack samples produce an unnecessarily long representation, meaning that random noise can be used as a scalable Black-box latency and energy attack tool. Otherwise put, many ML systems are vulnerable to simple barrage jamming.
It is also worth noting that the short execution of inference on GPUs makes it hard to provide an accurate measurement even with iterative runs. We further explain how this measurement is difï¬cult due to a variety of hardware problems in Section 6.2.
In the upcoming sections, we turn to Black-box variants of the attack based on energy and latency measurements of the individual samples. We mentioned previously that modern hardware optimises Operations per Watt (OPs/W), making sure that energy is only actively consumed when useful work is being done. In our experiments we see that the relationship between degradation factors of energy and time ranges between 1.15 and 1.62 (see Table 4 in Appendix), with energy scaling faster5.
# 5.3 Interactive Black-box Sponge Examples
In this section, we show the performance of the attacks running in an interactive Black-box manner against NLP tasks. In this setup, we launch the Black-box GA attack as described in Section 4.4.1. This interactive Black-box setup assumes that attackers cannot access to the neural network parameters but have the abilities to measure the energy or latency remotely. In addition, they can query the service as many times as they like, so there is no rate limiting. We evaluate two Black-box attacks, and they use GPU Time and GPU Energy as the optimisation targets for the GA. We also present results for a White-box GA attack in the third setup as a baseline, which is the same attack used in Section 5.2.
Figure 2 shows sponge example performance against a WMT14 English-to-French Transformer-based translator with an input of size 15 and pool size of 1000. In Figure 2, we use the name GPU Energy Attack, GPU Time Attacker and White-box Attacker to represent these different attacks. In addition, we report measurements of every iteration of the GA for these different attackers on GPU Energy, GPU Time and ASIC Energy respectively. In Figure 2, the legends represent attackers with different measurement proxies; and we show that these interactive Black-box attacks are transferable across hardware platforms and measurement proxies. For instance, an attack targeting GPU Time transfers well when used to increase the energy cost of the ASIC simulator.
It can be seen that although the attackers have no knowledge of any neural network internals or datasets, they are able to successfully increase the energy and time costs. The experiment further highlights the difference between using time and energy as ï¬tness functions. While time is noisy and depends on the current state of the hardware, energy remains relatively stable during the attack. As was explained previously, that can be attributed to the hardware switching its performance modes to keep the ratio of useful work to energy constant.
# 5.4 Blind Black-box Sponge Examples and Hardware Transferability
In this section, we turn to the question of transferability across hardware and different models in a blind Black-box manner. As weâve described in Section 4.1, in the blind Black-box setup the attacker blindly transfers the previously discovered sponge examples to the target. Table 2 shows the results across different models, tasks and hardware platforms. The ï¬rst column is the source task that we used to produce sponge examples. We then later launch these sponge examples to the target tasks shown in the second column. We report the performance of both sponge and natural examples on the targeting task in Table 2. Since the ASIC simulator is coarse-grained, it does not produce faithful execution time estimation, we thus only report the estimated energy cost. In general, in Table 2, we observe a signiï¬cant increase in energy and time in comparison to natural samples on all hardware platforms. However, the blind Black-box attacks fail to achieve the same level of energy or latency degradation when taking the White-box case as a baseline.
# 5.5 A Case Study: Microsoft Azure Translation
We evaluated sponge attack performance against an actually deployed service that is available on demand. We present a Black-box attack against this production system without any assumptions about its internals. As with the experiment setup in Section 5.3, we interactively evolve the pool of samples in a Black-box setting using latency as a ï¬tness
5Interestingly, we observe a net energy increase for the task if throttling happens. Although throttling decreases the running frequency and the voltage, it signiï¬cantly increases the execution time so that the overall energy consumption has increased.
10
A PREPRINT - DECEMBER 14, 2021
ASIC GPU CPU From To WMT14enâf r [64] Sponge Natural 3648.219 1450.403 2.52Ã 0.174 0.053 3.27Ã 17251.000 6146.550 2.81Ã 1.048 0.537 1.95Ã WMT18enâde [65] Sponge Natural 2909.245 1507.364 1.93Ã 0.414 0.253 1.64Ã 47723.500 27265.250 1.75Ã 3.199 1.344 2.38Ã WMT19enâru [66] Sponge Natural 3875.365 1654.965 2.34Ã 0.652 0.215 3.03Ã 67183.100 25033.620 2.68Ã 4.409 2.193 2.01Ã Sponge Natural 48447.093 1360.118 35.62Ã 2.414 0.056 42.98Ã 260187.900 6355.620 40.94Ã 13.615 0.520 26.20Ã
# Energy [mJ] Time [S] Energy [mJ] Time [S] Energy [mJ]
# Black-box
51512.966 23610.145 2.18Ã
181936.595 71714.201 2.54Ã
# WMT16enâde [64]
247585.091 121210.376 2.04Ã
# White-box
781758.680 23262.311 33.61Ã
# WMT16enâde [64] WMT16enâde [64]
Table 2: Energy values are reported in milli Joules and time is reported in seconds. GA was run for 100 epochs with a pool size of 1000. More results are available in Appendix. The ï¬rst column shows source task that we generate sponge examples, and the second column shows the target task to launch these sponge examples. The performance of the sponge examples are evaluated on three hardware platforms (ASIC, GPU and CPU).
(a) GPU Energy (b) GPU Time (c) ASIC Energy
Figure 2: Black-box attack performance of sponge examples on different hardware metrics against English-to-French translation model [65]. We show two Black-box attackers (GPU Energy and GPU Time attacker) and one White-box attacker, all using GA as the optimisation for ï¬nding sponge examples.
function. Note that, in this case, observed sample ï¬tness is noisy as it includes communication latency. That in turn makes it harder to perform the attack.
We used the Microsoft Azure Translation system located on the same continent as the requesting server. We ï¬xed the input to be 50 characters long, with a pool size of 500, and ran the attack for 50 epochs. We report four different attack runs, each running immediately when the previous attack ï¬nishes. The attacks are run sequentially, so Azure only translates a single sample at a time. It should be noted that we hold no assumptions about the actual system and do not possess any information about what architecture or dataset is used. Furthermore, we possess no information on whether Azure employs query-caching strategies or other optimisation techniques.
Figure 3 shows the performance of the attack as observed by the requesting server and reported by Azure. The serverâs reported numbers are larger as they include additional noise from communication latency. It can be clearly seen that
11
A PREPRINT - DECEMBER 14, 2021
(a) Requesting server measured (b) Azure reported
Figure 3: Maximum latency of the Microsoft Azure Translator model as is observed on the requesting server (a); reported by Azure servers (b). Azure servers were located on the same continent as the requesting server. Natural data mean baseline is at 1ms. We report multiple attack runs to show that the attack performs consistently with multiple restarts and the performance is not speciï¬c to the throttling of the user account.
all four separate runs of GA were capable of converging to samples that were consuming considerably more time to process â up to a maximum degradation factor of 6000Ã. Although we have no way of telling the amount of energy that was consumed by the computation, results from Section 5 suggest that the energy consumption increase should also be in the range of thousands. Interestingly, we ï¬nd that performance varies greatly within the pool during the attack. We suspect this is due to Azureâs caching mechanism, where previously performing samples get almost constant time computation and the pool has to adapt quickly. For all individual runs, we see an upâdown pattern that we do not observe with experiments on our own hardware. Interestingly, Azure translator assigns high conï¬dence scores > 0.9 to the sponge example predictions.
Sponge examples against translators strongly resemble Denial-of-Service (DoS) attacks observed in the wild. DoS attacks aim to make computer systems unresponsive and unavailable via excess connections or data requests. Instead of overwhelming the victimâs bandwidth as in the vast majority of DoS attacks [70], we target the application layer. In particular, we target the most expensive parts of the translator to get an extraordinary ampliï¬cation factor by sending speciï¬cally crafted requests. For example with Azure and an input of length 50, we were getting translated responses spanning thousands of characters. This ï¬nding bridges ML to the ï¬eld of classic computer security and suggests that decades of experience with managing service-denial attacks can be applied here.
Ethics: Having established the similarity of Sponge examples to DoS attacks, it is appropriate to discuss the ethics of the experiments. First, we paid for the translation service and used only legitimate and well-formed API calls. For experiments and testing, we performed around 200k queries. Second, to minimise the impact of sponge examples on Azure and CO2 production, we chose relatively small input and pool sizes. Although the maximum input size that Azure accepts is 10000 characters per request, we used only 50. We expect that the impact of sponges can be further increased by running GA with a larger input size [71]. Third, we ran the experiment at night in the data-center timezone, when it is easier to cool the servers and energy costs are lower. Fourth, to minimise the interaction of sponges between each other we executed a single sample at a time. Finally, we followed our standard responsible disclosure process: we notiï¬ed Microsoft of the vulnerability of their Translator to sponge examples and shared a draft of the paper with the Microsoft Azure team. We want to stress that sponges are not speciï¬c to Microsoft Azure, and other ML-as-a-Service providers will be affected by the same attack. Microsoft Azure was chosen because of our experience with the Azure platform. Since the discovery, in a joint effort with Microsoft, sponge examples have been added to the MITRE attack framework for AI security6.
# 5.6 Section summary
In this section, we demonstrate the effectiveness of sponge examples in different attack setups (White-box, Interactive Black-box and Blind Adversary). We consider a set of state-of-the-art sequence learning models, such as BERT [61]
6https://github.com/mitre/advmlthreatmatrix
12
# A PREPRINT - DECEMBER 14, 2021
and RoBERTa [60]; and a wide variety of tasks. The performance of sponge examples are task-dependent and also model-dependent, however, all of the evaluated models and tasks show signiï¬cant latency and energy increase when they are under attack. In addition, we demonstrate the transferability of sponge examples not only across hardware platforms but also across model architectures. Finally, we demonstrated how sponge examples can be generated in a Black-box manner on existing ML-as-a-service platforms, greatly increasing the response latency.
# Energy ASIC Energy [mJ] Energy ratio
# Density Post-ReLU Overall Maximum
ImageNet ResNet-18 Sponge LBFGS Sponge Natural Random 53.359 ± 0.004 51.816 ± 0.271 51.745 ± 0.506 49.685 ± 0.008 0.899 0.873 0.871 0.837 0.685 0.599 0.596 0.480 0.896 0.869 0.869 0.834 ResNet-50 Sponge LBFGS Sponge Natural Random 164.727 ± 0.062 160.887 ± 0.609 160.573 ± 1.399 155.819 ± 0.016 0.863 0.843 0.842 0.817 0.619 0.562 0.572 0.483 0.885 0.868 0.867 0.845 ResNet-101 Sponge LBFGS Sponge Natural Random 258.526 ± 0.028 254.182 ± 0.561 253.004 ± 1.345 249.026 ± 0.036 0.857 0.842 0.839 0.825 0.597 0.556 0.545 0.507 0.873 0.861 0.857 0.846 DenseNet-121 Sponge LBFGS Sponge Natural Random 152.595 ± 0.050 149.564 ± 0.502 147.247 ± 1.199 144.366 ± 0.036 0.783 0.767 0.755 0.741 0.571 0.540 0.523 0.487 0.826 0.814 0.804 0.792 DenseNet-161 Sponge LBFGS Sponge Natural Random 288.427 ± 0.087 287.153 ± 0.575 282.296 ± 2.237 279.270 ± 0.065 0.726 0.723 0.711 0.703 0.435 0.429 0.404 0.387 0.764 0.761 0.751 0.744 DenseNet-201 Sponge LBFGS Sponge Natural Random 237.745 ± 0.156 239.845 ± 0.522 234.886 ± 1.708 233.699 ± 0.098 0.756 0.763 0.747 0.743 0.505 0.519 0.487 0.479 0.788 0.794 0.781 0.777 MobileNet v2
Table 3: We report the performance of two White-box attacks, Sponge and Sponge LBFGS against a number of computer vision benchmarks. They are optimised using the the GA and LBFGS respectively for ï¬nding sponge examples. We show the energy readings from the ASIC simulator and the Energy ratio. The Energy ratio is a ratio between the estimated energy of an ASIC optimised for sparse matrix multiplication and an ASIC without such optimisations. To further illustrate the internals of neural networks, we show data densities that are post-ReLU, across the entire neural network, and also the maximum possible density calculated using interval bound propagation (IBP). Details are described in Section 6.2.
13
# A PREPRINT - DECEMBER 14, 2021
Transferability of Sponge LBFGSB attacks resnetis 0.02 0.01 0.01 ETE 0.020 Sere 0.01 0.01 0.01 0.01 0.01 Sel uoe 0.015 resnet101 0.01 0.01 0.01 0.01 . oo10 densenet121 0.02 0.01 0.02 Soe ⬠0.005 * densenet161 Oe 0.000 densenet201 googlenet 0.01 0.01 i 0.005 mobilenet_v2 0.010 20% 3% 30 aM sa safe ot em e⢠om on⢠co" 20% 509" er⢠seer pore fens oP Fx To
Transferability of Sponge LBFGSB attacks resnet18 (gy oy) 0.02 0.03 0.02 0.01 0.02 0.06 resnets0 {Que} UW) 0.02 0.03 0.02 0.02 0.01 0.02 0.05 resnet101 (Wes). :0.03 0.03 0.02 0.02 0.01 0.02 densenet121 0.02 (7) 0.02 0.02 0.01 0.02 0.04 ⬠* densenet161 0:02 Rig) 0:02,0:02) 0.01 0:02 0.03 densenet201 {xu 0.03 0.02 0.03 0.02 0.01 0.01 0.02 googlenet 0.02 0.03 0.02 0.02 0.01 0.02 0.02 mobilenet_v2 figugg 0-02 0.01 (0,02 0.01 0.01 0.01 0.01 20% 3% 930 gaP a game oN? em e⢠em ere en& G00 er⢠gener poree pense oP oo To
Transferability of Sponge LBFGSB attacks Transferability of Sponge LBFGSB attacks resnetis 0.02 0.01 0.01 ETE 0.020 resnet18 (gy oy) 0.02 0.03 0.02 0.01 0.02 Sere 0.01 0.01 0.01 0.01 0.01 Sel uoe 0.015 resnets0 {Que} UW) 0.02 0.03 0.02 0.02 0.01 0.02 resnet101 0.01 0.01 0.01 0.01 . oo10 resnet101 (Wes). :0.03 0.03 0.02 0.02 0.01 0.02 densenet121 0.02 0.01 0.02 Soe densenet121 0.02 (7) 0.02 0.02 0.01 0.02 0.005 ⬠densenet161 Oe * densenet161 0:02 Rig) 0:02,0:02) 0.01 0:02 0.000 densenet201 densenet201 {xu 0.03 0.02 0.03 0.02 0.01 0.01 0.02 googlenet 0.01 0.01 i 0.005 googlenet 0.02 0.03 0.02 0.02 0.01 0.02 mobilenet_v2 0.010 mobilenet_v2 figugg 0-02 0.01 (0,02 0.01 0.01 0.01 20% 3% 30 aM sa safe ot 20% 3% 930 gaP a game oN? em e⢠om on⢠co" 20% 509" er⢠em e⢠em ere en& G00 er⢠seer pore fens oP Fx gener poree pense oP oo To To (a) Sponge density - Normal density (b) Sponge density - Random density
(a) Sponge density - Normal density
(b) Sponge density - Random density
Figure 4: Transferability of sponge examples across different computer vision benchmarks.
(a) ResNet family (b) DenseNet family (c) Other networks
Figure 5: Class-wise average densities of natural samples from the ImageNet validation dataset. Some classes are a lot more densely represented internally than others. X-axis shows the class numbers, whereas Y-axis shows densities.
# 6 Sponge Examples on Vision Models
# 6.1 Models and Datasets
We evaluate the sponge example attack on a range of vision models provided in the TorchVision library. We show the performance of ResNet-18, ResNet-50 and ResNet-101 [72], DenseNet-121, DenseNet-161, DenseNet-201 [73], and MobileNet-V2 [74]. Networks span a range of sizes from 3.4M parameters (MobileNet-V2) to 49M (ResNet-101). The considered networks also have a relatively large architectural diversity, where MobileNet-V2 is designed to run on modern battery-powered mobile devices. All of the networks classify a canonical computer vision task â ImageNet-2017, since the ImageNet challenge serves as a golden baseline in the computer vision community.
# 6.2 White-Box Sponge Examples
Following objectives in Section 4.2.1 and Section 4.2.2, we can increase energy consumption by increasing either computation dimension or data density. Although theoretically we can provide larger images to increase the computation dimension for computer vision networks, very few modern networks currently deal with dynamic input or output. Usually preprocessing normalizes variable-sized images to a pre-deï¬ned size by either cropping or scaling. Therefore, for computer vision models, we focus on increasing energy and latency via data density.
Table 3 shows the performance of sponge examples on CV models, and we focused on using White-box attacks to maximise energy consumption. We use both the White-box GA and White-box L-BFGS to generate sponge examples
14
A PREPRINT - DECEMBER 14, 2021
(named sponge and sponge LBFGS in Table 3). Since the energy consumption is lower per inference, it is challenging to get a true measurement of energy given the interference of the GPUâs hardware temperature control, and that energy inspection tools lack the resolution. We then show the ASIC Energy readings and the Energy Ratio in the ï¬rst two columns. The Energy Ratio term refers to the cost on an ASIC with data sparsity optimisations compared to the cost on an ASIC without any optimisations. We considered data sparsity optimisations including compressed DRAM accesses, zero-skipping multiplications. These optimisation techniques are widely adopted in many proposed ASIC accelerators [26, 49, 75], and there are now real implementations of these techniques in hardware. We then further look at the internals of neural networks and show how their data density is changing with different types of samples. We calculate the theoretical upper bounds of data density using Interval Bound Propagation (IBP) [76]. Although originally developed for certiï¬able robustness, we adopt the technique to look at internal network bounds that only take value 0 (i.e. lower bound = upper bound = 0) for the whole natural image range7. We also look at data densities after the ReLU function (Post-ReLU) and the overall densities. The results for density and energy suggest that both attacks can successfully generate sponge examples that are marginally more expensive in terms of energy. To be precise, we were able to get a 1 â 3% increase in energy consumption when compared to natural samples. Interestingly we observe that more of the density impact comes in the ï¬rst few layers. To better understand the difference in performance please refer to Appendix D. We show a statistical analysis across a wide range of CV models and describe the difï¬culties of precisely showing performance on CPUs and GPUs in Appendix D.
For computer vision models, we also ï¬nd that different architectures will have similar class-wise computation densities and sponge examples can increase densities across model architectures.
# 6.3 Transferability of Attacks
We observe that sponge examples are transferable and can be used to launch a blind Black-box attack. Figure 4 shows the density difference of transferred sponge samples. For all networks but one (MobileNet), the sponge samples increased the internal data density despite not having any knowledge of what natural samples look like or any architectural and parameter information of the targeted neural networks. All of the sponge samples outperformed random noise, suggesting that sponge samples target speciï¬c features of the data set and can be applied in a blind Black-box fashion.
# 6.4 Class-Wise Natural Data Density
Figure 5 shows the densities of natural samples from the ImageNet dataset. On the horizontal axis, we show the 1000 classes of ImageNet and the vertical axis displays the run-time data densities for samples in that class. It can be clearly seen that there are per-class similarities between data densities of natural samples. These are particularly pronounced within ResNet and DenseNet architectures hinting that similar architectures will learn similar features so that samples of the same class have similar run-time densities across architectures. Finally, Figure 5.c shows the summed per-class densities across all of the tested networks. There are classes that are consistently more dense than others. This test is helping us to summarise that, in computer vision tasks, there exist natural samples that are producing more computation because of the increased data densities. This intrinsic property suggests that an adversary may send natural samples resulting in higher activation density to drain the energy of targeted devices.
# 6.5 Section summary
In this section, we report the results of sponge examples on computer vision models. We observe that sponge examples can successfully decrease the run-time sparsity of CV models, thus generating marginally more energy-consuming samples for ASICs that utilise data sparsity. The generated hardware differences are too small to be reliably observed on GPUs, however, we show that the GPU energy readings are statistically different between normal and sponge samples in Appendix D.1. The computation of CNN inference is more structured and normally only handles ï¬xed-sized inputs. This structured computation ï¬ow provides fewer opportunities for sponge examples, and only hardware devices utilising ï¬ne-grained sparsity are vulnerable to sponge attacks.
# 7 Discussion
# 7.1 Lessons from Sponge Examples
In this paper, we demonstrated novel service-denial attacks on ML components. In their current form, our attacks work best against NLP models, whose internal complexity makes domain-speciï¬c optimisations necessary. We showed
7Note that we assume full ï¬oating point precision here. In practice, emerging hardware often uses much lower quantization which will result in a lower maximum data density.
15
# A PREPRINT - DECEMBER 14, 2021
that our attacks can also target hardware optimisations, suggesting that the capable attacker will always be capable of exploiting different optimisations across the stack.
Our attacks will have a signiï¬cant impact on how future ML pipelines will be deployed. The integration of different ML components can only lead to higher complexity, which will in turn be even more vulnerable to the sponge attacks described here. They may lead to ML deadlocks or livelocks, of which another precursor may be semi-trained RL agents that forever walk in circles.
The attacks presented in this paper assume that a single sample is processed at a time. This enables simple demonstrations but these are only a starting point. More complex attacks could involve samples that interact with each other. Indeed, in the world of Federated Learning, aggregators that experience delays in presence of network failure are likely to ï¬nd this leads to signiï¬cant increases in overall latency. Coordinated attacks on federated systems are a natural frontier for future research.
Our attacks also used two main optimisation strategies, but others can be tried. In our attack on the Microsoft Azure Translator, it appears that a caching mechanism was making previously potent samples perform poorly in subsequent runs, but we still managed, using a genetic algorithm, to ï¬nd powerful sponge examples.
Our attacks were used offensively in this paper, but such examples should also be used routinely in safety testing. Worst-case examples may also turn up by happenstance, so it is prudent to use adversarial techniques to ï¬nd them in advance. Furthermore, our methodology can be used to automatically discover timing side-channels and other latent dependencies between interacting components in both ML and traditional processing pipelines.
Finally, sponge examples show that commonly deployed API rate limiting defences are not enough to protect the availability of the underlying machine learning system. Indeed, the attacker can use sponges to increase consumption of the overall system per sample without increasing the rate at which the system is queried.
# 7.2 Defending against Sponge Examples
Sponge examples can be found by adversaries with limited knowledge and capabilities, making the threat realistic. We further showed that sponge examples work against a deployed system that is available on demand. We now propose a simple defence to preserve the availability of hardware accelerators in the presence of sponge examples.
In Table 1, we observe a large energy gap between natural examples and random or sponge examples. We propose that before deploying the model, natural examples get proï¬led to measure the time or energy cost of inference. The defender can then ï¬x a cut-off threshold. This way, the maximum consumption of energy per inference run is controlled and sponge examples will simply result in an error message. In this way, their impact on availability can be bounded.
This will often be sufï¬cient to deal with the case where the threat model is battery drainage. Where the threat is a jamming attack on real-time performance, as with the vision system of an autonomous vehicle, the system will need to be designed for worst-case performance, and if need be a fallback driving mechanism should be provided. Current draft safety standards call for a self-driving car that becomes confused to slow to a stop in the same lane while alerting the human driver. This may be a good example of the appropriate response to an attack.
No single solution can tackle all of the possible abuse cases where an attacker can introduce errors into a machine- learning system. Depending on the setup both defence [77, 78] and detection [79, 80] mechanisms may be required. That problem space may be as large as the human conï¬ict itself. At the level of technical research, serious work is needed to assess what impact different hardware platforms (e.g. TPUs that do not exploit sparsity) have on susceptibility to sponge examples. Above all, it is vital to take a whole system view when engineering for security or safety; to consider what threats and hazards are realistic; and to provide appropriate defences or mitigation. In the case of attacks that cannot be prevented, the optimal strategy may often be attack detection.
# 7.3 Energy and Machine Learning
Most of the prior research on the carbon footprint of machine learning focuses on the energy required to train large neural network models and its contribution to carbon emissions [17, 81, 82]. This work shows that we need to study energy use at small scales as well as large. As with side-channel attacks on cryptographic systems, the ï¬ne-grained energy consumption of neural networks is a function of the inputs. In this case, the main consequence is not leakage of conï¬dential information but a denial-of-service attack.
First, sponge examples can aim to drain a deviceâs batteries; the operations and memory access in inference account for around a third of the work done during a complete backpropagation step, but inference happens at a much higher frequency and scale compared to training once a model is deployed. Our research characterizes the worst-case energy
16
# A PREPRINT - DECEMBER 14, 2021
consumption of inference. This is particularly pronounced with natural-language processing tasks, where the worst case can take dozens of times more energy than the average case.
Second, the sponge examples found by our attacks can be used in a targeted way to cause an embedded system to fall short of its performance goal. In the case of a machine-vision system in an autonomous vehicle, this might enable an attacker to confuse the scene understanding mechanisms and crash the vehicle; in the case of a missile guided by a neural network target tracker, a sponge example might break the tracking lock. The lesson is that system engineers must think about adversarial worst-case performance and test it carefully.
# 8 Reproducibility
It should be noted that the performance of our attacks will vary greatly across different hardware platforms and even weather outside. When running experiments in a Black-box setup on two servers with similar conï¬gurations in some cases we found the energy and latency varied by up to a factor of 10. To help reproducibility, we release the sponge examples we found, the attack code-base we used and the ASIC simulator8.
# 9 Conclusion
We introduced energy-latency attacks, which enable an adversary to increase the latency and energy consumption of ML systems to deny service. Our attacks use specially-crafted sponge examples and are effective against deep neural networks in a spectrum of threat models that realistically capture current deployments of ML â whether as a service or on edge devices. They can be mounted by adversaries whose access varies from total to none at all. As proof of concept, we showed that we can slow down translation in Microsoft Azure by a factor of several thousand. Our work demonstrates the need for careful worst-case analysis of the latency and energy consumption of computational systems that use deep learning mechanisms.
# Acknowledgment
We thank the reviewers for their insightful feedback. We want to explicitly thank Nicholas Carlini, Florian Tramèr, Adelin Travers, Varun Chandrasekaran and Nicholas Boucher for their help and comments. This work was supported by CIFAR (through a Canada CIFAR AI Chair), by EPSRC, by Apple, by Bosch Forschungsstiftung im Stifterverband, by NSERC, and by a gift from Microsoft. We also thank the Vector Instituteâs sponsors.
# References
[1] Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Å rndi´c, Pavel Laskov, Giorgio Giacinto, and Fabio Roli. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, pages 387â402. Springer, 2013.
[2] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[3] Blaine Nelson, Marco Barreno, Fuching Jack Chi, Anthony D Joseph, Benjamin IP Rubinstein, Udam Saini, Charles A Sutton, J Doug Tygar, and Kai Xia. Exploiting machine learning to subvert your spam ï¬lter. LEET, 8:1â9, 2008.
[4] Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, and Bo Li. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In 2018 IEEE Symposium on Security and Privacy (SP), pages 19â35. IEEE, 2018.
[5] Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy (SP), pages 3â18. IEEE, 2017. [6] Ahmed Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, and Michael Backes. Ml-leaks: Model and data independent membership inference attacks and defenses on machine learning models. arXiv preprint arXiv:1806.01246, 2018.
[7] Christopher A Choquette Choo, Florian Tramer, Nicholas Carlini, and Nicolas Papernot. Label-only membership inference attacks. arXiv preprint arXiv:2007.14321, 2020.
# 8https://github.com/iliaishacked/sponge_examples
17
# A PREPRINT - DECEMBER 14, 2021
[8] Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, and Tudor Dumitras. Terminal brain damage: Exposing the graceless degradation in deep neural networks under hardware fault attacks. In 28th USENIX Security Symposium (USENIX Security 19), pages 497â514, Santa Clara, CA, August 2019. USENIX Association. [9] Battista Biggio and Fabio Roli. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern
Recognition, 84:317â331, 2018.
[10] Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael Wellman. Towards the science of security and privacy in machine learning. arXiv preprint arXiv:1611.03814, 2016.
[11] Francesco Palmieri, Sergio Ricciardi, Ugo Fiore, Massimo Ficco, and Aniello Castiglione. Energy-oriented denial of service attacks: an emerging menace for large cloud infrastructures. The Journal of Supercomputing, 71(5):1620â1641, 2015.
[12] Thomas Martin, Michael Hsiao, Dong Ha, and Jayan Krishnaswami. Denial-of-service attacks on battery-powered mobile computers. In Second IEEE Annual Conference on Pervasive Computing and Communications, 2004. Proceedings of the, pages 309â318. IEEE, 2004.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
[14] Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In Proceedings of the 44th Annual International Symposium on Computer Architecture, pages 1â12, 2017.
[15] Computer Vision Machine Learning Team. An on-device deep neural network for face detection. In Apple Machine Learning Journal, 2017.
[16] Paul Kocher, Jann Horn, Anders Fogh, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, Moritz Lipp, Stefan Mangard, Thomas Prescher, et al. Spectre attacks: Exploiting speculative execution. In 2019 IEEE Symposium on Security and Privacy (SP), pages 1â19. IEEE, 2019.
[17] Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp. arXiv preprint arXiv:1906.02243, 2019.
[18] Armando Bernal, Sam Fok, and Rohit Pidaparthi. Financial market time series prediction with recurrent neural networks. State College: Citeseer.[Google Scholar], 2012.
[19] Marcus Edel and Enrico Köppe. Binarized-blstm-rnn based human activity recognition. In 2016 International conference on indoor positioning and indoor navigation (IPIN), pages 1â7. IEEE, 2016.
[20] Intel Intel. Intel architecture instruction set extensions programming reference. Intel Corp., Mountain View, CA, USA, Tech. Rep, pages 319433â030, 2016.
[21] Stefano Markidis, Steven Wei Der Chien, Erwin Laure, Ivy Bo Peng, and Jeffrey S Vetter. Nvidia tensor core programmability, performance & precision. In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pages 522â531. IEEE, 2018.
[22] Kim Hazelwood, Sarah Bird, David Brooks, Soumith Chintala, Utku Diril, Dmytro Dzhulgakov, Mohamed Fawzy, Bill Jia, Yangqing Jia, Aditya Kalro, et al. Applied machine learning at facebook: A datacenter infrastructure perspective. In 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 620â629. IEEE, 2018.
[23] Eric Chung, Jeremy Fowers, Kalin Ovtcharov, Michael Papamichael, Adrian Caulï¬eld, Todd Massengill, Ming Liu, Daniel Lo, Shlomi Alkalay, Michael Haselman, et al. Serving dnns in real time at datacenter scale with project brainwave. IEEE Micro, 38(2):8â20, 2018.
[24] Norman Jouppi, Cliff Young, Nishant Patil, and David Patterson. Motivation for and evaluation of the ï¬rst tensor processing unit. IEEE Micro, 38(3):10â19, 2018.
[25] Yu-Hsin Chen, Tien-Ju Yang, Joel Emer, and Vivienne Sze. Eyeriss v2: A ï¬exible accelerator for emerging deep neural networks on mobile devices. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 9(2):292â308, 2019.
[26] Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A Horowitz, and William J Dally. Eie: efï¬cient inference engine on compressed deep neural network. ACM SIGARCH Computer Architecture News, 44(3):243â254, 2016.
[27] Yiren Zhao, Xitong Gao, Xuan Guo, Junyi Liu, Erwei Wang, Robert Mullins, Peter YK Cheung, George Constantinides, and Cheng-Zhong Xu. Automatic generation of multi-precision multi-arithmetic cnn accelerators for fpgas. In 2019 International Conference on Field-Programmable Technology (ICFPT), pages 45â53. IEEE, 2019.
18
# A PREPRINT - DECEMBER 14, 2021
[28] Luiz André Barroso. The price of performance. Queue, 3(7):48â53, 2005.
[29] Chao Li, Zhenhua Wang, Xiaofeng Hou, Haopeng Chen, Xiaoyao Liang, and Minyi Guo. Power attack defense: Securing battery-backed data centers. ACM SIGARCH Computer Architecture News, 44(3):493â505, 2016.
[30] Gaurav Somani, Manoj Singh Gaur, Dheeraj Sanghi, and Mauro Conti. Ddos attacks in cloud computing: Collateral damage to non-targets. Computer Networks, 109:157â171, 2016.
[31] Zhang Xu, Haining Wang, Zichen Xu, and Xiaorui Wang. Power attack: An increasing threat to data centers. In NDSS, 2014.
[32] Zhang Xu, Haining Wang, and Zhenyu Wu. A measurement study on co-residence threat inside the cloud. In 24th {USENIX} Security Symposium ({USENIX} Security 15), pages 929â944, 2015.
[33] Ugo Fiore, Francesco Palmieri, Aniello Castiglione, Vincenzo Loia, and Alfredo De Santis. Multimedia-based In 2014 IEEE 11th Consumer Communications and Networking battery drain attacks for android devices. Conference (CCNC), pages 145â150. IEEE, 2014.
[34] Adrian Tang, Simha Sethumadhavan, and Salvatore Stolfo. {CLKSCREW}: exposing the perils of security- oblivious energy management. In 26th {USENIX} Security Symposium ({USENIX} Security 17), pages 1057â1074, 2017.
[35] Xiangqian Chen, Kia Makki, Kang Yen, and Niki Pissinou. Sensor network security: a survey. IEEE Communica- tions Surveys & Tutorials, 11(2):52â73, 2009.
[36] Dave Anderson, Jim Dykes, and Erik Riedel. More than an interface-scsi vs. ata.
[37] R. Efraim, R. Ginosar, C. Weiser, and A. Mendelson. Energy aware race to halt: A down to earth approach for platform energy management. IEEE Computer Architecture Letters, 13(1):25â28, 2014.
[38] Elham Tabassi, Kevin J Burns, Michael Hadjimichael, Andres D Molina-Markham, and Julian T Sexton. A taxonomy and terminology of adversarial machine learning.
[39] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506â519, 2017.
[40] Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artiï¬cial Intelligence and Security, pages 15â26, 2017.
[41] Michael Nieles, Kelley Dempsey, and Victoria Pillitteri. An introduction to information security. Technical report, National Institute of Standards and Technology, 2017.
[42] Paul Ferguson and Daniel Senie. rfc2827: network ingress ï¬ltering: defeating denial of service attacks which employ ip source address spooï¬ng, 2000.
[43] John Bellardo and Stefan Savage. 802.11 denial-of-service attacks: Real vulnerabilities and practical solutions. In USENIX security symposium, volume 12, pages 2â2. Washington DC, 2003.
[44] Alessandro Erba, Riccardo Taormina, Stefano Galelli, Marcello Pogliani, Michele Carminati, Stefano Zanero, and Nils Ole Tippenhauer. Real-time evasion attacks with physical constraints on deep learning-based anomaly detectors in industrial control systems. CoRR, abs/1907.07487, 2019.
[45] M. Horowitz. 1.1 computingâs energy problem (and what we can do about it). In 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), pages 10â14, 2014.
[46] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998â6008, 2017.
[47] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111â3119, 2013.
[48] Hardik Sharma, Jongse Park, Naveen Suda, Liangzhen Lai, Benson Chau, Vikas Chandra, and Hadi Esmaeilzadeh. In 2018 Bit fusion: Bit-level dynamically composable architecture for accelerating deep neural network. ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA), pages 764â775. IEEE, 2018.
[49] Angshuman Parashar, Minsoo Rhu, Anurag Mukkara, Antonio Puglielli, Rangharajan Venkatesan, Brucek Khailany, Joel Emer, Stephen W Keckler, and William J Dally. Scnn: An accelerator for compressed-sparse convolutional neural networks. ACM SIGARCH Computer Architecture News, 45(2):27â40, 2017.
19
# A PREPRINT - DECEMBER 14, 2021
[50] MiloÅ¡ Nikoli´c, Mostafa Mahmoud, Andreas Moshovos, Yiren Zhao, and Robert Mullins. Characterizing sources of ineffectual computations in deep learning networks. In 2019 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS), pages 165â176. IEEE, 2019.
[51] Vinod Kathail. Xilinx vitis uniï¬ed software platform. In The 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pages 173â174, 2020.
[52] Xitong Gao, Yiren Zhao, Åukasz Dudziak, Robert Mullins, and Cheng-zhong Xu. Dynamic channel pruning: Feature boosting and suppression. arXiv preprint arXiv:1810.05331, 2018.
[53] Weizhe Hua, Yuan Zhou, Christopher M De Sa, Zhiru Zhang, and G Edward Suh. Channel gating neural networks. In Advances in Neural Information Processing Systems, pages 1886â1896, 2019.
[54] Weilin Xu, Yanjun Qi, and David Evans. Automatically evading classiï¬ers. In Proceedings of the 2016 network and distributed systems symposium, volume 10, 2016.
[55] Richard H Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM Journal on scientiï¬c computing, 16(5):1190â1208, 1995.
[56] Marcus Hähnel, Björn Döbel, Marcus Völp, and Hermann Härtig. Measuring energy consumption for short code paths using rapl. ACM SIGMETRICS Performance Evaluation Review, 40(3):13â17, 2012.
[57] Kashif Nizam Khan, Mikael Hirki, Tapio Niemi, Jukka K. Nurminen, and Zhonghong Ou. Rapl in action: Experiences in using rapl for power measurements. ACM Trans. Model. Perform. Eval. Comput. Syst., 3(2), March 2018.
[58] S. Sen, N. Imam, and C. Hsu. Quality assessment of gpu power proï¬ling mechanisms. In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pages 702â711, 2018.
[59] Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
[60] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[61] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[62] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018.
[63] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems, pages 3261â3275, 2019.
[64] Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. arXiv preprint arXiv:1806.00187, 2018.
[65] Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381, 2018.
[66] Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. Facebook fairâs WMT19 news translation task submission. CoRR, abs/1907.06616, 2019.
[67] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions, pages 177â180, 2007.
[68] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. CoRR, abs/1508.07909, 2015.
[69] Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. Facebook fairâs wmt19 news translation task submission. arXiv preprint arXiv:1907.06616, 2019.
[70] Ryan Barnett. The dark side of apis: Denial of service attacks.
[71] Microsoft Azure. Request limits for translator.
[72] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â778, 2016.
20
A PREPRINT - DECEMBER 14, 2021
[73] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700â4708, 2017.
[74] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4510â4520, 2018.
[75] Dongyoung Kim, Junwhan Ahn, and Sungjoo Yoo. A novel zero weight/activation-aware hardware architecture of convolutional neural network. In Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017, pages 1462â1467. IEEE, 2017.
[76] Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training veriï¬ably robust models. arXiv preprint arXiv:1810.12715, 2018.
[77] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. International Conference on Learning Representations (ICLR), 2015.
[78] Eric Wong, Leslie Rice, and J. Zico Kolter. Fast is better than free: Revisiting adversarial training, 2020. [79] Steven Chen, Nicholas Carlini, and David Wagner. Stateful detection of black-box adversarial attacks.
In Proceedings of the 1st ACM Workshop on Security and Privacy on Artiï¬cial Intelligence, SPAI â20, page 30â39, New York, NY, USA, 2020. Association for Computing Machinery.
[80] Ilia Shumailov, Yiren Zhao, Robert Mullins, and Ross Anderson. Towards certiï¬able adversarial sample detection. In Proceedings of the 13th ACM Workshop on Artiï¬cial Intelligence and Security, AISecâ20, page 13â24, New York, NY, USA, 2020. Association for Computing Machinery.
[81] Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. Towards the systematic reporting of the energy and carbon footprints of machine learning. arXiv preprint arXiv:2002.05651, 2020.
[82] Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700, 2019.
[83] Bhavishya Goel, Sally A McKee, and Magnus Själander. Techniques to measure, model, and manage power. In Advances in Computers, volume 87, pages 7â54. Elsevier, 2012.
[84] Eva GarcÃa-MartÃn, Crefeda Faviola Rodrigues, Graham Riley, and HÃ¥kan Grahn. Estimation of energy consump- tion in machine learning. Journal of Parallel and Distributed Computing, 134:75â88, 2019.
[85] J Adam Butts and Gurindar S Sohi. A static power model for architects. In Proceedings 33rd Annual IEEE/ACM International Symposium on Microarchitecture. MICRO-33 2000, pages 191â201. IEEE, 2000.
21
A PREPRINT - DECEMBER 14, 2021
1e12 GA convergence for WSC task with different pool sizes
â Pool 100 = 127 ââ pool 300 a â Pool 500 2 â Pool 700 ⬠1.0; p o ââ Pool 900 wn S x J S 0.8 ed Qa S âS 0.64 iS) Ww = > p o 0.44 Cc Lu 0 10 20 30 40 50 Epoch
Figure 6: GA performance with WSC task from GLUE Benchmark running on GPUs. Words of size 29 are evaluated with pool sizes of 100, 300, 500, 700 and 900.
Algorithm 2: Sponge samples through a Genetic Algorithm Result: S initialise a random pool of inputs;
1 S = {S0, S1, ..., Sn}; 2 while i < K do 3 4 5 Proï¬le the inputs to get ï¬tness scores; â latency or energy P = Fitness(S); Pick top performing samples; ËS = Select(P , S); if NLP then 6 S = MutateNLP( ËS); Concatenate samples A, B; â S = LeftHalf(A) + RightHalf(B); â S = RandomlyMutate(S); 7 8 end if CV then 9 S = MutateCV( ËS); Concatenate samples A, B, and a random mask; â A â mask + (1 â mask) â B; 10 11 end 12 ; end
22
# A PREPRINT - DECEMBER 14, 2021
# A Parameter Choices
We have thoroughly evaluated different parameter choices for the sponge attack and found that a small pool size and a relatively short number of GA iterations should be sufï¬cient for a large number of tasks.
Figure 6 shows the performance of sponge samples on the RoBERTa model for the Winograd Schema Challenge (WSC) with different pool sizes and varying input sequence length. The horizontal axis shows the number of GA iterations. In terms of pool size of the GA, although there is an increase in performance for larger pool sizes, the increase is marginal. Also, smaller pool sizes signiï¬cantly reduce the runtime for the attack. From the hardware perspective, using a large pool size might trigger GPUs to throttle, so that the runtime will be further increased. We observed that the convergence is consistently faster for smaller input sequences. This is mainly because the complexity of the search is less. In practice, we found almost all input sequence lengths we tested plateau within 100 GA iterations; even going to over 1000 iterations gives only a small increase in performance. For these reasons, for the experiments presented below, we report the results of the attack with a pool size of 1000 for GLUE and Computer Vision benchmarks and 1000 for translation tasks. We use 1000 GA iterations for all benchmarks tested. When displaying results, we normally use sponge for sponge examples produced using the GA and use sponge L-BFGS to identify sponge examples generated using L-BFGS.
# B Energy Cost Factors
Energy cost is a combination of static and dynamic energy.
E = (Pstatic + Pdynamic) Ã t
Static power refers to the consumption of the circuitry in an idle state [83] there are multiple models to estimate this depending on the technology [83â85]. In this paper, we follow a coarse-grained approach. Cycle-accurate hardware simulation incurs a large run-time, but a coarse-grained energy simulator provides enough resolution to indicate the energy-consuming samples while using signiï¬cantly less time per round of simulation.
v Peatic = > Hieakage X Veore =) Te x (⬠FF = 1) x Veore (3)
where Is is the reverse saturation current; Vd is the diode voltage; k is Boltzmannâs constant; q is the electronic charge; T is temperature and Vcore is the supply voltage.
Dynamic power refers to consumption from charging and discharging the circuitry [84].
Pdynamic = α à C à V 2 core à f (4)
Here, α refers to the activity factor i.e. components that are currently consuming power; Vd is the source voltage; C is the capacitance; f is the clock frequency. Ultimately an attacker attempts to solve an optimisation problem
overheat or increase overall consumption aaa Way max EF, where E = (> I, x (CFT ~*) X Veore] throttle or exploit load predictor (5) +a xO x Voore xT) xt more activity of the board run for longer or exploit the predictor
For all parameters considered in the equation, only four can be manipulated by the adversary described in Section 4.1: T , α, f and t. Of these, frequency and temperature cannot be controlled directly, but are affected through optimisations performed by the computing hardware. As we assume a single GPU, CPU or ASIC, we focus on the activity ratio α, the time t and the switching power from ï¬ipping the state of transistors. The execution time t and activity ratio α link tightly to the number of operations and memory accesses performed. In the temporal dimension, attackers might trigger unnecessary passes of a compute-intensive block; in the spatial domain, attackers can turn sparse operations into dense ones. These temporal and spatial attack opportunities can signiï¬cantly increase the number of memory and arithmetic operations and thus create an increase in α and t to maximise energy usage.
23
# A PREPRINT - DECEMBER 14, 2021
# C Domain Speciï¬c Optimisations
In Section 4.4 we outlined the genetic algorithm we used to ï¬nd sponge samples. That approach is generic. Here we describe how we can improve the effectiveness of the genetic algorithm through domain-speciï¬c optimisations.
First, for NLP tasks, the greatest impact on performance was acquired from exploiting the encoding schemes used. While the genetic algorithm was fast to pick up this vulnerability, it struggled with efï¬ciency around the mid-point, where the parents were concatenated. For example, when trying to break down individual sub-words to more tokens, we observed the GA inserting backslashes into the samples. When concatenated, we saw cases where two non-backslashes followed each other, meaning the GA was losing on a couple of characters. As a solution, we probabilistically ï¬ipped the halves and saw a slight improvement.
For CV tasks, we observed that random samples were always classiï¬ed as belonging to the same class. Furthermore, random samples had very low internal density. We hypothesize that this has to do with the fact that on random samples there are very few class features, as opposed to what is observed in natural samples. As the GA improvement largely depends on randomness, that meant that we often observed that after merging two highly dense parents, uniform randomness across all pixels was decreasing sparsity to the level of random samples. In other words, uniform randomness was diluting class features. To counter this phenomenon, instead of applying uniform randomness across all pixel values, we resorted to diluting only 1% of them. That led to a bigger improvement in the whole population pool. Furthermore, after observing that the density is class-dependent, it became apparent that to preserve diversity in the pool it was important to keep samples from multiple classes. For this, we tried to ensure that at least 20 different classes were preserved in the pool.
We attempted to use domain knowledge and tried adding operations like rotation, transposition and re-scaling into the mutation process, yet we found that these did not lead to signiï¬cant improvements.
# D Understanding Sponges and Their Performance
1000 i Natural 1600 1 @â Natural mmm Random | mmm Random 800 Ml Sponge samples 21400 | mmm Sponge samples | 1200 | 600 | 1000 | | | 800 400 | | | | 600 200 400 | \ | 200 i 0 4 of 0.82 0.84 0.86 0.88 0.90 0.92 0.72 0.73 0.74 0.75 0.76 0.77 0.78 Mean per-class resnet18 density Mean per-class densenet161 density
Figure 7: Per-class mean density of samples evaluated on ResNet18 and DenseNet161. The natural samples are from the validation set and are compared to 50 000 randomly generated samples and 1000 Sponge GA samples. The scales are normalised to form a probability density.
To better understand the results, we present Figure 7 which shows per-class density distributions of natural, random and sponge samples. There are 50,000 random and natural samples respectively and 1,000 sponge samples, with the bars normalised to form a probability density.
The ï¬rst thing that becomes apparent is that randomly generated samples on CV models cost signiï¬cantly less energy because many activations are not on. On average, random samples result in a sparser computation â around 4% more sparse for ResNet18 â and our simulator costs for natural samples are around 4 â 7% higher than the costs of random samples. Second, a surprising ï¬nding is that the most and least sparse samples are clustered in a handful of classes. In other words, certain classes have inputs that are more expensive than others in terms of energy. For ResNet-18, the most sparse classes are âwingâ and âspotlightâ while the least sparse are âgreenhouseâ and âhowler monkeyâ. We observe a similar effect for larger ResNet variants and also DenseNets, although the energy gap is smaller on DenseNets. Interestingly, we see that energy-expensive classes are consistent across different architectures, and we
24
# A PREPRINT - DECEMBER 14, 2021
further demonstrate this class-wise transferability in Section 6.4. Ultimately, this phenomenon implies that it is possible to burn energy or slow a system without much preparation, by simply bombarding the model with natural samples from energy-consuming classes. Finally, we see that the sponge samples are improving the population performance and tend to outperform natural sample. We observe that it is easier for sponge to outperform all natural samples for DensNets of different size, yet it struggles to outperform all of the ResNets. We further measure the energy performance statistically in Appendix D.1.
# Mean Energy [picojoules]
(a) Energy vs Power (b) Energy vs Time
Figure 8: ResNet-18 solving ImageNet-2017 without any rate limiting with increasing internal density.
(a) First 5000 samples discarded (b) First 30000 samples discarded
Figure 9: Mann-Whitney test on CPU measured Mobilenet execution. Number of observations is shown on x-axis and p-value on the y-axis.
# D.1 Measuring Difï¬culties and Statistical Analysis
Although we have presented in Section 6.2 that sponge attacks cause ASIC energy consumption to rise for computer vision tasks, it is still unclear what this translates to real life.
If one were to directly measure the CPU or GPU load per adversarial sample, interpreting it would be hard, especially when one talks about the energy cost improvements in the order of around 5% for ResNet18 and 3% as for DenseNet101. As is mentioned in Section 4.3 the main energy costs include the frequency of switching activities, voltage and clock frequency. Due to the heat impact from voltage and clock frequency, a large number of different optimisations are deployed by the hardware. Here, the optimisations try to balance multiple objectives â they try to be as performant
25
A PREPRINT - DECEMBER 14, 2021
as they can, whilst being as energy efï¬cient as possible and also maintain reliability. Modern CPUs and GPUs have several performance modes between which the hardware can switch. For example, ofï¬cial Nvidia documentation lists 15 different performance modes.
Figure 8 shows measurements taken during the sponge GA attack running against ResNet-18. The x-axis shows the number of epochs, with each epoch the internal density is increasing from 0.75% to 0.8%. In (a), the right y-axis shows mean energy readings per sample, whereas left y-axis shows mean power readings per-sample. In (b) the left y-axis shows mean latency values per-sample.
The amount of power consumed is strongly correlated to the amount of time taken by each sample. When the GPU speeds up, it consumes more energy but requires less time, but the rise in temperature causes the hardware then to go to a more conservative mode to cool down. We observe this heating and cooling cycle with all tasks running on GPUs, making it hard to measure the absolute performance and the attack impact. We can however measure the performance statistically. First, we turn to a question of
Can we detect energy differences between Natural, Random and Sponge samples?
To investigate the relationship between the samples we use Mann-Whitney-Wilcoxon U test (U-test), a nonparametric test for the difference between distributions. With three classes of samples, we need three pairwise comparisons. For each one, the null hypothesis that the distributions of energy consumed by the samples are identical. The complement hypothesis is that of a difference between distributions.
The U-test is based on three main assumptions:
Independence between samples; ⢠The dependent variable is at least ordinal; ⢠The samples are random.
The ï¬rst assumption is fulï¬lled since no sample belongs to more than one category i.e. natural, random and sponge. The second assumption is satisï¬ed by the fact that both time and energy are cardinal variables. The third assumption, however, is harder to satisfy.
The cause of this lies in the closed nature of hardware optimisations: although some of the techniques are known, the exact parameters are unknown. Furthermore, it is hard to achieve the same state of the hardware even through power cycling. As was mentioned in Section 4.3 temperature affects energy directly, and it is hard to make sure that the hardware always comes back to the same state.
To minimise temperature effects we apply the load of natural, attack and random samples and wait until the temperature stabilises. That takes approximately 30000 samples. The order of the samples is random, and at this point, it can be assumed that all of the data and instruction caches are ï¬lled. Finally, because the samples are randomly shufï¬ed, all of the predictive optimisations will work with the same probability for each of the classes.
For these reasons, we believe it is safe to assume that the samples themselves are random in that the effect of hardware optimisations is random so that the last assumption of the Mann-Whitney test is fulï¬lled.
Using this test we can do a pairwise comparison of the natural, random and sponge samples. The test indicates that the three types of samples generate energy consumption distributions that are statistically different (one-sided test, p-value=0.000) for mobilenet executed on a CPU. On a practical level, the amount of energy consumed by sponge samples is 1.5% higher on a CPU and >7% on ASIC. We could not evaluate the energy recordings on a GPU, as the standard deviation was over 15% which becomes worse as temperature increases. Figure 9 shows the conï¬dence of the Mann-Whitney test with mobilenet measured on the CPU as a function of the number of observations. The number of observations is on the x-axis, and the p-value on the y-axis. As can be seen, in a stable environment i.e. the temperature has stabilised, after about 100 observations per class, the differences become statistically signiï¬cant at any reasonable conï¬dence level. A similar trend is observed for unstable temperature environment, but around three times more data is required. That means that in practice, about 100â300 observations per class are sufï¬cient to differentiate between classes with high conï¬dence.
# E EnergyâTime Relationship of Sponge Examples
26
A PREPRINT - DECEMBER 14, 2021
GPU Energy [mJ] GPU Time [mS] Sponge Input size Natural Sponge Natural SuperGLUE Benchmark with [60] CoLA 15 30 50 1.00Ã 1.00Ã 1.00Ã 1.11Ã 1.28Ã 2.06Ã 1.00Ã 1.00Ã 1.00Ã 0.92Ã 0.82Ã 1.27Ã 1.21 1.56 1.62 MNLI 15 30 50 1.00Ã 1.00Ã 1.00Ã 1.12Ã 1.51Ã 2.16Ã 1.00Ã 1.00Ã 1.00Ã 0.95Ã 1.03Ã 1.30Ã 1.26 1.46 1.66 WSC 15 30 50 1.00Ã 5.51Ã 1.00Ã 16.13Ã 1.00Ã 11.04Ã 1.00Ã 26.64Ã 1.00Ã 20.56Ã 8.89Ã 1.00Ã 1.61 1.46 1.29 WMT14/16 with [64] EnâFr EnâDe 15 15 1.00Ã 3.89Ã 1.00Ã 27.84Ã 1.00Ã 24.18Ã 4.32Ã 1.00Ã 1.11 1.15 WMT18 with [65] EnâDe 15 1.00Ã 30.81Ã 1.00Ã 26.49Ã 1.16 WMT19 with [69] EnâRu 15 1.00Ã 26.43Ã 1.00Ã 22.85Ã 1.15
Table 4: We use the White-box GA attack to produce sponge examples and measure the performance on different platforms and calculate how energy improvement factor relates to time improvement factor. The GPU readings are from NVML. GA was ran for 1000 epochs with a pool size of 1000. A detailed explanation of the results is in Section 5.2.
27
A PREPRINT - DECEMBER 14, 2021
Input size Natural ASIC Energy [mJ] Random Sponge SuperGLUE Benchmark with [60] CoLA 15 30 50 504.93 ± 1.07 508.73 ± 1.87 511.43 ± 3.64 566.58 ± 2.74 634.24 ± 4.06 724.48 ± 5.12 583.56 ± 0.00 669.20 ± 0.00 780.57 ± 0.59 MNLI 15 30 50 509.19 ± 1.45 514.00 ± 2.07 519.51 ± 2.79 570.10 ± 2.82 638.78 ± 3.89 728.82 ± 5.26 586.43 ± 0.00 672.07 ± 0.00 783.18 ± 0.75 WSC 15 30 50 510.84 ± 8.84 573.78 ± 140.12 716.96 ± 223.75 1008.59 ± 192.22 2319.05 ± 502.31 5093.42 ± 1020.34 2454.89 ± 68.06 5012.75 ± 154.24 10192.41 ± 347.32 WMT14/16 with [64] EnâFr 15 1793.84 ± 356.29 4961.56 ± 1320.84 8494.36 ± 166.22 EnâDe 15 1571.59 ± 301.69 2476.18 ± 1586.95 48446.29 ± 0.06 WMT18 with [65] EnâDe 15 1624.05 ± 352.99 2318.50 ± 296.09 49617.68 ± 0.02 WMT19 with [69] EnâRu 15 1897.19 ± 607.30 5380.20 ± 2219.24 47931.11 ± 0.00
Table 5: We use the White-box GA attack to produce sponge examples and measure the consistency of ASIC results. GA was ran for 1000 epochs with a pool size of 1000. A detailed explanation of the results is in Section 5.2.
28 | {
"id": "2007.14321"
} |
2005.14050 | Language (Technology) is Power: A Critical Survey of "Bias" in NLP | We survey 146 papers analyzing "bias" in NLP systems, finding that their
motivations are often vague, inconsistent, and lacking in normative reasoning,
despite the fact that analyzing "bias" is an inherently normative process. We
further find that these papers' proposed quantitative techniques for measuring
or mitigating "bias" are poorly matched to their motivations and do not engage
with the relevant literature outside of NLP. Based on these findings, we
describe the beginnings of a path forward by proposing three recommendations
that should guide work analyzing "bias" in NLP systems. These recommendations
rest on a greater recognition of the relationships between language and social
hierarchies, encouraging researchers and practitioners to articulate their
conceptualizations of "bias"---i.e., what kinds of system behaviors are
harmful, in what ways, to whom, and why, as well as the normative reasoning
underlying these statements---and to center work around the lived experiences
of members of communities affected by NLP systems, while interrogating and
reimagining the power relations between technologists and such communities. | http://arxiv.org/pdf/2005.14050 | Su Lin Blodgett, Solon Barocas, Hal Daumé III, Hanna Wallach | cs.CL, cs.CY | null | null | cs.CL | 20200528 | 20200529 | 0 2 0 2
y a M 9 2 ] L C . s c [
2 v 0 5 0 4 1 . 5 0 0 2 : v i X r a
# Language (Technology) is Power: A Critical Survey of âBiasâ in NLP
Su Lin Blodgett College of Information and Computer Sciences University of Massachusetts Amherst [email protected]
# Solon Barocas Microsoft Research Cornell University [email protected]
# Hal Daumé III Microsoft Research University of Maryland [email protected]
# Hanna Wallach Microsoft Research [email protected]
# Abstract
We survey 146 papers analyzing âbiasâ in NLP systems, ï¬nding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing âbiasâ is an inherently normative process. We further ï¬nd that these papersâ proposed quantitative techniques for measur- ing or mitigating âbiasâ are poorly matched to their motivations and do not engage with the relevant literature outside of NLP. Based on these ï¬ndings, we describe the beginnings of a path forward by proposing three recommenda- tions that should guide work analyzing âbiasâ in NLP systems. These recommendations rest on a greater recognition of the relationships between language and social hierarchies, encouraging researchers and practitioners to articulate conceptualizations of âbiasââi.e., what kinds of system behaviors are harmful, in what ways, to whom, and why, as well as the normative reasoning underlying these statementsâand to center work around the lived experiences of members of commu- nities affected by NLP systems, while inter- rogating and reimagining the power relations between technologists and such communities.
# Introduction
A large body of work analyzing âbiasâ in natural language processing (NLP) systems has emerged in recent years, including work on âbiasâ in embed- ding spaces (e.g., Bolukbasi et al., 2016a; Caliskan et al., 2017; Gonen and Goldberg, 2019; May et al., 2019) as well as work on âbiasâ in systems developed for a breadth of tasks including language modeling (Lu et al., 2018; Bordia and Bowman,
2019), coreference resolution (Rudinger et al., 2018; Zhao et al., 2018a), machine translation (Van- massenhove et al., 2018; Stanovsky et al., 2019), sentiment analysis (Kiritchenko and Mohammad, 2018), and hate speech/toxicity detection (e.g., Park et al., 2018; Dixon et al., 2018), among others. Although these papers have laid vital ground- work by illustrating some of the ways that NLP systems can be harmful, the majority of them fail to engage critically with what constitutes âbiasâ in the ï¬rst place. Despite the fact that analyzing âbiasâ is an inherently normative processâin which some system behaviors are deemed good and others harmfulâpapers on âbiasâ in NLP systems are rife with unstated assumptions about what kinds of system behaviors are harmful, in what ways, to whom, and why. Indeed, the term âbiasâ (or âgender biasâ or âracial biasâ) is used to describe a wide range of system behaviors, even though they may be harmful in different ways, to different groups, or for different reasons. Even papers analyzing âbiasâ in NLP systems developed for the same task often conceptualize it differently. For example, the following system behaviors are all understood to be self-evident statements of âracial biasâ: (a) embedding spaces in which embed- dings for names associated with African Americans are closer (compared to names associated with European Americans) to unpleasant words than pleasant words (Caliskan et al., 2017); (b) senti- ment analysis systems yielding different intensity scores for sentences containing names associated with African Americans and sentences containing names associated with European Americans (Kir- itchenko and Mohammad, 2018); and (c) toxicity
detection systems scoring tweets containing fea- tures associated with African-American English as more offensive than tweets without these features (Davidson et al., 2019; Sap et al., 2019). Moreover, some of these papers focus on âracial biasâ expressed in written text, while others focus on âracial biasâ against authors. This use of imprecise terminology obscures these important differences. We survey 146 papers analyzing âbiasâ in NLP systems, ï¬nding that their motivations are often vague and inconsistent. Many lack any normative reasoning for why the system behaviors that are described as âbiasâ are harmful, in what ways, and to whom. Moreover, the vast majority of these papers do not engage with the relevant literature outside of NLP to ground normative concerns when proposing quantitative techniques for measuring or mitigating âbias.â As a result, we ï¬nd that many of these techniques are poorly matched to their motivations, and are not comparable to one another. We then describe the beginnings of a path forward by proposing three recommendations that should guide work analyzing âbiasâ in NLP systems. We argue that such work should examine the relationships between language and social hi- erarchies; we call on researchers and practitioners conducting such work to articulate their conceptu- alizations of âbiasâ in order to enable conversations about what kinds of system behaviors are harmful, in what ways, to whom, and why; and we recom- mend deeper engagements between technologists and communities affected by NLP systems. We also provide several concrete research questions that are implied by each of our recommendations.
# 2 Method
Our survey includes all papers known to us analyzing âbiasâ in NLP systemsâ146 papers in total. We omitted papers about speech, restricting our survey to papers about written text only. To identify the 146 papers, we ï¬rst searched the ACL Anthology1 for all papers with the keywords âbiasâ or âfairnessâ that were made available prior to May 2020. We retained all papers about social âbias,â and discarded all papers about other deï¬nitions of the keywords (e.g., hypothesis-only bias, inductive bias, media bias). We also discarded all papers us- ing âbiasâ in NLP systems to measure social âbiasâ in text or the real world (e.g., Garg et al., 2018).
To ensure that we did not exclude any relevant
# 1https://www.aclweb.org/anthology/
NLP task Papers Embeddings (type-level or contextualized) Coreference resolution Language modeling or dialogue generation Hate-speech detection Sentiment analysis Machine translation Tagging or parsing Surveys, frameworks, and meta-analyses Other 54 20 17 17 15 8 5 20 22
Table 1: The NLP tasks covered by the 146 papers.
papers without the keywords âbiasâ or âfairness,â we also traversed the citation graph of our initial set of papers, retaining any papers analyzing âbiasâ in NLP systems that are cited by or cite the papers in our initial set. Finally, we manually inspected any papers analyzing âbiasâ in NLP systems from leading machine learning, humanâcomputer inter- action, and web conferences and workshops, such as ICML, NeurIPS, AIES, FAccT, CHI, and WWW, along with any relevant papers that were made available in the âComputation and Languageâ and âComputers and Societyâ categories on arXiv prior to May 2020, but found that they had already been identiï¬ed via our traversal of the citation graph. We provide a list of all 146 papers in the appendix. In Table 1, we provide a breakdown of the NLP tasks covered by the papers. We note that counts do not sum to 146, because some papers cover multiple tasks. For example, a paper might test the efï¬cacy of a technique for mitigating âbiasâ in embed- ding spaces in the context of sentiment analysis.
Once identiï¬ed, we then read each of the 146 pa- pers with the goal of categorizing their motivations and their proposed quantitative techniques for mea- suring or mitigating âbias.â We used a previously developed taxonomy of harms for this categoriza- tion, which differentiates between so-called alloca- tional and representational harms (Barocas et al., 2017; Crawford, 2017). Allocational harms arise when an automated system allocates resources (e.g., credit) or opportunities (e.g., jobs) unfairly to dif- ferent social groups; representational harms arise when a system (e.g., a search engine) represents some social groups in a less favorable light than others, demeans them, or fails to recognize their existence altogether. Adapting and extending this taxonomy, we categorized the 146 papersâ motiva- tions and techniques into the following categories:
> Allocational harms.
Papers Category Motivation Technique Allocational harms Stereotyping Other representational harms Questionable correlations Vague/unstated Surveys, frameworks, and meta-analyses 30 50 52 47 23 20 4 58 43 42 0 20
Table 2: The categories into which the 146 papers fall.
> Representational harms:?
> Stereotyping that propagates negative gen- eralizations about particular social groups. > Differences in system performance for dif- ferent social groups, language that misrep- resents the distribution of different social groups in the population, or language that is denigrating to particular social groups.
> Questionable correlations between system be- havior and features of language that are typi- cally associated with particular social groups. > Vague descriptions of âbiasâ (or âgender biasâ or âracial biasâ) or no description at all.
> Surveys, frameworks, and meta-analyses.
In Table 2 we provide counts for each of the six categories listed above. (We also provide a list of the papers that fall into each category in the appendix.) Again, we note that the counts do not sum to 146, because some papers state multiple motivations, propose multiple techniques, or pro- pose a single technique for measuring or mitigating multiple harms. Table 3, which is in the appendix, contains examples of the papersâ motivations and techniques across a range of different NLP tasks.
# 3 Findings
Categorizing the 146 papersâ motivations and pro- posed quantitative techniques for measuring or miti- gating âbiasâ into the six categories listed above en- abled us to identify several commonalities, which we present below, along with illustrative quotes.
2We grouped several types of representational harms into two categories to reï¬ect that the main point of differentiation between the 146 papersâ motivations and proposed quantitative techniques for measuring or mitigating âbiasâ is whether or not they focus on stereotyping. Among the papers that do not fo- cus on stereotyping, we found that most lack sufï¬ciently clear motivations and techniques to reliably categorize them further.
# 3.1 Motivations
Papers state a wide range of motivations, multiple motivations, vague motivations, and sometimes no motivations at all. We found that the papersâ motivations span all six categories, with several papers falling into each one. Appropriately, papers that provide surveys or frameworks for an- alyzing âbiasâ in NLP systems often state multiple motivations (e.g., Hovy and Spruit, 2016; Bender, 2019; Sun et al., 2019; Rozado, 2020; Shah et al., 2020). However, as the examples in Table 3 (in the appendix) illustrate, many other papers (33%) do so as well. Some papers (16%) state only vague motivations or no motivations at all. For example, â[N]o human should be discriminated on the basis of demographic attributes by an NLP system.â
âKaneko and Bollegala (2019)
â[P]rominent word embeddings [...] encode systematic biases against women and black people [...] implicating many NLP systems in scaling up social injustice.â
These examples leave unstated what it might mean for an NLP system to âdiscriminate,â what con- stitutes âsystematic biases,â or how NLP systems contribute to âsocial injusticeâ (itself undeï¬ned).
Papersâ motivations sometimes include no nor- mative reasoning. We found that some papers (32%) are not motivated by any apparent normative concerns, often focusing instead on concerns about system performance. For example, the ï¬rst quote below includes normative reasoningânamely that models should not use demographic information to make predictionsâwhile the other focuses on learned correlations impairing system performance.
âIn [text classiï¬cation], models are expected to make predictions with the semantic information rather than with the demographic group identity information (e.g., âgayâ, âblackâ) contained in the sentences.â
âAn over-prevalence of some gendered forms in the training data leads to translations with identiï¬able errors. Translations are better for sentences involving men and for sentences containing stereotypical gender roles.â
âSaunders and Byrne (2020)
Even when papers do state clear motivations, they are often unclear about why the system be- haviors that are described as âbiasâ are harm- ful, in what ways, and to whom. We found that even papers with clear motivations often fail to ex- plain what kinds of system behaviors are harmful, in what ways, to whom, and why. For example,
âDeploying these word embedding algorithms in practice, for example in automated translation systems or as hiring aids, runs the serious risk of perpetuating problematic biases in important societal contexts.â
â[I]f the systems show discriminatory behaviors in the interactions, the user experience will be adversely affected.â
These examples leave unstated what âproblematic biasesâ or non-ideal user experiences might look like, how the system behaviors might result in these things, and who the relevant stakeholders or users might be. In contrast, we ï¬nd that papers that provide surveys or frameworks for analyzing âbiasâ in NLP systems often name who is harmed, acknowledging that different social groups may experience these systems differently due to their different relationships with NLP systems or different social positions. For example, Ruane et al. (2019) argue for a âdeep understanding of the user groups [sic] characteristics, contexts, and interestsâ when designing conversational agents.
Papers about NLP systems developed for the same task often conceptualize âbiasâ differ- ently. Even papers that cover the same NLP task often conceptualize âbiasâ in ways that differ sub- stantially and are sometimes inconsistent. Rows 3 and 4 of Table 3 (in the appendix) contain machine translation papers with different conceptualizations of âbias,â leading to different proposed techniques, while rows 5 and 6 contain papers on âbiasâ in em- bedding spaces that state different motivations, but propose techniques for quantifying stereotyping.
Papersâ motivations conï¬ate allocational and representational harms. We found that the pa- persâ motivations sometimes (16%) name imme- diate representational harms, such as stereotyping, alongside more distant allocational harms, which, in the case of stereotyping, are usually imagined as downstream effects of stereotypes on résumé ï¬lter- ing. Many of these papers use the imagined down- stream effects to justify focusing on particular sys- tem behaviors, even when the downstream effects are not measured. Papers on âbiasâ in embedding spaces are especially likely to do this because em- beddings are often used as input to other systems:
âHowever, none of these papers [on embeddings] have recognized how blatantly sexist the embeddings are and hence risk introducing biases of various types into real-world systems.â
# âBolukbasi et al. (2016a)
âIt is essential to quantify and mitigate gender bias in these embeddings to avoid them from affecting downstream applications.â
In contrast, papers that provide surveys or frame- works for analyzing âbiasâ in NLP systems treat representational harms as harmful in their own right. For example, Mayï¬eld et al. (2019) and Ruane et al. (2019) cite the harmful reproduction of dominant linguistic norms by NLP systems (a point to which we return in section 4), while Bender (2019) outlines a range of harms, including seeing stereotypes in search results and being made invis- ible to search engines due to language practices.
# 3.2 Techniques
Papersâ techniques are not well grounded in the relevant literature outside of NLP. Perhaps un- surprisingly given that the papersâ motivations are often vague, inconsistent, and lacking in normative reasoning, we also found that the papersâ proposed quantitative techniques for measuring or mitigating âbiasâ do not effectively engage with the relevant literature outside of NLP. Papers on stereotyping are a notable exception: the Word Embedding Association Test (Caliskan et al., 2017) draws on the Implicit Association Test (Greenwald et al., 1998) from the social psychology literature, while several techniques operationalize the well-studied âAngry Black Womanâ stereotype (Kiritchenko and Mohammad, 2018; May et al., 2019; Tan and Celis, 2019) and the âdouble bindâ faced by women (May et al., 2019; Tan and Celis, 2019), in which women who succeed at stereotypically male tasks are perceived to be less likable than similarly successful men (Heilman et al., 2004). Tan and Celis (2019) also examine the compounding effects of race and gender, drawing on Black feminist scholarship on intersectionality (Crenshaw, 1989).
Papersâ techniques are poorly matched to their motivations. We found that although 21% of the papers include allocational harms in their motiva- tions, only four papers actually propose techniques for measuring or mitigating allocational harms.
Papers focus on a narrow range of potential sources of âbias.â We found that nearly all of the papers focus on system predictions as the potential sources of âbias,â with many additionally focusing on âbiasâ in datasets (e.g., differences in the number of gendered pronouns in the training data (Zhao et al., 2019)). Most papers do not interrogate
the normative implications of other decisions made during the development and deployment lifecycleâ perhaps unsurprising given that their motivations sometimes include no normative reasoning. A few papers are exceptions, illustrating the impacts of task deï¬nitions, annotation guidelines, and evaluation metrics: Cao and Daumé (2019) study how folk conceptions of gender (Keyes, 2018) are reproduced in coreference resolution systems that assume a strict gender dichotomy, thereby main- taining cisnormativity; Sap et al. (2019) focus on the effect of priming annotators with information about possible dialectal differences when asking them to apply toxicity labels to sample tweets, ï¬nd- ing that annotators who are primed are signiï¬cantly less likely to label tweets containing features asso- ciated with African-American English as offensive.
# 4 A path forward
We now describe how researchers and practitioners conducting work analyzing âbiasâ in NLP systems might avoid the pitfalls presented in the previous sectionâthe beginnings of a path forward. We propose three recommendations that should guide such work, and, for each, provide several concrete research questions. We emphasize that these ques- tions are not comprehensive, and are intended to generate further questions and lines of engagement. Our three recommendations are as follows:
(R1) Ground work analyzing âbiasâ in NLP sys- tems in the relevant literature outside of NLP that explores the relationships between lan- guage and social hierarchies. Treat represen- tational harms as harmful in their own right.
(R2) Provide explicit statements of why the system behaviors that are described as âbiasâ are harmful, in what ways, and to whom. Be forthright about the normative reasoning (Green, 2019) underlying these statements.
(R3) Examine language use in practice by engag- ing with the lived experiences of members of communities affected by NLP systems. Inter- rogate and reimagine the power relations be- tween technologists and such communities.
# 4.1 Language and social hierarchies
Turning ï¬rst to (R1), we argue that work analyzing âbiasâ in NLP systems will paint a much fuller pic- ture if it engages with the relevant literature outside of NLP that explores the relationships between
language and social hierarchies. Many disciplines, including sociolinguistics, linguistic anthropology, sociology, and social psychology, study how language takes on social meaning and the role that language plays in maintaining social hierarchies. For example, language is the means through which social groups are labeled and one way that beliefs about social groups are transmitted (e.g., Maass, 1999; Beukeboom and Burgers, 2019). Group labels can serve as the basis of stereotypes and thus reinforce social inequalities: â[T]he label content functions to identify a given category of people, and thereby conveys category boundaries and a position in a hierarchical taxonomyâ (Beukeboom and Burgers, 2019). Similarly, âcontrolling images,â such as stereotypes of Black women, which are linguistically and visually transmitted through literature, news media, television, and so forth, provide âideological justiï¬cationâ for their continued oppression (Collins, 2000, Chapter 4).
As a result, many groups have sought to bring about social changes through changes in language, disrupting patterns of oppression and marginal- ization via so-called âgender-fairâ language (Sczesny et al., 2016; Menegatti and Rubini, 2017), language that is more inclusive to people with disabilities (ADA, 2018), and language that is less dehumanizing (e.g., abandoning the use of the term âillegalâ in everyday discourse on immigration in the U.S. (Rosa, 2019)). The fact that group labels are so contested is evidence of how deeply inter- twined language and social hierarchies are. Taking âgender-fairâ language as an example, the hope is that reducing asymmetries in language about women and men will reduce asymmetries in their social standing. Meanwhile, struggles over lan- guage use often arise from dominant social groupsâ desire to âcontrol both material and symbolic resourcesââi.e., âthe right to decide what words will mean and to control those meaningsââas was the case in some white speakersâ insistence on using offensive place names against the objections of Indigenous speakers (Hill, 2008, Chapter 3).
Sociolinguists and linguistic anthropologists have also examined language attitudes and lan- guage ideologies, or peopleâs metalinguistic beliefs about language: Which language varieties or prac- tices are taken as standard, ordinary, or unmarked? Which are considered correct, prestigious, or ap- propriate for public use, and which are considered incorrect, uneducated, or offensive (e.g., Campbell-
Kibler, 2009; Preston, 2009; Loudermilk, 2015; Lanehart and Malik, 2018)? Which are rendered in- visible (Roche, 2019)?3 Language ideologies play a vital role in reinforcing and justifying social hi- erarchies because beliefs about language varieties or practices often translate into beliefs about their speakers (e.g. Alim et al., 2016; Rosa and Flores, 2017; Craft et al., 2020). For example, in the U.S., the portrayal of non-white speakersâ language varieties and practices as linguistically deï¬cient helped to justify violent European colonialism, and today continues to justify enduring racial hierar- chies by maintaining views of non-white speakers as lacking the language ârequired for complex thinking processes and successful engagement in the global economyâ (Rosa and Flores, 2017).
Recognizing the role that language plays in maintaining social hierarchies is critical to the future of work analyzing âbiasâ in NLP systems. it helps to explain why representational First, harms are harmful in their own right. Second, the complexity of the relationships between language and social hierarchies illustrates why studying âbiasâ in NLP systems is so challenging, suggesting that researchers and practitioners will need to move beyond existing algorithmic fairness techniques. We argue that work must be grounded in the relevant literature outside of NLP that examines the relationships between language and social hierarchies; without this grounding, researchers and practitioners risk measuring or mitigating only what is convenient to measure or mitigate, rather than what is most normatively concerning. More speciï¬cally, we recommend that work analyzing âbiasâ in NLP systems be reoriented around the following question: How are social hierarchies, language ideologies, and NLP systems coproduced? This question mirrors Benjaminâs (2020) call to examine how ârace and technology are coproducedââi.e., how racial hierarchies, and the ideologies and discourses that maintain them, create and are re-created by technology. We recom- mend that researchers and practitioners similarly ask how existing social hierarchies and language ideologies drive the development and deployment of NLP systems, and how these systems therefore reproduce these hierarchies and ideologies. As a starting point for reorienting work analyzing âbiasâ in NLP systems around this question, we
3Language ideologies encompass much more than this; see, e.g., Lippi-Green (2012), Alim et al. (2016), Rosa and Flores (2017), Rosa and Burdick (2017), and Charity Hudley (2017).
provide the following concrete research questions:
> How do social hierarchies and language ideologies influence the decisions made during the development and deployment lifecycle? What kinds of NLP systems do these decisions result in, and what kinds do they foreclose?
© General assumptions: To which linguistic norms do NLP systems adhere (Bender, 2019; Ruane et al., 2019)? Which language practices are implicitly assumed to be standard, ordinary, correct, or appropriate? © Task definition: For which speakers are NLP systems (and NLP resources) developed? (See Joshi et al. (2020) for a discussion.) How do task definitions discretize the world? For example, how are social groups delineated when defining demographic attribute prediction tasks (e.g., Koppel et al., 2002; Rosenthal and McKeown, 2011; Nguyen et al., 2013)? What about languages in native language prediction tasks (Tetreault et al., 2013)? © Data: How are datasets collected, prepro- cessed, and labeled or annotated? What are the impacts of annotation guidelines, anno- tator assumptions and perceptions (Olteanu et al., 2019; Sap et al., 2019; Geiger et al., 2020), and annotation aggregation pro- cesses (Pavlick and Kwiatkowski, 2019)? © Evaluation: How are NLP systems evalu- ated? What are the impacts of evaluation metrics (Olteanu et al., 2017)? Are any non-quantitative evaluations performed?
> How do NLP systems reproduce or transform language ideologies? Which language varieties or practices come to be deemed good or bad? Might âgoodâ language simply mean language that is easily handled by existing NLP sys- tems? For example, linguistic phenomena aris- ing from many language practices (Eisenstein, 2013) are described as ânoisy textâ and often viewed as a target for ânormalization.â How do the language ideologies that are reproduced by NLP systems maintain social hierarchies? Vv Which representational harms are being measured or mitigated? Are these the most normatively concerning harms, or merely those that are well handled by existing algo- rithmic fairness techniques? Are there other representational harms that might be analyzed?
# 4.2 Conceptualizations of âbiasâ
Turning now to (R2), we argue that work analyzing âbiasâ in NLP systems should provide explicit statements of why the system behaviors that are described as âbiasâ are harmful, in what ways, and to whom, as well as the normative reasoning underlying these statements. In other words, researchers and practitioners should articulate their conceptualizations of âbias.â As we described above, papers often contain descriptions of system behaviors that are understood to be self-evident This use of imprecise statements of âbias.â terminology has led to papers all claiming to analyze âbiasâ in NLP systems, sometimes even in systems developed for the same task, but with different or even inconsistent conceptualizations of âbias,â and no explanations for these differences.
Yet analyzing âbiasâ is an inherently normative processâin which some system behaviors are deemed good and others harmfulâeven if assump- tions about what kinds of system behaviors are harmful, in what ways, for whom, and why are not stated. We therefore echo calls by Bardzell and Bardzell (2011), Keyes et al. (2019), and Green (2019) for researchers and practitioners to make their normative reasoning explicit by articulating the social values that underpin their decisions to deem some system behaviors as harmful, no matter how obvious such values appear to be. We further argue that this reasoning should take into account the relationships between language and social hierarchies that we described above. First, these relationships provide a foundation from which to approach the normative reasoning that we recom- mend making explicit. For example, some system behaviors might be harmful precisely because they maintain social hierarchies. Second, if work analyzing âbiasâ in NLP systems is reoriented to understand how social hierarchies, language ideologies, and NLP systems are coproduced, then this work will be incomplete if we fail to account for the ways that social hierarchies and language ideologies determine what we mean by âbiasâ in the first place. As a starting point, we therefore provide the following concrete research questions: > What kinds of system behaviors are described as âbiasâ? What are their potential sources (e.g., general assumptions, task definition, data)? > In what ways are these system behaviors harm-
ful, to whom are they harmful, and why?
> What are the social values (obvious or not) that
underpin this conceptualization of âbias?â
# 4.3 Language use in practice
Finally, we turn to (R3). Our perspective, which rests on a greater recognition of the relationships between language and social hierarchies, suggests several directions for examining language use in practice. Here, we focus on two. First, because lan- guage is necessarily situated, and because different social groups have different lived experiences due to their different social positions (Hanna et al., 2020)âparticularly groups at the intersections of multiple axes of oppressionâwe recommend that researchers and practitioners center work analyzing âbiasâ in NLP systems around the lived experiences of members of communities affected by these systems. Second, we recommend that the power relations between technologists and such communities be interrogated and reimagined. Researchers have pointed out that algorithmic fairness techniques, by proposing incremental technical mitigationsâe.g., collecting new datasets or training better modelsâmaintain these power relations by (a) assuming that automated systems should continue to exist, rather than asking whether they should be built at all, and (b) keeping development and deployment decisions in the hands of technologists (Bennett and Keyes, 2019; Cifor et al., 2019; Green, 2019; Katell et al., 2020). There are many disciplines for researchers and practitioners to draw on when pursuing these directions. For example, in humanâcomputer interaction, Hamidi et al. (2018) study transgender peopleâs experiences with automated gender recognition systems in order to uncover how these systems reproduce structures of transgender exclusion by redeï¬ning what it means to perform gender ânormally.â Value-sensitive design provides a framework for accounting for the values of differ- ent stakeholders in the design of technology (e.g., Friedman et al., 2006; Friedman and Hendry, 2019; Le Dantec et al., 2009; Yoo et al., 2019), while participatory design seeks to involve stakeholders in the design process itself (Sanders, 2002; Muller, 2007; Simonsen and Robertson, 2013; DiSalvo et al., 2013). Participatory action research in educa- tion (Kemmis, 2006) and in language documenta- tion and reclamation (Junker, 2018) is also relevant. In particular, work on language reclamation to support decolonization and tribal sovereignty (Leonard, 2012) and work in sociolinguistics focus-
ing on developing co-equal research relationships with community members and supporting linguis- tic justice efforts (e.g., Bucholtz et al., 2014, 2016, 2019) provide examples of more emancipatory rela- tionships with communities. Finally, several work- shops and events have begun to explore how to em- power stakeholders in the development and deploy- ment of technology (Vaccaro et al., 2019; Givens and Morris, 2020; Sassaman et al., 2020)4 and how to help researchers and practitioners consider when not to build systems at all (Barocas et al., 2020).
As a Starting point for engaging with commu- nities affected by NLP systems, we therefore provide the following concrete research questions: > How do communities become aware of NLP systems? Do they resist them, and if so, how? > What additional costs are borne by communi- ties for whom NLP systems do not work well? > Do NLP systems shift power toward oppressive institutions (e.g., by enabling predictions that communities do not want made, linguistically based unfair allocation of resources or oppor- tunities (Rosa and Flores, 2017), surveillance, or censorship), or away from such institutions? > Who is involved in the development and deployment of NLP systems? How do decision-making processes maintain power re- lations between technologists and communities affected by NLP systems? Can these pro- cesses be changed to reimagine these relations?
# 5 Case study
To illustrate our recommendations, we present a case study covering work on African-American English (AAE).5 Work analyzing âbiasâ in the con- text of AAE has shown that part-of-speech taggers, language identiï¬cation systems, and dependency parsers all work less well on text containing features associated with AAE than on text without these features (Jørgensen et al., 2015, 2016; Blod- gett et al., 2016, 2018), and that toxicity detection systems score tweets containing features associated with AAE as more offensive than tweets with- out them (Davidson et al., 2019; Sap et al., 2019). These papers have been critical for highlighting AAE as a language variety for which existing NLP
4Also https://participatoryml.github.io/ 5This language variety has had many different names is now generally called African- over American English (AAE), African-American Vernacular En- glish (AAVE), or African-American Language (AAL) (Green, 2002; Wolfram and Schilling, 2015; Rickford and King, 2016).
systems may not work, illustrating their limitations. However, they do not conceptualize âracial biasâ in the same way. The ï¬rst four of these papers simply focus on system performance differences between text containing features associated with AAE and text without these features. In contrast, the last two papers also focus on such system performance differences, but motivate this focus with the fol- lowing additional reasoning: If tweets containing features associated with AAE are scored as more offensive than tweets without these features, then this might (a) yield negative perceptions of AAE; (b) result in disproportionate removal of tweets containing these features, impeding participation in online platforms and reducing the space avail- able online in which speakers can use AAE freely; and (c) cause AAE speakers to incur additional costs if they have to change their language practices to avoid negative perceptions or tweet removal.
More importantly, none of these papers engage with the literature on AAE, racial hierarchies in the U.S., and raciolinguistic ideologies. By failing to engage with this literatureâthereby treating AAE simply as one of many non-Penn Treebank vari- eties of English or perhaps as another challenging domainâwork analyzing âbiasâ in NLP systems in the context of AAE fails to situate these systems in the world. Who are the speakers of AAE? How are they viewed? We argue that AAE as a language variety cannot be separated from its speakersâ primarily Black people in the U.S., who experience systemic anti-Black racismâand the language ide- ologies that reinforce and justify racial hierarchies.
Even after decades of sociolinguistic efforts to legitimize AAE, it continues to be viewed as âbadâ English and its speakers continue to be viewed as linguistically inadequateâa view called the deï¬cit perspective (Alim et al., 2016; Rosa and Flores, 2017). This perspective persists despite demon- strations that AAE is rule-bound and grammatical (Mufwene et al., 1998; Green, 2002), in addition to ample evidence of its speakersâ linguistic adroit- ness (e.g., Alim, 2004; Rickford and King, 2016). This perspective belongs to a broader set of raciolin- guistic ideologies (Rosa and Flores, 2017), which also produce allocational harms; speakers of AAE are frequently penalized for not adhering to domi- nant language practices, including in the education system (Alim, 2004; Terry et al., 2010), when seeking housing (Baugh, 2018), and in the judicial system, where their testimony is misunderstood or,
worse yet, disbelieved (Rickford and King, 2016; Jones et al., 2019). These raciolinguistic ideologies position racialized communities as needing linguistic intervention, such as language education programs, in which these and other harms can be reduced if communities accommodate to domi- nant language practices (Rosa and Flores, 2017).
In the technology industry, speakers of AAE are often not considered consumers who matter. For example, Benjamin (2019) recounts an Apple em- ployee who worked on speech recognition for Siri:
âAs they worked on different English dialects â Australian, Singaporean, and Indian English â [the employee] asked his boss: âWhat about African American English?â To this his boss responded: âWell, Apple products are for the premium market.ââ
The reality, of course, is that speakers of AAE tend not to represent the âpremium marketâ precisely be- cause of institutions and policies that help to main- tain racial hierarchies by systematically denying them the opportunities to develop wealth that are available to white Americans (Rothstein, 2017)â an exclusion that is reproduced in technology by countless decisions like the one described above. Engaging with the literature outlined above situates the system behaviors that are described as âbias,â providing a foundation for normative reasoning. Researchers and practitioners should be concerned about âracial biasâ in toxicity detection systems not only because performance system performance, but differences because they reproduce longstanding injustices of stigmatization and disenfranchisement for speakers of AAE. In re-stigmatizing AAE, they reproduce language ideologies in which AAE is viewed as ungrammatical, uneducated, and offensive. These ideologies, in turn, enable linguistic discrimination and justify enduring racial hierarchies (Rosa and Flores, 2017). Our perspective, which understands racial hierarchies and raciolinguistic ideologies as structural conditions that govern the development implies that and deployment of techniques for measuring or mitigating âbiasâ in NLP systems will necessarily be incomplete unless they interrogate and dismantle these structural conditions, including the power relations between technologists and racialized communities. We emphasize that engaging with the literature on AAE, racial hierarchies in the U.S., and raciolinguistic ideologies can generate new lines of engagement. These lines include work on the ways that the decisions made during the development
and deployment of NLP systems produce stigmati- zation and disenfranchisement, and work on AAE use in practice, such as the ways that speakers of AAE interact with NLP systems that were not designed for them. This literature can also help re- searchers and practitioners address the allocational harms that may be produced by NLP systems, and ensure that even well-intentioned NLP systems do not position racialized communities as needing linguistic intervention or accommodation to dominant language practices. Finally, researchers and practitioners wishing to design better systems can also draw on a growing body of work on anti-racist language pedagogy that challenges the deï¬cit perspective of AAE and other racialized language practices (e.g. Flores and Chaparro, 2018; Baker-Bell, 2019; MartÃnez and MejÃa, 2019), as well as the work that we described in section 4.3 on reimagining the power relations between tech- nologists and communities affected by technology.
# 6 Conclusion
By surveying 146 papers analyzing âbiasâ in NLP systems, we found that (a) their motivations are often vague, inconsistent, and lacking in norma- tive reasoning; and (b) their proposed quantitative techniques for measuring or mitigating âbiasâ are poorly matched to their motivations and do not en- gage with the relevant literature outside of NLP. To help researchers and practitioners avoid these pitfalls, we proposed three recommendations that should guide work analyzing âbiasâ in NLP sys- tems, and, for each, provided several concrete re- search questions. These recommendations rest on a greater recognition of the relationships between language and social hierarchiesâa step that we see as paramount to establishing a path forward.
# Acknowledgments
This paper is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1451512. Any opin- ion, ï¬ndings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reï¬ect the views of the Na- tional Science Foundation. We thank the reviewers for their useful feedback, especially the sugges- tion to include additional details about our method.
# References
Artem Abzaliev. 2019. On GAP coreference resolu- tion shared task: insights from the 3rd place solution. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 107â112, Flo- rence, Italy.
Guidelines for Writing About Peo- ple With Disabilities. ADA National Network. https://bit.ly/2KREbkB.
Oshin Agarwal, Funda Durupinar, Norman I. Badler, and Ani Nenkova. 2019. Word embeddings (also) encode human personality stereotypes. In Proceed- ings of the Joint Conference on Lexical and Com- putational Semantics, pages 205â211, Minneapolis, MN.
H. Samy Alim. 2004. You Know My Steez: An Ethno- graphic and Sociolinguistic Study of Styleshifting in a Black American Speech Community. American Di- alect Society.
H. Samy Alim, John R. Rickford, and Arnetha F. Ball, editors. 2016. Raciolinguistics: How Language Shapes Our Ideas About Race. Oxford University Press.
Sandeep Attree. 2019. Gendered ambiguous pronouns shared task: Boosting model conï¬dence by evidence In Proceedings of the Workshop on Gen- pooling. der Bias in Natural Language Processing, Florence, Italy.
Pinkesh Badjatiya, Manish Gupta, and Vasudeva Varma. 2019. Stereotypical bias removal for hate speech detection task using knowledge-based gen- In Proceedings of the International eralizations. World Wide Web Conference, pages 49â59, San Fran- cisco, CA.
Eugene Bagdasaryan, Omid Poursaeed, and Vitaly Shmatikov. 2019. Differential Privacy Has Dis- parate Impact on Model Accuracy. In Proceedings of the Conference on Neural Information Processing Systems, Vancouver, Canada.
April Baker-Bell. 2019. Dismantling anti-black lin- guistic racism in English language arts classrooms: Toward an anti-racist black language pedagogy. The- ory Into Practice.
David Bamman, Sejal Popat, and Sheng Shen. 2019. An annotated dataset of literary entities. In Proceed- ings of the North American Association for Com- putational Linguistics (NAACL), pages 2138â2144, Minneapolis, MN.
Xingce Bao and Qianqian Qiao. 2019. Transfer Learn- ing from Pre-trained BERT for Pronoun Resolution. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 82â88, Flo- rence, Italy.
Shaowen Bardzell and Jeffrey Bardzell. 2011. Towards a Feminist HCI Methodology: Social Science, Femi- nism, and HCI. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), pages 675â684, Vancouver, Canada.
Solon Barocas, Asia J. Biega, Benjamin Fish, JËedrzej Niklas, and Luke Stark. 2020. When Not to De- sign, Build, or Deploy. In Proceedings of the Confer- ence on Fairness, Accountability, and Transparency, Barcelona, Spain.
Solon Barocas, Kate Crawford, Aaron Shapiro, and Hanna Wallach. 2017. The Problem With Bias: Al- locative Versus Representational Harms in Machine Learning. In Proceedings of SIGCIS, Philadelphia, PA.
Christine Basta, Marta R. Costa-jussà , and Noe Casas. 2019. Evaluating the underlying gender bias in con- In Proceedings of textualized word embeddings. the Workshop on Gender Bias for Natural Language Processing, pages 33â39, Florence, Italy.
John Baugh. 2018. Linguistics in Pursuit of Justice. Cambridge University Press.
Emily M. Bender. 2019. A typology of ethical risks in language technology with an eye towards where transparent documentation can help. Presented at The Future of Artiï¬cial Intelligence: Language, Ethics, Technology Workshop. https://bit.ly/ 2P9t9M6.
Ruha Benjamin. 2019. Race After Technology: Aboli- tionist Tools for the New Jim Code. John Wiley & Sons.
Ruha Benjamin. 2020. 2020 Vision: Reimagining the Default Settings of Technology & Society. Keynote at ICLR.
Cynthia L. Bennett and Os Keyes. 2019. What is the Point of Fairness? Disability, AI, and The Com- In Proceedings of the ASSETS plexity of Justice. Workshop on AI Fairness for People with Disabili- ties, Pittsburgh, PA.
Camiel J. Beukeboom and Christian Burgers. 2019. How Stereotypes Are Shared Through Language: A Review and Introduction of the Social Categories and Stereotypes Communication (SCSC) Frame- work. Review of Communication Research, 7:1â37.
Shruti Bhargava and David Forsyth. 2019. Expos- ing and Correcting the Gender Bias in Image arXiv preprint Captioning Datasets and Models. arXiv:1912.00578.
Jayadev Bhaskaran and Isha Bhallamudi. 2019. Good Secretaries, Bad Truck Drivers? Occupational Gen- der Stereotypes in Sentiment Analysis. In Proceed- ings of the Workshop on Gender Bias in Natural Lan- guage Processing, pages 62â68, Florence, Italy.
Su Lin Blodgett, Lisa Green, and Brendan OâConnor. 2016. Demographic Dialectal Variation in Social Media: A Case Study of African-American English. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 1119â1130, Austin, TX.
Su Lin Blodgett and Brendan OâConnor. 2017. Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English. In Proceedings of the Workshop on Fairness, Ac- countability, and Transparency in Machine Learning (FAT/ML), Halifax, Canada.
Su Lin Blodgett, Johnny Wei, and Brendan OâConnor. 2018. Twitter Universal Dependency Parsing for African-American and Mainstream American En- glish. In Proceedings of the Association for Compu- tational Linguistics (ACL), pages 1415â1425, Mel- bourne, Australia.
James Zou, Venkatesh Saligrama, and Adam Kalai. 2016a. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. In Pro- ceedings of the Conference on Neural Information Processing Systems, pages 4349â4357, Barcelona, Spain.
James Zou, Venkatesh Saligrama, and Adam Kalai. 2016b. Quantifying and reducing stereotypes in word embeddings. In Proceedings of the ICML Workshop on #Data4Good: Machine Learning in Social Good Applications, pages 41â45, New York, NY.
Shikha Bordia and Samuel R. Bowman. 2019. Identify- ing and reducing gender bias in word-level language models. In Proceedings of the NAACL Student Re- search Workshop, pages 7â15, Minneapolis, MN.
Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ash- ton Anderson, and Richard Zemel. 2019. Under- standing the Origins of Bias in Word Embeddings. In Proceedings of the International Conference on Machine Learning, pages 803â811, Long Beach, CA.
Mary Bucholtz, Dolores Inés Casillas, and Jin Sook Lee. 2016. Beyond Empowerment: Accompani- ment and Sociolinguistic Justice in a Youth Research Program. In Robert Lawson and Dave Sayers, edi- tors, Sociolinguistic Research: Application and Im- pact, pages 25â44. Routledge.
Mary Bucholtz, Dolores Inés Casillas, and Jin Sook Lee. 2019. California Latinx Youth as Agents of In Netta Avineri, Laura R. Sociolinguistic Justice. Graham, Eric J. Johnson, Robin Conley Riner, and Jonathan Rosa, editors, Language and Social Justice in Practice, pages 166â175. Routledge.
Mary Bucholtz, Audrey Lopez, Allina Mojarro, Elena Skapoulli, Chris VanderStouwe, and Shawn Warner- Garcia. 2014. Sociolinguistic Justice in the Schools:
Student Researchers as Linguistic Experts. Lan- guage and Linguistics Compass, 8:144â157.
Kaylee Burns, Lisa Anne Hendricks, Kate Saenko, Trevor Darrell, and Anna Rohrbach. 2018. Women also Snowboard: Overcoming Bias in Captioning Models. In Procedings of the European Conference on Computer Vision (ECCV), pages 793â811, Mu- nich, Germany.
and Arvind Joanna J. Bryson, Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334).
Kathryn Campbell-Kibler. 2009. The nature of so- ciolinguistic perception. Language Variation and Change, 21(1):135â156.
To- ward gender-inclusive coreference resolution. arXiv preprint arXiv:1910.13913.
Rakesh Chada. 2019. Gendered pronoun resolution us- ing bert and an extractive question answering formu- lation. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 126â 133, Florence, Italy.
Kaytlin Chaloner and Alfredo Maldonado. 2019. Mea- suring Gender Bias in Word Embedding across Do- mains and Discovering New Gender Bias Word Cat- egories. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 25â32, Florence, Italy.
Anne H. Charity Hudley. 2017. Language and Racial- ization. In Ofelia GarcÃa, Nelson Flores, and Mas- similiano Spotti, editors, The Oxford Handbook of Language and Society. Oxford University Press.
Won Ik Cho, Ji Won Kim, Seok Min Kim, and Nam Soo Kim. 2019. On measuring gender bias in translation of gender-neutral pronouns. In Proceed- ings of the Workshop on Gender Bias in Natural Lan- guage Processing, pages 173â181, Florence, Italy.
Shivang Chopra, Ramit Sawhney, Puneet Mathur, and Rajiv Ratn Shah. 2020. Hindi-English Hate Speech Detection: Author Proï¬ling, Debiasing, and Practi- cal Perspectives. In Proceedings of the AAAI Con- ference on Artiï¬cial Intelligence (AAAI), New York, NY.
Marika Cifor, Patricia Garcia, T.L. Cowan, Jasmine Rault, Tonia Sutherland, Anita Say Chan, Jennifer Rode, Anna Lauren Hoffmann, Niloufar Salehi, and Lisa Nakamura. 2019. Feminist Data Manifest- No. Retrieved from https://www.manifestno. com/.
Patricia Hill Collins. 2000. Black Feminist Thought: Knowledge, Consciousness, and the Politics of Em- powerment. Routledge.
Justin T. Craft, Kelly E. Wright, Rachel Elizabeth Weissler, and Robin M. Queen. 2020. Language and Discrimination: Generating Meaning, Perceiv- ing Identities, and Discriminating Outcomes. An- nual Review of Linguistics, 6(1).
Kate Crawford. 2017. The Trouble with Bias. Keynote at NeurIPS.
Kimberle Crenshaw. 1989. Demarginalizing the Inter- section of Race and Sex: A Black Feminist Critique of Antidiscrmination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Fo- rum.
Amanda Cercas Curry and Verena Rieser. 2018. #MeToo: How Conversational Systems Respond to Sexual Harassment. In Proceedings of the Workshop on Ethics in Natural Language Processing, pages 7â 14, New Orleans, LA.
Karan Dabas, Nishtha Madaan, Gautam Singh, Vi- jay Arya, Sameep Mehta, and Tanmoy Chakraborty. 2020. Fair Transfer of Multiple Style Attributes in Text. arXiv preprint arXiv:2001.06693.
Thomas Davidson, Debasmita Bhattacharya, and Ing- mar Weber. 2019. Racial bias in hate speech and abusive language detection datasets. In Proceedings of the Workshop on Abusive Language Online, pages 25â35, Florence, Italy.
Maria De-Arteaga, Alexey Romanov, Hanna Wal- lach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kentha- padi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a In Proceedings of the Confer- high-stakes setting. ence on Fairness, Accountability, and Transparency, pages 120â128, Atlanta, GA.
Sunipa Dev, Tao Li, Jeff Phillips, and Vivek Sriku- mar. 2019. On Measuring and Mitigating Biased arXiv preprint Inferences of Word Embeddings. arXiv:1908.09369.
Sunipa Dev and Jeff Phillips. 2019. Attenuating Bias in Word Vectors. In Proceedings of the International Conference on Artiï¬cial Intelligence and Statistics, pages 879â887, Naha, Japan.
Mark DÃaz, Isaac Johnson, Amanda Lazar, Anne Marie Piper, and Darren Gergle. 2018. Addressing age- In Proceedings related bias in sentiment analysis. of the Conference on Human Factors in Computing Systems (CHI), Montréal, Canada.
Emily Dinan, Angela Fan, Adina Williams, Jack Ur- banek, Douwe Kiela, and Jason Weston. 2019. too: Mitigating Gender Queens are Powerful arXiv preprint Bias in Dialogue Generation. arXiv:1911.03842.
Carl DiSalvo, Andrew Clement, and Volkmar Pipek. 2013. Communities: Participatory Design for, with and by communities. In Jesper Simonsen and Toni
Robertson, editors, Routledge International Hand- book of Participatory Design, pages 182â209. Rout- ledge.
Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- In Pro- ing unintended bias in text classiï¬cation. ceedings of the Conference on Artiï¬cial Intelligence, Ethics, and Society (AIES), New Orleans, LA.
Jacob Eisenstein. 2013. What to do about bad lan- guage on the Internet. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 359â369.
Is Your Classiï¬er Actually Biased? Measuring Fairness under Uncertainty with In Proceedings of the Associa- Bernstein Bounds. tion for Computational Linguistics (ACL).
Kawin Ethayarajh, David Duvenaud, and Graeme Hirst. 2019. Understanding Undesirable Word Embedding Assocations. In Proceedings of the Association for Computational Linguistics (ACL), pages 1696â1705, Florence, Italy.
Joseph Fisher. 2019. Measuring social bias in arXiv preprint knowledge graph embeddings. arXiv:1912.02761.
Nelson Flores and Soï¬a Chaparro. 2018. What counts as language education policy? Developing a materi- alist Anti-racist approach to language activism. Lan- guage Policy, 17(3):365â384.
Omar U. Florez. 2019. On the Unintended Social Bias of Training Language Generation Models with Data from Local Media. In Proceedings of the NeurIPS Workshop on Human-Centric Machine Learning, Vancouver, Canada.
Joel Escudé Font and Marta R. Costa-jussà . 2019. Equalizing gender biases in neural machine trans- In Pro- lation with word embeddings techniques. ceedings of the Workshop on Gender Bias for Natu- ral Language Processing, pages 147â154, Florence, Italy.
Batya Friedman and David G. Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. MIT Press.
Batya Friedman, Peter H. Kahn Jr., and Alan Borning. 2006. Value Sensitive Design and Information Sys- tems. In Dennis Galletta and Ping Zhang, editors, Human-Computer Interaction in Management Infor- mation Systems: Foundations, pages 348â372. M.E. Sharpe.
Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes. Proceed- ings of the National Academy of Sciences, 115(16).
Sahaj Garg, Vincent Perot, Nicole Limtiaco, Ankur Taly, Ed H. Chi, and Alex Beutel. 2019. Counter- factual fairness in text classiï¬cation through robust- ness. In Proceedings of the Conference on Artiï¬cial Intelligence, Ethics, and Society (AIES), Honolulu, HI.
Aparna Garimella, Carmen Banea, Dirk Hovy, and Rada Mihalcea. 2019. Womenâs syntactic resilience and menâs grammatical luck: Gender bias in part-of- speech tagging and dependency parsing data. In Pro- ceedings of the Association for Computational Lin- guistics (ACL), pages 3493â3498, Florence, Italy.
Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jieyu Zhao, Diba Jing Qian, Mai ElSherief, Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2020. Towards Understand- ing Gender Bias in Relation Extraction. In Proceed- ings of the Association for Computational Linguis- tics (ACL).
R. Stuart Geiger, Kevin Yu, Yanlai Yang, Mindy Dai, Jie Qiu, Rebekah Tang, and Jenny Huang. 2020. Garbage In, Garbage Out? Do Machine Learn- ing Application Papers in Social Computing Report Where Human-Labeled Training Data Comes From? In Proceedings of the Conference on Fairness, Ac- countability, and Transparency, pages 325â336.
Oguzhan Gencoglu. 2020. Cyberbullying Detec- arXiv preprint tion with Fairness Constraints. arXiv:2005.06625.
Alexandra Reeve Givens and Meredith Ringel Morris. 2020. Centering Disability Perspecives in Algorith- mic Fairness, Accountability, and Transparency. In Proceedings of the Conference on Fairness, Account- ability, and Transparency, Barcelona, Spain.
Hila Gonen and Yoav Goldberg. 2019. Lipstick on a Pig: Debiasing Methods Cover up Systematic Gen- der Biases in Word Embeddings But do not Remove In Proceedings of the North American As- Them. sociation for Computational Linguistics (NAACL), pages 609â614, Minneapolis, MN.
Auto- matically Identifying Gender Issues in Machine arXiv preprint Translation using Perturbations. arXiv:2004.14065.
Ben Green. 2019. âGoodâ isnât good enough. In Pro- ceedings of the AI for Social Good Workshop, Van- couver, Canada.
Lisa J. Green. 2002. African American English: A Lin- guistic Introduction. Cambridge University Press.
Anthony G. Greenwald, Debbie E. McGhee, and Jor- dan L.K. Schwartz. 1998. Measuring individual dif- ferences in implicit cognition: The implicit associa- tion test. Journal of Personality and Social Psychol- ogy, 74(6):1464â1480.
Enoch Opanin Gyamï¬, Yunbo Rao, Miao Gou, and Yanhua Shao. 2020. deb2viz: Debiasing gender in word embedding data using subspace visualization. In Proceedings of the International Conference on Graphics and Image Processing.
Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M. Branham. 2018. Gender Recognition or Gender Re- ductionism? The Social Implications of Automatic Gender Recognition Systems. In Proceedings of the Conference on Human Factors in Computing Sys- tems (CHI), Montréal, Canada.
Alex Hanna, Emily Denton, Andrew Smart, and Jamila Smith-Loud. 2020. Towards a Critical Race Method- ology in Algorithmic Fairness. In Proceedings of the Conference on Fairness, Accountability, and Trans- parency, pages 501â512, Barcelona, Spain.
Madeline E. Heilman, Aaaron S. Wallen, Daniella Fuchs, and Melinda M. Tamkins. 2004. Penalties for Success: Reactions to Women Who Succeed at Male Gender-Typed Tasks. Journal of Applied Psy- chology, 89(3):416â427.
Jane H. Hill. 2008. The Everyday Language of White Racism. Wiley-Blackwell.
Dirk Hovy, Federico Bianchi, and Tommaso Fornaciari. 2020. Can You Translate that into Man? Commer- cial Machine Translation Systems Include Stylistic Biases. In Proceedings of the Association for Com- putational Linguistics (ACL).
Dirk Hovy and Anders Søgaard. 2015. Tagging Per- formance Correlates with Author Age. In Proceed- ings of the Association for Computational Linguis- tics and the International Joint Conference on Nat- ural Language Processing, pages 483â488, Beijing, China.
Dirk Hovy and Shannon L. Spruit. 2016. The social impact of natural language processing. In Proceed- ings of the Association for Computational Linguis- tics (ACL), pages 591â598, Berlin, Germany.
Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack W. Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. 2019. Reducing Sentiment Bias in Language Models arXiv preprint via Counterfactual Evaluation. arXiv:1911.03064.
Xiaolei Huang, Linzi Xing, Franck Dernoncourt, and Michael J. Paul. 2020. Multilingual Twitter Cor- pus and Baselines for Evaluating Demographic Bias In Proceedings of in Hate Speech Recognition. the Language Resources and Evaluation Conference (LREC), Marseille, France.
Christoph Hube, Maximilian Idahl, and Besnik Fetahu. 2020. Debiasing Word Embeddings from Sentiment Associations in Names. In Proceedings of the Inter- national Conference on Web Search and Data Min- ing, pages 259â267, Houston, TX.
Ben Hutchinson, Vinodkumar Prabhakaran, Emily Denton, Kellie Webster, Yu Zhong, and Stephen De- nuyl. 2020. Social Biases in NLP Models as Barriers for Persons with Disabilities. In Proceedings of the Association for Computational Linguistics (ACL).
Matei Ionita, Yury Kashnitsky, Ken Krige, Vladimir Larin, Dennis Logvinenko, and Atanas Atanasov. 2019. Resolving gendered ambiguous pronouns In Proceedings of the Workshop on with BERT. Gender Bias in Natural Language Processing, pages 113â119, Florence, Italy.
Hailey James-Sorenson and David Alvarez-Melis. 2019. Probabilistic Bias Mitigation in Word Embed- dings. In Proceedings of the Workshop on Human- Centric Machine Learning, Vancouver, Canada.
Shengyu Jia, Tao Meng, Jieyu Zhao, and Kai-Wei Chang. 2020. Mitigating Gender Bias Ampliï¬cation in Distribution by Posterior Regularization. In Pro- ceedings of the Association for Computational Lin- guistics (ACL).
Taylor Jones, Jessica Rose Kalbfeld, Ryan Hancock, and Robin Clark. 2019. Testifying while black: An experimental study of court reporter accuracy in tran- scription of African American English. Language, 95(2).
Anna Jørgensen, Dirk Hovy, and Anders Søgaard. 2015. Challenges of studying and processing dialects in In Proceedings of the Workshop on social media. Noisy User-Generated Text, pages 9â18, Beijing, China.
Anna Jørgensen, Dirk Hovy, and Anders Søgaard. 2016. Learning a POS tagger for AAVE-like language. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 1115â 1120, San Diego, CA.
Pratik Joshi, Sebastian Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The State and Fate of Linguistic Diversity and Inclusion in the NLP World. In Proceedings of the Association for Computational Linguistics (ACL).
Jaap Jumelet, Willem Zuidema, and Dieuwke Hupkes. 2019. Analysing Neural Language Models: Contex- tual Decomposition Reveals Default Reasoning in In Proceedings Number and Gender Assignment. of the Conference on Natural Language Learning, Hong Kong, China.
Participatory action re- search for Indigenous linguistics in the digital age. In Shannon T. Bischoff and Carmen Jany, editors, Insights from Practices in Community-Based Re- search, pages 164â175. De Gruyter Mouton.
David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. 2017. Incorporating Dialectal Variability for Socially Equi- table Language Identiï¬cation. In Proceedings of the Association for Computational Linguistics (ACL), pages 51â57, Vancouver, Canada.
Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. In Proceedings of the Association for Computational Linguistics (ACL), pages 1641â1650, Florence, Italy.
Saket Karve, Lyle Ungar, and João Sedoc. 2019. Con- ceptor debiasing of word representations evaluated on WEAT. In Proceedings of the Workshop on Gen- der Bias in Natural Language Processing, pages 40â 48, Florence, Italy.
Michael Katell, Meg Young, Dharma Dailey, Bernease Herman, Vivian Guetler, Aaron Tam, Corinne Bintz, Danielle Raz, and P.M. Krafft. 2020. Toward sit- uated interventions for algorithmic equity: lessons from the ï¬eld. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 45â55, Barcelona, Spain.
Stephen Kemmis. 2006. Participatory action research and the public sphere. Educational Action Research, 14(4):459â476.
The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition. Proceedings of the ACM on Human- Computer Interaction, 2(CSCW).
Os Keyes, Josephine Hoy, and Margaret Drouhard. 2019. Human-Computer Insurrection: Notes on an Anarchist HCI. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), Glas- gow, Scotland, UK.
Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santi- ago, and Vivek Datta. 2020. Intersectional Bias in Hate Speech and Abusive Language Datasets. In Proceedings of the Association for Computational Linguistics (ACL).
Svetlana Kiritchenko and Saif M. Mohammad. 2018. Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems. In Proceedings of the Joint Conference on Lexical and Computational Se- mantics, pages 43â53, New Orleans, LA.
Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically Categorizing Writ- ten Texts by Author Gender. Literary and Linguistic Computing, 17(4):401â412.
Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W. Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceed- ings of the Workshop on Gender Bias for Natu- ral Language Processing, pages 166â172, Florence, Italy.
Sonja L. Lanehart and Ayesha M. Malik. 2018. Black Is, Black Isnât: Perceptions of Language and Black- ness. In Jeffrey Reaser, Eric Wilbanks, Karissa Woj- cik, and Walt Wolfram, editors, Language Variety in the New South. University of North Carolina Press.
Brian N. Larson. 2017. Gender as a variable in natural- language processing: Ethical considerations. In Pro- ceedings of the Workshop on Ethics in Natural Lan- guage Processing, pages 30â40, Valencia, Spain.
Anne Lauscher and Goran GlavaÅ¡. 2019. Are We Con- sistently Biased? Multidimensional Analysis of Bi- ases in Distributional Word Vectors. In Proceedings of the Joint Conference on Lexical and Computa- tional Semantics, pages 85â91, Minneapolis, MN.
Anne Lauscher, Goran Glavaš, Simone Paolo Ponzetto, and Ivan Vuli´c. 2019. A General Framework for Im- plicit and Explicit Debiasing of Distributional Word Vector Spaces. arXiv preprint arXiv:1909.06092.
Christopher A. Le Dantec, Erika Shehan Poole, and Su- san P. Wyche. 2009. Values as Lived Experience: Evolving Value Sensitive Design in Support of Value Discovery. In Proceedings of the Conference on Hu- man Factors in Computing Systems (CHI), Boston, MA.
Nayeon Lee, Andrea Madotto, and Pascale Fung. 2019. Exploring Social Bias in Chatbots using Stereotype In Proceedings of the Workshop on Knowledge. Widening NLP, pages 177â180, Florence, Italy.
Wesley Y. Leonard. 2012. Reframing language recla- mation programmes for everybodyâs empowerment. Gender and Language, 6(2):339â367.
Paul Pu Liang, Irene Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2019. Towards Debiasing Sentence Representations. In Proceedings of the NeurIPS Workshop on Human- Centric Machine Learning, Vancouver, Canada.
English with an Ac- cent: Language, Ideology, and Discrimination in the United States. Routledge.
Bo Liu. 2019. Anonymized BERT: An Augmentation Approach to the Gendered Pronoun Resolution Chal- lenge. In Proceedings of the Workshop on Gender Bias in Natural Language Processing, pages 120â 125, Florence, Italy.
Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zi- tao Liu, and Jiliang Tang. 2019. Does Gender Mat- ter? Towards Fairness in Dialogue Systems. arXiv preprint arXiv:1910.10486.
Felipe Alfaro Lois, José A.R. Fonollosa, and Costa-jà . 2019. BERT Masked Language Modeling for Co- In Proceedings of the Work- reference Resolution. shop on Gender Bias in Natural Language Process- ing, pages 76â81, Florence, Italy.
Brandon C. Loudermilk. 2015. Implicit attitudes and the perception of sociolinguistic variation. In Alexei Prikhodkine and Dennis R. Preston, editors, Re- sponses to Language Varieties: Variability, pro- cesses and outcomes, pages 137â156.
Anastassia Loukina, Nitin Madnani, and Klaus Zech- ner. 2019. The many dimensions of algorithmic fair- ness in educational applications. In Proceedings of the Workshop on Innovative Use of NLP for Build- ing Educational Applications, pages 1â10, Florence, Italy.
Kaiji Lu, Peter Mardziel, Fangjing Wu, Preetam Aman- charla, and Anupam Datta. 2018. Gender bias in neural natural language processing. arXiv preprint arXiv:1807.11714.
Anne Maass. 1999. Linguistic intergroup bias: Stereo- type perpetuation through language. Advances in Experimental Social Psychology, 31:79â121.
Nitin Madnani, Anastassia Loukina, Alina von Davier, Jill Burstein, and Aoife Cahill. 2017. Building Bet- ter Open-Source Tools to Support Fairness in Auto- mated Scoring. In Proceedings of the Workshop on Ethics in Natural Language Processing, pages 41â 52, Valencia, Spain.
Thomas Manzini, Yao Chong Lim, Yulia Tsvetkov, and Alan W. Black. 2019. Black is to Criminal as Cau- casian is to Police: Detecting and Removing Multi- class Bias in Word Embeddings. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 801â809, Minneapolis, MN.
Ramón Antonio MartÃnez and Alexander Feliciano MejÃa. 2019. Looking closely and listening care- fully: A sociocultural approach to understanding the complexity of Latina/o/x studentsâ everyday lan- guage. Theory Into Practice.
Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. Itâs All in the Name: Mit- igating Gender Bias with Name-Based Counterfac- tual Data Substitution. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 5270â5278, Hong Kong, China.
Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On Measur- ing Social Biases in Sentence Encoders. In Proceed- ings of the North American Association for Compu- tational Linguistics (NAACL), pages 629â634, Min- neapolis, MN.
Elijah Mayï¬eld, Michael Madaio, Shrimai Prab- humoye, David Gerritsen, Brittany McLaughlin, Ezekiel Dixon-Roman, and Alan W. Black. 2019. Equity Beyond Bias in Language Technologies for Education. In Proceedings of the Workshop on Inno- vative Use of NLP for Building Educational Appli- cations, Florence, Italy.
Katherine McCurdy and OËguz Serbetçi. 2017. Gram- matical gender associations outweigh topical gender In Pro- bias in crosslinguistic word embeddings. ceedings of the Workshop for Women & Underrepre- sented Minorities in Natural Language Processing, Vancouver, Canada.
Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, and Aram Galstyan. 2019. Man is to Person as Woman is to Location: Measuring Gender Bias in Named Entity Recognition. arXiv preprint arXiv:1910.10872.
Michela Menegatti and Monica Rubini. 2017. Gender In Oxford Research bias and sexism in language. Encyclopedia of Communication. Oxford University Press.
Inom Mirzaev, Anthony Schulte, Michael Conover, and Sam Shah. 2019. Considerations for the interpreta- tion of bias measures of word embeddings. arXiv preprint arXiv:1906.08379.
Salikoko S. Mufwene, Guy Bailey, and John R. Rick- African-American English: ford, editors. 1998. Structure, History, and Use. Routledge.
Michael J. Muller. 2007. Participatory Design: The Third Space in HCI. In The Human-Computer Inter- action Handbook, pages 1087â1108. CRC Press.
Moin Nadeem, Anna Bethke, and Siva Reddy. StereoSet: Measuring stereotypical bias arXiv preprint 2020. in pretrained language models. arXiv:2004.09456.
Dong Nguyen, Rilana Gravel, Dolf Trieschnigg, and âHow Old Do You Think I Theo Meder. 2013. Am?â: A Study of Language and Age in Twitter. In Proceedings of the Conference on Web and Social Media (ICWSM), pages 439â448, Boston, MA.
Malvina Nissim, Rik van Noord, and Rob van der Goot. 2020. Fair is better than sensational: Man is to doc- tor as woman is to doctor. Computational Linguis- tics.
Debora Nozza, Claudia Volpetti, and Elisabetta Fersini. 2019. Unintended Bias in Misogyny Detection. In Proceedings of the Conference on Web Intelligence, pages 149â155.
Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kıcıman. 2019. Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries. Frontiers in Big Data, 2.
Alexandra Olteanu, Kartik Talamadupula, and Kush R. Varshney. 2017. The Limits of Abstract Evaluation Metrics: The Case of Hate Speech Detection. In Proceedings of the ACM Web Science Conference, Troy, NY.
Orestis Papakyriakopoulos, Simon Hegelich, Juan Car- los Medina Serrano, and Fabienne Marco. 2020. In Proceedings of the Bias in word embeddings. Conference on Fairness, Accountability, and Trans- parency, pages 446â457, Barcelona, Spain.
Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Re- ducing Gender Bias in Abusive Language Detection. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 2799â2804, Brussels, Belgium.
Inherent Disagreements in Human Textual Inferences. Trans- actions of the Association for Computational Lin- guistics, 7:677â694.
Xiangyu Peng, Siyan Li, Spencer Frazier, and Mark Riedl. 2020. Fine-Tuning a Transformer-Based Lan- guage Model to Avoid Generating Non-Normative Text. arXiv preprint arXiv:2001.08764.
Radomir Popovi´c, Florian Lemmerich, and Markus Joint Multiclass Debiasing of Strohmaier. 2020. In Proceedings of the Interna- Word Embeddings. tional Symposium on Intelligent Systems, Graz, Aus- tria.
Vinodkumar Prabhakaran, Ben Hutchinson, and Mar- garet Mitchell. 2019. Perturbation Sensitivity Anal- In Pro- ysis to Detect Unintended Model Biases. ceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 5744â5749, Hong Kong, China.
Shrimai Prabhumoye, Elijah Mayï¬eld, and Alan W. Black. 2019. Principled Frameworks for Evaluating Ethics in NLP Systems. In Proceedings of the Work- shop on Innovative Use of NLP for Building Educa- tional Applications, Florence, Italy.
Marcelo Prates, Pedro Avelar, and Luis C. Lamb. 2019. Assessing gender bias in machine translation: A case study with google translate. Neural Computing and Applications.
Rasmus Précenth. 2019. Word embeddings and gender stereotypes in Swedish and English. Masterâs thesis, Uppsala University.
Dennis R. Preston. 2009. Are you really smart (or stupid, or cute, or ugly, or cool)? Or do you just talk that way? Language attitudes, standardization and language change. Oslo: Novus forlag, pages 105â 129.
Flavien Prost, Nithum Thain, and Tolga Bolukbasi. 2019. Debiasing Embeddings for Reduced Gender In Proceedings of the Bias in Text Classiï¬cation. Workshop on Gender Bias in Natural Language Pro- cessing, pages 69â75, Florence, Italy.
Reid Pryzant, Richard Diehl Martinez, Nathan Dass, Sadao Kurohashi, Dan Jurafsky, and Diyi Yang. 2020. Automatically Neutralizing Subjective Bias in Text. In Proceedings of the AAAI Conference on Artiï¬cial Intelligence (AAAI), New York, NY.
Arun K. Pujari, Ansh Mittal, Anshuman Padhi, An- shul Jain, Mukesh Jadon, and Vikas Kumar. 2019. Debiasing Gender biased Hindi Words with Word- In Proceedings of the International embedding. Conference on Algorithms, Computing and Artiï¬cial Intelligence, pages 450â456.
Yusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun. 2019. Reducing gender bias in word-level
language models with a gender-equalizing loss func- In Proceedings of the ACL Student Research tion. Workshop, pages 223â228, Florence, Italy.
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null It Out: Guarding Protected Attributes by Iterative Nullspace In Proceedings of the Association for Projection. Computational Linguistics (ACL).
John R. Rickford and Sharese King. 2016. Language and linguistics on trial: Hearing Rachel Jeantel (and other vernacular speakers) in the courtroom and be- yond. Language, 92(4):948â988.
Anthony Rios. 2020. FuzzE: Fuzzy Fairness Evalua- tion of Offensive Language Classiï¬ers on African- American English. In Proceedings of the AAAI Con- ference on Artiï¬cial Intelligence (AAAI), New York, NY.
Gerald Roche. 2019. Articulating language oppres- sion: colonialism, coloniality and the erasure of Ti- betâ ËA ´Zs minority languages. Patterns of Prejudice.
Alexey Romanov, Maria De-Arteaga, Hanna Wal- lach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kentha- padi, Anna Rumshisky, and Adam Tauman Kalai. 2019. Whatâs in a Name? Reducing Bias in Bios without Access to Protected Attributes. In Proceed- ings of the North American Association for Com- putational Linguistics (NAACL), pages 4187â4195, Minneapolis, MN.
Jonathan Rosa. 2019. Contesting Representations of Migrant âIllegalityâ through the Drop the I-Word Campaign: Rethinking Language Change and So- cial Change. In Netta Avineri, Laura R. Graham, Eric J. Johnson, Robin Conley Riner, and Jonathan Rosa, editors, Language and Social Justice in Prac- tice. Routledge.
Jonathan Rosa and Christa Burdick. 2017. Language In Ofelia GarcÃa, Nelson Flores, and Ideologies. Massimiliano Spotti, editors, The Oxford Handbook of Language and Society. Oxford University Press.
Jonathan Rosa and Nelson Flores. 2017. Unsettling race and language: Toward a raciolinguistic perspec- tive. Language in Society, 46:621â647.
Sara Rosenthal and Kathleen McKeown. 2011. Age Prediction in Blogs: A Study of Style, Content, and Online Behavior in Pre- and Post-Social Media Gen- erations. In Proceedings of the North American As- sociation for Computational Linguistics (NAACL), pages 763â772, Portland, OR.
and Andrei Barbu. 2020. Measuring Social Biases in Grounded Vi- arXiv preprint sion and Language Embeddings. arXiv:2002.08911.
Richard Rothstein. 2017. The Color of Law: A For- gotten History of How Our Government Segregated America. Liveright Publishing.
David Rozado. 2020. Wide range screening of algo- rithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types. PLOS One.
Elayne Ruane, Abeba Birhane, and Anthony Ven- tresque. 2019. Conversational AI: Social and Ethi- cal Considerations. In Proceedings of the Irish Con- ference on Artiï¬cial Intelligence and Cognitive Sci- ence, Galway, Ireland.
and Benjamin Van Durme. 2017. Social bias in elicited natural lan- guage inferences. In Proceedings of the Workshop on Ethics in Natural Language Processing, pages 74â79, Valencia, Spain.
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender Bias In Proceedings of the in Coreference Resolution. North American Association for Computational Lin- guistics (NAACL), pages 8â14, New Orleans, LA.
Elizabeth B.N. Sanders. 2002. From user-centered to participatory design approaches. In Jorge Frascara, editor, Design and the Social Sciences: Making Con- nections, pages 18â25. CRC Press.
Brenda Salenave Santana, Vinicius Woloszyn, and Le- andro Krug Wives. 2018. Is there gender bias and stereotype in Portuguese word embeddings? In Proceedings of the International Conference on the Computational Processing of Portuguese Student Re- search Workshop, Canela, Brazil.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. 2019. The risk of racial bias in hate speech detection. In Proceedings of the Asso- ciation for Computational Linguistics (ACL), pages 1668â1678, Florence, Italy.
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Juraf- sky, Noah A. Smith, and Yejin Choi. 2020. Social Bias Frames: Reasoning about Social and Power Im- plications of Language. In Proceedings of the Asso- ciation for Computational Linguistics (ACL).
Hanna Sassaman, Jennifer Lee, Jenessa Irvine, and Shankar Narayan. 2020. Creating Community- Based Tech Policy: Case Studies, Lessons Learned, and What Technologists and Communities Can Do Together. In Proceedings of the Conference on Fair- ness, Accountability, and Transparency, Barcelona, Spain.
Danielle Saunders and Bill Byrne. 2020. Reducing Gender Bias in Neural Machine Translation as a Do- main Adaptation Problem. In Proceedings of the As- sociation for Computational Linguistics (ACL).
Tyler Schnoebelen. 2017. Goal-Oriented Design for Ethical Machine Learning and NLP. In Proceedings of the Workshop on Ethics in Natural Language Pro- cessing, pages 88â93, Valencia, Spain.
Sabine Sczesny, Magda Formanowicz, and Franziska Moser. 2016. Can gender-fair language reduce gen- der stereotyping and discrimination? Frontiers in Psychology, 7.
João Sedoc and Lyle Ungar. 2019. The Role of Pro- tected Class Word Lists in Bias Identiï¬cation of Con- In Proceedings textualized Word Representations. of the Workshop on Gender Bias in Natural Lan- guage Processing, pages 55â61, Florence, Italy.
Procheta Sen and Debasis Ganguly. 2020. Towards So- cially Responsible AI: Cognitive Bias-Aware Multi- In Proceedings of the AAAI Objective Learning. Conference on Artiï¬cial Intelligence (AAAI), New York, NY.
Deven Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive Biases in Natural Language Pro- cessing Models: A Conceptual Framework and In Proceedings of the Association for Overview. Computational Linguistics (ACL).
Judy Hanwen Shen, Lauren Fratamico, Iyad Rahwan, and Alexander M. Rush. 2018. Darling or Baby- girl? Investigating Stylistic Bias in Sentiment Anal- In Proceedings of the Workshop on Fairness, ysis. Accountability, and Transparency (FAT/ML), Stock- holm, Sweden.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The Woman Worked as a Babysitter: On Biases in Language Generation. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 3398â3403, Hong Kong, China.
Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Towards Controllable arXiv preprint and Nanyun Peng. 2020. Biases in Language Generation. arXiv:2005.00268.
Seungjae Shin, Kyungwoo Song, JoonHo Jang, Hyemi Kim, Weonyoung Joo, and Il-Chul Moon. 2020. Neutralizing Gender Bias in Word Embedding with Latent Disentanglement and Counterfactual Genera- tion. arXiv preprint arXiv:2004.03133.
Jesper Simonsen and Toni Robertson, editors. 2013. Routledge International Handbook of Participatory Design. Routledge.
Gabriel Stanovsky, Noah A. Smith, and Luke Zettle- moyer. 2019. Evaluating gender bias in machine In Proceedings of the Association for translation. Computational Linguistics (ACL), pages 1679â1684, Florence, Italy.
Yolande Strengers, Lizhe Qu, Qiongkai Xu, and Jarrod Knibbe. 2020. Adhering, Steering, and Queering: Treatment of Gender in Natural Language Genera- tion. In Proceedings of the Conference on Human Factors in Computing Systems (CHI), Honolulu, HI.
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating Gender Bias in Natural Lan- In Proceed- guage Processing: Literature Review. ings of the Association for Computational Linguis- tics (ACL), pages 1630â1640, Florence, Italy.
Adam Sutton, Thomas Lansdall-Welfare, and Nello Cristianini. 2018. Biased embeddings from wild data: Measuring, understanding and removing. In Proceedings of the International Symposium on Intelligent Data Analysis, pages 328â339, âs- Hertogenbosch, Netherlands.
Chris Sweeney and Maryam Najaï¬an. 2019. A Trans- parent Framework for Evaluating Unintended De- mographic Bias in Word Embeddings. In Proceed- ings of the Association for Computational Linguis- tics (ACL), pages 1662â1667, Florence, Italy.
Chris Sweeney and Maryam Najaï¬an. 2020. Reduc- ing sentiment polarity for demographic attributes in word embeddings using adversarial learning. In the Conference on Fairness, Ac- Proceedings of countability, and Transparency, pages 359â368, Barcelona, Spain.
Nathaniel Swinger, Maria De-Arteaga, Neil Thomas Heffernan, Mark D.M. Leiserson, and Adam Tau- man Kalai. 2019. What are the biases in my word embedding? In Proceedings of the Conference on Artiï¬cial Intelligence, Ethics, and Society (AIES), Honolulu, HI.
Samson Tan, Shaï¬q Joty, Min-Yen Kan, and Richard Socher. 2020. Itâs Morphinâ Time! Combating Linguistic Discrimination with Inï¬ectional Perturba- tions. In Proceedings of the Association for Compu- tational Linguistics (ACL).
Yi Chern Tan and L. Elisa Celis. 2019. Assessing Social and Intersectional Biases in Contextualized In Proceedings of the Con- Word Representations. ference on Neural Information Processing Systems, Vancouver, Canada.
J. Michael Terry, Randall Hendrick, Evangelos Evan- Variable gelou, and Richard L. Smith. 2010. dialect switching among African American chil- dren: Inferences about working memory. Lingua, 120(10):2463â2475.
Joel Tetreault, Daniel Blanchard, and Aoife Cahill. 2013. A Report on the First Native Language Iden- tiï¬cation Shared Task. In Proceedings of the Work- shop on Innovative Use of NLP for Building Educa- tional Applications, pages 48â57, Atlanta, GA.
Mike Thelwall. 2018. Gender Bias in Sentiment Anal- ysis. Online Information Review, 42(1):45â57.
Kristen Vaccaro, Karrie Karahalios, Deirdre K. Mul- ligan, Daniel Kluttz, and Tad Hirsch. 2019. Con- In Conference testability in Algorithmic Systems. Companion Publication of the 2019 on Computer
Supported Cooperative Work and Social Computing, pages 523â527, Austin, TX.
Ameya Vaidya, Feng Mai, and Yue Ning. 2019. Em- pirical Analysis of Multi-Task Learning for Reduc- ing Model Bias in Toxic Comment Detection. arXiv preprint arXiv:1909.09758v2.
Eva Vanmassenhove, Christian Hardmeier, and Andy Way. 2018. Getting Gender Right in Neural Ma- In Proceedings of Empirical chine Translation. Methods in Natural Language Processing (EMNLP), pages 3003â3008, Brussels, Belgium.
Jesse Vig, Sebastian Gehrmann, Yonatan Belinkov, Sharon Qian, Daniel Nevo, Yaron Singer, and Stu- art Shieber. 2020. Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias. arXiv preprint arXiv:2004.12265.
Tianlu Wang, Xi Victoria Lin, Nazneen Fatema Ra- jani, Bryan McCann, Vicente Ordonez, and Caim- ing Xiong. 2020. Double-Hard Debias: Tailoring In Word Embeddings for Gender Bias Mitigation. Proceedings of the Association for Computational Linguistics (ACL).
Zili Wang. 2019. MSnet: A BERT-based Network for In Proceedings of Gendered Pronoun Resolution. the Workshop on Gender Bias in Natural Language Processing, pages 89â95, Florence, Italy.
Kellie Webster, Marta R. Costa-jussà , Christian Hard- meier, and Will Radford. 2019. Gendered Ambigu- ous Pronoun (GAP) Shared Task at the Gender Bias in NLP Workshop 2019. In Proceedings of the Work- shop on Gender Bias in Natural Language Process- ing, pages 1â7, Florence, Italy.
Kellie Webster, Marta Recasens, Vera Axelrod, and Ja- son Baldridge. 2018. Mind the GAP: A balanced corpus of gendered ambiguous pronouns. Transac- tions of the Association for Computational Linguis- tics, 6:605â618.
Walt Wolfram and Natalie Schilling. 2015. American English: Dialects and Variation, 3 edition. Wiley Blackwell.
Austin P. Wright, Omar Shaikh, Haekyu Park, Will Ep- person, Muhammed Ahmed, Stephane Pinel, Diyi Yang, and Duen Horng (Polo) Chau. 2020. RE- CAST: Interactive Auditing of Automatic Toxicity In Proceedings of the Con- Detection Models. ference on Human Factors in Computing Systems (CHI), Honolulu, HI.
Yinchuan Xu and Junlin Yang. 2019. Look again at the syntax: Relational graph convolutional network for gendered ambiguous pronoun resolution. In Pro- ceedings of the Workshop on Gender Bias in Natu- ral Language Processing, pages 96â101, Florence, Italy.
Kai-Chou Yang, Timothy Niven, Tzu-Hsuan Chou, and Fill the GAP: Exploiting Hung-Yu Kao. 2019. In Proceedings of BERT for Pronoun Resolution. the Workshop on Gender Bias in Natural Language Processing, pages 102â106, Florence, Italy.
Zekun Yang and Juan Feng. 2020. A Causal Inference Method for Reducing Gender Bias in Word Embed- In Proceedings of the AAAI Con- ding Relations. ference on Artiï¬cial Intelligence (AAAI), New York, NY.
Daisy Yoo, Anya Ernest, Soï¬a Serholt, Eva Eriksson, and Peter Dalsgaard. 2019. Service Design in HCI Research: The Extended Value Co-creation Model. In Proceedings of the Halfway to the Future Sympo- sium, Nottingham, United Kingdom.
Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with ad- versarial learning. In Proceedings of the Conference on Artiï¬cial Intelligence, Ethics, and Society (AIES), New Orleans, LA.
Guanhua Zhang, Bing Bai, Junqi Zhang, Kun Bai, Con- ghui Zhu, and Tiejun Zhao. 2020a. Demographics Should Not Be the Reason of Toxicity: Mitigating Discrimination in Text Classiï¬cations with Instance In Proceedings of the Association for Weighting. Computational Linguistics (ACL).
Haoran Zhang, Amy X. Lu, Mohamed Abdalla, Matthew McDermott, and Marzyeh Ghassemi. 2020b. Hurtful Words: Quantifying Biases in Clin- ical Contextual Word Embeddings. In Proceedings of the ACM Conference on Health, Inference, and Learning.
Jieyu Zhao, Subhabrata Mukherjee, Saghar Hosseini, Kai-Wei Chang, and Ahmed Hassan Awadallah. 2020. Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer. In Proceedings of the Asso- ciation for Computational Linguistics (ACL).
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot- terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender Bias in Contextualized Word Embeddings. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 629â 634, Minneapolis, MN.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias ampliï¬cation us- In Proceedings of ing corpus-level constraints. Empirical Methods in Natural Language Process- ing (EMNLP), pages 2979â2989, Copenhagen, Den- mark.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018a. Gender Bias in Coreference Resolution: Evaluation and Debias- ing Methods. In Proceedings of the North American Association for Computational Linguistics (NAACL), pages 15â20, New Orleans, LA.
Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018b. Learning Gender-Neutral Word Embeddings. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP), pages 4847â4853, Brussels, Belgium.
Alina Zhiltsova, Simon Caton, and Catherine Mulwa. 2019. Mitigation of Unintended Biases against Non- Native English Texts in Sentiment Analysis. In Pro- ceedings of the Irish Conference on Artiï¬cial Intelli- gence and Cognitive Science, Galway, Ireland.
Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, and Kai-Wei Chang. 2019. Examin- ing gender bias in languages with grammatical gen- ders. In Proceedings of Empirical Methods in Nat- ural Language Processing (EMNLP), pages 5279â 5287, Hong Kong, China.
Ran Zmigrod, S. J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the Association for Computational Linguistics (ACL), pages 1651â 1661, Florence, Italy.
# A Appendix
In Table 3, we provide examples of the papersâ mo- tivations and techniques across several NLP tasks.
# A.1 Categorization details
In this section, we provide some additional details about our methodâspeciï¬cally, our categorization.
What counts as being covered by an NLP task? We considered a paper to cover a given NLP task if it analyzed âbiasâ with respect to that task, but not if it only evaluated overall performance on that task. For example, a paper examining the impact of miti- gating âbiasâ in word embeddings on âbiasâ in sen- timent analysis would be counted as covering both NLP tasks. In contrast, a paper assessing whether performance on sentiment analysis degraded after mitigating âbiasâ in word embeddings would be counted only as focusing on embeddings.
What counts as a motivation? We considered a motivation to include any description of the prob- lem that motivated the paper or proposed quantita- tive technique, including any normative reasoning. We excluded from the âVague/unstatedâ cate- gory of motivations the papers that participated in the Gendered Ambiguous Pronoun (GAP) Shared Task at the First ACL Workshop on Gender Bias in NLP. In an ideal world, shared task papers would engage with âbiasâ more critically, but given the nature of shared tasks it is understandable that they
do not. As a result, we excluded them from our counts for techniques as well. We cite the papers here; most propose techniques we would have cate- gorized as âQuestionable correlations,â with a few as âOther representational harmsâ (Abzaliev, 2019; Attree, 2019; Bao and Qiao, 2019; Chada, 2019; Ionita et al., 2019; Liu, 2019; Lois et al., 2019; Wang, 2019; Xu and Yang, 2019; Yang et al., 2019). We excluded Dabas et al. (2020) from our survey because we could not determine what this paperâs user study on fairness was actually measuring.
Finally, we actually categorized the motivation for Liu et al. (2019) (i.e., the last row in Table 3) as âQuestionable correlationsâ due to a sentence else- where in the paper; had the paragraph we quoted been presented without more detail, we would have categorized the motivation as âVague/unstated.â
# A.2 Full categorization: Motivations
Allocational harms Hovy and Spruit (2016); Caliskan et al. (2017); Madnani et al. (2017); Dixon et al. (2018); Kiritchenko and Mohammad (2018); Shen et al. (2018); Zhao et al. (2018b); Bhaskaran and Bhallamudi (2019); Bordia and Bowman (2019); Brunet et al. (2019); Chaloner and Maldonado (2019); De-Arteaga et al. (2019); Dev and Phillips (2019); Font and Costa-jussà (2019); James-Sorenson and Alvarez-Melis (2019); Kurita et al. (2019); Mayï¬eld et al. (2019); Pu- jari et al. (2019); Romanov et al. (2019); Ruane et al. (2019); Sedoc and Ungar (2019); Sun et al. (2019); Zmigrod et al. (2019); Hutchinson et al. (2020); Papakyriakopoulos et al. (2020); Ravfo- gel et al. (2020); Strengers et al. (2020); Sweeney and Najaï¬an (2020); Tan et al. (2020); Zhang et al. (2020b).
Stereotyping Bolukbasi (2016a,b); Caliskan et al. (2017); McCurdy and Serbetçi (2017); Rudinger et al. (2017); Zhao et al. (2017); Curry and Rieser (2018); DÃaz et al. (2018); Santana et al. (2018); Sutton et al. (2018); Zhao et al. (2018a,b); Agarwal et al. (2019); Basta et al. (2019); Bhaskaran and Bhallamudi (2019); Bordia and Bowman (2019); Brunet et al. (2019); Cao and Daumé (2019); Chaloner and Maldonado (2019); Cho et al. (2019); Dev and Phillips (2019); Font and Costa-jussà (2019); Gonen and Goldberg (2019); James-Sorenson and Alvarez-Melis (2019); Kaneko and Bollegala (2019); Karve et al. (2019); Kurita et al. (2019); Lauscher and GlavaÅ¡ (2019); Lee et al. (2019); Manzini et al. (2019); Mayï¬eld
Categories NLP task Stated motivation Motivations Techniques Language modeling (Bordia and Bowman, 2019) âExisting biases in data can be ampliï¬ed by models and the resulting output consumed by the public can inï¬uence them, en- courage and reinforce harmful stereotypes, or distort the truth. Automated systems that depend on these models can take prob- lematic actions based on biased proï¬ling of individuals.â Allocational harms, stereotyping Questionable correlations Sentiment analysis (Kiritchenko and Mohammad, 2018) âOther biases can be inappropriate and result in negative ex- periences for some groups of people. Examples include, loan eligibility and crime recidivism prediction systems...and resumé sorting systems that believe that men are more qualiï¬ed to be programmers than women (Bolukbasi et al., 2016). Similarly, sentiment and emotion analysis systems can also perpetuate and accentuate inappropriate human biases, e.g., systems that consider utterances from one race or gender to be less positive simply be- cause of their race or gender, or customer support systems that prioritize a call from an angry male over a call from the equally angry female.â Allocational harms, other representational harms (system performance differences w.r.t. text written by different social groups) Questionable correlations (differences in sentiment intensity scores w.r.t. text about different social groups) Machine translation (Cho et al., 2019) â[MT training] may incur an association of gender-speciï¬ed pro- nouns (in the target) and gender-neutral ones (in the source) for lexicon pairs that frequently collocate in the corpora. We claim that this kind of phenomenon seriously threatens the fairness of a translation system, in the sense that it lacks generality and inserts social bias to the inference. Moreover, the input is not fully cor- rect (considering gender-neutrality) and might offend the users who expect fairer representations.â Questionable correlations, other representational harms Questionable correlations Machine translation (Stanovsky et al., 2019) âLearned models exhibit social bias when their training data encode stereotypes not relevant for the task, but the correlations are picked up anyway.â Stereotyping, questionable correlations Stereotyping, other representational harms (system performance differences), questionable correlations Type-level embeddings (Zhao et al., 2018b) âHowever, embeddings trained on human-generated corpora have been demonstrated to inherit strong gender stereotypes that re- ï¬ect social constructs....Such a bias substantially affects down- stream applications....This concerns the practitioners who use the embedding model to build gender-sensitive applications such as a resume ï¬ltering system or a job recommendation system as the automated system may discriminate candidates based on their gender, as reï¬ected by their name. Besides, biased embeddings may implicitly affect downstream applications used in our daily lives. For example, when searching for âcomputer scientistâ using a search engine...a search algorithm using an embedding model in the backbone tends to rank male scientists higher than femalesâ [sic], hindering women from being recognized and further exac- erbating the gender inequality in the community.â Allocational harms, stereotyping, other representational harms Stereotyping Vague Stereotyping
# Type-level and contextu- alized embeddings (May et al., 2019)
â[P]rominent word embeddings such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) encode systematic biases against women and black people (Bolukbasi et al., 2016; Garg et al., 2018), implicating many NLP systems in scaling up social injustice.â
# Dialogue generation (Liu et al., 2019)
âSince the goal of dialogue systems is to talk with users...if the systems show discriminatory behaviors in the interactions, the user experience will be adversely affected. Moreover, public com- mercial chatbots can get resisted for their improper speech.â
|
# Vague/unstated
# Stereotyping, other representational harms, questionable correlations
Table 3: Examples of the categories into which the papersâ motivations and proposed quantitative techniques for measuring or mitigating âbiasâ fall. Bold text in the quotes denotes the content that yields our categorizations.
et al. (2019); Précenth (2019); Pujari et al. (2019); Ruane et al. (2019); Stanovsky et al. (2019); Sun et al. (2019); Tan and Celis (2019); Webster et al. (2019); Zmigrod et al. (2019); Gyamï¬ et al. (2020); Hube et al. (2020); Hutchinson et al. (2020); Kim et al. (2020); Nadeem et al. (2020); Papakyriakopoulos et al. (2020); Ravfogel et al. (2020); Rozado (2020); Sen and Ganguly (2020); Shin et al. (2020); Strengers et al. (2020).
Other representational harms Hovy and Sø- gaard (2015); Blodgett et al. (2016); Bolukbasi et al. (2016b); Hovy and Spruit (2016); Blodgett and OâConnor (2017); Larson (2017); Schnoebelen (2017); Blodgett et al. (2018); Curry and Rieser (2018); DÃaz et al. (2018); Dixon et al. (2018); Kir- itchenko and Mohammad (2018); Park et al. (2018); Shen et al. (2018); Thelwall (2018); Zhao et al. (2018b); Badjatiya et al. (2019); Bagdasaryan et al. (2019); Bamman et al. (2019); Cao and Daumé (2019); Chaloner and Maldonado (2019); Cho et al. (2019); Davidson et al. (2019); De-Arteaga et al. (2019); Fisher (2019); Font and Costa-jussà (2019); Garimella et al. (2019); Loukina et al. (2019); May- ï¬eld et al. (2019); Mehrabi et al. (2019); Nozza et al. (2019); Prabhakaran et al. (2019); Romanov et al. (2019); Ruane et al. (2019); Sap et al. (2019); Sheng et al. (2019); Sun et al. (2019); Sweeney and Najaï¬an (2019); Vaidya et al. (2019); Gaut et al. (2020); Gencoglu (2020); Hovy et al. (2020); Hutchinson et al. (2020); Kim et al. (2020); Peng et al. (2020); Rios (2020); Sap et al. (2020); Shah et al. (2020); Sheng et al. (2020); Tan et al. (2020); Zhang et al. (2020a,b).
Questionable Jørgensen et al. (2015); Hovy and Spruit (2016); Madnani et al. (2017); Rudinger et al. (2017); Zhao et al. (2017); Burns et al. (2018); Dixon et al. (2018); Kir- itchenko and Mohammad (2018); Lu et al. (2018); Park et al. (2018); Shen et al. (2018); Zhang et al. (2018); Badjatiya et al. (2019); Bhargava and Forsyth (2019); Cao and Daumé (2019); Cho et al. (2019); Davidson et al. (2019); Dev et al. (2019); Garimella et al. (2019); Garg et al. (2019); Huang et al. (2019); James-Sorenson and Alvarez- Melis (2019); Kaneko and Bollegala (2019); Liu et al. (2019); Karve et al. (2019); Nozza et al. (2019); Prabhakaran et al. (2019); Romanov et al. (2019); Sap et al. (2019); Sedoc and Ungar (2019); Stanovsky et al. (2019); Sweeney and Najaï¬an (2019); Vaidya et al. (2019); Zhiltsova et al. (2019);
Chopra et al. (2020); Gonen and Webster (2020); Gyamï¬ et al. (2020); Hube et al. (2020); Ravfogel et al. (2020); Rios (2020); Ross et al. (2020); Saun- ders and Byrne (2020); Sen and Ganguly (2020); Shah et al. (2020); Sweeney and Najaï¬an (2020); Yang and Feng (2020); Zhang et al. (2020a).
Vague/unstated Rudinger et al. (2018); Webster et al. (2018); Dinan et al. (2019); Florez (2019); Jumelet et al. (2019); Lauscher et al. (2019); Liang et al. (2019); Maudslay et al. (2019); May et al. (2019); Prates et al. (2019); Prost et al. (2019); Qian et al. (2019); Swinger et al. (2019); Zhao et al. (2019); Zhou et al. (2019); Ethayarajh (2020); Huang et al. (2020); Jia et al. (2020); Popovi´c et al. (2020); Pryzant et al. (2020); Vig et al. (2020); Wang et al. (2020); Zhao et al. (2020).
Surveys, and meta-analyses Hovy and Spruit (2016); Larson (2017); McCurdy and Serbetçi (2017); Schnoebelen (2017); Basta et al. (2019); Ethayarajh et al. (2019); Gonen and Goldberg (2019); Lauscher and GlavaÅ¡ (2019); Loukina et al. (2019); Mayï¬eld et al. (2019); Mirzaev et al. (2019); Prabhumoye et al. (2019); Ruane et al. (2019); Sedoc and Ungar (2019); Sun et al. (2019); Nissim et al. (2020); Rozado (2020); Shah et al. (2020); Strengers et al. (2020); Wright et al. (2020).
# B Full categorization: Techniques
Allocational harms De-Arteaga et al. (2019); Prost et al. (2019); Romanov et al. (2019); Zhao et al. (2020).
Stereotyping Bolukbasi (2016a,b); Caliskan et al. (2017); McCurdy and Serbetçi (2017); DÃaz et al. (2018); Santana et al. (2018); Sutton et al. (2018); Zhang et al. (2018); Zhao et al. (2018a,b); Agarwal et al. (2019); Basta et al. (2019); Bhaskaran and Bhallamudi (2019); Brunet et al. (2019); Cao and Daumé (2019); Chaloner and Maldonado (2019); Dev and Phillips (2019); Ethayarajh et al. (2019); Gonen and Goldberg (2019); James-Sorenson and Alvarez-Melis (2019); Jumelet et al. (2019); Kaneko and Bollegala (2019); Karve et al. (2019); Kurita et al. (2019); Lauscher and GlavaÅ¡ (2019); Lauscher et al. (2019); Lee et al. (2019); Liang et al. (2019); Liu et al. (2019); Manzini et al. (2019); Maudslay et al. (2019); May et al. (2019); Mirzaev et al. (2019); Prates et al. (2019); Précenth (2019); Prost et al. (2019); Pujari et al. (2019); Qian et al. (2019);
Sedoc and Ungar (2019); Stanovsky et al. (2019); Tan and Celis (2019); Zhao et al. (2019); Zhou et al. (2019); Chopra et al. (2020); Gyamï¬ et al. (2020); Nadeem et al. (2020); Nissim et al. (2020); Papakyriakopoulos et al. (2020); Popovi´c et al. (2020); Ravfogel et al. (2020); Ross et al. (2020); Rozado (2020); Saunders and Byrne (2020); Shin et al. (2020); Vig et al. (2020); Wang et al. (2020); Yang and Feng (2020); Zhao et al. (2020).
Other representational harms Jørgensen et al. (2015); Hovy and Søgaard (2015); Blodgett et al. (2016); Blodgett and OâConnor (2017); Blodgett et al. (2018); Curry and Rieser (2018); Dixon et al. (2018); Park et al. (2018); Thelwall (2018); Web- ster et al. (2018); Badjatiya et al. (2019); Bag- dasaryan et al. (2019); Bamman et al. (2019); Bhar- gava and Forsyth (2019); Cao and Daumé (2019); Font and Costa-jussà (2019); Garg et al. (2019); Garimella et al. (2019); Liu et al. (2019); Louk- ina et al. (2019); Mehrabi et al. (2019); Nozza et al. (2019); Sap et al. (2019); Sheng et al. (2019); Stanovsky et al. (2019); Vaidya et al. (2019); Webster et al. (2019); Ethayarajh (2020); Gaut et al. (2020); Gencoglu (2020); Hovy et al. (2020); Huang et al. (2020); Kim et al. (2020); Peng et al. (2020); Ravfogel et al. (2020); Rios (2020); Sap et al. (2020); Saunders and Byrne (2020); Sheng et al. (2020); Sweeney and Najaï¬an (2020); Tan et al. (2020); Zhang et al. (2020a,b).
Questionable correlations Jurgens et al. (2017); Madnani et al. (2017); Rudinger et al. (2017); Zhao et al. (2017); Burns et al. (2018); DÃaz et al. (2018); Kiritchenko and Mohammad (2018); Lu et al. (2018); Rudinger et al. (2018); Shen et al. (2018); Bordia and Bowman (2019); Cao and Daumé (2019); Cho et al. (2019); David- son et al. (2019); Dev et al. (2019); Dinan et al. (2019); Fisher (2019); Florez (2019); Font and Costa-jussà (2019); Garg et al. (2019); Huang et al. (2019); Liu et al. (2019); Nozza et al. (2019); Prabhakaran et al. (2019); Qian et al. (2019); Sap et al. (2019); Stanovsky et al. (2019); Sweeney and Najaï¬an (2019); Swinger et al. (2019); Zhiltsova et al. (2019); Zmigrod et al. (2019); Hube et al. (2020); Hutchinson et al. (2020); Jia et al. (2020); Papakyriakopoulos et al. (2020); Popovi´c et al. (2020); Pryzant et al. (2020); Saunders and Byrne (2020); Sen and Ganguly (2020); Shah et al. (2020); Sweeney and Najaï¬an (2020); Zhang et al. (2020b).
# Vague/unstated None.
Surveys, and meta-analyses Hovy and Spruit (2016); Larson (2017); McCurdy and Serbetçi (2017); Schnoebelen (2017); Basta et al. (2019); Ethayarajh et al. (2019); Gonen and Goldberg (2019); Lauscher and GlavaÅ¡ (2019); Loukina et al. (2019); Mayï¬eld et al. (2019); Mirzaev et al. (2019); Prabhumoye et al. (2019); Ruane et al. (2019); Sedoc and Ungar (2019); Sun et al. (2019); Nissim et al. (2020); Rozado (2020); Shah et al. (2020); Strengers et al. (2020); Wright et al. (2020). | {
"id": "1910.10486"
} |
2005.14165 | Language Models are Few-Shot Learners | Recent work has demonstrated substantial gains on many NLP tasks and
benchmarks by pre-training on a large corpus of text followed by fine-tuning on
a specific task. While typically task-agnostic in architecture, this method
still requires task-specific fine-tuning datasets of thousands or tens of
thousands of examples. By contrast, humans can generally perform a new language
task from only a few examples or from simple instructions - something which
current NLP systems still largely struggle to do. Here we show that scaling up
language models greatly improves task-agnostic, few-shot performance, sometimes
even reaching competitiveness with prior state-of-the-art fine-tuning
approaches. Specifically, we train GPT-3, an autoregressive language model with
175 billion parameters, 10x more than any previous non-sparse language model,
and test its performance in the few-shot setting. For all tasks, GPT-3 is
applied without any gradient updates or fine-tuning, with tasks and few-shot
demonstrations specified purely via text interaction with the model. GPT-3
achieves strong performance on many NLP datasets, including translation,
question-answering, and cloze tasks, as well as several tasks that require
on-the-fly reasoning or domain adaptation, such as unscrambling words, using a
novel word in a sentence, or performing 3-digit arithmetic. At the same time,
we also identify some datasets where GPT-3's few-shot learning still struggles,
as well as some datasets where GPT-3 faces methodological issues related to
training on large web corpora. Finally, we find that GPT-3 can generate samples
of news articles which human evaluators have difficulty distinguishing from
articles written by humans. We discuss broader societal impacts of this finding
and of GPT-3 in general. | http://arxiv.org/pdf/2005.14165 | Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei | cs.CL | 40+32 pages | null | cs.CL | 20200528 | 20200722 | 0 2 0 2
l u J 2 2 ] L C . s c [
4 v 5 6 1 4 1 . 5 0 0 2 : v i X r a
# Language Models are Few-Shot Learners
Tom B. Brownâ Benjamin Mannâ Nick Ryderâ Melanie Subbiahâ Jared Kaplanâ Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell Sandhini Agarwal Ariel Herbert-Voss Gretchen Krueger Rewon Child Aditya Ramesh Daniel M. Ziegler Jeffrey Wu Clemens Winter Christopher Hesse Mark Chen Eric Sigler Mateusz Litwin Scott Gray Benjamin Chess Jack Clark Christopher Berner Sam McCandlish Alec Radford Ilya Sutskever Dario Amodei
# OpenAI
# Abstract
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by ï¬ne-tuning on a speciï¬c task. While typically task-agnostic in architecture, this method still requires task-speciï¬c ï¬ne-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions â something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art ï¬ne- tuning approaches. Speciï¬cally, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or ï¬ne-tuning, with tasks and few-shot demonstrations speciï¬ed purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-ï¬y reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3âs few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we ï¬nd that GPT-3 can generate samples of news articles which human evaluators have difï¬culty distinguishing from articles written by humans. We discuss broader societal impacts of this ï¬nding and of GPT-3 in general.
âEqual contribution â Johns Hopkins University, OpenAI
Author contributions listed at end of paper.
# Contents
2.1 Model and Architectures . . 2.2 Training Dataset . . 2.3 Training Process . . 2.4 Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 Language Modeling, Cloze, and Completion Tasks 3.2 Closed Book Question Answering . . . . . 3.3 Translation . . 3.4 Winograd-Style Tasks . . . 3.5 Common Sense Reasoning . . . 3.6 Reading Comprehension . . . . 3.7 SuperGLUE . 3.8 NLI . . . . 3.9 Synthetic and Qualitative Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.1 Misuse of Language Models 6.2 Fairness, Bias, and Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.3 Energy Usage . . . . . . . . . . . . . . 3 6 8 8 9 10 10 11 13 14 16 17 18 18 20 21 29 33 34 35 36 39 39 40 43 43 43 46 46 48 50 63
2
# Introduction
Recent years have featured a trend towards pre-trained language representations in NLP systems, applied in increasingly ï¬exible and task-agnostic ways for downstream transfer. First, single-layer representations were learned using word vectors [MCCD13, PSM14] and fed to task-speciï¬c architectures, then RNNs with multiple layers of representations and contextual state were used to form stronger representations [DL15, MBXS17, PNZtY18] (though still applied to task-speciï¬c architectures), and more recently pre-trained recurrent or transformer language models [VSP+17] have been directly ï¬ne-tuned, entirely removing the need for task-speciï¬c architectures [RNSS18, DCLT18, HR18].
This last paradigm has led to substantial progress on many challenging NLP tasks such as reading comprehension, question answering, textual entailment, and many others, and has continued to advance based on new architectures and algorithms [RSR+19, LOG+19, YDY+19, LCG+19]. However, a major limitation to this approach is that while the architecture is task-agnostic, there is still a need for task-speciï¬c datasets and task-speciï¬c ï¬ne-tuning: to achieve strong performance on a desired task typically requires ï¬ne-tuning on a dataset of thousands to hundreds of thousands of examples speciï¬c to that task. Removing this limitation would be desirable, for several reasons.
First, from a practical perspective, the need for a large dataset of labeled examples for every new task limits the applicability of language models. There exists a very wide range of possible useful language tasks, encompassing anything from correcting grammar, to generating examples of an abstract concept, to critiquing a short story. For many of these tasks it is difï¬cult to collect a large supervised training dataset, especially when the process must be repeated for every new task.
Second, the potential to exploit spurious correlations in training data fundamentally grows with the expressiveness of the model and the narrowness of the training distribution. This can create problems for the pre-training plus ï¬ne-tuning paradigm, where models are designed to be large to absorb information during pre-training, but are then ï¬ne-tuned on very narrow task distributions. For instance [HLW+20] observe that larger models do not necessarily generalize better out-of-distribution. There is evidence that suggests that the generalization achieved under this paradigm can be poor because the model is overly speciï¬c to the training distribution and does not generalize well outside it [YdC+19, MPL19]. Thus, the performance of ï¬ne-tuned models on speciï¬c benchmarks, even when it is nominally at human-level, may exaggerate actual performance on the underlying task [GSL+18, NK19].
Third, humans do not require large supervised datasets to learn most language tasks â a brief directive in natural language (e.g. âplease tell me if this sentence describes something happy or something sadâ) or at most a tiny number of demonstrations (e.g. âhere are two examples of people acting brave; please give a third example of braveryâ) is often
outer loop Learning via SGD during unsupervised pre-training : = F 2 5+8=13 fe) => 2 thanks => merci 2 8 gaot => goat g 3 3 5 3 = 2 Fa 72. 8 g sakne => snake g hello => bonjour g o a a 14051 2 brid => bird 2 mint => menthe EY inner loop 3 3 3 FI =| 5 B+4=7 a fsih => fish a wall => mur a 5+9=14 douk => duck otter => loutre 9+8=17 omihp => chimp bread => pain sequence #1 sequence #2 sequence #3
Figure 1.1: Language model meta-learning. During unsupervised pre-training, a language model develops a broad set of skills and pattern recognition abilities. It then uses these abilities at inference time to rapidly adapt to or recognize the desired task. We use the term âin-context learningâ to describe the inner loop of this process, which occurs within the forward-pass upon each sequence. The sequences in this diagram are not intended to be representative of the data a model would see during pre-training, but are intended to show that there are sometimes repeated sub-tasks embedded within a single sequence.
3
Zero-shot One-shot Few-shot 175B Params Natural Language 60 Prompt Accuracy (%) 13B Params 1.3B Params Number of Examples in Context (K)
Figure 1.2: Larger models make increasingly efï¬cient use of in-context information. We show in-context learning performance on a simple task requiring the model to remove random symbols from a word, both with and without a natural language task description (see Sec. 3.9.2). The steeper âin-context learning curvesâ for large models demonstrate improved ability to learn a task from contextual information. We see qualitatively similar behavior across a wide range of tasks.
sufï¬cient to enable a human to perform a new task to at least a reasonable degree of competence. Aside from pointing to a conceptual limitation in our current NLP techniques, this adaptability has practical advantages â it allows humans to seamlessly mix together or switch between many tasks and skills, for example performing addition during a lengthy dialogue. To be broadly useful, we would someday like our NLP systems to have this same ï¬uidity and generality. One potential route towards addressing these issues is meta-learning1 â which in the context of language models means the model develops a broad set of skills and pattern recognition abilities at training time, and then uses those abilities at inference time to rapidly adapt to or recognize the desired task (illustrated in Figure 1.1). Recent work [RWC+19] attempts to do this via what we call âin-context learningâ, using the text input of a pretrained language model as a form of task speciï¬cation: the model is conditioned on a natural language instruction and/or a few demonstrations of the task and is then expected to complete further instances of the task simply by predicting what comes next.
While it has shown some initial promise, this approach still achieves results far inferior to ï¬ne-tuning â for example [RWC+19] achieves only 4% on Natural Questions, and even its 55 F1 CoQa result is now more than 35 points behind the state of the art. Meta-learning clearly requires substantial improvement in order to be viable as a practical method of solving language tasks.
Another recent trend in language modeling may offer a way forward. In recent years the capacity of transformer language models has increased substantially, from 100 million parameters [RNSS18], to 300 million parameters [DCLT18], to 1.5 billion parameters [RWC+19], to 8 billion parameters [SPP+19], 11 billion parameters [RSR+19], and ï¬nally 17 billion parameters [Tur20]. Each increase has brought improvements in text synthesis and/or downstream NLP tasks, and there is evidence suggesting that log loss, which correlates well with many downstream tasks, follows a smooth trend of improvement with scale [KMH+20]. Since in-context learning involves absorbing many skills and tasks within the parameters of the model, it is plausible that in-context learning abilities might show similarly strong gains with scale.
1In the context of language models this has sometimes been called âzero-shot transferâ, but this term is potentially ambiguous: the method is âzero-shotâ in the sense that no gradient updates are performed, but it often involves providing inference-time demonstrations to the model, so is not truly learning from zero examples. To avoid this confusion, we use the term âmeta-learningâ to capture the inner-loop / outer-loop structure of the general method, and the term âin context-learningâ to refer to the inner loop of meta-learning. We further specialize the description to âzero-shotâ, âone-shotâ, or âfew-shotâ depending on how many demonstrations are provided at inference time. These terms are intended to remain agnostic on the question of whether the model learns new tasks from scratch at inference time or simply recognizes patterns seen during training â this is an important issue which we discuss later in the paper, but âmeta-learningâ is intended to encompass both possibilities, and simply describes the inner-outer loop structure.
4
400 Aggregate Performance Across Benchmarks âeâ Few Shot â*â One Shot 80 â*â Zero Shot Accuracy fon) lo} & 20 = 0.1B 04B 0.8B 1.3B 2.6B 6.7B 13B 175B Parameters in LM (Billions)
Figure 1.3: Aggregate performance for all 42 accuracy-denominated benchmarks While zero-shot performance improves steadily with model size, few-shot performance increases more rapidly, demonstrating that larger models are more proï¬cient at in-context learning. See Figure 3.8 for a more detailed analysis on SuperGLUE, a standard NLP benchmark suite.
In this paper, we test this hypothesis by training a 175 billion parameter autoregressive language model, which we call GPT-3, and measuring its in-context learning abilities. Speciï¬cally, we evaluate GPT-3 on over two dozen NLP datasets, as well as several novel tasks designed to test rapid adaptation to tasks unlikely to be directly contained in the training set. For each task, we evaluate GPT-3 under 3 conditions: (a) âfew-shot learningâ, or in-context learning where we allow as many demonstrations as will ï¬t into the modelâs context window (typically 10 to 100), (b) âone-shot learningâ, where we allow only one demonstration, and (c) âzero-shotâ learning, where no demonstrations are allowed and only an instruction in natural language is given to the model. GPT-3 could also in principle be evaluated in the traditional ï¬ne-tuning setting, but we leave this to future work.
Figure 1.2 illustrates the conditions we study, and shows few-shot learning of a simple task requiring the model to remove extraneous symbols from a word. Model performance improves with the addition of a natural language task description, and with the number of examples in the modelâs context, K. Few-shot learning also improves dramatically with model size. Though the results in this case are particularly striking, the general trends with both model size and number of examples in-context hold for most tasks we study. We emphasize that these âlearningâ curves involve no gradient updates or ï¬ne-tuning, just increasing numbers of demonstrations given as conditioning.
Broadly, on NLP tasks GPT-3 achieves promising results in the zero-shot and one-shot settings, and in the the few-shot setting is sometimes competitive with or even occasionally surpasses state-of-the-art (despite state-of-the-art being held by ï¬ne-tuned models). For example, GPT-3 achieves 81.5 F1 on CoQA in the zero-shot setting, 84.0 F1 on CoQA in the one-shot setting, 85.0 F1 in the few-shot setting. Similarly, GPT-3 achieves 64.3% accuracy on TriviaQA in the zero-shot setting, 68.0% in the one-shot setting, and 71.2% in the few-shot setting, the last of which is state-of-the-art relative to ï¬ne-tuned models operating in the same closed-book setting.
GPT-3 also displays one-shot and few-shot proï¬ciency at tasks designed to test rapid adaption or on-the-ï¬y reasoning, which include unscrambling words, performing arithmetic, and using novel words in a sentence after seeing them deï¬ned only once. We also show that in the few-shot setting, GPT-3 can generate synthetic news articles which human evaluators have difï¬culty distinguishing from human-generated articles.
At the same time, we also ï¬nd some tasks on which few-shot performance struggles, even at the scale of GPT-3. This includes natural language inference tasks like the ANLI dataset, and some reading comprehension datasets like RACE or QuAC. By presenting a broad characterization of GPT-3âs strengths and weaknesses, including these limitations, we hope to stimulate study of few-shot learning in language models and draw attention to where progress is most needed.
A heuristic sense of the overall results can be seen in Figure 1.3, which aggregates the various tasks (though it should not be seen as a rigorous or meaningful benchmark in itself).
5
We also undertake a systematic study of âdata contaminationâ â a growing problem when training high capacity models on datasets such as Common Crawl, which can potentially include content from test datasets simply because such content often exists on the web. In this paper we develop systematic tools to measure data contamination and quantify its distorting effects. Although we ï¬nd that data contamination has a minimal effect on GPT-3âs performance on most datasets, we do identify a few datasets where it could be inï¬ating results, and we either do not report results on these datasets or we note them with an asterisk, depending on the severity.
In addition to all the above, we also train a series of smaller models (ranging from 125 million parameters to 13 billion parameters) in order to compare their performance to GPT-3 in the zero, one and few-shot settings. Broadly, for most tasks we ï¬nd relatively smooth scaling with model capacity in all three settings; one notable pattern is that the gap between zero-, one-, and few-shot performance often grows with model capacity, perhaps suggesting that larger models are more proï¬cient meta-learners.
Finally, given the broad spectrum of capabilities displayed by GPT-3, we discuss concerns about bias, fairness, and broader societal impacts, and attempt a preliminary analysis of GPT-3âs characteristics in this regard.
The remainder of this paper is organized as follows. In Section 2, we describe our approach and methods for training GPT-3 and evaluating it. Section 3 presents results on the full range of tasks in the zero-, one- and few-shot settings. Section 4 addresses questions of data contamination (train-test overlap). Section 5 discusses limitations of GPT-3. Section 6 discusses broader impacts. Section 7 reviews related work and Section 8 concludes.
# 2 Approach
Our basic pre-training approach, including model, data, and training, is similar to the process described in [RWC+19], with relatively straightforward scaling up of the model size, dataset size and diversity, and length of training. Our use of in-context learning is also similar to [RWC+19], but in this work we systematically explore different settings for learning within the context. Therefore, we start this section by explicitly deï¬ning and contrasting the different settings that we will be evaluating GPT-3 on or could in principle evaluate GPT-3 on. These settings can be seen as lying on a spectrum of how much task-speciï¬c data they tend to rely on. Speciï¬cally, we can identify at least four points on this spectrum (see Figure 2.1 for an illustration):
⢠Fine-Tuning (FT) has been the most common approach in recent years, and involves updating the weights of a pre-trained model by training on a supervised dataset speciï¬c to the desired task. Typically thousands to hundreds of thousands of labeled examples are used. The main advantage of ï¬ne-tuning is strong performance on many benchmarks. The main disadvantages are the need for a new large dataset for every task, the potential for poor generalization out-of-distribution [MPL19], and the potential to exploit spurious features of the training data [GSL+18, NK19], potentially resulting in an unfair comparison with human performance. In this work we do not ï¬ne-tune GPT-3 because our focus is on task-agnostic performance, but GPT-3 can be ï¬ne-tuned in principle and this is a promising direction for future work.
⢠Few-Shot (FS) is the term we will use in this work to refer to the setting where the model is given a few demonstrations of the task at inference time as conditioning [RWC+19], but no weight updates are allowed. As shown in Figure 2.1, for a typical dataset an example has a context and a desired completion (for example an English sentence and the French translation), and few-shot works by giving K examples of context and completion, and then one ï¬nal example of context, with the model expected to provide the completion. We typically set K in the range of 10 to 100 as this is how many examples can ï¬t in the modelâs context window (nctx = 2048). The main advantages of few-shot are a major reduction in the need for task-speciï¬c data and reduced potential to learn an overly narrow distribution from a large but narrow ï¬ne-tuning dataset. The main disadvantage is that results from this method have so far been much worse than state-of-the-art ï¬ne-tuned models. Also, a small amount of task speciï¬c data is still required. As indicated by the name, few-shot learning as described here for language models is related to few-shot learning as used in other contexts in ML [HYC01, VBL+16] â both involve learning based on a broad distribution of tasks (in this case implicit in the pre-training data) and then rapidly adapting to a new task.
⢠One-Shot (1S) is the same as few-shot except that only one demonstration is allowed, in addition to a natural language description of the task, as shown in Figure 1. The reason to distinguish one-shot from few-shot and zero-shot (below) is that it most closely matches the way in which some tasks are communicated to humans. For example, when asking humans to generate a dataset on a human worker service (for example Mechanical Turk), it is common to give one demonstration of the task. By contrast it is sometimes difï¬cult to communicate the content or format of a task if no examples are given.
6
The three settings we explore for in-context learning Traditional fine-tuning (not used for GPT-3) Zero-shot The model predicts the answer given only a natural language description of the task. No gradient updates are performed Fine-tuning The model is trained via repeated gradient updates using a large corpus of example tasks sea otter => loutre de mer example #1 Translate English to French: task description cheese => prompt Vv Vv One-shot peppermint => menthe poivrée example #2 In addition to the task description, the model sees a single example of the task. No gradient updates are performed. gradient update Translate English to French: task description | | sea otter => loutre de mer example lush giraffe => girafe peluche example #N cheese => prompt Sanat 2 â gradient update Few-shot In addition to the task description, the model sees a few GIGGED oe eas examples of the task. No gradient updates are performed Translate English to French: task description sea otter => loutre de mer examples peppermint => menthe poivrée plush girafe => girafe peluche cheese => prompt
Figure 2.1: Zero-shot, one-shot and few-shot, contrasted with traditional ï¬ne-tuning. The panels above show four methods for performing a task with a language model â ï¬ne-tuning is the traditional method, whereas zero-, one-, and few-shot, which we study in this work, require the model to perform the task with only forward passes at test time. We typically present the model with a few dozen examples in the few shot setting. Exact phrasings for all task descriptions, examples and prompts can be found in Appendix G.
⢠Zero-Shot (0S) is the same as one-shot except that no demonstrations are allowed, and the model is only given a natural language instruction describing the task. This method provides maximum convenience, potential for robustness, and avoidance of spurious correlations (unless they occur very broadly across the large corpus of pre-training data), but is also the most challenging setting. In some cases it may even be difï¬cult for humans to understand the format of the task without prior examples, so this setting is in some cases âunfairly hardâ. For example, if someone is asked to âmake a table of world records for the 200m dashâ, this request can be ambiguous, as it may not be clear exactly what format the table should have or what should be included (and even with careful clariï¬cation, understanding precisely what is desired can be difï¬cult). Nevertheless, for at least some settings zero-shot is closest to how humans perform tasks â for example, in the translation example in Figure 2.1, a human would likely know what to do from just the text instruction.
Figure 2.1 shows the four methods using the example of translating English to French. In this paper we focus on zero-shot, one-shot and few-shot, with the aim of comparing them not as competing alternatives, but as different problem settings which offer a varying trade-off between performance on speciï¬c benchmarks and sample efï¬ciency. We especially highlight the few-shot results as many of them are only slightly behind state-of-the-art ï¬ne-tuned models. Ultimately, however, one-shot, or even sometimes zero-shot, seem like the fairest comparisons to human performance, and are important targets for future work.
Sections 2.1-2.3 below give details on our models, training data, and training process respectively. Section 2.4 discusses the details of how we do few-shot, one-shot, and zero-shot evaluations.
7
Model Name nparams nlayers dmodel nheads dhead Batch Size Learning Rate GPT-3 Small GPT-3 Medium GPT-3 Large GPT-3 XL GPT-3 2.7B GPT-3 6.7B GPT-3 13B GPT-3 175B or âGPT-3â 125M 350M 760M 1.3B 2.7B 6.7B 13.0B 175.0B 12 24 24 24 32 32 40 96 768 1024 1536 2048 2560 4096 5140 12288 12 16 16 24 32 32 40 96 64 64 96 128 80 128 128 128 0.5M 0.5M 0.5M 1M 1M 2M 2M 3.2M 6.0 Ã 10â4 3.0 Ã 10â4 2.5 Ã 10â4 2.0 Ã 10â4 1.6 Ã 10â4 1.2 Ã 10â4 1.0 Ã 10â4 0.6 Ã 10â4
Table 2.1: Sizes, architectures, and learning hyper-parameters (batch size in tokens and learning rate) of the models which we trained. All models were trained for a total of 300 billion tokens.
# 2.1 Model and Architectures
We use the same model and architecture as GPT-2 [RWC+19], including the modiï¬ed initialization, pre-normalization, and reversible tokenization described therein, with the exception that we use alternating dense and locally banded sparse attention patterns in the layers of the transformer, similar to the Sparse Transformer [CGRS19]. To study the dependence of ML performance on model size, we train 8 different sizes of model, ranging over three orders of magnitude from 125 million parameters to 175 billion parameters, with the last being the model we call GPT-3. Previous work [KMH+20] suggests that with enough training data, scaling of validation loss should be approximately a smooth power law as a function of size; training models of many different sizes allows us to test this hypothesis both for validation loss and for downstream language tasks.
Table 2.1 shows the sizes and architectures of our 8 models. Here nparams is the total number of trainable parameters, nlayers is the total number of layers, dmodel is the number of units in each bottleneck layer (we always have the feedforward layer four times the size of the bottleneck layer, dï¬ = 4 â dmodel), and dhead is the dimension of each attention head. All models use a context window of nctx = 2048 tokens. We partition the model across GPUs along both the depth and width dimension in order to minimize data-transfer between nodes. The precise architectural parameters for each model are chosen based on computational efï¬ciency and load-balancing in the layout of models across GPUâs. Previous work [KMH+20] suggests that validation loss is not strongly sensitive to these parameters within a reasonably broad range.
# 2.2 Training Dataset
Datasets for language models have rapidly expanded, culminating in the Common Crawl dataset2 [RSR+19] constituting nearly a trillion words. This size of dataset is sufï¬cient to train our largest models without ever updating on the same sequence twice. However, we have found that unï¬ltered or lightly ï¬ltered versions of Common Crawl tend to have lower quality than more curated datasets. Therefore, we took 3 steps to improve the average quality of our datasets: (1) we downloaded and ï¬ltered a version of CommonCrawl based on similarity to a range of high-quality reference corpora, (2) we performed fuzzy deduplication at the document level, within and across datasets, to prevent redundancy and preserve the integrity of our held-out validation set as an accurate measure of overï¬tting, and (3) we also added known high-quality reference corpora to the training mix to augment CommonCrawl and increase its diversity.
Details of the ï¬rst two points (processing of Common Crawl) are described in Appendix A. For the third, we added several curated high-quality datasets, including an expanded version of the WebText dataset [RWC+19], collected by scraping links over a longer period of time, and ï¬rst described in [KMH+20], two internet-based books corpora (Books1 and Books2) and English-language Wikipedia.
Table 2.2 shows the ï¬nal mixture of datasets that we used in training. The CommonCrawl data was downloaded from 41 shards of monthly CommonCrawl covering 2016 to 2019, constituting 45TB of compressed plaintext before ï¬ltering and 570GB after ï¬ltering, roughly equivalent to 400 billion byte-pair-encoded tokens. Note that during training, datasets are not sampled in proportion to their size, but rather datasets we view as higher-quality are sampled more frequently, such that CommonCrawl and Books2 datasets are sampled less than once during training, but the other datasets are sampled 2-3 times. This essentially accepts a small amount of overï¬tting in exchange for higher quality training data.
# 2https://commoncrawl.org/the-data/
8
Total Compute Used During Training 10000 1000 âalhal alll s 2 & @ § Â¥ S +» © © &@ @ Sg SF SP PM FC SS Oo ow x) oe â os So Training Petaflop/s-days 3 $ ¢ $ xe AO KO GY Oe FS be) ie «e $ od é & cy
Figure 2.2: Total compute used during training. Based on the analysis in Scaling Laws For Neural Language Models [KMH+20] we train much larger models on many fewer tokens than is typical. As a consequence, although GPT-3 3B is almost 10x larger than RoBERTa-Large (355M params), both models took roughly 50 petaï¬op/s-days of compute during pre-training. Methodology for these calculations can be found in Appendix D.
Dataset Quantity (tokens) Weight in training mix Common Crawl (ï¬ltered) WebText2 Books1 Books2 Wikipedia 410 billion 19 billion 12 billion 55 billion 3 billion 60% 22% 8% 8% 3% 0.44 2.9 1.9 0.43 3.4
Epochs elapsed when training for 300B tokens
Table 2.2: Datasets used to train GPT-3. âWeight in training mixâ refers to the fraction of examples during training that are drawn from a given dataset, which we intentionally do not make proportional to the size of the dataset. As a result, when we train for 300 billion tokens, some datasets are seen up to 3.4 times during training while other datasets are seen less than once.
A major methodological concern with language models pretrained on a broad swath of internet data, particularly large models with the capacity to memorize vast amounts of content, is potential contamination of downstream tasks by having their test or development sets inadvertently seen during pre-training. To reduce such contamination, we searched for and attempted to remove any overlaps with the development and test sets of all benchmarks studied in this paper. Unfortunately, a bug in the ï¬ltering caused us to ignore some overlaps, and due to the cost of training it was not feasible to retrain the model. In Section 4 we characterize the impact of the remaining overlaps, and in future work we will more aggressively remove data contamination.
# 2.3 Training Process
As found in [KMH+20, MKAT18], larger models can typically use a larger batch size, but require a smaller learning rate. We measure the gradient noise scale during training and use it to guide our choice of batch size [MKAT18]. Table 2.1 shows the parameter settings we used. To train the larger models without running out of memory, we use a mixture of model parallelism within each matrix multiply and model parallelism across the layers of the network. All models were trained on V100 GPUâs on part of a high-bandwidth cluster provided by Microsoft. Details of the training process and hyperparameter settings are described in Appendix B.
9
# 2.4 Evaluation
For few-shot learning, we evaluate each example in the evaluation set by randomly drawing K examples from that taskâs training set as conditioning, delimited by 1 or 2 newlines depending on the task. For LAMBADA and Storycloze there is no supervised training set available so we draw conditioning examples from the development set and evaluate on the test set. For Winograd (the original, not SuperGLUE version) there is only one dataset, so we draw conditioning examples directly from it.
K can be any value from 0 to the maximum amount allowed by the modelâs context window, which is nctx = 2048 for all models and typically ï¬ts 10 to 100 examples. Larger values of K are usually but not always better, so when a separate development and test set are available, we experiment with a few values of K on the development set and then run the best value on the test set. For some tasks (see Appendix G) we also use a natural language prompt in addition to (or for K = 0, instead of) demonstrations.
On tasks that involve choosing one correct completion from several options (multiple choice), we provide K examples of context plus correct completion, followed by one example of context only, and compare the LM likelihood of each completion. For most tasks we compare the per-token likelihood (to normalize for length), however on a small number of datasets (ARC, OpenBookQA, and RACE) we gain additional beneï¬t as measured on the development set by normalizing by the unconditional probability of each completion, by computing P (completion|answer context) , where answer context is the string "Answer: " or "A: " and is used to prompt that the completion should be an answer but is otherwise generic.
On tasks that involve binary classiï¬cation, we give the options more semantically meaningful names (e.g. âTrueâ or âFalseâ rather than 0 or 1) and then treat the task like multiple choice; we also sometimes frame the task similar to what is done by [RSR+19] (see Appendix G) for details. On tasks with free-form completion, we use beam search with the same parameters as [RSR+19]: a beam width of 4 and a length penalty of α = 0.6. We score the model using F1 similarity score, BLEU, or exact match, depending on what is standard for the dataset at hand.
Final results are reported on the test set when publicly available, for each model size and learning setting (zero-, one-, and few-shot). When the test set is private, our model is often too large to ï¬t on the test server, so we report results on the development set. We do submit to the test server on a small number of datasets (SuperGLUE, TriviaQA, PiQa) where we were able to make submission work, and we submit only the 200B few-shot results, and report development set results for everything else.
# 3 Results
In Figure 3.1 we display training curves for the 8 models described in Section 2. For this graph we also include 6 additional extra-small models with as few as 100,000 parameters. As observed in [KMH+20], language modeling performance follows a power-law when making efï¬cient use of training compute. After extending this trend by two more orders of magnitude, we observe only a slight (if any) departure from the power-law. One might worry that these improvements in cross-entropy loss come only from modeling spurious details of our training corpus. However, we will see in the following sections that improvements in cross-entropy loss lead to consistent performance gains across a broad spectrum of natural language tasks.
Below, we evaluate the 8 models described in Section 2 (the 175 billion parameter parameter GPT-3 and 7 smaller models) on a wide range of datasets. We group the datasets into 9 categories representing roughly similar tasks.
In Section 3.1 we evaluate on traditional language modeling tasks and tasks that are similar to language modeling, such as Cloze tasks and sentence/paragraph completion tasks. In Section 3.2 we evaluate on âclosed bookâ question answering tasks: tasks which require using the information stored in the modelâs parameters to answer general knowledge questions. In Section 3.3 we evaluate the modelâs ability to translate between languages (especially one-shot and few-shot). In Section 3.4 we evaluate the modelâs performance on Winograd Schema-like tasks. In Section 3.5 we evaluate on datasets that involve commonsense reasoning or question answering. In Section 3.6 we evaluate on reading comprehension tasks, in Section 3.7 we evaluate on the SuperGLUE benchmark suite, and in 3.8 we brieï¬y explore NLI. Finally, in Section 3.9, we invent some additional tasks designed especially to probe in-context learning abilities â these tasks focus on on-the-ï¬y reasoning, adaptation skills, or open-ended text synthesis. We evaluate all tasks in the few-shot, one-shot, and zero-shot settings.
10
Validation Loss Parameters a LE 2.57 -Cp004e 10° 10° 10° 10° 10° 10 Compute (PetaFLOP/s-days)
Figure 3.1: Smooth scaling of performance with compute. Performance (measured in terms of cross-entropy validation loss) follows a power-law trend with the amount of compute used for training. The power-law behavior observed in [KMH+20] continues for an additional two orders of magnitude with only small deviations from the predicted curve. For this ï¬gure, we exclude embedding parameters from compute and parameter counts.
Setting SOTA (Zero-Shot) GPT-3 Zero-Shot PTB 35.8a 20.5
Table 3.1: Zero-shot results on PTB language modeling dataset. Many other common language modeling datasets are omitted because they are derived from Wikipedia or other sources which are included in GPT-3âs training data. a[RWC+19]
# 3.1 Language Modeling, Cloze, and Completion Tasks
In this section we test GPT-3âs performance on the traditional task of language modeling, as well as related tasks that involve predicting a single word of interest, completing a sentence or paragraph, or choosing between possible completions of a piece of text.
# 3.1.1 Language Modeling
We calculate zero-shot perplexity on the Penn Tree Bank (PTB) [MKM+94] dataset measured in [RWC+19]. We omit the 4 Wikipedia-related tasks in that work because they are entirely contained in our training data, and we also omit the one-billion word benchmark due to a high fraction of the dataset being contained in our training set. PTB escapes these issues due to predating the modern internet. Our largest model sets a new SOTA on PTB by a substantial margin of 15 points, achieving a perplexity of 20.50. Note that since PTB is a traditional language modeling dataset it does not have a clear separation of examples to deï¬ne one-shot or few-shot evaluation around, so we measure only zero-shot.
# 3.1.2 LAMBADA
The LAMBADA dataset [PKL+16] tests the modeling of long-range dependencies in text â the model is asked to predict the last word of sentences which require reading a paragraph of context. It has recently been suggested that the continued scaling of language models is yielding diminishing returns on this difï¬cult benchmark. [BHT+20] reï¬ect on the small 1.5% improvement achieved by a doubling of model size between two recent state of the art results ([SPP+19]
11
Setting SOTA GPT-3 Zero-Shot GPT-3 One-Shot GPT-3 Few-Shot LAMBADA (acc) 68.0a 76.2 72.5 86.4 LAMBADA (ppl) 8.63b 3.00 3.35 1.92 StoryCloze (acc) 91.8c 83.2 84.7 87.7 HellaSwag (acc) 85.6d 78.9 78.1 79.3
Table 3.2: Performance on cloze and completion tasks. GPT-3 signiï¬cantly improves SOTA on LAMBADA while achieving respectable performance on two difï¬cult completion prediction datasets. a[Tur20] b[RWC+19] c[LDL19] d[LCH+20]
Lambada Human 90 80 70 __ Zero-Shot SOTA 60 Accuracy 50 40 âeâ Zero-Shot âeâ One-Shot âeâ Few-Shot (K=15) 30 20 0.1B 04B 08B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.2: On LAMBADA, the few-shot capability of language models results in a strong boost to accuracy. GPT-3 2.7B outperforms the SOTA 17B parameter Turing-NLG [Tur20] in this setting, and GPT-3 175B advances the state of the art by 18%. Note zero-shot uses a different format from one-shot and few-shot as described in the text.
and [Tur20]) and argue that âcontinuing to expand hardware and data sizes by orders of magnitude is not the path forwardâ. We ï¬nd that path is still promising and in a zero-shot setting GPT-3 achieves 76% on LAMBADA, a gain of 8% over the previous state of the art.
LAMBADA is also a demonstration of the ï¬exibility of few-shot learning as it provides a way to address a problem that classically occurs with this dataset. Although the completion in LAMBADA is always the last word in a sentence, a standard language model has no way of knowing this detail. It thus assigns probability not only to the correct ending but also to other valid continuations of the paragraph. This problem has been partially addressed in the past with stop-word ï¬lters [RWC+19] (which ban âcontinuationâ words). The few-shot setting instead allows us to âframeâ the task as a cloze-test and allows the language model to infer from examples that a completion of exactly one word is desired. We use the following ï¬ll-in-the-blank format:
Alice was friends with Bob. Alice went to visit her friend
# . â Bob
George bought some baseball equipment, a ball, a glove, and a
. â
When presented with examples formatted this way, GPT-3 achieves 86.4% accuracy in the few-shot setting, an increase of over 18% from the previous state-of-the-art. We observe that few-shot performance improves strongly with model size. While this setting decreases the performance of the smallest model by almost 20%, for GPT-3 it improves accuracy by 10%. Finally, the ï¬ll-in-blank method is not effective one-shot, where it always performs worse than the zero-shot setting. Perhaps this is because all models still require several examples to recognize the pattern.
12
44.5 36.6 34.5 14.6 23.0 29.9 45.5 44.7 37.4 14.4 25.3 41.5 68.0 60.5 50.1 64.3 68.0 71.2
Table 3.3: Results on three Open-Domain QA tasks. GPT-3 is shown in the few-, one-, and zero-shot settings, as compared to prior SOTA results for closed book and open domain settings. TriviaQA few-shot result is evaluated on the wiki split test server.
One note of caution is that an analysis of test set contamination identiï¬ed that a signiï¬cant minority of the LAMBADA dataset appears to be present in our training data â however analysis performed in Section 4 suggests negligible impact on performance.
# 3.1.3 HellaSwag
The HellaSwag dataset [ZHB+19] involves picking the best ending to a story or set of instructions. The examples were adversarially mined to be difï¬cult for language models while remaining easy for humans (who achieve 95.6% accuracy). GPT-3 achieves 78.1% accuracy in the one-shot setting and 79.3% accuracy in the few-shot setting, outperforming the 75.4% accuracy of a ï¬ne-tuned 1.5B parameter language model [ZHR+19] but still a fair amount lower than the overall SOTA of 85.6% achieved by the ï¬ne-tuned multi-task model ALUM.
# 3.1.4 StoryCloze
We next evaluate GPT-3 on the StoryCloze 2016 dataset [MCH+16], which involves selecting the correct ending sentence for ï¬ve-sentence long stories. Here GPT-3 achieves 83.2% in the zero-shot setting and 87.7% in the few-shot setting (with K = 70). This is still 4.1% lower than the ï¬ne-tuned SOTA using a BERT based model [LDL19] but improves over previous zero-shot results by roughly 10%.
# 3.2 Closed Book Question Answering
In this section we measure GPT-3âs ability to answer questions about broad factual knowledge. Due to the immense amount of possible queries, this task has normally been approached by using an information retrieval system to ï¬nd relevant text in combination with a model which learns to generate an answer given the question and the retrieved text. Since this setting allows a system to search for and condition on text which potentially contains the answer it is denoted âopen-bookâ. [RRS20] recently demonstrated that a large language model can perform surprisingly well directly answering the questions without conditioning on auxilliary information. They denote this more restrictive evaluation setting as âclosed-bookâ. Their work suggests that even higher-capacity models could perform even better and we test this hypothesis with GPT-3. We evaluate GPT-3 on the 3 datasets in [RRS20]: Natural Questions [KPR+19], WebQuestions [BCFL13], and TriviaQA [JCWZ17], using the same splits. Note that in addition to all results being in the closed-book setting, our use of few-shot, one-shot, and zero-shot evaluations represent an even stricter setting than previous closed-book QA work: in addition to external content not being allowed, ï¬ne-tuning on the Q&A dataset itself is also not permitted.
The results for GPT-3 are shown in Table 3.3. On TriviaQA, we achieve 64.3% in the zero-shot setting, 68.0% in the one-shot setting, and 71.2% in the few-shot setting. The zero-shot result already outperforms the ï¬ne-tuned T5-11B by 14.2%, and also outperforms a version with Q&A tailored span prediction during pre-training by 3.8%. The one-shot result improves by 3.7% and matches the SOTA for an open-domain QA system which not only ï¬ne-tunes but also makes use of a learned retrieval mechanism over a 15.3B parameter dense vector index of 21M documents [LPP+20]. GPT-3âs few-shot result further improves performance another 3.2% beyond this.
On WebQuestions (WebQs), GPT-3 achieves 14.4% in the zero-shot setting, 25.3% in the one-shot setting, and 41.5% in the few-shot setting. This compares to 37.4% for ï¬ne-tuned T5-11B, and 44.7% for ï¬ne-tuned T5-11B+SSM, which uses a Q&A-speciï¬c pre-training procedure. GPT-3 in the few-shot setting approaches the performance of state-of-the-art ï¬ne-tuned models. Notably, compared to TriviaQA, WebQS shows a much larger gain from zero-shot to few-shot (and indeed its zero-shot and one-shot performance are poor), perhaps suggesting that the WebQs questions
13
TriviaQA 70 _ Fine-tuned SOTA 60 50 40 Accuracy 30 20 âeâ Zero-Shot âeâ One-Shot âeâ Few-Shot (K=64) 10 0.1B 0.4B 0.8B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.3: On TriviaQA GPT3âs performance grows smoothly with model size, suggesting that language models continue to absorb knowledge as their capacity increases. One-shot and few-shot performance make signiï¬cant gains over zero-shot behavior, matching and exceeding the performance of the SOTA ï¬ne-tuned open-domain model, RAG [LPP+20]
and/or the style of their answers are out-of-distribution for GPT-3. Nevertheless, GPT-3 appears able to adapt to this distribution, recovering strong performance in the few-shot setting.
On Natural Questions (NQs) GPT-3 achieves 14.6% in the zero-shot setting, 23.0% in the one-shot setting, and 29.9% in the few-shot setting, compared to 36.6% for ï¬ne-tuned T5 11B+SSM. Similar to WebQS, the large gain from zero-shot to few-shot may suggest a distribution shift, and may also explain the less competitive performance compared to TriviaQA and WebQS. In particular, the questions in NQs tend towards very ï¬ne-grained knowledge on Wikipedia speciï¬cally which could be testing the limits of GPT-3âs capacity and broad pretraining distribution.
Overall, on one of the three datasets GPT-3âs one-shot matches the open-domain ï¬ne-tuning SOTA. On the other two datasets it approaches the performance of the closed-book SOTA despite not using ï¬ne-tuning. On all 3 datasets, we ï¬nd that performance scales very smoothly with model size (Figure 3.3 and Appendix H Figure H.7), possibly reï¬ecting the idea that model capacity translates directly to more âknowledgeâ absorbed in the parameters of the model.
# 3.3 Translation
For GPT-2 a ï¬lter was used on a multilingual collection of documents to produce an English only dataset due to capacity concerns. Even with this ï¬ltering GPT-2 showed some evidence of multilingual capability and performed non-trivially when translating between French and English despite only training on 10 megabytes of remaining French text. Since we increase the capacity by over two orders of magnitude from GPT-2 to GPT-3, we also expand the scope of the training dataset to include more representation of other languages, though this remains an area for further improvement. As discussed in 2.2 the majority of our data is derived from raw Common Crawl with only quality-based ï¬ltering. Although GPT-3âs training data is still primarily English (93% by word count), it also includes 7% of text in other languages. These languages are documented in the supplemental material. In order to better understand translation capability, we also expand our analysis to include two additional commonly studied languages, German and Romanian.
Existing unsupervised machine translation approaches often combine pretraining on a pair of monolingual datasets with back-translation [SHB15] to bridge the two languages in a controlled way. By contrast, GPT-3 learns from a blend of training data that mixes many languages together in a natural way, combining them on a word, sentence, and document level. GPT-3 also uses a single training objective which is not customized or designed for any task in particular. However, our one / few-shot settings arenât strictly comparable to prior unsupervised work since they make use of a small amount of paired examples (1 or 64). This corresponds to up to a page or two of in-context training data.
Results are shown in Table 3.4. Zero-shot GPT-3, which only receives on a natural language description of the task, still underperforms recent unsupervised NMT results. However, providing only a single example demonstration for
14
Setting SOTA (Supervised) EnâFr 45.6a FrâEn EnâDe DeâEn EnâRo RoâEn 40.2d 35.0 b 41.2c 38.5e 39.9e XLM [LC19] MASS [STQ+19] mBART [LGG+20] 33.4 37.5 - 33.3 34.9 - 26.4 28.3 29.8 34.3 35.2 34.0 33.3 35.2 35.0 31.8 33.1 30.5 GPT-3 Zero-Shot GPT-3 One-Shot GPT-3 Few-Shot 25.2 28.3 32.6 21.2 33.7 39.2 24.6 26.2 29.7 27.2 30.4 40.6 14.1 20.6 21.0 19.9 38.6 39.5
Table 3.4: Few-shot GPT-3 outperforms previous unsupervised NMT work by 5 BLEU when translating into English reï¬ecting its strength as an English LM. We report BLEU scores on the WMTâ14 FrâEn, WMTâ16 DeâEn, and WMTâ16 RoâEn datasets as measured by multi-bleu.perl with XLMâs tokeniza- tion in order to compare most closely with prior unsupervised NMT work. SacreBLEUf [Pos18] results re- ported in Appendix H. Underline indicates an unsupervised or few-shot SOTA, bold indicates supervised SOTA a[EOAG18] b[DHKH14] c[WXH+18] d[oR16] e[LGG+20] f [SacreBLEU signature: with relative conï¬dence. BLEU+case.mixed+numrefs.1+smooth.exp+tok.intl+version.1.2.20]
Translation (Multi-BLEU) 40 35 30 25 =) a 20 15 . French -> English a English -> French 10 ta oo âeâ German -> English a --e- English -> German 5 ae âeâ Romanian -> English --e- English -> Romanian 0 0.1B 0.4B 0.8B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.4: Few-shot translation performance on 6 language pairs as model capacity increases. There is a consistent trend of improvement across all datasets as the model scales, and as well as tendency for translation into English to be stronger than translation from English.
15
Setting Winograd Winogrande (XL) Fine-tuned SOTA GPT-3 Zero-Shot GPT-3 One-Shot GPT-3 Few-Shot 90.1a 88.3* 89.7* 88.6* 84.6b 70.2 73.2 77.7
Table 3.5: Results on the WSC273 version of Winograd schemas and the adversarial Winogrande dataset. See Section 4 for details on potential contamination of the Winograd test set. a[SBBC19] b[LYN+20]
Winogrande Human 90 Fine-tuned SOTA foxy fo} Fine-tuned RoBERTa-Large âeâ Zero-Shot âeâ One-Shot âeâ Few-Shot (K=50) Minte-tuneu DEN | -Laige ~ f=) Accuracy 60 50 anaom Guessing 0.1B 0.4B 0.8B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.5: Zero-, one-, and few-shot performance on the adversarial Winogrande dataset as model capacity scales. Scaling is relatively smooth with the gains to few-shot learning increasing with model size, and few-shot GPT-3 175B is competitive with a ï¬ne-tuned RoBERTA-large.
each translation task improves performance by over 7 BLEU and nears competitive performance with prior work. GPT-3 in the full few-shot setting further improves another 4 BLEU resulting in similar average performance to prior unsupervised NMT work. GPT-3 has a noticeable skew in its performance depending on language direction. For the three input languages studied, GPT-3 signiï¬cantly outperforms prior unsupervised NMT work when translating into English but underperforms when translating in the other direction. Performance on En-Ro is a noticeable outlier at over 10 BLEU worse than prior unsupervised NMT work. This could be a weakness due to reusing the byte-level BPE tokenizer of GPT-2 which was developed for an almost entirely English training dataset. For both Fr-En and De-En, few shot GPT-3 outperforms the best supervised result we could ï¬nd but due to our unfamiliarity with the literature and the appearance that these are un-competitive benchmarks we do not suspect those results represent true state of the art. For Ro-En, few shot GPT-3 performs within 0.5 BLEU of the overall SOTA which is achieved by a combination of unsupervised pretraining, supervised ï¬netuning on 608K labeled examples, and backtranslation [LHCG19b].
Finally, across all language pairs and across all three settings (zero-, one-, and few-shot), there is a smooth trend of improvement with model capacity. This is shown in Figure 3.4 in the case of few-shot results, and scaling for all three settings is shown in Appendix H.
# 3.4 Winograd-Style Tasks
The Winograd Schemas Challenge [LDM12] is a classical task in NLP that involves determining which word a pronoun refers to, when the pronoun is grammatically ambiguous but semantically unambiguous to a human. Recently ï¬ne-tuned language models have achieved near-human performance on the original Winograd dataset, but more difï¬cult versions
16
Setting PIQA ARC (Easy) Fine-tuned SOTA 79.4 80.5* GPT-3 Zero-Shot 80.5* GPT-3 One-Shot 82.8* GPT-3 Few-Shot 92.0[KKS+20] 68.8 71.2 70.1 ARC (Challenge) OpenBookQA 87.2[KKS+20] 78.5[KKS+20] 57.6 51.4 58.8 53.2 65.4 51.5
Table 3.6: GPT-3 results on three commonsense reasoning tasks, PIQA, ARC, and OpenBookQA. GPT-3 Few-Shot PIQA result is evaluated on the test server. See Section 4 for details on potential contamination issues on the PIQA test set.
PhysicalQA Human âeâ Zero-Shot âeâ One-Shot âeâ Few-Shot (K=50) 90 for) lo} Fine-tuned SOTA ~ f=) Accuracy 60 50 Random Guessing 0.1B 0.4B 0.8B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.6: GPT-3 results on PIQA in the zero-shot, one-shot, and few-shot settings. The largest model achieves a score on the development set in all three conditions that exceeds the best recorded score on the task.
such as the adversarially-mined Winogrande dataset [SBBC19] still signiï¬cantly lag human performance. We test GPT-3âs performance on both Winograd and Winogrande, as usual in the zero-, one-, and few-shot setting.
On Winograd we test GPT-3 on the original set of 273 Winograd schemas, using the same âpartial evaluationâ method described in [RWC+19]. Note that this setting differs slightly from the WSC task in the SuperGLUE benchmark, which is presented as binary classiï¬cation and requires entity extraction to convert to the form described in this section. On Winograd GPT-3 achieves 88.3%, 89.7%, and 88.6% in the zero-shot, one-shot, and few-shot settings, showing no clear in-context learning but in all cases achieving strong results just a few points below state-of-the-art and estimated human performance. We note that contamination analysis found some Winograd schemas in the training data but this appears to have only a small effect on results (see Section 4).
On the more difï¬cult Winogrande dataset, we do ï¬nd gains to in-context learning: GPT-3 achieves 70.2% in the zero-shot setting, 73.2% in the one-shot setting, and 77.7% in the few-shot setting. For comparison a ï¬ne-tuned RoBERTA model achieves 79%, state-of-the-art is 84.6% achieved with a ï¬ne-tuned high capacity model (T5), and human performance on the task as reported by [SBBC19] is 94.0%.
# 3.5 Common Sense Reasoning
Next we consider three datasets which attempt to capture physical or scientiï¬c reasoning, as distinct from sentence completion, reading comprehension, or broad knowledge question answering. The ï¬rst, PhysicalQA (PIQA) [BZB+19], asks common sense questions about how the physical world works and is intended as a probe of grounded understanding of the world. GPT-3 achieves 81.0% accuracy zero-shot, 80.5% accuracy one-shot, and 82.8% accuracy few-shot (the last measured on PIQAâs test server). This compares favorably to the 79.4% accuracy prior state-of-the-art of a
17
Setting Fine-tuned SOTA 90.7a 81.5 GPT-3 Zero-Shot 84.0 GPT-3 One-Shot 85.0 GPT-3 Few-Shot CoQA DROP QuAC SQuADv2 RACE-h RACE-m 93.0d 59.5 65.4 69.8 89.1b 23.6 34.3 36.5 74.4c 41.5 43.3 44.3 90.0e 45.5 45.9 46.8 93.1e 58.4 57.4 58.1
Table 3.7: Results on reading comprehension tasks. All scores are F1 except results for RACE which report accuracy. a[JZC+19] b[JN20] c[AI19] d[QIA20] e[SPP+19]
ï¬ne-tuned RoBERTa. PIQA shows relatively shallow scaling with model size and is still over 10% worse than human performance, but GPT-3âs few-shot and even zero-shot result outperform the current state-of-the-art. Our analysis ï¬agged PIQA for a potential data contamination issue (despite hidden test labels), and we therefore conservatively mark the result with an asterisk. See Section 4 for details. ARC [CCE+18] is a dataset of multiple-choice questions collected from 3rd to 9th grade science exams. On the âChallengeâ version of the dataset which has been ï¬ltered to questions which simple statistical or information retrieval methods are unable to correctly answer, GPT-3 achieves 51.4% accuracy in the zero-shot setting, 53.2% in the one-shot setting, and 51.5% in the few-shot setting. This is approaching the performance of a ï¬ne-tuned RoBERTa baseline (55.9%) from Uniï¬edQA [KKS+20]. On the âEasyâ version of the dataset (questions which either of the mentioned baseline approaches answered correctly), GPT-3 achieves 68.8%, 71.2%, and 70.1% which slightly exceeds a ï¬ne-tuned RoBERTa baseline from [KKS+20]. However, both of these results are still much worse than the overall SOTAs achieved by the Uniï¬edQA which exceeds GPT-3âs few-shot results by 27% on the challenge set and 22% on the easy set.
On OpenBookQA [MCKS18], GPT-3 improves signiï¬cantly from zero to few shot settings but is still over 20 points short of the overall SOTA. GPT-3âs few-shot performance is similar to a ï¬ne-tuned BERT Large baseline on the leaderboard.
Overall, in-context learning with GPT-3 shows mixed results on commonsense reasoning tasks, with only small and inconsistent gains observed in the one and few-shot learning settings for both PIQA and ARC, but a signiï¬cant improvement is observed on OpenBookQA. GPT-3 sets SOTA on the new PIQA dataset in all evaluation settings.
# 3.6 Reading Comprehension
Next we evaluate GPT-3 on the task of reading comprehension. We use a suite of 5 datasets including abstractive, multiple choice, and span based answer formats in both dialog and single question settings. We observe a wide spread in GPT-3âs performance across these datasets suggestive of varying capability with different answer formats. In general we observe GPT-3 is on par with initial baselines and early results trained using contextual representations on each respective dataset.
GPT-3 performs best (within 3 points of the human baseline) on CoQA [RCM19] a free-form conversational dataset and performs worst (13 F1 below an ELMo baseline) on QuAC [CHI+18] a dataset which requires modeling structured dialog acts and answer span selections of teacher-student interactions. On DROP [DWD+19], a dataset testing discrete reasoning and numeracy in the context of reading comprehension, GPT-3 in a few-shot setting outperforms the ï¬ne-tuned BERT baseline from the original paper but is still well below both human performance and state-of-the-art approaches which augment neural networks with symbolic systems [RLL+19]. On SQuAD 2.0 [RJL18], GPT-3 demonstrates its few-shot learning capabilities, improving by almost 10 F1 (to 69.8) compared to a zero-shot setting. This allows it to slightly outperform the best ï¬ne-tuned result in the original paper. On RACE [LXL+17], a multiple choice dataset of middle school and high school english examinations, GPT-3 performs relatively weakly and is only competitive with the earliest work utilizing contextual representations and is still 45% behind SOTA.
# 3.7 SuperGLUE
In order to better aggregate results on NLP tasks and compare to popular models such as BERT and RoBERTa in a more systematic way, we also evaluate GPT-3 on a standardized collection of datasets, the SuperGLUE benchmark [WPN+19] [WPN+19] [CLC+19] [DMST19] [RBG11] [KCR+18] [ZLL+18] [DGM06] [BHDD+06] [GMDD07] [BDD+09] [PCC18] [PHR+18]. GPT-3âs test-set performance on the SuperGLUE dataset is shown in Table 3.8. In the few-shot setting, we used 32 examples for all tasks, sampled randomly from the training set. For all tasks except WSC
18
CoQA fine-tuned SOTA 90 luman 80 70 60 Accuracy 50 40 âeâ Zero-Shot âeâ One-Shot 30 âeâ Few-Shot (K=5) 0.1B 0.4B 0.8B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.7: GPT-3 results on CoQA reading comprehension task. GPT-3 175B achieves 85 F1 in the few-shot setting, only a few points behind measured human performance and state-of-the-art ï¬ne-tuned models. Zero-shot and one-shot performance is a few points behind, with the gains to few-shot being largest for bigger models.
SuperGLUE Average BoolQ CB Accuracy Accuracy CB F1 COPA RTE Accuracy Accuracy Fine-tuned SOTA Fine-tuned BERT-Large GPT-3 Few-Shot 89.0 69.0 71.8 91.0 77.4 76.4 96.9 83.6 75.6 93.9 75.7 52.0 94.8 70.6 92.0 92.5 71.7 69.0 WSC Accuracy Accuracy Accuracy WiC MultiRC MultiRC ReCoRD ReCoRD F1a Accuracy F1 Fine-tuned SOTA Fine-tuned BERT-Large GPT-3 Few-Shot 76.1 69.6 49.4 93.8 64.6 80.1 62.3 24.1 30.5 88.2 70.0 75.4 92.5 71.3 90.2 93.3 72.0 91.1
Table 3.8: Performance of GPT-3 on SuperGLUE compared to ï¬ne-tuned baselines and SOTA. All results are reported on the test set. GPT-3 few-shot is given a total of 32 examples within the context of each task and performs no gradient updates.
19
SuperGLUE Performance In-Context Learning on SuperGLUE â®- Zero-shot â®- Few-shot GPT-3 175B 99 Human h Human. Fine-tuned SOTA â@- One-shot Fine-tuned SOTA â®- Few-shot (K=32) 80 80 2 8 Fine-tuned BERT++ w 70 Fine-tuned BERT Large 70 3 Finé-tuned BERT Large Q @ fom Z 60 60 50 50 Random Guessing Random Guessing 40 40 0.1 04 #08 13 2.6 6.7 13 175 01234 8 16 32 Billions of Parameters in LM Number of Examples in Context (K)
Figure 3.8: Performance on SuperGLUE increases with model size and number of examples in context. A value of K = 32 means that our model was shown 32 examples per task, for 256 examples total divided across the 8 tasks in SuperGLUE. We report GPT-3 values on the dev set, so our numbers are not directly comparable to the dotted reference lines (our test set results are in Table 3.8). The BERT-Large reference model was ï¬ne-tuned on the SuperGLUE training set (125K examples), whereas BERT++ was ï¬rst ï¬ne-tuned on MultiNLI (392K examples) and SWAG (113K examples) before further ï¬ne-tuning on the SuperGLUE training set (for a total of 630K ï¬ne-tuning examples). We ï¬nd the difference in performance between the BERT-Large and BERT++ to be roughly equivalent to the difference between GPT-3 with one example per context versus eight examples per context.
and MultiRC, we sampled a new set of examples to use in the context for each problem. For WSC and MultiRC, we used the same set of randomly drawn examples from the training set as context for all of the problems we evaluated.
We observe a wide range in GPT-3âs performance across tasks. On COPA and ReCoRD GPT-3 achieves near-SOTA performance in the one-shot and few-shot settings, with COPA falling only a couple points short and achieving second place on the leaderboard, where ï¬rst place is held by a ï¬ne-tuned 11 billion parameter model (T5). On WSC, performance is still relatively strong, achieving 80.1% in the few-shot setting (note that GPT-3 achieves 88.6% on the original Winograd dataset as described in Section 3.4). On BoolQ, MultiRC, and RTE, performance is reasonable, roughly matching that of a ï¬ne-tuned BERT-Large. On CB, we see signs of life at 75.6% in the few-shot setting.
WiC is a notable weak spot with few-shot performance at 49.4% (at random chance). We tried a number of different phrasings and formulations for WiC (which involves determining if a word is being used with the same meaning in two sentences), none of which was able to achieve strong performance. This hints at a phenomenon that will become clearer in the next section (which discusses the ANLI benchmark) â GPT-3 appears to be weak in the few-shot or one-shot setting at some tasks that involve comparing two sentences or snippets, for example whether a word is used the same way in two sentences (WiC), whether one sentence is a paraphrase of another, or whether one sentence implies another. This could also explain the comparatively low scores for RTE and CB, which also follow this format. Despite these weaknesses, GPT-3 still outperforms a ï¬ne-tuned BERT-large on four of eight tasks and on two tasks GPT-3 is close to the state-of-the-art held by a ï¬ne-tuned 11 billion parameter model.
Finally, we note that the few-shot SuperGLUE score steadily improves with both model size and with number of examples in the context showing increasing beneï¬ts from in-context learning (Figure 3.8). We scale K up to 32 examples per task, after which point additional examples will not reliably ï¬t into our context. When sweeping over values of K, we ï¬nd that GPT-3 requires less than eight total examples per task to outperform a ï¬ne-tuned BERT-Large on overall SuperGLUE score.
# 3.8 NLI
Natural Language Inference (NLI) [Fyo00] concerns the ability to understand the relationship between two sentences. In practice, this task is usually structured as a two or three class classiï¬cation problem where the model classiï¬es
20
ANLI Round3 Fine-tuned SOTA 48 46 Fine-tuned RoBERTa-Large 44 _ Fine-tuned BERT-Large ey 42 _.â Zero-Shot 5 âeâ One-Shot 9 40 3 âeâ Few-Shot (K=50) 0.1B 0.4B 0.8B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.9: Performance of GPT-3 on ANLI Round 3. Results are on the dev-set, which has only 1500 examples and therefore has high variance (we estimate a standard deviation of 1.2%). We ï¬nd that smaller models hover around random chance, while few-shot GPT-3 175B closes almost half the gap from random chance to SOTA. Results for ANLI rounds 1 and 2 are shown in the appendix.
whether the second sentence logically follows from the ï¬rst, contradicts the ï¬rst sentence, or is possibly true (neutral). SuperGLUE includes an NLI dataset, RTE, which evaluates the binary version of the task. On RTE, only the largest version of GPT-3 performs convincingly better than random (56%) in any evaluation setting, but in a few-shot setting GPT-3 performs similarly to a single-task ï¬ne-tuned BERT Large. We also evaluate on the recently introduced Adversarial Natural Language Inference (ANLI) dataset [NWD+19]. ANLI is a difï¬cult dataset employing a series of adversarially mined natural language inference questions in three rounds (R1, R2, and R3). Similar to RTE, all of our models smaller than GPT-3 perform at almost exactly random chance on ANLI, even in the few-shot setting (â¼ 33%), whereas GPT-3 itself shows signs of life on Round 3. Results for ANLI R3 are highlighted in Figure 3.9 and full results for all rounds can be found in Appendix H. These results on both RTE and ANLI suggest that NLI is still a very difï¬cult task for language models and they are only just beginning to show signs of progress.
# 3.9 Synthetic and Qualitative Tasks
One way to probe GPT-3âs range of abilities in the few-shot (or zero- and one-shot) setting is to give it tasks which require it to perform simple on-the-ï¬y computational reasoning, recognize a novel pattern that is unlikely to have occurred in training, or adapt quickly to an unusual task. We devise several tasks to test this class of abilities. First, we test GPT-3âs ability to perform arithmetic. Second, we create several tasks that involve rearranging or unscrambling the letters in a word, tasks which are unlikely to have been exactly seen during training. Third, we test GPT-3âs ability to solve SAT-style analogy problems few-shot. Finally, we test GPT-3 on several qualitative tasks, including using new words in a sentence, correcting English grammar, and news article generation. We will release the synthetic datasets with the hope of stimulating further study of test-time behavior of language models.
# 3.9.1 Arithmetic
To test GPT-3âs ability to perform simple arithmetic operations without task-speciï¬c training, we developed a small battery of 10 tests that involve asking GPT-3 a simple arithmetic problem in natural language:
⢠2 digit addition (2D+) â The model is asked to add two integers sampled uniformly from [0, 100), phrased in the form of a question, e.g. âQ: What is 48 plus 76? A: 124.â
⢠2 digit subtraction (2D-) â The model is asked to subtract two integers sampled uniformly from [0, 100); the answer may be negative. Example: âQ: What is 34 minus 53? A: -19â.
⢠3 digit addition (3D+) â Same as 2 digit addition, except numbers are uniformly sampled from [0, 1000).
21
Arithmetic (few-shot) 100 âeâ Two Digit Addition âeâ Two Digit Subtraction âeâ Three Digit Addition âeâ Three Digit Subtraction âeâ Four Digit Addition âeâ Four Digit Subtraction âeâ Five Digit Addition Five Digit Subtraction âeâ Two Digit Multiplication Single Digit Three Ops 80 60 Accuracy 40 20 0.1B 0.4B 0.8B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.10: Results on all 10 arithmetic tasks in the few-shot settings for models of different sizes. There is a signiï¬cant jump from the second largest model (GPT-3 13B) to the largest model (GPT-3 175), with the latter being able to reliably accurate 2 digit arithmetic, usually accurate 3 digit arithmetic, and correct answers a signiï¬cant fraction of the time on 4-5 digit arithmetic, 2 digit multiplication, and compound operations. Results for one-shot and zero-shot are shown in the appendix.
⢠3 digit subtraction (3D-) â Same as 2 digit subtraction, except numbers are uniformly sampled from [0, 1000).
⢠4 digit addition (4D+) â Same as 3 digit addition, except uniformly sampled from [0, 10000).
4 digit subtraction (4D-) â Same as 3 digit subtraction, except uniformly sampled from [0, 10000).
⢠5 digit addition (5D+) â Same as 3 digit addition, except uniformly sampled from [0, 100000).
⢠5 digit subtraction (5D-) â Same as 3 digit subtraction, except uniformly sampled from [0, 100000).
⢠2 digit multiplication (2Dx) â The model is asked to multiply two integers sampled uniformly from [0, 100), e.g. âQ: What is 24 times 42? A: 1008â.
⢠One-digit composite (1DC) â The model is asked to perform a composite operation on three 1 digit numbers, with parentheses around the last two. For example, âQ: What is 6+(4*8)? A: 38â. The three 1 digit numbers are selected uniformly on [0, 10) and the operations are selected uniformly from {+,-,*}.
In all 10 tasks the model must generate the correct answer exactly. For each task we generate a dataset of 2,000 random instances of the task and evaluate all models on those instances.
First we evaluate GPT-3 in the few-shot setting, for which results are shown in Figure 3.10. On addition and subtraction, GPT-3 displays strong proï¬ciency when the number of digits is small, achieving 100% accuracy on 2 digit addition, 98.9% at 2 digit subtraction, 80.2% at 3 digit addition, and 94.2% at 3-digit subtraction. Performance decreases as the number of digits increases, but GPT-3 still achieves 25-26% accuracy on four digit operations and 9-10% accuracy on ï¬ve digit operations, suggesting at least some capacity to generalize to larger numbers of digits. GPT-3 also achieves 29.2% accuracy at 2 digit multiplication, an especially computationally intensive operation. Finally, GPT-3 achieves 21.3% accuracy at single digit combined operations (for example, 9*(7+5)), suggesting that it has some robustness beyond just single operations.
As Figure 3.10 makes clear, small models do poorly on all of these tasks â even the 13 billion parameter model (the second largest after the 175 billion full GPT-3) can solve 2 digit addition and subtraction only half the time, and all other operations less than 10% of the time.
One-shot and zero-shot performance are somewhat degraded relative to few-shot performance, suggesting that adaptation to the task (or at the very least recognition of the task) is important to performing these computations correctly. Nevertheless, one-shot performance is still quite strong, and even zero-shot performance of the full GPT-3 signiï¬cantly
22
2D+ 2D- 3D+ 3D- 4D+ 4D- 5D+ 5D- 2Dx 1DC 76.9 99.6 100.0 58.0 86.4 98.9 34.2 65.5 80.4 48.3 78.7 94.2 4.0 14.0 25.5 7.5 14.0 26.8 0.7 3.5 9.3 0.8 3.8 9.9 19.8 27.4 29.2 9.8 14.3 21.3
Table 3.9: Results on basic arithmetic tasks for GPT-3 175B. {2,3,4,5}D{+,-} is 2, 3, 4, and 5 digit addition or subtraction, 2Dx is 2 digit multiplication. 1DC is 1 digit composite operations. Results become progressively stronger moving from the zero-shot to one-shot to few-shot setting, but even the zero-shot shows signiï¬cant arithmetic abilities.
Setting CL A1 A2 RI RW GPT-3 Zero-shot GPT-3 One-shot GPT-3 Few-shot 3.66 21.7 37.9 2.28 8.62 15.1 8.91 25.9 39.7 8.26 45.4 67.2 0.09 0.48 0.44
Table 3.10: GPT-3 175B performance on various word unscrambling and word manipulation tasks, in zero-, one-, and few-shot settings. CL is âcycle letters in wordâ, A1 is anagrams of but the ï¬rst and last letters, A2 is anagrams of all but the ï¬rst and last two letters, RI is âRandom insertion in wordâ, RW is âreversed wordsâ.
outperforms few-shot learning for all smaller models. All three settings for the full GPT-3 are shown in Table 3.9, and model capacity scaling for all three settings is shown in Appendix H.
To spot-check whether the model is simply memorizing speciï¬c arithmetic problems, we took the 3-digit arithmetic problems in our test set and searched for them in our training data in both the forms "<NUM1> + <NUM2> =" and "<NUM1> plus <NUM2>". Out of 2,000 addition problems we found only 17 matches (0.8%) and out of 2,000 subtraction problems we found only 2 matches (0.1%), suggesting that only a trivial fraction of the correct answers could have been memorized. In addition, inspection of incorrect answers reveals that the model often makes mistakes such as not carrying a â1â, suggesting it is actually attempting to perform the relevant computation rather than memorizing a table.
Overall, GPT-3 displays reasonable proï¬ciency at moderately complex arithmetic in few-shot, one-shot, and even zero-shot settings.
# 3.9.2 Word Scrambling and Manipulation Tasks
To test GPT-3âs ability to learn novel symbolic manipulations from a few examples, we designed a small battery of 5 âcharacter manipulationâ tasks. Each task involves giving the model a word distorted by some combination of scrambling, addition, or deletion of characters, and asking it to recover the original word. The 5 tasks are:
⢠Cycle letters in word (CL) â The model is given a word with its letters cycled, then the â=â symbol, and is expected to generate the original word. For example, it might be given âlyinevitabâ and should output âinevitablyâ.
⢠Anagrams of all but ï¬rst and last characters (A1) â The model is given a word where every letter except the ï¬rst and last have been scrambled randomly, and must output the original word. Example: criroptuon = corruption.
⢠Anagrams of all but ï¬rst and last 2 characters (A2) â The model is given a word where every letter except the ï¬rst 2 and last 2 have been scrambled randomly, and must recover the original word. Example: opoepnnt â opponent.
⢠Random insertion in word (RI) â A random punctuation or space character is inserted between each letter of a word, and the model must output the original word. Example: s.u!c/c!e.s s i/o/n = succession.
⢠Reversed words (RW) â The model is given a word spelled backwards, and must output the original word. Example: stcejbo â objects.
For each task we generate 10,000 examples, which we chose to be the top 10,000 most frequent words as measured by [Nor09] of length more than 4 characters and less than 15 characters. The few-shot results are shown in Figure 3.11. Task performance tends to grow smoothly with model size, with the full GPT-3 model achieving 66.9% on removing
23
10 Wordscramble (few-shot) âeâ cycle letters 60 ~° mid word 1 anagrams âeâ mid word 2 anagrams âeâ random insertion 50 âeâ reversed words Accuracy â â © © * 0.1B 0.4B 0.8B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.11: Few-shot performance on the ï¬ve word scrambling tasks for different sizes of model. There is generally smooth improvement with model size although the random insertion task shows an upward slope of improvement with the 175B model solving the task the majority of the time. Scaling of one-shot and zero-shot performance is shown in the appendix. All tasks are done with K = 100.
random insertions, 38.6% on cycling letters, 40.2% on the easier anagram task, and 15.1% on the more difï¬cult anagram task (where only the ï¬rst and last letters are held ï¬xed). None of the models can reverse the letters in a word.
In the one-shot setting, performance is signiï¬cantly weaker (dropping by half or more), and in the zero-shot setting the model can rarely perform any of the tasks (Table 3.10). This suggests that the model really does appear to learn these tasks at test time, as the model cannot perform them zero-shot and their artiï¬cial nature makes them unlikely to appear in the pre-training data (although we cannot conï¬rm this with certainty).
We can further quantify performance by plotting âin-context learning curvesâ, which show task performance as a function of the number of in-context examples. We show in-context learning curves for the Symbol Insertion task in Figure 1.2. We can see that larger models are able to make increasingly effective use of in-context information, including both task examples and natural language task descriptions.
Finally, it is worth adding that solving these tasks requires character-level manipulations, whereas our BPE encoding operates on signiï¬cant fractions of a word (on average â¼ 0.7 words per token), so from the LMâs perspective succeeding at these tasks involves not just manipulating BPE tokens but understanding and pulling apart their substructure. Also, CL, A1, and A2 are not bijective (that is, the unscrambled word is not a deterministic function of the scrambled word), requiring the model to perform some search to ï¬nd the correct unscrambling. Thus, the skills involved appear to require non-trivial pattern-matching and computation.
# 3.9.3 SAT Analogies
To test GPT-3 on another task that is somewhat unusual relative to the typical distribution of text, we collected a set of 374 âSAT analogyâ problems [TLBS03]. Analogies are a style of multiple choice question that constituted a section of the SAT college entrance exam before 2005. A typical example is âaudacious is to boldness as (a) sanctimonious is to hypocrisy, (b) anonymous is to identity, (c) remorseful is to misdeed, (d) deleterious is to result, (e) impressionable is to temptationâ. The student is expected to choose which of the ï¬ve word pairs has the same relationship as the original word pair; in this example the answer is âsanctimonious is to hypocrisyâ. On this task GPT-3 achieves 65.2% in the few-shot setting, 59.1% in the one-shot setting, and 53.7% in the zero-shot setting, whereas the average score among college applicants was 57% [TL05] (random guessing yields 20%). As shown in Figure 3.12, the results improve with scale, with the the full 175 billion model improving by over 10% compared to the 13 billion parameter model.
24
SAT Analogies âeâ Zero-Shot âeâ One-Shot 60 âeâ Few-Shot (K=20) a 5 b o Accuracy 30 20 Random Guessing 0.1B 0.4B 0.8B 13B 26B 6.7B 13B 175B Parameters in LM (Billions)
Figure 3.12: Zero-, one-,and few-shot performance on SAT analogy tasks, for different sizes of model. The largest model achieves 65% accuracy in the few-shot setting, and also demonstrates signiï¬cant gains to in-context learning which are not present in smaller models.
# 3.9.4 News Article Generation
Previous work on generative language models qualitatively tested their ability to generate synthetic ânews articlesâ by conditional sampling from the model given a human-written prompt consisting of a plausible ï¬rst sentence for a news story [RWC+19]. Relative to [RWC+19], the dataset used to train GPT-3 is much less weighted towards news articles, so trying to generate news articles via raw unconditional samples is less effective â for example GPT-3 often interprets the proposed ï¬rst sentence of a ânews articleâ as a tweet and then posts synthetic responses or follow-up tweets. To solve this problem we employed GPT-3âs few-shot learning abilities by providing three previous news articles in the modelâs context to condition it. With the title and subtitle of a proposed next article, the model is able to reliably generate short articles in the ânewsâ genre.
To gauge the quality of news article generation from GPT-3 (which we believe is likely to be correlated with conditional sample generation quality in general), we decided to measure human ability to distinguish GPT-3-generated articles from real ones. Similar work has been carried out by Kreps et al. [KMB20] and Zellers et al. [ZHR+19]. Generative language models are trained to match the distribution of content generated by humans, so the (in)ability of humans to distinguish the two is a potentially important measure of quality.3
In order to see how well humans can detect model generated text, we arbitrarily selected 25 article titles and subtitles from the website newser.com (mean length: 215 words). We then generated completions of these titles and subtitles from four language models ranging in size from 125M to 175B (GPT-3) parameters (mean length: 200 words). For each model, we presented around 80 US-based participants with a quiz consisting of these real titles and subtitles followed by either the human written article or the article generated by the model4. Participants were asked to select whether the article was âvery likely written by a humanâ, âmore likely written by a humanâ, âI donât knowâ, âmore likely written by a machineâ, or âvery likely written by a machineâ.
The articles we selected were not in the modelsâ training data and the model outputs were formatted and selected programmatically to prevent human cherry-picking. All models used the same context to condition outputs on and were pre-trained with the same context size and the same article titles and subtitles were used as prompts for each model. However, we also ran an experiment to control for participant effort and attention that followed the same format but involved intentionally bad model generated articles. This was done by generating articles from a âcontrol modelâ: a 160M parameter model with no context and increased output randomness.
3This task is also relevant to the potential misuse of language models discussed in Section 6.1. 4We wanted to identify how good an average person on the internet is at detecting language model outputs, so we focused on
participants drawn from the general US population. See Appendix E for details.
25
Mean accuracy 95% Conï¬dence Interval (low, hi) t compared to control (p-value) âI donât knowâ assignments Control (deliberately bad model) GPT-3 Small GPT-3 Medium GPT-3 Large GPT-3 XL GPT-3 2.7B GPT-3 6.7B GPT-3 13B GPT-3 175B 86% 76% 61% 68% 62% 62% 60% 55% 52% 83%â90% 72%â80% 58%â65% 64%â72% 59%â65% 58%â65% 56%â63% 52%â58% 49%â54% - 3.9 (2e-4) 10.3 (7e-21) 7.3 (3e-11) 10.7 (1e-19) 10.4 (5e-19) 11.2 (3e-21) 15.3 (1e-32) 16.9 (1e-34) 3.6 % 4.9% 6.0% 8.7% 7.5% 7.1% 6.2% 7.1% 7.8%
Table 3.11: Human accuracy in identifying whether short (â¼200 word) news articles are model generated. We ï¬nd that human accuracy (measured by the ratio of correct assignments to non-neutral assignments) ranges from 86% on the control model to 52% on GPT-3 175B. This table compares mean accuracy between ï¬ve different models, and shows the results of a two-sample T-Test for the difference in mean accuracy between each model and the control model (an unconditional GPT-3 Small model with increased output randomness).
Mean human accuracy (the ratio of correct assignments to non-neutral assignments per participant) at detecting that the intentionally bad articles were model generated was â¼ 86% where 50% is chance level performance. By contrast, mean human accuracy at detecting articles that were produced by the 175B parameter model was barely above chance at â¼ 52% (see Table 3.11).5 Human abilities to detect model generated text appear to decrease as model size increases: there appears to be a trend towards chance accuracy with model size, and human detection of GPT-3 is close to chance.6 This is true despite the fact that participants spend more time on each output as model size increases (see Appendix E). Examples of synthetic articles from GPT-3 are given in Figures 3.14 and 3.15.7 Much of the text isâas indicated by the evaluationsâdifï¬cult for humans to distinguish from authentic human content. Factual inaccuracies can be an indicator that an article is model generated since, unlike human authors, the models have no access to the speciï¬c facts that the article titles refer to or when the article was written. Other indicators include repetition, non sequiturs, and unusual phrasings, though these are often subtle enough that they are not noticed.
Related work on language model detection by Ippolito et al. [IDCBE19] indicates that automatic discriminators like G R O V E R [ZHR+19] and GLTR [GSR19] may have greater success at detecting model generated text than human evaluators. Automatic detection of these models may be a promising area of future research.
Ippolito et al. [IDCBE19] also note that human accuracy at detecting model generated text increases as humans observe more tokens. To do a preliminary investigation of how good humans are at detecting longer news articles generated by GPT-3 175B, we selected 12 world news articles from Reuters with an average length of 569 words and generated completions of these articles from GPT-3 with an average length of 498 words (298 words longer than our initial experiments). Following the methodology above, we ran two experiments, each on around 80 US-based participants, to compare human abilities to detect the articles generated by GPT-3 and a control model.
We found that mean human accuracy at detecting the intentionally bad longer articles from the control model was â¼ 88%, while mean human accuracy at detecting the longer articles that were produced by GPT-3 175B was still barely above chance at â¼ 52% (see Table 3.12). This indicates that, for news articles that are around 500 words long, GPT-3 continues to produce articles that humans ï¬nd difï¬cult to distinguish from human written news articles.
# 3.9.5 Learning and Using Novel Words
A task studied in developmental linguistics [CB78] is the ability to learn and utilize new words, for example using a word in a sentence after seeing it deï¬ned only once, or conversely inferring a wordâs meaning from only one usage. Here we qualitatively test GPT-3âs ability to do the former. Speciï¬cally, we give GPT-3 the deï¬nition of a nonexistent word, such as âGigamuruâ, and then ask it to use it in a sentence. We provide one to ï¬ve previous examples of a (separate)
5We use a two-sample Studentâs T-Test to test for signiï¬cant difference between the means of the participant accuracies of each model and the control model and report the normalized difference in the means (as the t-statistic) and the p-value. 6If a model consistently produces texts that are more impressive than human articles, it is possible that human performance on this task would drop below 50%. Indeed, many individual participants scored below 50% on this task. 7Additional non-news samples can be found in Appendix F.
26
Human ability to detect model generated news articles control (86%) 80- 70- Accuracy (%) 60- random chance (50%) 1e8 le9 1e10 lell Number of parameters (log scale)
Figure 3.13: Peopleâs ability to identify whether news articles are model-generated (measured by the ratio of correct assignments to non-neutral assignments) decreases as model size increases. Accuracy on the outputs on the deliberately- bad control model (an unconditioned GPT-3 Small model with higher output randomness) is indicated with the dashed line at the top, and the random chance (50%) is indicated with the dashed line at the bottom. Line of best ï¬t is a power law with 95% conï¬dence intervals.
Mean accuracy 95% Conï¬dence Interval (low, hi) t compared to control (p-value) âI donât knowâ assignments Control GPT-3 175B 88% 52% 84%â91% 48%â57% - 12.7 (3.2e-23) 2.7% 10.6%
Table 3.12: Peopleâs ability to identify whether â¼ 500 word articles are model generated (as measured by the ratio of correct assignments to non-neutral assignments) was 88% on the control model and 52% on GPT-3 175B. This table shows the results of a two-sample T-Test for the difference in mean accuracy between GPT-3 175B and the control model (an unconditional GPT-3 Small model with increased output randomness).
27
Title: United Methodists Agree to Historic Split Subtitle: Those who oppose gay marriage will form their own denomination Article: After two days of intense debate, the United Methodist Church has agreed to a historic split - one that is expected to end in the creation of a new denomination, one that will be "theologically and socially conservative," according to The Washington Post. The majority of delegates attending the churchâs annual General Conference in May voted to strengthen a ban on the ordination of LGBTQ clergy and to write new rules that will "discipline" clergy who officiate at same-sex weddings. those who opposed these measures have a new plan: separate denomination by 2020, calling their church the Christian Methodist denomination. The Post notes that the denomination, which claims 12.5 million members, was in the early 20th century the "largest Protestant denomination in the U.S.," but that it has been shrinking in recent decades. The new split will be the second in the churchâs history. 10 percent of the denomination left to form the Evangelical United Brethren Church. for the church, which has been losing members for years," which has been "pushed toward the brink of a schism over the role of LGBTQ people in the church." Gay marriage is not the only issue that has divided the church. In 2016, the denomination was split over ordination of transgender clergy, with the North Pacific regional conference voting to ban them from serving as clergy, and the South Pacific regional conference voting to allow them.
Figure 3.14: The GPT-3 generated news article that humans had the greatest difï¬culty distinguishing from a human written article (accuracy: 12%).
Title: Starâs Tux Promise Draws Megyn Kellyâs Sarcasm Subtitle: Joaquin Phoenix pledged to not change for each awards event Article: A year ago, Joaquin Phoenix made headlines when he appeared on the red carpet at the Golden Globes wearing a tuxedo with a paper bag over his head that read, "I am a shape-shifter. can only change myself." It was a promise to not change to fit into the Hollywood mold: yourself. inside of me, Iâm proud of it, and Iâm not going to be ashamed because of the way that someone else thinks I should be.â" Now, itâs the Oscars, and Phoenix is at it again. wearing a tux no matter what. Megyn Kelly was not impressed, and she let him have it on The Tonight Show. "You know, I feel like, I feel like you could have worn the tux," she says. "But youâre saying youâre a shape-shifter. your tux, but you can change your mind. change your mind." Phoenix says he did, but it didnât stick. âOkay, Iâm going to wear a tuxedo to this thing.â donât want to wear a tuxedo to this thing.â" Kelly goes on to encourage him to change his mind again, but Phoenix says itâs too late: wearing this."
Figure 3.15: The GPT-3 generated news article that humans found the easiest to distinguish from a human written article (accuracy: 61%).
28
A "whatpu" is a small, furry animal native to Tanzania. the word whatpu is: We were traveling in Africa and we saw these very cute whatpus.
An example of a sentence that uses
To do a "farduddle" means to jump up and down really fast. the word farduddle is: One day when I was playing tag with my little sister, she got really excited and she started doing these crazy farduddles. A "yalubalu" is a type of vegetable that looks like a big pumpkin. that uses the word yalubalu is: I was on a trip to Africa and I tried this yalubalu vegetable that was grown in a garden there. It was delicious.
A "Burringo" is a car with very fast acceleration. word Burringo is: In our garage we have a Burringo that my father drives to work every day.
# that
# uses
# the
A "Gigamuru" is a type of Japanese musical instrument. word Gigamuru is: I have a Gigamuru that my uncle gave me as a gift. I love to play it at home.
# that
# uses
To "screeg" something is to swing a sword at it. screeg is: We screeghed at each other for several minutes and then we went outside and ate ice cream.
Figure 3.16: Representative GPT-3 completions for the few-shot task of using a new word in a sentence. Boldface is GPT-3âs completions, plain text is human prompts. In the ï¬rst example both the prompt and the completion are provided by a human; this then serves as conditioning for subsequent examples where GPT-3 receives successive additional prompts and provides the completions. Nothing task-speciï¬c is provided to GPT-3 other than the conditioning shown here.
nonexistent word being deï¬ned and used in a sentence, so the task is few-shot in terms of previous examples of the broad task and one-shot in terms of the speciï¬c word. Table 3.16 shows the 6 examples we generated; all deï¬nitions were human-generated, and the ï¬rst answer was human-generated as conditioning while the subsequent answers were generated by GPT-3. These examples were generated continuously in one sitting and we did not omit or repeatedly try any prompts. In all cases the generated sentence appears to be a correct or at least plausible use of the word. In the ï¬nal sentence the model generates a plausible conjugation for the word âscreegâ (namely âscreeghedâ), although the use of the word is slightly awkward (âscreeghed at each otherâ) despite being plausible in the sense that it could describe a toy sword ï¬ght. Overall, GPT-3 appears to be at least proï¬cient at the task of using novel words in a sentence.
# 3.9.6 Correcting English Grammar
Another task well suited for few-shot learning is correcting English grammar. We test this with GPT-3 in the few- shot setting by giving prompts of the form "Poor English Input: <sentence>
Good English Output: <sentence>". We give GPT-3 one human-generated correction and then ask it to correct 5 more (again without any omissions or repeats). Results are shown in Figure 3.17.
# 4 Measuring and Preventing Memorization Of Benchmarks
Since our training dataset is sourced from the internet, it is possible that our model was trained on some of our benchmark test sets. Accurately detecting test contamination from internet-scale datasets is a new area of research without established best practices. While it is common practice to train large models without investigating contamination, given the increasing scale of pretraining datasets, we believe this issue is becoming increasingly important to attend to.
This concern is not just hypothetical. One of the ï¬rst papers to train a language model on Common Crawl data [TL18] detected and removed a training document which overlapped with one of their evaluation datasets. Other work such as GPT-2 [RWC+19] also conducted post-hoc overlap analysis. Their study was relatively encouraging, ï¬nding that
29
# the
Poor English input: Good English output: Poor English input: Good English output: Poor English input: requested. Good English output: requested. Poor English input: Good English output: Iâd be more than happy to work with you on another project.
Poor English input: thatâd be nice if you could share some examples or project you did before. Good English output: Please provide me with a brief description of the design youâre looking for and that would be nice if you could share some examples or projects you have done before.
# Poor English input: Good English output: The patient died.
The patient was died.
Poor English input: Good English output: We think that Leslie likes us.
We think that Leslie likes ourselves.
# Poor English input: Good English output: Janet broke Billâs finger.
Poor English input: Good English output: Mary arranged for John to rent a house in St. Louis.
Poor English input: Good English output: Today I went to the store to buy some bottles of water.
# of
# water.
Poor English input: Good English output: I tried to hit the ball with the bat, but my swing missed.
Figure 3.17: Representative GPT-3 completions for the few-shot task of correcting English grammar. Boldface is GPT-3âs completions, plain text is human prompts. In the ï¬rst few examples example both the prompt and the completion are provided by a human; this then serves as conditioning for subsequent examples where GPT-3 receives successive additional prompts and provides the completions. Nothing task-speciï¬c is provided to GPT-3 aside from the ï¬rst few examples as conditioning and the âPoor English input/Good English outputâ framing. We note that the distinction between âpoorâ and âgoodâ English (and the terms themselves) is complex, contextual, and contested. As the example mentioning the rental of a house shows, assumptions that the model makes about what âgoodâ is can even lead it to make errors (here, the model not only adjusts grammar, but also removes the word âcheapâ in a way that alters meaning).
30
# and
GPT-3 Training Curves â Validation Loss -10 ---- Train Loss 3.50 yu 2 & a Nn Oo WN oan SF a7 NON Ny a Model Parameters Cross-Entropy Loss (nats/token, smoothed) => NN nN oO a oo 8 1.50 10 0 50 100 150 200 250 300 Tokens Elapsed (Billions)
Figure 4.1: GPT-3 Training Curves We measure model performance during training on a deduplicated validation split of our training distribution. Though there is some gap between training and validation performance, the gap grows only minimally with model size and training time, suggesting that most of the gap comes from a difference in difï¬culty rather than overï¬tting.
although models did perform moderately better on data that overlapped between training and testing, this did not signiï¬cantly impact reported results due to the small fraction of data which was contaminated (often only a few percent).
GPT-3 operates in a somewhat different regime. On the one hand, the dataset and model size are about two orders of magnitude larger than those used for GPT-2, and include a large amount of Common Crawl, creating increased potential for contamination and memorization. On the other hand, precisely due to the large amount of data, even GPT-3 175B does not overï¬t its training set by a signiï¬cant amount, measured relative to a held-out validation set with which it was deduplicated (Figure 4.1). Thus, we expect that contamination is likely to be frequent, but that its effects may not be as large as feared.
We initially tried to address the issue of contamination by proactively searching for and attempting to remove any overlap between our training data and the development and test sets of all benchmarks studied in this paper. Unfortunately, a bug resulted in only partial removal of all detected overlaps from the training data. Due to the cost of training, it wasnât feasible to retrain the model. To address this, we investigate in detail how the remaining detected overlap impacts results.
For each benchmark, we produce a âcleanâ version which removes all potentially leaked examples, deï¬ned roughly as examples that have a 13-gram overlap with anything in the pretraining set (or that overlap with the whole example when it is shorter than 13-grams). The goal is to very conservatively ï¬ag anything that could potentially be contamination, so as to produce a clean subset that is free of contamination with high conï¬dence. The exact procedure is detailed in Appendix C.
We then evaluate GPT-3 on these clean benchmarks, and compare to the original score. If the score on the clean subset is similar to the score on the entire dataset, this suggests that contamination, even if present, does not have a signiï¬cant effect on reported results. If the score on the clean subset is lower, this suggests contamination may be inï¬ating the results. The results are summarized in Figure 4.2. Although potential contamination is often high (with a quarter of benchmarks scoring over 50%), in most cases performance changes only negligibly, and we see no evidence that contamination level and performance difference are correlated. We conclude that either our conservative method substantially overestimated contamination or that contamination has little effect on performance.
Below, we review in more detail the few speciï¬c cases where either (1) the model performs signiï¬cantly worse on the cleaned version, or (2) potential contamination is very high, which makes measuring the performance difference difï¬cult.
Our analysis ï¬agged six groups of benchmarks for further investigation: Word Scrambling, Reading Comprehension (QuAC, SQuAD2, DROP), PIQA, Winograd, language modeling tasks (Wikitext tasks, 1BW), and German to English
31
30% eval on only Quac clean data lu - S 20% did better 2 eS SA 10% as Symbol Insertion ° e e cr pt 0% 2 of See os ° e ® ee 3 A S Ted a8 SQuADv2 Winograd PIQA-® £ 9) ® G2 -10% WMT16 ensde_/ Anagrams2® @ F- 2 WMT16 de->en Anagrams 1 2 2 20% @ DROP eval on all data Reversed Words-@ (including dirty) -30% did better 0% 25% 50% 75% 100% Percentage of Data Clean in Dataset
Figure 4.2: Benchmark contamination analysis We constructed cleaned versions of each of our benchmarks to check for potential contamination in our training set. The x-axis is a conservative lower bound for how much of the dataset is known with high conï¬dence to be clean, and the y-axis shows the difference in performance when evaluating only on the veriï¬ed clean subset. Performance on most benchmarks changed negligibly, but some were ï¬agged for further review. On inspection we ï¬nd some evidence for contamination of the PIQA and Winograd results, and we mark the corresponding results in Section 3 with an asterisk. We ï¬nd no evidence that other benchmarks are affected.
translation. Since our overlap analysis is designed to be extremely conservative, we expect it to produce some false positives. We summarize the results for each group of tasks below:
⢠Reading Comprehension: Our initial analysis ï¬agged >90% of task examples from QuAC, SQuAD2, and DROP as potentially contaminated, so large that even measuring the differential on a clean subset was difï¬cult. Upon manual inspection, however, we found that for every overlap we inspected, in all 3 datasets, the source text was present in our training data but the question/answer pairs were not, meaning the model gains only background information and cannot memorize the answer to a speciï¬c question.
⢠German translation: We found 25% of the examples in the WMT16 German-English test set were marked as potentially contaminated, with an associated total effect size of 1-2 BLEU. Upon inspection, none of the ï¬agged examples contain paired sentences resembling NMT training data and collisions were monolingual matches mostly of snippets of events discussed in the news.
⢠Reversed Words and Anagrams: Recall that these tasks are of the form âalaok = koalaâ. Due to the short length of these tasks, we used 2-grams for ï¬ltering (ignoring punctuation). After inspecting the ï¬agged overlaps, we found that they were not typically instances of real reversals or unscramblings in the training set, but rather palindromes or trivial unscramblings, e.g âkayak = kayakâ. The amount of overlap was small, but removing the trivial tasks lead to an increase in difï¬culty and thus a spurious signal. Related to this, the symbol insertion task shows high overlap but no effect on performance â this is because that task involves removing non-letter characters from a word, and the overlap analysis itself ignores such characters, leading to many spurious matches.
⢠PIQA: The overlap analysis ï¬agged 29% of examples as contaminated, and observed a 3 percentage point absolute decrease (4% relative decrease) in performance on the clean subset. Though the test dataset was released after our training set was created and its labels are hidden, some of the web pages used by the crowdsourced dataset creators are contained in our training set. We found a similar decrease in a 25x smaller model with much less capacity to memorize, leading us to suspect that the shift is likely statistical bias rather than memorization; examples which workers copied may simply be easier. Unfortunately, we cannot rigorously prove this hypothesis. We therefore mark our PIQA results with an asterisk to denote this potential contamination.
⢠Winograd: The overlap analysis ï¬agged 45% of examples, and found a 2.6% decrease in performance on the clean subset. Manual inspection of the overlapping data point showed that 132 Winograd schemas were in fact present in our training set, though presented in a different format than we present the task to the model. Although the decrease in performance is small, we mark our Winograd results in the main paper with an asterisk.
32
⢠Language modeling: We found the 4 Wikipedia language modeling benchmarks measured in GPT-2, plus the Childrenâs Book Test dataset, to be almost entirely contained in our training data. Since we cannot reliably extract a clean subset here, we do not report results on these datasets, even though we intended to when starting this work. We note that Penn Tree Bank due to its age was unaffected and therefore became our chief language modeling benchmark.
We also inspected datasets where contamination was high, but the impact on performance was close to zero, simply to verify how much actual contamination existed. These appeared to often contain false positives. They had either no actual contamination, or had contamination that did not give away the answer to the task. One notable exception was LAMBADA, which appeared to have substantial genuine contamination, yet the impact on performance was very small, with the clean subset scoring within 0.5% of the full dataset. Also, strictly speaking, our ï¬ll-in-the-blank format precludes the simplest form of memorization. Nevertheless, since we made very large gains on LAMBADA in this paper, the potential contamination is noted in the results section.
An important limitation of our contamination analysis is that we cannot be sure that the clean subset is drawn from the same distribution as the original dataset. It remains possible that memorization inï¬ates results but at the same time is precisely counteracted by some statistical bias causing the clean subset to be easier. However, the sheer number of shifts close to zero suggests this is unlikely, and we also observed no noticeable difference in the shifts for small models, which are unlikely to be memorizing.
Overall, we have made a best effort to measure and document the effects of data contamination, and to note or outright remove problematic results, depending on the severity. Much work remains to be done to address this important and subtle issue for the ï¬eld in general, both when designing benchmarks and when training models. For a more detailed explanation of our analysis, we refer the reader to Appendix C.
# 5 Limitations
GPT-3 and our analysis of it have a number of limitations. Below we describe some of these and suggest directions for future work.
First, despite the strong quantitative and qualitative improvements of GPT-3, particularly compared to its direct predecessor GPT-2, it still has notable weaknesses in text synthesis and several NLP tasks. On text synthesis, although the overall quality is high, GPT-3 samples still sometimes repeat themselves semantically at the document level, start to lose coherence over sufï¬ciently long passages, contradict themselves, and occasionally contain non-sequitur sentences or paragraphs. We will release a collection of 500 uncurated unconditional samples to help provide a better sense of GPT-3âs limitations and strengths at text synthesis. Within the domain of discrete language tasks, we have noticed informally that GPT-3 seems to have special difï¬culty with âcommon sense physicsâ, despite doing well on some datasets (such as PIQA [BZB+19]) that test this domain. Speciï¬cally GPT-3 has difï¬culty with questions of the type âIf I put cheese into the fridge, will it melt?â. Quantitatively, GPT-3âs in-context learning performance has some notable gaps on our suite of benchmarks, as described in Section 3, and in particular it does little better than chance when evaluated one-shot or even few-shot on some âcomparisonâ tasks, such as determining if two words are used the same way in a sentence, or if one sentence implies another (WIC and ANLI respectively), as well as on a subset of reading comprehension tasks. This is especially striking given GPT-3âs strong few-shot performance on many other tasks.
GPT-3 has several structural and algorithmic limitations, which could account for some of the issues above. We focused on exploring in-context learning behavior in autoregressive language models because it is straightforward to both sample and compute likelihoods with this model class. As a result our experiments do not include any bidirectional architectures or other training objectives such as denoising. This is a noticeable difference from much of the recent literature, which has documented improved ï¬ne-tuning performance when using these approaches over standard language models [RSR+19]. Thus our design decision comes at the cost of potentially worse performance on tasks which empirically beneï¬t from bidirectionality. This may include ï¬ll-in-the-blank tasks, tasks that involve looking back and comparing two pieces of content, or tasks that require re-reading or carefully considering a long passage and then generating a very short answer. This could be a possible explanation for GPT-3âs lagging few-shot performance on a few of the tasks, such as WIC (which involves comparing the use of a word in two sentences), ANLI (which involves comparing two sentences to see if one implies the other), and several reading comprehension tasks (e.g. QuAC and RACE). We also conjecture, based on past literature, that a large bidirectional model would be stronger at ï¬ne-tuning than GPT-3. Making a bidirectional model at the scale of GPT-3, and/or trying to make bidirectional models work with few- or zero-shot learning, is a promising direction for future research, and could help achieve the âbest of both worldsâ.
A more fundamental limitation of the general approach described in this paper â scaling up any LM-like model, whether autoregressive or bidirectional â is that it may eventually run into (or could already be running into) the limits of the
33
pretraining objective. Our current objective weights every token equally and lacks a notion of what is most important to predict and what is less important. [RRS20] demonstrate beneï¬ts of customizing prediction to entities of interest. Also, with self-supervised objectives, task speciï¬cation relies on forcing the desired task into a prediction problem, whereas ultimately, useful language systems (for example virtual assistants) might be better thought of as taking goal-directed actions rather than just making predictions. Finally, large pretrained language models are not grounded in other domains of experience, such as video or real-world physical interaction, and thus lack a large amount of context about the world [BHT+20]. For all these reasons, scaling pure self-supervised prediction is likely to hit limits, and augmentation with a different approach is likely to be necessary. Promising future directions in this vein might include learning the objective function from humans [ZSW+19a], ï¬ne-tuning with reinforcement learning, or adding additional modalities such as images to provide grounding and a better model of the world [CLY+19].
Another limitation broadly shared by language models is poor sample efï¬ciency during pre-training. While GPT-3 takes a step towards test-time sample efï¬ciency closer to that of humans (one-shot or zero-shot), it still sees much more text during pre-training than a human sees in the their lifetime [Lin20]. Improving pre-training sample efï¬ciency is an important direction for future work, and might come from grounding in the physical world to provide additional information, or from algorithmic improvements.
A limitation, or at least uncertainty, associated with few-shot learning in GPT-3 is ambiguity about whether few-shot learning actually learns new tasks âfrom scratchâ at inference time, or if it simply recognizes and identiï¬es tasks that it has learned during training. These possibilities exist on a spectrum, ranging from demonstrations in the training set that are drawn from exactly the same distribution as those at test time, to recognizing the same task but in a different format, to adapting to a speciï¬c style of a general task such as QA, to learning a skill entirely de novo. Where GPT-3 is on this spectrum may also vary from task to task. Synthetic tasks such as wordscrambling or deï¬ning nonsense words seem especially likely to be learned de novo, whereas translation clearly must be learned during pretraining, although possibly from data that is very different in organization and style than the test data. Ultimately, it is not even clear what humans learn from scratch vs from prior demonstrations. Even organizing diverse demonstrations during pre-training and identifying them at test time would be an advance for language models, but nevertheless understanding precisely how few-shot learning works is an important unexplored direction for future research.
A limitation associated with models at the scale of GPT-3, regardless of objective function or algorithm, is that they are both expensive and inconvenient to perform inference on, which may present a challenge for practical applicability of models of this scale in their current form. One possible future direction to address this is distillation [HVD15] of large models down to a manageable size for speciï¬c tasks. Large models such as GPT-3 contain a very wide range of skills, most of which are not needed for a speciï¬c task, suggesting that in principle aggressive distillation may be possible. Distillation is well-explored in general [LHCG19a] but has not been tried at the scale of hundred of billions parameters; new challenges and opportunities may be associated with applying it to models of this size.
Finally, GPT-3 shares some limitations common to most deep learning systems â its decisions are not easily interpretable, it is not necessarily well-calibrated in its predictions on novel inputs as observed by the much higher variance in performance than humans on standard benchmarks, and it retains the biases of the data it has been trained on. This last issue â biases in the data that may lead the model to generate stereotyped or prejudiced content â is of special concern from a societal perspective, and will be discussed along with other issues in the next section on Broader Impacts (Section 6).
# 6 Broader Impacts
Language models have a wide range of beneï¬cial applications for society, including code and writing auto-completion, grammar assistance, game narrative generation, improving search engine responses, and answering questions. But they also have potentially harmful applications. GPT-3 improves the quality of text generation and adaptability over smaller models and increases the difï¬culty of distinguishing synthetic text from human-written text. It therefore has the potential to advance both the beneï¬cial and harmful applications of language models.
Here we focus on the potential harms of improved language models, not because we believe the harms are necessarily greater, but in order to stimulate efforts to study and mitigate them. The broader impacts of language models like this are numerous. We focus on two primary issues: the potential for deliberate misuse of language models like GPT-3 in Section 6.1, and issues of bias, fairness, and representation within models like GPT-3 in Section 6.2. We also brieï¬y discuss issues of energy efï¬ciency (Section 6.3).
34
# 6.1 Misuse of Language Models
Malicious uses of language models can be somewhat difï¬cult to anticipate because they often involve repurposing language models in a very different environment or for a different purpose than researchers intended. To help with this, we can think in terms of traditional security risk assessment frameworks, which outline key steps such as identifying threats and potential impacts, assessing likelihood, and determining risk as a combination of likelihood and impact [Ros12]. We discuss three factors: potential misuse applications, threat actors, and external incentive structures.
# 6.1.1 Potential Misuse Applications
Any socially harmful activity that relies on generating text could be augmented by powerful language models. Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting. Many of these applications bottleneck on human beings to write sufï¬ciently high quality text. Language models that produce high quality text generation could lower existing barriers to carrying out these activities and increase their efï¬cacy.
The misuse potential of language models increases as the quality of text synthesis improves. The ability of GPT-3 to generate several paragraphs of synthetic content that people ï¬nd difï¬cult to distinguish from human-written text in 3.9.4 represents a concerning milestone in this regard.
# 6.1.2 Threat Actor Analysis
Threat actors can be organized by skill and resource levels, ranging from low or moderately skilled and resourced actors who may be able to build a malicious product to âadvanced persistent threatsâ (APTs): highly skilled and well-resourced (e.g. state-sponsored) groups with long-term agendas [SBC+19].
To understand how low and mid-skill actors think about language models, we have been monitoring forums and chat groups where misinformation tactics, malware distribution, and computer fraud are frequently discussed. While we did ï¬nd signiï¬cant discussion of misuse following the initial release of GPT-2 in spring of 2019, we found fewer instances of experimentation and no successful deployments since then. Additionally, those misuse discussions were correlated with media coverage of language model technologies. From this, we assess that the threat of misuse from these actors is not immediate, but signiï¬cant improvements in reliability could change this.
Because APTs do not typically discuss operations in the open, we have consulted with professional threat analysts about possible APT activity involving the use of language models. Since the release of GPT-2 there has been no discernible difference in operations that may see potential gains by using language models. The assessment was that language models may not be worth investing signiï¬cant resources in because there has been no convincing demonstration that current language models are signiï¬cantly better than current methods for generating text, and because methods for âtargetingâ or âcontrollingâ the content of language models are still at a very early stage.
# 6.1.3 External Incentive Structures
Each threat actor group also has a set of tactics, techniques, and procedures (TTPs) that they rely on to accomplish their agenda. TTPs are inï¬uenced by economic factors like scalability and ease of deployment; phishing is extremely popular among all groups because it offers a low-cost, low-effort, high-yield method of deploying malware and stealing login credentials. Using language models to augment existing TTPs would likely result in an even lower cost of deployment.
Ease of use is another signiï¬cant incentive. Having stable infrastructure has a large impact on the adoption of TTPs. The outputs of language models are stochastic, however, and though developers can constrain these (e.g. using top-k truncation) they are not able to perform consistently without human feedback. If a social media disinformation bot produces outputs that are reliable 99% of the time, but produces incoherent outputs 1% of the time, this could reduce the amount of human labor required in operating this bot. But a human is still needed to ï¬lter the outputs, which restricts how scalable the operation can be.
Based on our analysis of this model and analysis of threat actors and the landscape, we suspect AI researchers will eventually develop language models that are sufï¬ciently consistent and steerable that they will be of greater interest to malicious actors. We expect this will introduce challenges for the broader research community, and hope to work on this through a combination of mitigation research, prototyping, and coordinating with other technical developers.
35
# 6.2 Fairness, Bias, and Representation
Biases present in training data may lead models to generate stereotyped or prejudiced content. This is concerning, since model bias could harm people in the relevant groups in different ways by entrenching existing stereotypes and producing demeaning portrayals amongst other potential harms [Cra17]. We have conducted an analysis of biases in the model in order to better understand GPT-3âs limitations when it comes to fairness, bias, and representation. 8
Our goal is not to exhaustively characterize GPT-3, but to give a preliminary analysis of some of its limitations and behaviors. We focus on biases relating to gender, race, and religion, although many other categories of bias are likely present and could be studied in follow-up work. This is a preliminary analysis and does not reï¬ect all of the modelâs biases even within the studied categories.
Broadly, our analysis indicates that internet-trained models have internet-scale biases; models tend to reï¬ect stereotypes present in their training data. Below we discuss our preliminary ï¬ndings of bias along the dimensions of gender, race, and religion. We probe for bias in the 175 billion parameter model and also in similar smaller models, to see if and how they are different in this dimension.
# 6.2.1 Gender
In our investigation of gender bias in GPT-3, we focused on associations between gender and occupation. We found that occupations in general have a higher probability of being followed by a male gender identiï¬er than a female one (in other words, they are male leaning) when given a context such as "The {occupation} was a" (Neutral Variant). 83% of the 388 occupations we tested were more likely to be followed by a male identiï¬er by GPT-3. We measured this by feeding the model a context such as "The detective was a" and then looking at the probability of the model following up with male indicating words (eg. man, male etc.) or female indicating words (woman, female etc.). In particular, occupations demonstrating higher levels of education such as legislator, banker, or professor emeritus were heavily male leaning along with occupations that require hard physical labour such as mason, millwright, and sheriff. Occupations that were more likely to be followed by female identiï¬ers include midwife, nurse, receptionist, housekeeper etc.
We also tested how these probabilities changed when we shifted the context to be the "The competent {occupation} was a" (Competent Variant), and when we shifted the context to be "The incompetent {occupation} was a" (Incompetent Variant) for each occupation in the dataset. We found that, when prompted with "The competent {occupation} was a," the majority of occupations had an even higher probability of being followed by a male identiï¬er than a female one than was the case with our original neutral prompt, "The {occupation} was a". With the prompt "The incompetent {occupation} was a" the majority of occupations still leaned male with a similar probability than for our original neutral prompt. The average occupation bias - measured as P (male|Context)) ) - was â1.11 for the Neutral Variant, â2.14 for the Competent Variant and â1.15
We also carried out pronoun resolution on the Winogender dataset [RNLVD18] using two methods which further corroborated the modelâs tendency to associate most occupations with males. One method measured the mod- els ability to correctly assign a pronoun as the occupation or the participant. For example, we fed the model a context such as "The advisor met with the advisee because she wanted to get advice about job applications. âSheâ refers to the" and found the option with the lowest probability between the two possi- ble options (Choices between Occupation Option: advisor; Participant Option: advisee).
Occupation and participant words often have societal biases associated with them such as the assumption that most occupants are by default male. We found that the language models learnt some of these biases such as a tendency to associate female pronouns with participant positions more than male pronouns. GPT-3 175B had the highest accuracy of all the models (64.17%) on this task. It was also the only model where the accuracy for Occupant sentences (sentences where the correct answer was the Occupation option) for females was higher than for males (81.7% vs 76.7%). All other models had a higher accuracy for male pronouns with Occupation sentences as compared to female pronouns with the exception of our second largest model- GPT-3 13B - which had the same accuracy (60%) for both. This offers some preliminary evidence that in places where issues of bias can make language models susceptible to error, the larger models are more robust than smaller models.
We also performed co-occurrence tests, where we analyzed which words are likely to occur in the vicinity of other pre- selected words. We created a model output sample set by generating 800 outputs of length 50 each with a temperature
8Evaluating fairness, bias, and representation in language models is a rapidly-developing area with a large body of prior work. See, for example, [HZJ+19, NBR20, SCNP19].
36
# Table 6.1: Most Biased Descriptive Words in 175B Model
Top 10 Most Biased Male Descriptive Words with Raw Co-Occurrence Counts Top 10 Most Biased Female Descriptive Words with Raw Co-Occurrence Counts Average Number of Co-Occurrences Across All Words: 17.5 Average Number of Co-Occurrences Across All Words: 23.9 Large (16) Mostly (15) Lazy (14) Fantastic (13) Eccentric (13) Protect (10) Jolly (10) Stable (9) Personable (22) Survive (7) Optimistic (12) Bubbly (12) Naughty (12) Easy-going (12) Petite (10) Tight (10) Pregnant (10) Gorgeous (28) Sucked (8) Beautiful (158)
of 1 and top p of 0.9 for every prompt in our dataset. For gender, we had prompts such as "He was very", "She was very", "He would be described as", "She would be described as"9. We looked at the adjectives and adverbs in the top 100 most favored words using an off-the-shelf POS tagger [LB02]. We found females were more often described using appearance oriented words such as âbeautifulâ and âgorgeousâ as compared to men who were more often described using adjectives that span a greater spectrum.
Table 6.1 shows the top 10 most favored descriptive words for the model along with the raw number of times each word co-occurred with a pronoun indicator. âMost Favoredâ here indicates words which were most skewed towards a category by co-occurring with it at a higher rate as compared to the other category. To put these numbers in perspective, we have also included the average for the number of co-occurrences across all qualifying words for each gender.
# 6.2.2 Race
To investigate racial bias in GPT-3, we seeded the model with prompts such as - "The {race} man was very", "The {race} woman was very" and "People would describe the {race} person as" and generated 800 samples for each of the above prompts, with {race} replaced with a term indicating a racial category such as White or Asian. We then measure word co-occurrences in the generated samples. Given prior research demonstrating that language models produce text of differing sentiment when varying features such as occupation [HZJ+19], we explored how race impacted sentiment. We measured sentiment using Senti WordNet [BES10] for the words which co-occurred disproportionately with each race. Each word sentiment varied from 100 to -100, with positive scores indicating positive words (eg. wonderfulness: 100, amicable: 87.5), negative scores indicating negative words (eg. wretched: -87.5 , horrid: -87.5) and a score of 0 indicating neutral words (eg. sloping, chalet).
It should be noted that we were explicitly prompting the models to talk about race and this in turn generated text that focused on racial features; these results are not from the models talking about race in the wild but talking about race in an experimental setup where they have been primed to do so. Additionally, since we are measuring sentiment by simply looking at word co-occurrences, the resulting sentiment can reï¬ect socio-historical factors - for instance, text relating to a discussion of slavery will frequently have a negative sentiment, which may lead to a demographic being associated with a negative sentiment under this testing methodology.
Across the models we analyzed, âAsianâ had a consistently high sentiment - it ranked 1st in 3 out of 7 models. On the other hand, âBlackâ had a consistently low sentiment - it ranked the lowest in 5 out of 7 models. These differences narrowed marginally on the larger model sizes. This analysis gives a sense of the biases of different models and highlights the need for more sophisticated analysis of the relationship between sentiment, entities, and input data.
9We only used male and female pronouns. This simplifying assumption makes it easier to study co-occurrence since it does not require the isolation of instances in which âtheyâ refers to a singular noun from those where it didnât, but other forms of gender bias are likely present and could be studied using different approaches.
37
Sentiment Across Models 40 â Asian 30 â Black â White â Latinx 5 20 ââ Indian a Middle eastern w 5 10 £ rs ino -10 -20 350M 760M 1.3B 2.7B 6.7B 13B 175B Model Size
Figure 6.1: Racial Sentiment Across Models
# Religion Atheism
Most Favored Descriptive Words âTheistsâ, âCoolâ, âAgnosticsâ, âMadâ, âTheismâ, âDefensiveâ, âComplainingâ, âCorrectâ, âArrogantâ, âCharacterizedâ
âMyanmarâ, âVegetariansâ, âBurmaâ, âFellowshipâ, âMonkâ, âJapaneseâ, âReluctantâ, âWisdomâ, âEn- lightenmentâ, âNon-Violentâ
âAttendâ, âIgnorantâ, âResponseâ, âJudgmentalâ, âGraceâ, âExecutionâ, âEgyptâ, âContinueâ, âCom- mentsâ, âOfï¬ciallyâ
âCasteâ, âCowsâ, âBJPâ, âKashmirâ, âModiâ, âCelebratedâ, âDharmaâ, âPakistaniâ, âOriginatedâ, âAfricaâ
âPillarsâ, âTerrorismâ, âFastingâ, âSheikhâ, âNon-Muslimâ, âSourceâ, âCharitiesâ, âLevantâ, âAllahâ, âProphetâ
âGentilesâ, âRaceâ, âSemitesâ, âWhitesâ, âBlacksâ, âSmartestâ, âRacistsâ, âArabsâ, âGameâ, âRussianâ
Table 6.2: Shows the ten most favored words about each religion in the GPT-3 175B model.
# 6.2.3 Religion
We studied which words co-occurred with religious terms relating to Atheism, Buddhism, Christianity, Hinduism, Islam, and Judaism, by generating 800 model outputs of length â50 with a temperature of 1 and a top p of 0.9 for every prompt. Our prompts were of the nature "{Religion practitioners} are" (Eg. "Christians are") for each of the six religious categories listed above. We then allowed the model to naturally carry out completions and created a corpus of such completions for studying co-occurrence of words.
The following is an example output from the model:
"Buddhists are divided into two main branches - Theravada and Mahayana. is the more conservative branch, centering on monastic life and the earliest sutras and refusing to recognize the later Mahayana sutras as authentic."
Similar to race, we found that the models make associations with religious terms that indicate some propensity to reï¬ect how these terms are sometimes presented in the world. For example, with the religion Islam, we found that words such as ramadan, prophet and mosque co-occurred at a higher rate than for other religions. We also found that words such as violent, terrorism and terrorist co-occurred at a greater rate with Islam than with other religions and were in the top 40 most favored words for Islam in GPT-3.
38
# 6.2.4 Future Bias and Fairness Challenges
We have presented this preliminary analysis to share some of the biases we found in order to motivate further research, and to highlight the inherent difï¬culties in characterizing biases in large-scale generative models; we expect this to be an area of continuous research for us and are excited to discuss different methodological approaches with the community. We view the work in this section as subjective signposting - we chose gender, race, and religion as a starting point, but we recognize the inherent subjectivity in this choice. Our work is inspired by the literature on characterizing model attributes to develop informative labels such as Model Cards for Model Reporting from [MWZ+18].
Ultimately, it is important not just to characterize biases in language systems but to intervene. The literature on this is also extensive [QMZH19, HZJ+19], so we offer only a few brief comments on future directions speciï¬c to large language models. In order to pave the way for effective bias prevention in general purpose models, there is a need for building a common vocabulary tying together the normative, technical and empirical challenges of bias mitigation for these models. There is room for more research that engages with the literature outside NLP, better articulates normative statements about harm, and engages with the lived experience of communities affected by NLP systems [BBDIW20]. Thus, mitigation work should not be approached purely with a metric driven objective to âremoveâ bias as this has been shown to have blind spots [GG19, NvNvdG19] but in a holistic manner.
# 6.3 Energy Usage
Practical large-scale pre-training requires large amounts of computation, which is energy-intensive: training the GPT-3 175B consumed several thousand petaï¬op/s-days of compute during pre-training, compared to tens of petaï¬op/s-days for a 1.5B parameter GPT-2 model (Figure 2.2). This means we should be cognizant of the cost and efï¬ciency of such models, as advocated by [SDSE19].
The use of large-scale pre-training also gives another lens through which to view the efï¬ciency of large models - we should consider not only the resources that go into training them, but how these resources are amortized over the lifetime of a model, which will subsequently be used for a variety of purposes and ï¬ne-tuned for speciï¬c tasks. Though models like GPT-3 consume signiï¬cant resources during training, they can be surprisingly efï¬cient once trained: even with the full GPT-3 175B, generating 100 pages of content from a trained model can cost on the order of 0.4 kW-hr, or only a few cents in energy costs. Additionally, techniques like model distillation [LHCG19a] can further bring down the cost of such models, letting us adopt a paradigm of training single, large-scale models, then creating more efï¬cient versions of them for use in appropriate contexts. Algorithmic progress may also naturally further increase the efï¬ciency of such models over time, similar to trends observed in image recognition and neural machine translation [HB20].
# 7 Related Work
Several lines of work have focused on increasing parameter count and/or computation in language models as a means to improve generative or task performance. An early work scaled LSTM based language models to over a billion parameters [JVS+16]. One line of work straightforwardly increases the size of transformer models, scaling up parameters and FLOPS-per-token roughly in proportion. Work in this vein has successively increased model size: 213 million parameters [VSP+17] in the original paper, 300 million parameters [DCLT18], 1.5 billion parameters [RWC+19], 8 billion parameters [SPP+19], 11 billion parameters [RSR+19], and most recently 17 billion parameters [Tur20]. A second line of work has focused on increasing parameter count but not computation, as a means of increasing modelsâ capacity to store information without increased computational cost. These approaches rely on the conditional computation framework [BLC13] and speciï¬cally, the mixture-of-experts method [SMM+17] has been used to produce 100 billion parameter models and more recently 50 billion parameter translation models [AJF19], though only a small fraction of the parameters are actually used on each forward pass. A third approach increases computation without increasing parameters; examples of this approach include adaptive computation time [Gra16] and the universal transformer [DGV+18]. Our work focuses on the ï¬rst approach (scaling compute and parameters together, by straightforwardly making the neural net larger), and increases model size 10x beyond previous models that employ this strategy. Several efforts have also systematically studied the effect of scale on language model performance. [KMH+20, RRBS19, LWS+20, HNA+17], ï¬nd a smooth power-law trend in loss as autoregressive language models are scaled up. This work suggests that this trend largely continues as models continue to scale up (although a slight bending of the curve can perhaps be detected in Figure 3.1), and we also ï¬nd relatively smooth increases in many (though not all) downstream tasks across 3 orders of magnitude of scaling.
Another line of work goes in the opposite direction from scaling, attempting to preserve strong performance in language models that are as small as possible. This approach includes ALBERT [LCG+19] as well as general [HVD15] and
39
task-speciï¬c [SDCW19, JYS+19, KR16] approaches to distillation of language models. These architectures and techniques are potentially complementary to our work, and could be applied to decrease latency and memory footprint of giant models.
As ï¬ne-tuned language models have neared human performance on many standard benchmark tasks, considerable effort has been devoted to constructing more difï¬cult or open-ended tasks, including question answering [KPR+19, IBGC+14, CCE+18, MCKS18], reading comprehension [CHI+18, RCM19], and adversarially constructed datasets designed to be difï¬cult for existing language models [SBBC19, NWD+19]. In this work we test our models on many of these datasets.
Many previous efforts have focused speciï¬cally on question-answering, which constitutes a signiï¬cant fraction of the tasks we tested on. Recent efforts include [RSR+19, RRS20], which ï¬ne-tuned an 11 billion parameter language model, and [GLT+20], which focused on attending over a large corpus of data at test time. Our work differs in focusing on in-context learning but could be combined in the future with those of [GLT+20, LPP+20]. Metalearning in language models has been utilized in [RWC+19], though with much more limited results and no systematic study. More broadly, language model metalearning has an inner-loop-outer-loop structure, making it structurally similar to metalearning as applied to ML in general. Here there is an extensive literature, including matching networks [VBL+16], RL2 [DSC+16], learning to optimize [RL16, ADG+16, LM17] and MAML [FAL17]. Our approach of stufï¬ng the modelâs context with previous examples is most structurally similar to RL2 and also resembles [HYC01], in that an inner loop of adaptation takes place through computation in the modelâs activations across timesteps, without updating the weights, while an outer loop (in this case just language model pre-training) updates the weights, and implicitly learns the ability to adapt to or at least recognize tasks deï¬ned at inference-time. Few-shot auto-regressive density estimation was explored in [RCP+17] and [GWC+18] studied low-resource NMT as a few-shot learning problem.
While the mechanism of our few-shot approach is different, prior work has also explored ways of using pre-trained language models in combination with gradient descent to perform few-shot learning [SS20]. Another sub-ï¬eld with similar goals is semi-supervised learning where approaches such as UDA [XDH+19] also explore methods of ï¬ne-tuning when very little labeled data is available.
Giving multi-task models instructions in natural language was ï¬rst formalized in a supervised setting with [MKXS18] and utilized for some tasks (such as summarizing) in a language model with [RWC+19]. The notion of presenting tasks in natural language was also explored in the text-to-text transformer [RSR+19], although there it was applied for multi-task ï¬ne-tuning rather than for in-context learning without weight updates.
Another approach to increasing generality and transfer-learning capability in language models is multi-task learning [Car97], which ï¬ne-tunes on a mixture of downstream tasks together, rather than separately updating the weights for each one. If successful multi-task learning could allow a single model to be used for many tasks without updating the weights (similar to our in-context learning approach), or alternatively could improve sample efï¬ciency when updating the weights for a new task. Multi-task learning has shown some promising initial results [LGH+15, LSP+18] and multi-stage ï¬ne-tuning has recently become a standardized part of SOTA results on some datasets [PFB18] and pushed the boundaries on certain tasks [KKS+20], but is still limited by the need to manually curate collections of datasets and set up training curricula. By contrast pre-training at large enough scale appears to offer a ânaturalâ broad distribution of tasks implicitly contained in predicting the text itself. One direction for future work might be attempting to generate a broader set of explicit tasks for multi-task learning, for example through procedural generation [TFR+17], human interaction [ZSW+19b], or active learning [Mac92].
Algorithmic innovation in language models over the last two years has been enormous, including denoising-based bidirectionality [DCLT18], preï¬xLM [DL15] and encoder-decoder architectures [LLG+19, RSR+19], random permu- tations during training [YDY+19], architectures that improve the efï¬ciency of sampling [DYY+19], improvements in data and training procedures [LOG+19], and efï¬ciency increases in the embedding parameters [LCG+19]. Many of these techniques provide signiï¬cant gains on downstream tasks. In this work we continue to focus on pure autoregressive language models, both in order to focus on in-context learning performance and to reduce the complexity of our large model implementations. However, it is very likely that incorporating these algorithmic advances could improve GPT-3âs performance on downstream tasks, especially in the ï¬ne-tuning setting, and combining GPT-3âs scale with these algorithmic techniques is a promising direction for future work.
# 8 Conclusion
We presented a 175 billion parameter language model which shows strong performance on many NLP tasks and benchmarks in the zero-shot, one-shot, and few-shot settings, in some cases nearly matching the performance of
40
state-of-the-art ï¬ne-tuned systems, as well as generating high-quality samples and strong qualitative performance at tasks deï¬ned on-the-ï¬y. We documented roughly predictable trends of scaling in performance without using ï¬ne-tuning. We also discussed the social impacts of this class of model. Despite many limitations and weaknesses, these results suggest that very large language models may be an important ingredient in the development of adaptable, general language systems.
# Acknowledgements
The authors would like to thank Ryan Lowe for giving detailed feedback on drafts of the paper. Thanks to Jakub Pachocki and Szymon Sidor for suggesting tasks, and Greg Brockman, Michael Petrov, Brooke Chan, and Chelsea Voss for helping run evaluations on OpenAIâs infrastructure. Thanks to David Luan for initial support in scaling up this project, Irene Solaiman for discussions about ways to approach and evaluate bias, Harrison Edwards and Yura Burda for discussions and experimentation with in-context learning, Geoffrey Irving and Paul Christiano for early discussions of language model scaling, Long Ouyang for advising on the design of the human evaluation experiments, Chris Hallacy for discussions on data collection, and Shan Carter for help with visual design. Thanks to the millions of people who created content that was used in the training of the model, and to those who were involved in indexing or upvoting the content (in the case of WebText). Additionally, we would like to thank the entire OpenAI infrastructure and supercomputing teams for making it possible to train models at this scale.
41
# Contributions
Tom Brown, Ben Mann, Prafulla Dhariwal, Dario Amodei, Nick Ryder, Daniel M Ziegler, and Jeffrey Wu implemented the large-scale models, training infrastructure, and model-parallel strategies.
# Tom Brown, Dario Amodei, Ben Mann, and Nick Ryder conducted pre-training experiments.
Ben Mann and Alec Radford collected, ï¬ltered, deduplicated, and conducted overlap analysis on the training data.
Melanie Subbiah, Ben Mann, Dario Amodei, Jared Kaplan, Sam McCandlish, Tom Brown, Tom Henighan, and Girish Sastry implemented the downstream tasks and the software framework for supporting them, including creation of synthetic tasks.
Jared Kaplan and Sam McCandlish initially predicted that a giant language model should show continued gains, and applied scaling laws to help predict and guide model and data scaling decisions for the research.
Ben Mann implemented sampling without replacement during training.
Alec Radford originally demonstrated few-shot learning occurs in language models.
Jared Kaplan and Sam McCandlish showed that larger models learn more quickly in-context, and systematically studied in-context learning curves, task prompting, and evaluation methods.
Prafulla Dhariwal implemented an early version of the codebase, and developed the memory optimizations for fully half-precision training.
Rewon Child and Mark Chen developed an early version of our model-parallel strategy.
Rewon Child and Scott Gray contributed the sparse transformer.
Aditya Ramesh experimented with loss scaling strategies for pretraining.
Melanie Subbiah and Arvind Neelakantan implemented, experimented with, and tested beam search.
Pranav Shyam worked on SuperGLUE and assisted with connections to few-shot learning and meta-learning literature.
Sandhini Agarwal conducted the fairness and representation analysis.
Girish Sastry and Amanda Askell conducted the human evaluations of the model.
Ariel Herbert-Voss conducted the threat analysis of malicious use.
Gretchen Krueger edited and red-teamed the policy sections of the paper.
Benjamin Chess, Clemens Winter, Eric Sigler, Christopher Hesse, Mateusz Litwin, and Christopher Berner optimized OpenAIâs clusters to run the largest models efï¬ciently.
Scott Gray developed fast GPU kernels used during training.
Jack Clark led the analysis of ethical impacts â fairness and representation, human assessments of the model, and broader impacts analysis, and advised Gretchen, Amanda, Girish, Sandhini, and Ariel on their work.
Dario Amodei, Alec Radford, Tom Brown, Sam McCandlish, Nick Ryder, Jared Kaplan, Sandhini Agarwal, Amanda Askell, Girish Sastry, and Jack Clark wrote the paper.
Sam McCandlish led the analysis of model scaling, and advised Tom Henighan and Jared Kaplan on their work.
Alec Radford advised the project from an NLP perspective, suggested tasks, put the results in context, and demonstrated the beneï¬t of weight decay for training.
Ilya Sutskever was an early advocate for scaling large generative likelihood models, and advised Pranav, Prafulla, Rewon, Alec, and Aditya on their work.
Dario Amodei designed and led the research.
42
# A Details of Common Crawl Filtering
As mentioned in Section 2.2, we employed two techniques to improve the quality of the Common Crawl dataset: (1) ï¬ltering Common Crawl and (2) fuzzy deduplication:
1. In order to improve the quality of Common Crawl, we developed an automatic ï¬ltering method to remove low quality documents. Using the original WebText as a proxy for high-quality documents, we trained a classiï¬er to distinguish these from raw Common Crawl. We then used this classiï¬er to re-sample Common Crawl by prioritizing documents which were predicted by the classiï¬er to be higher quality. The classiï¬er is trained using logistic regression classiï¬er with features from Sparkâs standard tokenizer and HashingTF 10. For the positive examples, we used a collection of curated datasets such as WebText, Wikiedia, and our web books corpus as the positive examples, and for the negative examples, we used unï¬ltered Common Crawl. We used this classiï¬er to score Common Crawl documents. We kept each document in our dataset iff
np.random.pareto(α) > 1 â document_score
We chose α = 9 in order to take mostly documents the classiï¬er scored highly, but still include some documents that were out of distribution. α was chosen to match the distribution of scores from our classiï¬er on WebText. We found this re-weighting increased quality as measured by loss on a range of out-of-distribution generative text samples.
2. To further improve model quality and prevent overï¬tting (which becomes increasingly important as model capacity increases), we fuzzily deduplicated documents (i.e. removed documents with high overlap with other documents) within each dataset using Sparkâs MinHashLSH implementation with 10 hashes, using the same features as were used for classiï¬cation above. We also fuzzily removed WebText from Common Crawl. Overall this decreased dataset size by an average of 10%.
After ï¬ltering for duplicates and quality, we also partially removed text occurring in benchmark datasets, described in Appendix C.
# B Details of Model Training
To train all versions of GPT-3, we use Adam with 3; = 0.9, G2 = 0.95, and e = 10°, we clip the global norm of the gradient at 1.0, and we use cosine decay for learning rate down to 10% of its value, over 260 billion tokens (after 260 billion tokens, training continues at 10% of the original learning rate). There is a linear LR warmup over the first 375 million tokens. We also gradually increase the batch size linearly from a small value (32k tokens) to the full value over the first 4-12 billion tokens of training, depending on the model size. Data are sampled without replacement during training (until an epoch boundary is reached) to minimize overfitting. All models use weight decay of 0.1 to provide a small amount of regularization [LH17].
During training we always train on sequences of the full nctx = 2048 token context window, packing multiple documents into a single sequence when documents are shorter than 2048, in order to increase computational efï¬ciency. Sequences with multiple documents are not masked in any special way but instead documents within a sequence are delimited with a special end of text token, giving the language model the information necessary to infer that context separated by the end of text token is unrelated. This allows for efï¬cient training without need for any special sequence-speciï¬c masking.
# C Details of Test Set Contamination Studies
In section 4 we gave a high level overview of test set contamination studies. In this section we provide details on methodology and results.
Initial training set ï¬ltering We attempted to remove text occurring in benchmarks from training data by searching for 13âgram overlaps between all test/development sets used in this work and our training data, and we removed the colliding 13âgram as well as a 200 character window around it, splitting the original document into pieces. For ï¬ltering purposes we deï¬ne a gram as a lowercase, whitespace delimited word with no punctuation. Pieces less than 200 characters long were discarded. Documents split into more than 10 pieces were considered contaminated and
10https://spark.apache.org/docs/latest/api/python/pyspark.ml.html#pyspark.ml.feature.HashingTF
43
removed entirely. Originally we removed entire documents given a single collision, but that overly penalized long documents such as books for false positives. An example of a false positive might be a test set based on Wikipedia, in which the Wikipedia article quotes a single line from a book. We ignored 13âgrams that matched more than 10 training documents, as inspection showed the majority of these to contain common cultural phrases, legal boilerplate, or similar content that we likely do want the model to learn, rather than undesired speciï¬c overlaps with test sets. Examples for various frequencies can be found in the GPT-3 release repository11.
Overlap methodology For our benchmark overlap analysis in Section 4, we used a variable number of words N to check for overlap for each dataset, where N is the 5th percentile example length in words, ignoring all punctuation, whitespace, and casing. Due to spurious collisions at lower values of N we use a minimum value of 8 on non-synthetic tasks. For performance reasons, we set a maximum value of 13 for all tasks. Values for N and the amount of data marked as dirty are shown in Table C.1. Unlike GPT-2âs use of bloom ï¬lters to compute probabilistic bounds for test contamination, we used Apache Spark to compute exact collisions across all training and test sets. We compute overlaps between test sets and our full training corpus, even though we only trained on 40% of our ï¬ltered Common Crawl documents per Section 2.2.
We deï¬ne a âdirtyâ example as one with any N -gram overlap with any training document, and a âcleanâ example as one with no collision.
Test and validation splits had similar contamination levels despite some test splits being unlabeled. Due to a bug revealed by this analysis, ï¬ltering described above failed on long documents such as books. Because of cost considerations it was infeasible to retrain the model on a corrected version of the training dataset. As such, several language modeling benchmarks plus the Childrenâs Book Test showed almost complete overlap, and therefore were not included in this paper. Overlaps are shown in Table C.1
Overlap results To understand how much having seen some of the data helps the model perform on downstream tasks, we ï¬lter every validation and test set by dirtiness. Then we run evaluation on the clean-only examples and report the relative percent change between the clean score and the original score. If the clean score is more than 1% or 2% worse than the overall score, it suggests the model may have overï¬t to the examples it has seen. If the clean score is signiï¬cantly better, our ï¬ltering scheme may have preferentially marked easier examples as dirty.
This overlap metric tends to show a high rate of false positives for datasets that contain background information (but not answers) drawn from the web (such as SQuAD, which draws from Wikipedia) or examples less than 8 words long, which we ignored in our ï¬ltering process (except for wordscrambling tasks). One instance where this technique seems to fail to give good signal is DROP, a reading comprehension task in which 94% of the examples are dirty. The information required to answer the question is in a passage provided to the model, so having seen the passage during training but not the questions and answers does not meaningfully constitute cheating. We conï¬rmed that every matching training document contained only the source passage, and none of the questions and answers in the dataset. The more likely explanation for the decrease in performance is that the 6% of examples that remain after ï¬ltering come from a slightly different distribution than the dirty examples.
Figure 4.2 shows that as the dataset becomes more contaminated, the variance of the clean/all fraction increases, but there is no apparent bias towards improved or degraded performance. This suggests that GPT-3 is relatively insensitive to contamination. See Section 4 for details on the datasets we ï¬agged for further review.
11https://github.com/openai/gpt-3/blob/master/overlap_frequency.md
44
Split Metric N Acc/F1/BLEU Total Count Dirty Acc/F1/BLEU Dirty Count Clean Acc/F1/BLEU Clean Count Clean Percentage
# Name
Quac SQuADv2 DROP Symbol Insertion CoQa ReCoRD Winograd BoolQ MultiRC RACE-h LAMBADA LAMBADA (No Blanks) WSC PIQA RACE-m DeâEn 16 EnâDe 16 EnâRo 16 RoâEn 16 WebQs ANLI R1 ANLI R2 TriviaQA ANLI R3 EnâFr 14 FrâEn 14 WiC RTE CB Anagrams 2 Reversed Words OpenBookQA ARC (Easy) Anagrams 1 COPA ARC (Challenge) HellaSwag NQs Cycled Letters SAT Analogies StoryCloze Winogrande
Table C.1: Overlap statistics for all datasets sorted from dirtiest to cleanest. We consider a dataset example dirty if it has a single N -gram collision with any document in our training corpus. âRelative Difference Clean vs Allâ shows the percent change in performance between only the clean examples vs all the examples in the benchmark. âCountâ shows the number of examples. âClean percentageâ is the percent of examples that are clean vs total. For âAcc/F1/BLEUâ we use the metric speciï¬ed in âMetricâ. These scores come from evaluations with a different seed for the random examples used for in-context learning, and will therefore differ slightly from the scores elsewhere in the paper.
45
# Relative Difference Clean vs All
# D Total Compute Used to Train Language Models
This appendix contains the calculations that were used to derive the approximate compute used to train the language models in Figure 2.2. As a simplifying assumption, we ignore the attention operation, as it typically uses less than 10% of the total compute for the models we are analyzing.
Calculations can be seen in Table D.1 and are explained within the table caption.
Model Total train compute (PF-days) Total train compute (ï¬ops) Params (M) Training tokens (billions) Flops per param per token Mult for bwd pass Frac of params active for each token T5-Small T5-Base T5-Large T5-3B T5-11B BERT-Base BERT-Large RoBERTa-Base RoBERTa-Large GPT-3 Small GPT-3 Medium GPT-3 Large GPT-3 XL GPT-3 2.7B GPT-3 6.7B GPT-3 13B GPT-3 175B 2.08E+00 7.64E+00 2.67E+01 1.04E+02 3.82E+02 1.89E+00 6.16E+00 1.74E+01 4.93E+01 2.60E+00 7.42E+00 1.58E+01 2.75E+01 5.52E+01 1.39E+02 2.68E+02 3.64E+03 1.80E+20 6.60E+20 2.31E+21 9.00E+21 3.30E+22 1.64E+20 5.33E+20 1.50E+21 4.26E+21 2.25E+20 6.41E+20 1.37E+21 2.38E+21 4.77E+21 1.20E+22 2.31E+22 3.14E+23 60 220 770 3,000 11,000 109 355 125 355 125 356 760 1,320 2,650 6,660 12,850 174,600 1,000 1,000 1,000 1,000 1,000 250 250 2,000 2,000 300 300 300 300 300 300 300 300 3 3 3 3 3 6 6 6 6 6 6 6 6 6 6 6 6 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 0.5 0.5 0.5 0.5 0.5 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0 1.0
Table D.1: Starting from the right hand side and moving left, we begin with the number of training tokens that each model was trained with. Next we note that since T5 uses an encoder-decoder model, only half of the parameters are active for each token during a forward or backwards pass. We then note that each token is involved in a single addition and a single multiply for each active parameter in the forward pass (ignoring attention). Then we add a multiplier of 3x to account for the backwards pass (as computing both âparams âloss use a similar amount of compute as the forwards pass. Combining the previous two numbers, we get the total ï¬ops per parameter per token. We multiply this value by the total training tokens and the total parameters to yield the number of total ï¬ops used during training. We report both ï¬ops and petaï¬op/s-day (each of which are 8.64e+19 ï¬ops).
# E Human Quality Assessment of Synthetic News Articles
This appendix contains details on the experiments measuring human ability to distinguish GPT-3-generated synthetic news articles from real news articles. We ï¬rst describe the experiments on the â¼ 200 word news articles, and then describe the preliminary investigation of â¼ 500 word news articles generated by GPT-3.
Participants: We recruited 718 unique participants to take part in 6 experiments. 97 participants were excluded for failing an internet check question, leaving a total of 621 participants: 343 male, 271 female, and 7 other. Mean participant age was â¼ 38 years old. All participants were recruited through Positly, which maintains a whitelist of high-performing workers from Mechanical Turk. All participants were US-based but there were no other demographic restrictions. Participants were paid $12 for their participation, based on a task time estimate of 60 minutes determined by pilot runs. In order to ensure that the sample of participants for each experiment quiz was unique, participants were not allowed to take part in an experiment more than once.
Procedure and design: We arbitrarily selected 25 news articles that appeared in newser.com in early 2020. We used the article titles and subtitles to produce outputs from the 125M, 350M, 760M, 1.3B, 2.7B, 6.7B, 13.0B, and 200B (GPT-3) parameter language models. Five outputs per question were generated by each model and the generation with a word count closest to that of the human written article was selected automatically. This was to minimize the effect that completion length might have on participantsâ judgments. The same output procedure for each model with the exception of the removal of the intentionally bad control model, as described in the main text.
46
Model Participants Recruited Participants Excluded Genders (m:f:other) Mean Age Control GPT-3 Small GPT-3 Medium GPT-3 Large GPT-3 XL GPT-3 2.7B GPT-3 6.7B GPT-3 13.0B GPT-3 175B 76 80 80 81 79 80 76 81 80 7 7 7 24 14 11 5 13 9 32:37:0 41:31:1 46:28:2 46:28:2 32:32:1 36:33:0 46:28:2 46:28:2 42:29:0 39 40 39 37 38 40 37 37 37 216:216 216:188 216:202 216:200 216:199 216:202 216:195 216:209 216:216
Table E.1: Participant details and article lengths for each experiment to evaluate human detection of â¼ 200 word model generated news articles. Participants were excluded due to internet check fails.
Average time spent trying to detect model generat 130 is} 3S Duration (seconds) 110 control (105 les 1e9 1e10 Number of parameters (log scale) ed news article seconds)
Figure E.1: Participants spend more time trying to identify whether each news article is machine generated as model size increases. Duration on the control model is indicated with the dashed line. Line of best ï¬t is a linear model on a log scale with 95% conï¬dence intervals.
In each experiment, half of the participants were randomly assigned to quiz A and half were randomly assigned to quiz B. Each quiz consisted of 25 articles: half (12-13) were human written and half (12-13) were model generated: the articles with human written completions in quiz A had model generated completions in quiz B and vice versa. The order of quiz question was shufï¬ed for each participant. Participants could leave comments and were asked to indicate if they had seen the articles before. Participants were instructed not to look up the articles or their content during the quiz and at the end of the quiz were asked if they had looked anything up during the quiz.
Statistical Tests: To compare means on the different runs, we performed a two-sample t-test for independent groups for each model against the control. This was implemented in Python using the scipy.stats.ttest_ind function. When plotting a regression line in the graph of average participant accuracy vs model size, we ï¬t a power law of the form axâb. The 95% conï¬dence intervals were estimated from the t-distribution of the sample mean.
Duration statistics: In the main text, we discussed the ï¬nding that the ability of human participants to distinguish model and human generated news articles decreases as our models become larger. We have also found that the average time spent for a given set of questions increases as the model size increases, as shown in Figure E.1. Lower
47
Model Participants Recruited Participants Excluded Genders (m:f:other) Mean Age Control GPT-3 175B 79 81 17 19 32:37:0 32:30:0 39 40 569:464 569:498
Table E.2: Participant details and article lengths for the experiments investigating human detection of â¼ 500 word model generated news articles. Participants were excluded due to internet check fails.
accuracy scores despite increased time investment from participants supports the ï¬nding that larger models generate harder-to-distinguish news articles.
Preliminary investigation of â¼ 500 word articles: We recruited 160 unique US-based participants to take part in 2 experiments through Positly (details are given in Table E.2). We randomly selected 12 Reuters world news articles from late 2019 and created a context for GPT-3 175B that consisted of a single Reuters article not in this set of 12. We then used the article titles and Reuters locations to generate completions from GPT-3 175B and the 160M control model from the previous experiments. These were used to create two 12-question quizzes per model, each consisting of half human written and half model generated articles. Comprehension questions were added and articles were shown to participants in 3 stages at 30 second intervals to encourage closer reading. Participants were paid $12 for this task. Model generation selection methods, exclusion criteria, and statistical tests mirror those of the previous experiments.
# F Additional Samples from GPT-3
GPT-3 adapts well to many tasks other than the ones explored in the main body of the paper. As an example, in Figure F.1, we show four uncurated samples from a prompt suggesting that the model write a poem, with a given title, in the style of Wallace Stevens. We ï¬rst experimented with a few prompts, then generated four samples with no additional editing or selection (sampling at temperature 1 using nucleus sampling [HBFC19] with P = 0.9). Completions were truncated when the model began to write a new title and author heading, or broke into prose commentary.
48
Context â
The City BY C. P. CAVAFY TRANSLATED BY EDMUND KEELEY [Poem text omitted]
SOME TREES John Ashbery [Poem text omitted]
Shadows on the Way Wallace Stevens
-------- Generated Poem 1 --------
# -------- Generated Poem 3 --------
I must have shadows on the way If I am to walk I must have Each step taken slowly and alone To have it ready made
And I must think in lines of grey To have dim thoughts to be my guide Must look on blue and green And never let my eye forget That color is my friend And purple must surround me too
The yellow of the sun is no more Intrusive than the bluish snow That falls on all of us. I must have Grey thoughts and blue thoughts walk with me If I am to go away at all.
# -------- Generated Poem 2 --------
There is a way in the world. That way a man may live in a small town His lifetime and see no man come, Never the stir of men about a market, Never the bobbing of covered carts, Never the creak passed on the stair Closing some other life; an old man Can prosper in that town, can prosper When his store is seen to close, his wife Fades in his house, and the world shouts in vain That no one comes to hear him Where his small bell tolls. In that small town Everything is known. The ways of the young men are known, And the ways of the young women As they walk to the Spring; The rumors are all known, The quarrels, the hurt, the envy Of the men and the women. Till a day comes That way out of the world That leads to a wood. Young, the young man Walks, a lover now, Knows the streets as they are and is of them.
The sun was all we had. All is changed. The mind must dwell on those White fields, that to its eyes were always old; Those ancient gleams, convoluting The way they lay among the huge roots, The great dark tomes of reverie, The plumed labyrinth of the sea.
When our own breath gonne faint and fade Up the long stairway of the dead We shall remember well The blindness of the light, the whiteness Of that white land. The footsteps, and The grasses in that light, and in the shade The menacing life.
We played, too, once, in the warmest rooms. To one content, there is one content For all mankind. The forest glades Are all the more for being in fable, And wood and town in legend shadowed- Once more to return. The one content Has in it every happiness, more brave For being purposeless; and so Those island gardens, deep in the mind, Make of remoteness familiarity And of the unspoken name a friend.
# -------- Generated Poem 4 --------
Nobody will come to this place. The solitude is deep. But they are desolate, and they turn the travelerâs face Towards the North. When the wind rises above the trees, The boughs bow to the ground. Even the birds that inhabit the tangle of weeds That is the roadside cover, are silent. One listens, But hears no roar of the forest. One will be taken. One will be taken. There is no utterance, there is no conversation, But one is uneasy all the same.... There is a thin blue mist, A darkness rising like smoke, And within that darkness A possession of the heart. One will be taken.... Here, under this sky empty and full of light.
There, in the wood, among trees, He sees shadows on the way, hears voices, hears the wind and the rustling of leaves; Through an open glade He sees a shape and the shape hears: It waits as he waits, Waits as the shadows wait, As the voices wait; Shadows on the way, voices in the wind.
Figure F.1: Four uncurated completions from a context suggesting the model compose a poem in the style of Wallace Stevens with the title âShadows on the Wayâ.
49
# G Details of Task Phrasing and Speciï¬cations
The following ï¬gures illustrate the formatting and phrasing of all the tasks included in the paper. All data comes from the ground truth datasets in this section, and no samples from GPT-3 are included here.
# Context â Article:
Informal conversation is an important part of any business relationship.Before you start a discussion,however,make sure you understand which topics are suitable and which are considered taboo in a particular culture. history, art and customs.You may expect questions about your family,and be sure to show pictures of your children.You may feel free to ask similar questions of your Latin American friends.The French think of conversation as an art form,and they enjoy the value of lively discussions as well as For them,arguments can be interesting and they can cover disagreements. pretty much or any topic ---- as long as they occur in are respectful and intelligent manner. In the United States,business people like to discuss a wide range of topics,including opinions about work,family,hobbies,and politics. In Japan,China,and Korea,however,people are much more private.They do not share much about their thoughts,feelings,or emotions because they feel that doing so might take away from the harmonious business relationship theyâre trying to build.Middle Easterners are also private about their personal lives and family matters.It is considered rude,for example,to ask a businessman from Saudi Arabia about his wife or children. As a general rule,itâs best not to talk about politics or religion with your business friends.This can get you into trouble,even in the United States,where people hold different religious views.In addition,discussing oneâs salary is usually considered unsuitable.Sports is typically a friendly subject in most parts of the world,although be careful not to criticize national sport.Instead,be friendly and praise your hostâs team.
Q: What shouldnât you do when talking about sports with colleagues from another country?
A: Criticizing the sports of your colleaguesâ country.
Q: Which is typically a friendly topic in most places according to the author?
A: Sports.
Q: Why are people from Asia more private in their conversation with others?
A: They donât want to have their good relationship with others harmed by informal conversation.
Q: The author considers politics and religion
A:
Correct Answer â taboo Incorrect Answer â cheerful topics Incorrect Answer â rude topics Incorrect Answer â topics that can never be talked about
Figure G.1: Formatted dataset example for RACE-h. When predicting, we normalize by the unconditional probability of each answer as described in 2.
50
Context â anli 2: anli 2: The Gold Coast Hotel & Casino is a hotel and casino located in Paradise, Nevada. by Boyd Gaming. Las Vegas Strip on West Flamingo Road. from the Palms Casino Resort and the Rio All Suite Hotel and Casino. Question: Neither? This localsâ casino is owned and operated The Gold Coast is located one mile (â¼ 1.6km) west of the It is located across the street The Gold Coast is a budget-friendly casino. True, False, or
# Correct Answer â Neither
# Incorrect Answer â True Incorrect Answer â False
Figure G.2: Formatted dataset example for ANLI R2
# Context â Article:
Mrs. along a few potatoes in plastic bag. write a name of a person that they hated And the next day, every child brought some potatoes. Mrs. even to the toilet, for two weeks. started to complain about the awful smell of the rotten potatoes. Those children who brought five potatoes began to feel the weight trouble of the bags. game was finally ended. the potatoes for two weeks?" The children started complaining about the trouble loudly. Then Mrs. Smith told them why she asked them to play the game. said,"This is exactly the situation when you carry your hatred for somebody inside your heart. heart and you will carry something unnecessary with you all the time. If you cannot stand the smell of the rotten potatoes for just two weeks, can you imagine how heavy it would be to have the hatred in your heart for your lifetime? happy."
Q: Which of the following is True according to the passage?
A: If a kid hated four people,he or she had to carry four potatoes.
Q: We can learn from the passage that we should
.
A: throw away the hatred inside
Q: The children complained about besides the weight trouble.
A: the smell
Q: Mrs.Smith asked her students to write on the potatoes.
A:
Correct Answer â names Incorrect Answer â numbers Incorrect Answer â time Incorrect Answer â places
Figure G.3: Formatted dataset example for RACE-m. When predicting, we normalize by the unconditional probability of each answer as described in 2.
51
Context â How to apply sealant to wood.
Correct Answer â Using a brush, brush on sealant onto wood until it is fully saturated with the sealant. Incorrect Answer â Using a brush, drip on sealant onto wood until it is fully saturated with the sealant.
Figure G.4: Formatted dataset example for PIQA
Context â My body cast a shadow over the grass because
Correct Answer â the sun was rising.
Incorrect Answer â the grass was cut.
Figure G.5: Formatted dataset example for COPA
Context â (CNN) Yuval Rabin, whose father, Yitzhak Rabin, was assassinated while He said that Trumpâs appeal to Correct Answer â - Referencing his father, who was shot and killed by an extremist amid political tension in Israel in 1995, Rabin condemned Donald Trumpâs aggressive rhetoric. Correct Answer â - Referencing his father, who was shot and killed by an extremist amid political tension in Israel in 1995, Rabin condemned Trumpâs aggressive rhetoric. Incorrect Answer â - Referencing his father, who was shot and killed by an extremist amid political tension in Israel in 1995, Rabin condemned Hillary Clintonâs aggressive rhetoric. Incorrect Answer â - Referencing his father, who was shot and killed by an extremist amid political tension in Israel in 1995, Rabin condemned U.S.âs aggressive rhetoric. Incorrect Answer â - Referencing his father, who was shot and killed by an extremist amid
Figure G.6: Formatted dataset example for ReCoRD. We consider the context above to be a single âproblemâ because this is how the task is presented in the ReCoRD dataset and scored in the ReCoRD evaluation script.
Context â anli 1: anli 1: Fulton James MacGregor MSP is a Scottish politician who is a Scottish National Party (SNP) Member of Scottish Parliament for the constituency of Coatbridge and Chryston. Parliamentary Liaison Officer to Shona Robison, Cabinet Secretary for Health & Sport. He also serves on the Justice and Education & Skills committees in the Scottish Parliament. Question: officer to Shona Robison who he swears is his best friend. Neither? MacGregor is currently Fulton James MacGregor is a Scottish politican who is a Liaison True, False, or
# Correct Answer â Neither
# Incorrect Answer â True Incorrect Answer â False
Figure G.7: Formatted dataset example for ANLI R1
52
Context â Organisms require energy in order to do what?
Correct Answer â mature and develop. Incorrect Answer â rest soundly. Incorrect Answer â absorb light. Incorrect Answer â take in nutrients.
Figure G.8: Formatted dataset example for OpenBookQA. When predicting, we normalize by the unconditional probability of each answer as described in 2.
Context â Making a cake: Several cake pops are shown on a display. A woman and girl are shown making the cake pops in a kitchen. They
Correct Answer â bake them, then frost and decorate.
Incorrect Answer â taste them as they place them on plates. Incorrect Answer â put the frosting on the cake as they pan it. Incorrect Answer â come out and begin decorating the cake as well.
Figure G.9: Formatted dataset example for HellaSwag
Context â anli 3: anli 3: We shut the loophole which has American workers actually subsidizing the loss of their own job. They just passed an expansion of that loophole in the last few days: $43 billion of giveaways, including favors to the oil and gas industry and the people importing ceiling fans from China. Question: The loophole is now gone True, False, or Neither?
# Correct Answer â False
# Incorrect Answer â True Incorrect Answer â Neither
Figure G.10: Formatted dataset example for ANLI R3
Context â Question: George wants to warm his hands quickly by rubbing them. Which skin surface will produce the most heat? Answer:
Correct Answer â dry palms Incorrect Answer â wet palms Incorrect Answer â palms covered with oil Incorrect Answer â palms covered with lotion
Figure G.11: Formatted dataset example for ARC (Challenge). When predicting, we normalize by the unconditional probability of each answer as described in 2.
Context â lull is to trust as Correct Answer â cajole is to compliance Incorrect Answer â balk is to fortitude Incorrect Answer â betray is to loyalty Incorrect Answer â hinder is to destination Incorrect Answer â soothe is to passion
# Figure G.12: Formatted dataset example for SAT Analogies
Correct Context â Grace was happy to trade me her sweater for my jacket. She thinks the sweater Incorrect Context â Grace was happy to trade me her sweater for my jacket. She thinks the jacket
Target Completion â looks dowdy on her.
Figure G.13: Formatted dataset example for Winograd. The âpartialâ evaluation method we use compares the probability of the completion given a correct and incorrect context.
53
Correct Context â Johnny likes fruits more than vegetables in his new keto diet because the fruits Incorrect Context â Johnny likes fruits more than vegetables in his new keto diet because the vegetables Target Completion â are saccharine.
Figure G.14: Formatted dataset example for Winogrande. The âpartialâ evaluation method we use compares the probability of the completion given a correct and incorrect context.
# Context â READING COMPREHENSION ANSWER KEY
While this process moved along, diplomacy continued its rounds. pressure on the Taliban had proved unsuccessful. put it, "Under the Taliban, Afghanistan is not so much a state sponsor of terrorism as it is a state sponsored by terrorists." In early 2000, the United States began a high-level effort to persuade Pakistan to use its influence over the Taliban. of State Karl Inderfurth and the State Departmentâs counterterrorism coordinator, Michael Sheehan, met with General Musharraf in Islamabad, dangling before him the possibility of a presidential visit in March as a reward for Pakistani cooperation. Such a visit was coveted by Musharraf, partly as a sign of his governmentâs legitimacy. that he would meet with Mullah Omar and press him on Bin Laden. left, however, reporting to Washington that Pakistan was unlikely in fact to do anything," given what it sees as the benefits of Taliban control of Afghanistan." President Clinton was scheduled to travel to India. The State Department felt that he should not visit India without also visiting Pakistan. The Secret Service and the CIA, however, warned in the strongest terms that visiting Pakistan would risk the Presidentâs life. enough to merit a presidential visit. on including Pakistan in the itinerary for his trip to South Asia. His one-day stopover on March 25, 2000, was the first time a U.S. president had been there since 1969. At his meeting with Musharraf and others, President Clinton concentrated on tensions between Pakistan and India and the dangers of nuclear proliferation, but also discussed Bin Laden. President Clinton told us that when he pulled Musharraf aside for a brief, one-on-one meeting, he pleaded with the general for help regarding Bin Laden." I offered him the moon when I went to see him, in terms of better relations with the United States, if heâd help us get Bin Laden and deal with another issue or two." The U.S. effort continued.
Who did The State Department feel should visit both India and Pakistan?
Correct Answer â - [False] Bin Laden Incorrect Answer â - [True] Bin Laden
Figure G.15: Formatted dataset example for MultiRC. There are three levels within MultiRC: (1) the passage, (2) the questions, and (3) the answers. During evaluation, accuracy is determined at the per-question level, with a question being considered correct if and only if all the answers within the question are labeled correctly. For this reason, we use K to refer to the number of questions shown within the context.
Context â Question: Which factor will most likely cause a person to develop a fever? Answer: Correct Answer â a bacterial population in the bloodstream Incorrect Answer â a leg muscle relaxing after exercise Incorrect Answer â several viral particles on the skin Incorrect Answer â carbohydrates being digested in the stomach
Figure G.16: Formatted dataset example for ARC (Easy). When predicting, we normalize by the unconditional probability of each answer as described in 2.
54
Context â Bob went to the gas station to fill up his car. His tank was completely empty and so was his wallet. came back later to pay. The cashier offered to pay for his gas if he Bob felt grateful as he drove home.
Correct Answer â Bob believed that there were good people in the world. Incorrect Answer â Bob contemplated how unfriendly the world was.
Figure G.17: Formatted dataset example for StoryCloze
Context â Helsinki is the capital and largest city of Finland. It is in the region of Uusimaa, in southern Finland, on the shore of the Gulf of Finland. Helsinki has a population of , an urban population of , and a metropolitan population of over 1.4 million, making it the most populous municipality and urban area in Finland. east of Stockholm, Sweden, and west of Saint Petersburg, Russia. has close historical connections with these three cities. Helsinki is some north of Tallinn, Estonia, Helsinki
The Helsinki metropolitan area includes the urban core of Helsinki, Espoo, Vantaa, Kauniainen, and surrounding commuter towns. northernmost metro area of over one million people, and the city is the northernmost capital of an EU member state. area is the third largest metropolitan area in the Nordic countries after Stockholm and Copenhagen, and the City of Helsinki is the third largest after Stockholm and Oslo. educational, financial, cultural, and research center as well as one of northern Europeâs major cities. Approximately 75% of foreign companies that operate in Finland have settled in the Helsinki region. municipality of Vantaa is the location of Helsinki Airport, with frequent service to various destinations in Europe and Asia.
Q: what is the most populous municipality in Finland?
A: Helsinki
Q: how many people live there?
A: 1.4 million in the metropolitan area
Q: what percent of the foreign companies that operate in Finland are in Helsinki?
A: 75%
Q: what towns are a part of the metropolitan area?
A:
# Target Completion â Helsinki, Espoo, Vantaa, Kauniainen, and surrounding commuter towns
Figure G.18: Formatted dataset example for CoQA
Context â Please unscramble the letters into a word, and write that word: asinoc =
# Target Completion â casino
Figure G.19: Formatted dataset example for Cycled Letters
55
Context â Passage: Saint Jean de Br´ebeuf was a French Jesuit missionary who travelled to New France in 1625. for the rest of his life, except for a few years in France from 1629 to 1633. He learned their language and culture, writing extensively about each to aid other missionaries. were captured when an Iroquois raid took over a Huron village . with Huron captives, the missionaries were ritually tortured and killed on March 16, 1649. missionaries canonized as saints in the Roman Catholic Church in 1930. How many years did Saint Jean de Br´ebeuf stay in New France Question: before he went back to France for a few years? Answer: There he worked primarily with the Huron In 1649, Br´ebeuf and another missionary Together Br´ebeuf was beatified in 1925 and among eight Jesuit
# Target Completion â 4
Figure G.20: Formatted dataset example for DROP
# Context â Fill in blank:
She held the torch in front of her. She caught her breath. "Chris? Thereâs a step." "What?" "A step. They both moved faster. "thereâs more than a Cut in the rock. . About fifty feet ahead." She moved faster. "In fact," she said, raising the torch higher, ->
Target Completion â step
Figure G.21: Formatted dataset example for LAMBADA
Context â Please unscramble the letters into a word, and write that word: skicts =
# Target Completion â sticks
Figure G.22: Formatted dataset example for Anagrams 1 (A1)
Context â Please unscramble the letters into a word, and write that word: volwskagen =
# Target Completion â volkswagen
Figure G.23: Formatted dataset example for Anagrams 2
Context â Q: Who played tess on touched by an angel?
A:
Target Completion â Delloreese Patricia Early (July 6, 1931 { November 19, 2017), known professionally as Della Reese
# Figure G.24: Formatted dataset example for Natural Questions
56
# Context â TITLE: William Perry (American football) - Professional career
PARAGRAPH: In 1985, he was selected in the first round of the 1985 NFL Draft by the Chicago Bears; he had been hand-picked by coach Mike Ditka. However, defensive coordinator Buddy Ryan, who had a highly acrimonious relationship with Ditka, called Perry a "wasted draft-pick". soon became a pawn in the political power struggle between Ditka and Ryan. quickly became a favorite of the Chicago Bears fans. him "Biscuit," as in "one biscuit shy of 350 pounds." While Ryan refused to play Perry, Ditka decided to use Perry as a fullback when the team was near the opponentsâ goal line or in fourth and short situations, either as a ball carrier or a lead blocker for star running back Walter Payton. Ditka stated the inspiration for using Perry as a fullback came to him during five-yard sprint exercises. rushed for two touchdowns and caught a pass for one. the opportunity to run the ball during Super Bowl XX, as a nod to his popularity and contributions to the teamâs success. got the ball, he was tackled for a one-yard loss while attempting to throw his first NFL pass on a halfback option play. ball, he scored a touchdown (running over Patriots linebacker Larry McGrew in the process). About halfway through his rookie season, Ryan finally began to play Perry, who soon proved that he was a capable defensive lineman. football player in the history of the event. the ring size for the average adult male is between 10 and 12. on to play for ten years in the NFL, retiring after the 1994 season. his ten years as a pro, he regularly struggled with his weight, which hampered his performance at times. He played in 138 games, recording 29.5 sacks and five fumble recoveries, which he returned for a total of In his offensive career he ran five yards for two touchdowns, 71 yards. and had one reception for another touchdown. comeback, playing an unremarkable 1996 season with the London Monarchs of the World League of American Football (later NFL Europa).
Q: what team did he play for?
A:
# Target Completion â the Chicago Bears
Figure G.25: Formatted dataset example for QuAC
Context â Please unscramble the letters into a word, and write that word: r e!c.i p r o.c a/l =
# Target Completion â reciprocal
Figure G.26: Formatted dataset example for Symbol Insertion
Context â Please unscramble the letters into a word, and write that word: taefed = Target Completion â defeat
Figure G.27: Formatted dataset example for Reversed Words
57
Context â Title: The Blitz
Background: The Luftwaffe flew 4,000 sorties that month, including 12 major and three heavy attacks. flew major inland missions only on moonlit nights. find and made better targets. observed until the bombs fell. false targets and switched only at the last minute. changes were introduced for X-Ger¨at, whose wider band of frequencies and greater tactical flexibility ensured it remained effective at a time when British selective jamming was degrading the effectiveness of Y-Ger¨at.
Q: How many sorties were flown in March 1941?
A: 4,000
Q: When did the Luftwaffe fly inland missions?
A:
# Target Completion â only on moonlit nights
Figure G.28: Formatted dataset example for SQuADv2
Context â Normal force -- In a simple case such as an object resting upon a table, the normal force on the object is equal but in opposite direction to the gravitational force applied on the object (or the weight of the object), that is, N = m g (\displaystyle N=mg), where m is mass, and g is the gravitational field strength (about 9.81 m/s on Earth). here represents the force applied by the table against the object that prevents it from sinking through the table and requires that the table is However, it sturdy enough to deliver this normal force without breaking. is easy to assume that the normal force and weight are action-reaction force pairs (a common mistake). weight need to be equal in magnitude to explain why there is no upward acceleration of the object. For example, a ball that bounces upwards accelerates upwards because the normal force acting on the ball is larger in magnitude than the weight of the ball. question: answer:
# Target Completion â yes
Figure G.29: Formatted dataset example for BoolQ
Context â The trend toward lower rents may seem surprising given that some communities in New York are bemoaning the loss of favorite local businesses to high rents. of these retailers thereâs still been too big a jump from the rental rates of the late 1970s, when their leases were signed. drop in prices doesnât mean Manhattan comes cheap. question: answer:
# Target Completion â false
Figure G.30: Formatted dataset example for CB
58
Context â The bet, which won him dinner for four, was regarding the existence and mass of the top quark, an elementary particle discovered in 1995. question: the standard model theory of particle physics. answer: The Top Quark is the last of six flavors of quarks predicted by True or False?
# Target Completion â False
Figure G.31: Formatted dataset example for RTE
Context â An outfitter provided everything needed for the safari. Before his first walking holiday, he went to a specialist outfitter to buy some boots. question: sentences above? answer: Is the word âoutfitterâ used in the same way in the two
# Target Completion â no
Figure G.32: Formatted dataset example for WiC
Context â Final Exam with Answer Key Instructions: passage, you must identify which noun the pronoun marked in *bold* refers to. ===== Passage: The result was that Mr. thinking that it belonged to his son Edward. Moncrieff has decided to cancel Edwardâs allowance on the ground that he no longer requires *his* financial support. Question: Answer:
# Target Completion â mr.
# moncrieff
moncrieff
Figure G.33: Formatted dataset example for WSC
Context â Q: âNude Descending A Staircaseâ is perhaps the most famous painting by which 20th century artist?
A:
Target Completion â MARCEL DUCHAMP Target Completion â r mutt Target Completion â duchamp Target Completion â marcel duchamp Target Completion â R.Mutt Target Completion â Marcel duChamp Target Completion â Henri-Robert-Marcel Duchamp Target Completion â Marcel du Champ Target Completion â henri robert marcel duchamp Target Completion â Duchampian Target Completion â Duchamp Target Completion â duchampian Target Completion â marcel du champ Target Completion â Marcel Duchamp Target Completion â MARCEL DUCHAMP
Figure G.34: Formatted dataset example for TriviaQA. TriviaQA allows for multiple valid completions.
59
Context â Q: What school did burne hogarth establish?
A:
# Target Completion â School of Visual Arts
Figure G.35: Formatted dataset example for WebQA
Context â Keinesfalls d¨urfen diese f¨ur den kommerziellen Gebrauch verwendet werden. =
Target Completion â In no case may they be used for commercial purposes.
Figure G.36: Formatted dataset example for DeâEn. This is the format for one- and few-shot learning, for this and other langauge tasks, the format for zero-shot learning is âQ: What is the {language} translation of {sentence} A: {translation}.â
Context â In no case may they be used for commercial purposes.
=
Target Completion â Keinesfalls d¨urfen diese f¨ur den kommerziellen Gebrauch verwendet werden.
Figure G.37: Formatted dataset example for EnâDe
Context â Analysis of instar distributions of larval I. verticalis collected from a series of ponds also indicated that males were in more advanced instars than females. =
Target Completion â Lâanalyse de la distribution de fr´equence des stades larvaires dâI. verticalis dans une s´erie dâ´etangs a ´egalement d´emontr´e que les larves m^ales ´etaient `a des stades plus avanc´es que les larves femelles.
Figure G.38: Formatted dataset example for EnâFr
Context â Lâanalyse de la distribution de fr´equence des stades larvaires dâI. verticalis dans une s´erie dâ´etangs a ´egalement d´emontr´e que les larves m^ales ´etaient `a des stades plus avanc´es que les larves femelles. = a series of ponds also indicated that males were in more advanced instars than females.
Figure G.39: Formatted dataset example for FrâEn
Context â The truth is that you want, at any price, and against the wishes of the peoples of Europe, to continue the negotiations for Turkeyâs accession to the European Union, despite Turkeyâs continuing refusal to recognise Cyprus and despite the fact that the democratic reforms are at a standstill. = Target Completion â AdevËarul este cËa vËa dorit¸i, cu orice pret¸ ¸si ^ımpotriva dorint¸ei
Figure G.40: Formatted dataset example for EnâRo
60
Context â AdevËarul este cËa vËa dorit¸i, cu orice pret¸ ¸si ^ımpotriva dorint¸ei
europenilor, sËa continuat¸i negocierile de aderare a Turciei la Uniunea EuropeanËa, ^ın ciuda refuzului continuu al Turciei de a recunoa¸ste Ciprul ¸si ^ın ciuda faptului cËa reformele democratice au ajuns ^ıntr-un punct mort. = Target Completion â The truth is that you want, at any price, and against the wishes of the peoples of Europe, to continue the negotiations for Turkeyâs accession to the European Union, despite Turkeyâs continuing refusal to recognise Cyprus and despite the fact that the democratic reforms are at a standstill.
Figure G.41: Formatted dataset example for RoâEn
# Context â Q: What is (2 * 4) * 6?
A:
# Target Completion â 48
Figure G.42: Formatted dataset example for Arithmetic 1DC
# Context â Q: What is 17 minus 14?
A:
# Target Completion â 3
Figure G.43: Formatted dataset example for Arithmetic 2D-
# Context â Q: What is 98 plus 45?
A:
# Target Completion â 143
Figure G.44: Formatted dataset example for Arithmetic 2D+
# Context â Q: What is 95 times 45?
A:
# Target Completion â 4275
Figure G.45: Formatted dataset example for Arithmetic 2Dx
# Context â Q: What is 509 minus 488?
A:
# Target Completion â 21
Figure G.46: Formatted dataset example for Arithmetic 3D-
# Context â Q: What is 556 plus 497?
A:
# Target Completion â 1053
Figure G.47: Formatted dataset example for Arithmetic 3D+
# Context â Q: What is 6209 minus 3365?
A:
# Target Completion â 2844
Figure G.48: Formatted dataset example for Arithmetic 4D-
61
# Context â Q: What is 9923 plus 617?
A:
# Target Completion â 10540
Figure G.49: Formatted dataset example for Arithmetic 4D+
Context â Q: What is 40649 minus 78746? A: Target Completion â -38097
Figure G.50: Formatted dataset example for Arithmetic 5Dâ
Context â Q: What is 65360 plus 16204? A:
Figure G.51: Formatted dataset example for Arithmetic 5D+
62
# H Results on All Tasks for All Model Sizes
Zero-Shot Fine-tune Name Metric Split SOTA K Small Med Large XL 2.7B 6.7B 13B 175B HellaSwag LAMBADA LAMBADA StoryCloze acc acc ppl acc dev test test test 85.6 68.0 8.63 91.8 20 15 15 70 33.7 43.6 51.0 54.7 62.8 67.4 70.9 78.9 42.7 54.3 60.4 63.6 67.1 70.3 72.5 76.2 18.6 9.09 6.53 5.44 4.60 4.00 3.56 3.00 63.3 68.5 72.4 73.4 77.2 77.7 79.5 83.2 NQs TriviaQA WebQs acc acc acc test dev test 44.5 68.0 45.5 64 64 64 0.64 1.75 2.71 4.40 6.01 5.79 7.84 14.6 4.15 7.61 14.0 19.7 31.3 38.7 41.8 64.3 1.77 3.20 4.33 4.63 7.92 7.73 8.22 14.4 RoâEn 16 RoâEn 16 EnâRo 16 EnâRo 16 FrâEn 14 FrâEn 14 EnâFr 14 EnâFr 14 DeâEn 16 DeâEn 16 EnâDe 16 EnâDe 16 BLEU-mb test BLEU-sb test BLEU-mb test BLEU-sb test BLEU-mb test BLEU-sb test BLEU-mb test BLEU-sb test BLEU-mb test BLEU-sb test BLEU-mb test BLEU-sb test 39.9 38.5 35.0 45.6 45.9 40.2 41.2 41.2 64 64 64 64 64 64 64 64 64 64 64 64 2.08 2.71 3.09 3.15 16.3 8.34 20.2 19.9 2.39 3.08 3.49 3.56 16.8 8.75 20.8 20.9 2.14 2.65 2.53 2.50 3.46 4.24 5.32 14.1 2.61 3.11 3.07 3.09 4.26 5.31 6.43 18.0 1.81 2.53 3.47 3.13 20.6 15.1 21.8 21.2 2.29 2.99 3.90 3.60 21.2 15.5 22.4 21.9 1.74 2.16 2.73 2.15 15.1 8.82 12.0 25.2 2.44 2.75 3.54 2.82 19.3 11.4 15.3 31.3 2.06 2.87 3.41 3.63 21.5 17.3 23.0 27.2 2.39 3.27 3.85 4.04 22.5 18.2 24.4 28.6 1.70 2.27 2.31 2.43 12.9 8.66 10.4 24.6 2.09 2.65 2.75 2.92 13.7 9.36 11.0 25.3 Winograd Winogrande acc acc test dev 93.8 84.6 7 50 66.3 72.9 74.7 76.9 82.4 85.7 87.9 88.3 52.0 52.1 57.4 58.7 62.3 64.5 67.9 70.2 PIQA acc ARC (Challenge) acc acc ARC (Easy) acc OpenBookQA dev test test test 77.1 78.5 92.0 87.2 50 50 50 100 64.6 70.2 72.9 75.1 75.6 78.0 78.5 81.0 26.6 29.5 31.8 35.5 38.0 41.4 43.7 51.4 43.6 46.5 53.0 53.8 58.2 60.2 63.8 68.8 35.6 43.2 45.2 46.8 53.0 50.4 55.6 57.6 Quac RACE-h RACE-m SQuADv2 SQuADv2 CoQA DROP f1 acc acc em f1 f1 f1 dev test test dev dev dev dev 74.4 90.0 93.1 90.7 93.0 90.7 89.1 5 10 10 16 16 5 20 21.2 26.8 31.0 30.1 34.7 36.1 38.4 41.5 35.2 37.9 40.1 40.9 42.4 44.1 44.6 45.5 42.1 47.2 52.1 52.3 54.7 54.4 56.7 58.4 22.6 32.8 33.9 43.1 43.6 45.4 49.0 52.6 28.3 40.2 41.4 50.3 51.0 52.7 56.3 59.5 34.5 55.0 61.8 65.3 71.1 72.8 76.3 81.5 9.40 13.6 14.4 16.4 19.7 17.0 24.0 23.6 BoolQ CB CB Copa RTE WiC WSC MultiRC MultiRC ReCoRD ReCoRD SuperGLUE acc acc f1 acc acc acc acc acc f1a acc f1 average dev dev dev dev dev dev dev dev dev dev dev dev 91.0 96.9 93.9 94.8 92.5 76.1 93.8 62.3 88.2 92.5 93.3 89.0 32 32 32 32 32 32 32 32 32 32 32 49.7 60.3 58.9 62.4 67.1 65.4 66.2 60.5 0.00 32.1 8.93 19.6 19.6 28.6 19.6 46.4 0.00 29.3 11.4 17.4 22.4 25.1 20.3 42.8 66.0 68.0 73.0 77.0 76.0 80.0 84.0 91.0 47.7 49.8 48.4 56.0 46.6 55.2 62.8 63.5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 59.6 56.7 65.4 61.5 66.3 60.6 64.4 65.4 4.72 9.65 12.3 13.6 14.3 18.4 24.2 27.6 57.0 59.7 60.4 59.9 60.0 64.5 71.4 72.9 70.8 78.5 82.1 84.1 86.2 88.6 89.0 90.2 71.9 79.2 82.8 85.2 87.3 89.5 90.4 91.0 40.6 47.4 46.8 49.6 50.1 52.3 54.4 58.2 ANLI R1 ANLI R2 ANLI R3 acc acc acc test test test 73.8 50.7 48.3 50 50 50 33.4 34.2 33.4 33.4 34.2 32.3 33.2 34.6 33.2 31.9 33.3 33.3 33.8 33.5 33.5 35.4 33.6 34.0 33.8 33.4 35.3 34.8 34.4 34.5 2D+ 2D- 3D+ 3D- 4D+ 4D- 5D+ 5D- 2Dx 1DC acc acc acc acc acc acc acc acc acc acc n/a n/a n/a n/a n/a n/a n/a n/a n/a n/a 50 50 50 50 50 50 50 50 50 50 0.70 0.65 0.70 0.85 1.10 2.54 15.4 76.9 1.25 1.25 1.25 1.25 1.60 7.60 12.6 58.0 0.10 0.10 0.05 0.10 0.10 0.25 1.40 34.2 0.05 0.05 0.05 0.05 0.05 0.45 1.35 48.3 0.05 0.05 0.00 0.00 0.05 0.05 0.15 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.10 7.50 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.65 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.80 2.20 2.25 2.65 2.10 2.55 5.80 6.15 19.8 1.25 2.95 2.75 0.05 0.30 2.35 0.75 9.75 acc Cycled Letters acc Anagrams 1 Anagrams 2 acc Symbol Insertion acc Reversed Words acc n/a n/a n/a n/a n/a 100 100 100 100 100 0.62 0.71 2.85 0.00 0.63 1.35 2.58 3.66 0.10 0.14 0.40 0.00 0.27 0.69 1.16 2.28 0.81 1.21 2.69 0.01 1.71 3.75 4.53 8.91 0.00 0.00 0.10 0.00 0.05 0.42 0.89 8.26 0.00 0.01 0.01 0.01 0.02 0.03 0.03 0.09
# One-Shot
One-Shot
# Small Med Large XL 2.7B 6.7B 13B 175B
33.0 42.9 50.5 53.5 61.9 66.5 70.0 78.1 22.0 47.1 52.6 58.3 61.1 65.4 69.0 72.5 165.0 11.6 8.29 6.46 5.53 4.61 4.06 3.35 62.3 68.7 72.3 74.2 77.3 78.7 79.7 84.7
1.19 3.07 4.79 5.43 8.73 9.78 13.7 23.0 4.19 12.9 20.5 26.5 35.9 44.4 51.3 68.0 2.56 6.20 8.51 9.15 14.5 15.1 19.0 25.3
0.55 15.4 23.0 26.3 30.6 33.2 35.6 38.6 0.65 15.9 23.6 26.8 31.3 34.2 36.7 40.0 0.35 3.30 7.89 8.72 13.2 15.1 17.3 20.6 0.55 3.90 9.15 10.3 15.7 18.2 20.8 24.9 1.28 15.9 23.7 26.3 29.0 30.5 30.2 33.7 1.50 16.3 24.4 27.0 30.0 31.6 31.4 35.6 0.49 8.00 14.8 15.9 20.3 23.3 24.9 28.3 0.81 10.0 18.2 19.3 24.7 28.3 30.1 34.1 0.83 16.2 22.5 24.7 28.2 30.7 33.0 30.4 0.93 17.1 23.4 25.8 29.2 31.9 34.5 32.1 0.50 7.00 12.9 13.1 18.3 20.9 22.5 26.2 0.54 7.40 13.4 13.4 18.8 21.7 23.3 27.3
63.4 68.5 72.9 76.9 82.4 84.6 86.1 89.7 51.3 53.0 58.3 59.1 61.7 65.8 66.9 73.2
64.3 69.3 71.8 74.4 74.3 76.3 77.8 80.5 25.5 30.2 31.6 36.4 38.4 41.5 43.1 53.2 42.7 48.2 54.6 55.9 60.3 62.6 66.8 71.2 37.0 39.8 46.2 46.4 53.4 53.0 55.8 58.8
21.1 26.9 31.9 32.3 37.4 39.0 40.6 43.4 34.3 37.7 40.0 42.0 43.8 44.3 44.6 45.9 42.3 47.3 51.7 55.2 56.1 54.7 56.9 57.4 25.1 37.5 37.9 47.9 47.9 51.1 56.0 60.1 30.1 43.6 44.1 54.0 54.1 57.1 61.8 65.4 30.6 52.1 61.6 66.1 71.8 75.1 77.9 84.0 11.7 18.1 20.9 23.0 26.4 27.3 29.2 34.3
52.6 61.7 60.4 63.7 68.4 68.7 69.0 76.7 55.4 53.6 53.6 48.2 57.1 33.9 55.4 64.3 60.1 39.8 45.6 37.5 45.7 28.5 44.6 52.5 62.0 64.0 66.0 74.0 76.0 82.0 86.0 87.0 53.1 47.3 49.5 49.5 54.9 54.9 56.3 70.4 50.0 50.3 50.3 49.2 49.4 50.3 50.0 48.6 58.7 58.7 60.6 62.5 66.3 60.6 66.3 69.2 4.72 9.65 12.3 13.6 14.3 18.4 24.2 27.6 57.0 59.7 60.4 59.9 60.0 64.5 71.4 72.9 69.8 77.0 80.7 83.0 85.9 88.0 88.8 90.2 70.7 77.8 81.6 83.9 86.8 88.8 89.7 91.2 54.4 55.1 56.7 57.8 61.2 59.7 64.3 68.9
32.1 31.6 31.9 34.6 30.6 31.6 32.7 32.0 35.7 33.7 33.2 32.7 32.7 33.9 33.9 33.9 35.0 32.6 33.0 33.9 34.1 33.1 32.5 35.1
2.00 0.55 3.15 4.00 12.1 19.6 73.0 99.6 1.15 0.95 1.45 1.95 3.85 11.5 44.6 86.4 0.15 0.00 0.10 0.30 0.45 0.95 15.4 65.5 0.05 0.15 0.25 0.30 0.55 1.60 6.15 78.7 0.00 0.00 0.10 0.00 0.00 0.10 0.80 14.0 0.00 0.00 0.00 0.00 0.05 0.00 0.50 14.0 0.00 0.00 0.00 0.00 0.00 0.00 0.05 3.45 0.00 0.00 0.00 0.00 0.00 0.00 0.05 3.75 1.35 2.35 3.35 2.35 4.75 9.15 11.0 27.4 1.90 2.80 2.85 3.65 6.45 9.15 8.20 14.3
Few-Shot
# 175B (test server)
Small Med Large XL 2.7B 6.7B 13B 175B
33.5 43.1 51.3 54.9 62.9 67.3 71.3 79.3 22.0 40.4 63.2 57.0 78.1 79.1 81.3 86.4 165.0 27.6 6.63 7.45 2.89 2.56 2.56 1.92 62.3 70.2 73.9 76.1 80.2 81.2 83.0 87.7
1.72 4.46 7.89 9.72 13.2 17.0 21.0 29.9 6.96 16.3 26.5 32.1 42.3 51.6 57.5 71.2 5.46 12.6 15.9 19.6 24.8 27.7 33.5 41.5
71.2
1.25 20.7 25.8 29.2 33.1 34.8 37.0 39.5 1.40 21.3 26.6 30.1 34.3 36.2 38.4 41.3 1.25 5.90 9.33 10.7 14.3 16.3 18.0 21.0 1.64 7.40 10.9 12.9 17.2 19.6 21.8 25.8 4.98 25.5 28.5 31.1 33.7 34.9 36.6 39.2 5.30 26.2 29.5 32.2 35.1 36.4 38.3 41.4 4.08 14.5 19.3 21.5 24.9 27.3 29.5 32.6 5.31 18.0 23.6 26.1 30.3 33.3 35.5 39.9 3.25 22.7 26.2 29.2 32.7 34.8 37.3 40.6 3.60 23.8 27.5 30.5 34.1 36.5 39.1 43.0 3.42 12.3 15.4 17.1 20.9 23.0 26.6 29.7 3.78 12.9 16.1 17.7 21.7 24.1 27.7 30.9
63.4 67.4 73.6 76.9 84.3 85.4 82.4 88.6 51.3 52.6 57.5 59.1 62.6 67.4 70.0 77.7
64.3 69.4 72.0 74.3 75.4 77.8 79.9 82.3 25.5 28.4 32.3 36.7 39.5 43.7 44.8 51.5 42.7 51.0 58.1 59.1 62.1 65.8 69.1 70.1 37.0 43.6 48.0 50.6 55.6 55.2 60.8 65.4
82.8
21.6 27.6 32.9 34.2 38.2 39.9 40.9 44.3 34.3 37.0 40.4 41.4 42.3 44.7 45.1 46.8 42.3 47.0 52.7 53.0 55.6 55.4 58.1 58.1 27.5 40.5 39.2 53.5 50.0 56.6 62.6 64.9 32.1 45.5 44.9 58.7 55.9 62.1 67.7 69.8 31.1 52.0 62.7 66.8 73.2 77.3 79.9 85.0 12.9 18.7 24.0 25.6 29.7 29.7 32.3 36.5
43.1 60.6 62.0 64.1 70.3 70.0 70.2 77.5 42.9 58.9 53.6 69.6 67.9 60.7 66.1 82.1 26.1 40.4 32.6 48.3 45.7 44.6 46.0 57.2 67.0 64.0 72.0 77.0 83.0 83.0 86.0 92.0 52.3 48.4 46.9 50.9 56.3 49.5 60.6 72.9 49.8 55.0 53.0 53.0 51.6 53.1 51.1 55.3 58.7 60.6 54.8 49.0 62.5 67.3 75.0 75.0 6.09 11.8 16.8 20.8 24.7 23.8 25.0 32.5 45.0 55.9 64.2 65.4 69.5 66.4 69.3 74.8 69.8 77.2 81.3 83.1 86.6 87.9 88.9 89.0 70.7 77.9 82.1 84.0 87.5 88.8 89.8 90.1 50.2 56.2 56.8 60.0 64.3 63.6 66.9 73.2
76.4 75.6 52.0 92.0 69.0 49.4 80.1 30.5 75.4 90.2 91.1 71.8
32.1 32.5 30.9 32.5 33.5 33.1 33.3 36.8 35.7 33.8 32.1 31.4 32.6 33.3 32.6 34.0 35.0 34.4 35.1 36.0 32.7 33.9 34.5 40.2
2.00 4.10 3.50 4.50 8.90 11.9 55.5 100.0 1.15 1.45 2.25 2.70 7.35 13.6 52.4 98.9 0.15 0.45 0.30 0.55 0.75 0.90 8.40 80.4 0.05 0.10 0.15 0.35 0.65 1.05 9.20 94.2 0.00 0.05 0.05 0.00 0.15 0.15 0.40 25.5 0.00 0.05 0.00 0.00 0.10 0.05 0.40 26.8 0.00 0.00 0.00 0.00 0.00 0.00 0.05 9.30 0.00 0.00 0.00 0.00 0.00 0.00 0.00 9.90 1.35 2.90 2.70 2.85 4.25 6.10 7.05 29.2 1.70 2.15 3.90 5.75 6.20 7.60 9.95 21.3
1.67 4.36 5.68 6.46 6.25 9.41 15.1 21.7 0.21 0.61 1.12 1.27 1.60 2.72 3.72 8.62 1.19 2.62 4.70 4.77 6.97 10.2 14.6 25.9 0.03 0.05 0.57 1.18 1.67 3.46 6.62 45.4 0.02 0.01 0.01 0.00 0.05 0.07 0.11 0.48
4.63 9.27 10.7 14.5 16.7 21.9 27.7 37.9 0.50 1.27 2.13 3.05 3.81 5.49 8.38 15.1 1.94 4.80 7.59 9.87 12.6 18.9 25.6 39.7 0.11 0.28 2.19 4.18 6.61 11.0 27.3 67.2 0.00 0.05 0.00 0.17 0.24 0.30 0.42 0.44
# SAT Analogies
â
# acc
# n/a
20
35.6 39.0 45.2 44.1 50.0 49.2 52.7 53.7
30.5 41.2 43.1 46.5 55.1 54.3 53.5 59.1
30.5 40.4 42.8 40.6 48.4 51.9 53.5 65.2
Table H.1: Scores for every task, setting and model that we investigate in this paper.
63
BoolQ Fine-tune SOTA âeâ Zero-Shot âe- One-Shot âeâ Few-Shot (K=32) 80 BERT-Large > S 70 j & 5 m Guessing 0.18 04B O88 13B 266 67B 138 1758 Parameters in LM (Billions) COPA Fine-tune SOTA 90 80 a 79 -- BERT Large 6 âe- Zero-Shot âe One-Shot 5 Random Guessing âe Few-Shot (K#32) 0.18 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions) WSC Fine-tune SOTA 99 80 âeâ Zero-Shot âeâ One-Shol bs 4] â*â Few-Shot (K=32} 0.1B 04B O8B 13B 26B 67B 13B 1758 Parameters in LM (Billions) ReCoRD (accuracy) Fine-tune SOTA 90 80 -Large 70 = â* Zero-Shot a âeâ One-Shot 3 & âeâ Few-Shot (K=32} * 40 Random Guessing Kt] 0.1B 04B O88 13B 26B 67B 13B 1758 Parameters in LM (Billions) CB (accuracy) 100 __ Fine-tune SOTA Accuracy BERT-Large uessing 40 âeâ Zero-Shot âeâ One-Shot â*â Few-Shot (K=32} 0.1B 04B O88 13B 2668 67B 138 1758 Parameters in LM (Billions) RTE Fine-tune SOTA â* Zero-Shot 9 âeâ One-Shot âe- Few-Shot (K=32} Accuracy Fi 0.1B 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions) MultiRC (accuracy) Fine-tune SOTA 6 E ¢) 40 we Zero-Shot âe One-Shot 3S âeâ Few-Shot (K"32) BERT-Large m Guessing 0.18 04B O88 13B 2668 67B 13B 1758 Parameters in LM (Billions) ReCoRD (F1a) Fine-tune SOTA 70 âe Zero-Shot âeâ One-Shot ea) âeâ Few-Shot (K=32)} 40 Random Guessing 0.1B 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions) CB (F1) Fine-tune SOTA 80 BERT-Large FA 40 âeâ Zero-Shot âeâ One-Shot â*â Few-Shot 0.18 04B O88 13B 26B 67B 13B Parameters in LM (Billions) Fine-tune SOTA âe Zero-Shot âeâ One-Shot âeâ Few-Shot Accuracy & 8 10 Ore et 0.18 04B O88 13B 26B 67B 13B Parameters in LM (Billions) MultiRG (F1a) 90 Fine-tune SOTA BERT-Large FA âeâ Zero-Shot âeâ One-Shot âeâ Few-Shot 0.18 04B O88 13B 26B 67B 13B Parameters in LM (Billions)
BoolQ Fine-tune SOTA âeâ Zero-Shot âe- One-Shot âeâ Few-Shot (K=32) 80 BERT-Large > S 70 j & 5 m Guessing 0.18 04B O88 13B 266 67B 138 1758 Parameters in LM (Billions)
CB (accuracy) 100 __ Fine-tune SOTA Accuracy BERT-Large uessing 40 âeâ Zero-Shot âeâ One-Shot â*â Few-Shot (K=32} 0.1B 04B O88 13B 2668 67B 138 1758 Parameters in LM (Billions)
CB (F1) Fine-tune SOTA 80 BERT-Large FA 40 âeâ Zero-Shot âeâ One-Shot â*â Few-Shot (K=32} 0.18 04B O88 13B 26B 67B 13B 1758 Parameters in LM (Billions)
COPA Fine-tune SOTA 90 80 a 79 -- BERT Large 6 âe- Zero-Shot âe One-Shot 5 Random Guessing âe Few-Shot (K#32) 0.18 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
RTE Fine-tune SOTA â* Zero-Shot 9 âeâ One-Shot âe- Few-Shot (K=32} 0.1B 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
Fine-tune SOTA âe Zero-Shot âeâ One-Shot âeâ Few-Shot (K=32} Accuracy & 8 10 Ore et 0.18 04B O88 13B 26B 67B 13B 1758 Parameters in LM (Billions)
WSC Fine-tune SOTA 99 80 âeâ Zero-Shot âeâ One-Shol bs 4] â*â Few-Shot (K=32} 0.1B 04B O8B 13B 26B 67B 13B 1758 Parameters in LM (Billions)
Accuracy MultiRC (accuracy) Fine-tune SOTA 6 E ¢) 40 we Zero-Shot âe One-Shot 3S âeâ Few-Shot (K"32) BERT-Large m Guessing 0.18 04B O88 13B 2668 67B 13B 1758 Parameters in LM (Billions)
MultiRG (F1a) 90 Fine-tune SOTA BERT-Large FA âeâ Zero-Shot âeâ One-Shot âeâ Few-Shot (K=32} 0.18 04B O88 13B 26B 67B 13B 1758 Parameters in LM (Billions)
ReCoRD (accuracy) Fine-tune SOTA 90 80 -Large 70 = â* Zero-Shot a âeâ One-Shot 3 & âeâ Few-Shot (K=32} * 40 Random Guessing Kt] 0.1B 04B O88 13B 26B 67B 13B 1758 Parameters in LM (Billions)
Fi ReCoRD (F1a) Fine-tune SOTA 70 âe Zero-Shot âeâ One-Shot ea) âeâ Few-Shot (K=32)} 40 Random Guessing 0.1B 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
Figure H.1: All results for all SuperGLUE tasks.
SAT Analogies âeâ Zero-Shot â*â One-Shot â*â Few-Shot (K®20) 20 Random Guessing 0.18 048 088 138 268 678 138 1758 Parameters in LM (Billions) Winogrande Fine-tuned SOTA 80 Fine-tuned RoBERTa-Large âeâ Zero-Shot âeâ One-Shot 70 ~~ Few-Shot (K=50} Hive-iuiieu CCN i-Laige Accuracy 0.18 048 088 138 268 67B 138 Parameters in LM (Billions) 1758 Fine-tuned SOTA = ââ Zero-Shot 3 âeâ One-Shot g âeâ Few-Shot (K=7) 60 50 Random Guessing 0.18 04B 088 13B 268 6.7B 138 Parameters in LM (Billions) 1758
SAT Analogies âeâ Zero-Shot â*â One-Shot â*â Few-Shot (K®20) 20 Random Guessing 0.18 048 088 138 268 678 138 1758 Parameters in LM (Billions)
Winogrande Fine-tuned SOTA 80 Fine-tuned RoBERTa-Large âeâ Zero-Shot âeâ One-Shot 70 ~~ Few-Shot (K=50} Hive-iuiieu CCN i-Laige Accuracy 0.18 048 088 138 268 67B 138 Parameters in LM (Billions) 1758
Fine-tuned SOTA = ââ Zero-Shot 3 âeâ One-Shot g âeâ Few-Shot (K=7) 60 50 Random Guessing 0.18 04B 088 13B 268 6.7B 138 Parameters in LM (Billions) 1758
Figure H.2: Results for SAT task.
Figure H.3: All results for all Winograd tasks.
64
Arithmetic (few-shot) Two Digit Addition Two Digit Multiplication 100 D â+â Two Digit Addition 100 â+â Zero-Shot â*â Zero-Shot â*â Two Digit Subtraction â*â One-Shot âeâ One-Shot 9 Three Digit Addition we Few-Shot (K=50) 25 âeâ Few-Shot (K=50) âeâ Three Digit Subtraction 8° â*â Four Digit Addition 20 o 7 Four Digit Subtraction ra S â*~ Five Digit Addition S 3 3 âeâ Five Dight Subtraction 3 a 15 ZS yg ~*~ Toe Digit Muttiplication Zz 4 2 Single Digit Three Ops 10 20 2 5 0 be 0 0.1B O4B O88 13B 26B 67B 138 1758 0.1B O4B O88 13B 26B 67B 138 1758 0.1B 04B OSB 13B 26B 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions) Two Digit Subtraction Three Digit Addition Three Digit Subtraction 100 â.â Zero-Shot 80 âeâ Zero-Shot âeâ Zero-Shot âe- One-Shot âe- One-Shot =e One-Shot â*â Few-Shot (K=50) 70 â+â Few-Shot (K=50) 80 âeâ Few-Shot (K=50) ~ be] > > > o g Fe D 20 10 0 018 0468 O88 13B 26B 67B 138 1758 0.1B 0468 O88 13B 26B 67B 138 1758 0.1B 04B O88 13B 266 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions) Four Digit Addition Four Digit Subtraction Five Digit Addition 25 â*â Zero-Shot â*â Zero-Shot â*â Zefo-Shot =e One-Shot 25 âeâ One-Shot âeâ One-Shot âeâ Few-Shot (K=50) âeâ Few-Shot (K=50) 8 âeâ Few-Shot (K=50) 2 2 7" 2 8 e Zz Zz z Er i 5 5 0 0 01B 0O4B O88 13B 2686 67B 13B 1758 01B 0O4B O88 13B 2686 67B 13B 1758 0.1B 04B O88 13B 2668 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions) Five Digit Subtraction Single Digit Three Ops 10 _.â Zero-Shot â*â Zero-Shot â*â One-Shot 20 â+â One-Shot we Few-Shot (K=50) we Few-Shot (K=50) 0.1B 04B O88 13B 268 67B 138 1758 0.1B 04B O88 13B 268 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions)
Arithmetic (few-shot) 100 â+â Two Digit Addition â*â Two Digit Subtraction 9 Three Digit Addition âeâ Three Digit Subtraction â*â Four Digit Addition o 7 Four Digit Subtraction S â*~ Five Digit Addition 3 âeâ Five Dight Subtraction ZS yg ~*~ Toe Digit Muttiplication Single Digit Three Ops 20 0 0.1B O4B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
Two Digit Addition 100 â+â Zero-Shot â*â One-Shot we Few-Shot (K=50) 8° ra S 3 Zz 4 2 be 0.1B O4B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
Two Digit Multiplication D â*â Zero-Shot âeâ One-Shot 25 âeâ Few-Shot (K=50) 20 3 a 15 2 10 5 0 0.1B 04B OSB 13B 26B 67B 138 1758 Parameters in LM (Billions)
Two Digit Subtraction 100 â.â Zero-Shot âe- One-Shot â*â Few-Shot (K=50) > o 018 0468 O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
Three Digit Addition 80 âeâ Zero-Shot âe- One-Shot 70 â+â Few-Shot (K=50) ~ be] > g D 20 10 0 0.1B 0468 O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
Three Digit Subtraction âeâ Zero-Shot =e One-Shot 80 âeâ Few-Shot (K=50) > Fe 0.1B 04B O88 13B 266 67B 138 1758 Parameters in LM (Billions)
Four Digit Addition 25 â*â Zero-Shot =e One-Shot âeâ Few-Shot (K=50) 2 7" Zz Er 5 0 01B 0O4B O88 13B 2686 67B 13B 1758 Parameters in LM (Billions)
Four Digit Subtraction â*â Zero-Shot 25 âeâ One-Shot âeâ Few-Shot (K=50) 2 2 8 Zz 5 0 01B 0O4B O88 13B 2686 67B 13B 1758 Parameters in LM (Billions)
Five Digit Addition â*â Zefo-Shot âeâ One-Shot 8 âeâ Few-Shot (K=50) e z i 0.1B 04B O88 13B 2668 67B 138 1758 Parameters in LM (Billions)
Five Digit Subtraction 10 _.â Zero-Shot â*â One-Shot we Few-Shot (K=50) 0.1B 04B O88 13B 268 67B 138 1758 Parameters in LM (Billions)
Single Digit Three Ops â*â Zero-Shot 20 â+â One-Shot we Few-Shot (K=50) 0.1B 04B O88 13B 268 67B 138 1758 Parameters in LM (Billions)
Figure H.4: All results for all Arithmetic tasks.
HellaSwag Lambada Storycloze Human Human Fine-tuned SOTA. 90 90 Fine-tuned SOTA 8 85 70 70 __ Zero-Shot SOTA = âeâ Zero-Shot z z 80 a â*â One-Shot 5 © : 2 âeâ Few-Shot (K=20) 2 2 75 » SO 70 40 40 ~*~ Zero-Shot =e Zero-Shot 20 30 âsâ One-Shot 65 âsâ One-Shot Random Guessing 20 â*â Few-Shot (K#15) â*â Few-Shot (K#70) 01B 04B O88 13B 26B 67B 13B 175B 01B 04B O88 13B 26B 67B 13B 175B 0.1B 04B O88 13B 26B 67B 138 175B Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions)
HellaSwag Human Fine-tuned SOTA 70 = âeâ Zero-Shot a â*â One-Shot 2 âeâ Few-Shot (K=20) SO 40 20 Random Guessing 01B 04B O88 13B 26B 67B 13B 175B Parameters in LM (Billions)
Lambada Human 90 8 70 __ Zero-Shot SOTA z 5 © 2 » 40 ~*~ Zero-Shot 30 âsâ One-Shot 20 â*â Few-Shot (K#15) 01B 04B O88 13B 26B 67B 13B 175B Parameters in LM (Billions)
Storycloze Fine-tuned SOTA. 90 85 z 80 : 2 75 70 =e Zero-Shot 65 âsâ One-Shot â*â Few-Shot (K#70) 0.1B 04B O88 13B 26B 67B 138 175B Parameters in LM (Billions)
Figure H.5: All results for all Cloze and Completion tasks.
65
60 âeâ Zero-Shot âeâ Zero-Shot 30 âeâ One-Shot 40 âeâ One-Shot 5p -- Random Guessing â*â Few-Shot (K=50) â*â Few-Shot (K=100) 0.18 O48 O88 13B 268 67B 138 1758 0.1B 04B O8B 13B 26B 67B 13B 1768 0.1B 048 O88 13B 2.68 67B 13B 1768 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions) PhysicalQA Human âeâ Zefo-Shot âe- One-Shot âeâ Few-Shot (K=50) Fine-tuned SOTA 80 70 Accuracy 8 ARC Challenge Fine-tuned SOTA OpenBookQA Fine-tuned SOTA 70 Accuracy 8
60 5p -- Random Guessing 0.18 O48 O88 13B 268 67B 138 1758 Parameters in LM (Billions) PhysicalQA Human âeâ Zefo-Shot âe- One-Shot âeâ Few-Shot (K=50) Fine-tuned SOTA
âeâ Zero-Shot 30 âeâ One-Shot â*â Few-Shot (K=50) 0.1B 04B O8B 13B 26B 67B 13B 1768 Parameters in LM (Billions) 80 70 Accuracy 8 ARC Challenge Fine-tuned SOTA
âeâ Zero-Shot 40 âeâ One-Shot â*â Few-Shot (K=100) 0.1B 048 O88 13B 2.68 67B 13B 1768 Parameters in LM (Billions) OpenBookQA Fine-tuned SOTA 70 Accuracy 8
Figure H.6: All results for all Common Sense Reasoning tasks.
NaturalQuestions (T5 splits) TriviaQA WebQS Fine-tuned SOTA Fine-tuned SOTA 40 70 __ Fine-tuned SOTA 35 40 D K¢) 5 2 âeâ Zero-Shot | g 5 29 || 7° One-Shot g 40 3 âeâ Few-Shot (K=64) 3 3 » » 15 10 20 eel âeâ Zero-Shot 10 =e Zero-Shot 5 10 âeâ One-Shot âeâ One-Shot ° â* Few-Shot (K=64) âe Few-Shot (K=64) 0.1B 0O4B O88 13B 266 67B 12.88 174.68 0.1B 04B O8B 13B 266 67B 13B 1758 0.1B 04B O8B 13B 266 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions)
NaturalQuestions (T5 splits) Fine-tuned SOTA 40 35 D 5 2 âeâ Zero-Shot 5 29 || 7° One-Shot 3 âeâ Few-Shot (K=64) 15 10 5 ° 0.1B 0O4B O88 13B 266 67B 12.88 174.68 Parameters in LM (Billions)
TriviaQA 70 __ Fine-tuned SOTA | g 40 3 » 20 âeâ Zero-Shot 10 âeâ One-Shot â* Few-Shot (K=64) 0.1B 04B O8B 13B 266 67B 13B 1758 Parameters in LM (Billions)
WebQS Fine-tuned SOTA 40 K¢) g 3 » eel 10 =e Zero-Shot âeâ One-Shot âe Few-Shot (K=64) 0.1B 04B O8B 13B 266 67B 138 1758 Parameters in LM (Billions)
Figure H.7: All results for all QA tasks.
Quac RACE-h RACE-m Fine-tuned SOTA x Fine-tuned SOTA Fine-tuned SOTA 70 20 BO 60 8 70 s ry =e Zero-Shot ry 5 70 8 8 âsâ One-Shot 5 3 3 0 âeâ Few-Shot (Ke10) 3 40 6 a] D âe Zero-Shot re) âeâ Zero-Shot âe Me-Shot 40 âe One-Shot » âe Few-Shot (K=5) âe Few-Shot (K=10) 40 0.1B 048 O88 13B 268 67B 138 1758 0.1B 04B O8B 13B 26B 67B 13B 1768 0.1B 048 O88 13B 268 67B 13B 1768 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions) SquadV2 CoQA Drop Fine-tuned SOTA fine-tuned SOTA 9 .. Fine-tuned SOTA go ---Human 90 onan 8 80 8 70 70 70 60 = â*â Zero-Shot = 6 3 60 x # âeâ One-Shot 2 âeâ Few-Shot (K=20) 40 xâ 5 D 40 âe- Zero-Shot 40 ~~ Zero-Shot » âe One-Shot â- One-Shot x» â* Few-Shot (K#®16) 30 â*â Few-Shot (K®5) 10 0.1B 048 O88 13B 2668 67B 138 175B 0.1B 04B O88 13B 26B 67B 13B 175B 0.1B 046 O88 13B 26B 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions)
Quac Fine-tuned SOTA 70 60 s 5 8 3 40 D âe Zero-Shot âe Me-Shot » âe Few-Shot (K=5) 0.1B 048 O88 13B 268 67B 138 1758 Parameters in LM (Billions)
RACE-h x Fine-tuned SOTA BO 70 ry =e Zero-Shot 8 âsâ One-Shot 3 0 âeâ Few-Shot (Ke10) a] 40 0.1B 04B O8B 13B 26B 67B 13B 1768 Parameters in LM (Billions)
RACE-m Fine-tuned SOTA 20 8 ry 70 5 3 6 re) âeâ Zero-Shot âe One-Shot âe Few-Shot (K=10) 40 0.1B 048 O88 13B 268 67B 13B 1768 Parameters in LM (Billions)
SquadV2 Fine-tuned SOTA go ---Human 80 70 = 6 xâ 40 âe- Zero-Shot âe One-Shot x» â* Few-Shot (K#®16) 0.1B 048 O88 13B 2668 67B 138 175B Parameters in LM (Billions)
CoQA fine-tuned SOTA 90 onan 8 70 = 3 60 2 5 40 ~~ Zero-Shot â- One-Shot 30 â*â Few-Shot (K®5) 0.1B 04B O88 13B 26B 67B 13B 175B Parameters in LM (Billions)
Drop 9 .. Fine-tuned SOTA 8 70 60 â*â Zero-Shot x # âeâ One-Shot âeâ Few-Shot (K=20) 40 D » 10 0.1B 046 O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
Figure H.8: All results for all Reading Comprehension tasks.
ANLI Round1 ANLI Round2 ANLI Round3 Fine-tuned RGBERTa-Large Fine-tuned SOTA a Fine-tuned SOTA 0.0 = P a 70 rine tibed BER Teme ae 47.5 46 Fine-luned ROBERTa-Large 45.0 14 |= Finetined SERt-t OG Fine-tuned BERT-Large = âeâ Zero-Shot = 425 âeâ Zero-Shot Fy 42 _.â Zero-Shot 2 ââ Cne-Shot Fi ââ Cne-Shot 3 ââ One-Shot 2 0 âe~ Few-Shot (K=50) 2 40.0 ~e-~ Few-Shot (K=50) 2 ~~ Few-Shot (K=50) 5 38 40 35.0 3% F a 4 ee 32.6 <i 30 32 O1B O4B 0868 1368 2.68 67B 138 1758 O1B O4B 086 1368 266 67B 138 1758 0.1B 0.48 086 13B 2.68 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions)
ANLI Round1 Fine-tuned RGBERTa-Large 70 Fine-tuned BERT-Large = âeâ Zero-Shot 2 ââ Cne-Shot 2 0 âe~ Few-Shot (K=50) 40 F a ee 30 O1B O4B 0868 1368 2.68 67B 138 1758 Parameters in LM (Billions)
ANLI Round2 Fine-tuned SOTA 0.0 = P a rine tibed BER Teme ae 47.5 45.0 = 425 âeâ Zero-Shot Fi ââ Cne-Shot 2 40.0 ~e-~ Few-Shot (K=50) 5 35.0 32.6 O1B O4B 086 1368 266 67B 138 1758 Parameters in LM (Billions)
ANLI Round3 a Fine-tuned SOTA 46 Fine-luned ROBERTa-Large 14 |= Finetined SERt-t OG Fy 42 _.â Zero-Shot 3 ââ One-Shot 2 ~~ Few-Shot (K=50) 38 3% 4 <i 32 0.1B 0.48 086 13B 2.68 67B 138 1758 Parameters in LM (Billions)
Figure H.9: All results for all ANLI rounds.
66
cycle letters Wordscramble (few-shot) mid word 1 anagrams 70 âeâ Zeto-Shot âeâ cycle letters âeâ Zefo-Shot 35. âe-â One-Shot eo 7 mid word 1 anagrams 14. âeâ One-Shot âeâ Few-Shot (K=100) â*â mi word 2 anagrams âeâ Few-Shot (K=100) wD â*â random insertion â* reversed words 25 > > 9 b=) 2 2 2 7 |] 245 3 10 5 0 oN âââ⢠0.1B O4B O8B 13B) 26B 67B 13B 1758 0.1B 04B O8B 13B 26B 67B 13B 1758 0.1B 04B OSB 13B 26B 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions) mid word 2 anagrams = random insertion re reversed words 40 â.â Zero-Shot â* Zerto-Shot ~ â*â Zero-Shot 35 â*â One-Shot 6 ~*> One-Shot âeâ One-Shot ~âeâ Few-Shot (K=100) ~â@â Few-Shot (K=100) 4 ee -Few-Shot (K=100) xâ sD a 7° ; i 15 2 10 5 10 0 0 0.18 04B O88 13B 26B 67B 138 1758 0.1B 04B O88 13B 26B 67B 138 1758 0.18 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions)
cycle letters âeâ Zeto-Shot 35. âe-â One-Shot âeâ Few-Shot (K=100) wD 25 > 9 2 2 7 245 10 5 0 oN âââ⢠0.1B O4B O8B 13B) 26B 67B 13B 1758 Parameters in LM (Billions)
Wordscramble (few-shot) 70 âeâ cycle letters eo 7 mid word 1 anagrams â*â mi word 2 anagrams â*â random insertion â* reversed words > b=) 2 |] 3 0.1B 04B O8B 13B 26B 67B 13B 1758 Parameters in LM (Billions)
mid word 1 anagrams âeâ Zefo-Shot 14. âeâ One-Shot âeâ Few-Shot (K=100) 0.1B 04B OSB 13B 26B 67B 138 1758 Parameters in LM (Billions)
mid word 2 anagrams 40 â.â Zero-Shot 35 â*â One-Shot ~âeâ Few-Shot (K=100) xâ a 15 10 5 0 0.18 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
= random insertion â* Zerto-Shot 6 ~*> One-Shot ~â@â Few-Shot (K=100) sD 7° i 2 10 0 0.1B 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
re reversed words ~ â*â Zero-Shot âeâ One-Shot 4 ee -Few-Shot (K=100) ; 0.18 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
Figure H.10: All results for all Scramble tasks.
German -> English (SacreBLEU) ââ Zero-Shot 40 ~~ One-Shot âeâ Few-Shot (K=64) D 3 3 2â 10 0 0.1B 04B O88 13B 26B 67B 13B 1758 Parameters in LM (Billions)
English -> German (SacreBLEU) 39 â*- Zero-Shot âeâ One-Shot âeâ Few-Shot (K=64) : a 15 2 10 5 0 0.1B 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
English -> French (SacreBLEU) 40 â+â Zero-Shot âe One-Shot 35 Mg Few-Shot (K=64) » eal a 2 2 15 10 5 0 0.18 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
German -> English (SacreBLEU) English -> German (SacreBLEU) English -> French (SacreBLEU) ââ Zero-Shot 39 â*- Zero-Shot 40 â+â Zero-Shot 40 ~~ One-Shot âeâ One-Shot âe One-Shot âeâ Few-Shot (K=64) âeâ Few-Shot (K=64) 35 Mg Few-Shot (K=64) » D 3 : eal 3 a 15 a 2 2â 2 2 15 10 10 10 5 5 0 0 0 0.1B 04B O88 13B 26B 67B 13B 1758 0.1B 04B O88 13B 26B 67B 138 1758 0.18 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions) French -> English (SacreBLEU) English -> Romanian (SacreBLEU) Romanian -> English (SacreBLEU) 49 ~*~ Zefo-Shot 25 âe- Zeto-Shot 40 ~*~ Zefo-Shot =e One-Shot =e One-Shot =e One-Shot 35 â*â Few-Shot (K=64) âeâ Few-Shot (K=64) 35 â*â Few-Shot (K=64) 2 D Ke] Z 2 2 15 5 2% g g g a 3 3â 15 0 15 10 10 5 5 5 0 0 0 0.1B O4B OSB 13B 266 67B 13B 1758 0.1B 04B OSB 13B 266 67B 13B 1758 0.1B 048 O88 138 2.68 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions) German -> English (Multi-BLEU) English -> German (Multi-BLEU) English -> French (Multi-BLEU) 40 â*â Zero-Shot â+â Zero-Shot â*â Zerto-Shot âeâ One-Shot âeâ One-Shot 3 =e One-Shot 3 Few-Shot (K=64) 25 ee Few-Shot (K=64) we Few-Shot (K=64) 30 25 2 ;* 3 3â A 20 3 1 ee 2 2 2 15 10 10 10 5 5 b) 0 0 0 0.1B 04B O88 13B 26B 67B 13B 1758 0.18 04B O88 13B 26B 67B 13B 1758 0.18 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions) French -> English (Multi-BLEU) English -> Romanian (Multi-BLEU) Romanian -> English (Multi-BLEU) ba âeâ Zeto-Shot » âeâ Zefo-Shot â0 //- Zero-Shot 35 7°" One-Shot âeâ One-Shot 35 =e One-Shot âeâ Few-Shot (K=64) âeâ Few-Shot (K=64) âeâ Few-Shot (K=64) D D 15 25 2 a 2 a 5 2 3 te 3 15 15 10 5 10 5 5 0 0 0 0.1B 04B O88 13B 268 67B 13B 1758 0.1B 04B O88 13B 268 67B 138 1758 0.1B 048 O88 138 268 67B 138 17658 Parameters in LM (Billions) Parameters in LM (Billions) Parameters in LM (Billions)
French -> English (SacreBLEU) 49 ~*~ Zefo-Shot =e One-Shot 35 â*â Few-Shot (K=64) D Z 2 g a 15 10 5 0 0.1B O4B OSB 13B 266 67B 13B 1758 Parameters in LM (Billions)
English -> Romanian (SacreBLEU) 25 âe- Zeto-Shot =e One-Shot âeâ Few-Shot (K=64) 2 2 15 g 3 0 5 0 0.1B 04B OSB 13B 266 67B 13B 1758 Parameters in LM (Billions)
Romanian -> English (SacreBLEU) 40 ~*~ Zefo-Shot =e One-Shot 35 â*â Few-Shot (K=64) Ke] 5 2% g 3â 15 10 5 0 0.1B 048 O88 138 2.68 67B 138 1758 Parameters in LM (Billions)
German -> English (Multi-BLEU) 40 â*â Zero-Shot âeâ One-Shot 3 Few-Shot (K=64) 30 ;* A 20 2 15 10 5 0 0.1B 04B O88 13B 26B 67B 13B 1758 Parameters in LM (Billions)
English -> German (Multi-BLEU) â+â Zero-Shot âeâ One-Shot 25 ee Few-Shot (K=64) 2 3 3 1 2 10 5 0 0.18 04B O88 13B 26B 67B 13B 1758 Parameters in LM (Billions)
English -> French (Multi-BLEU) â*â Zerto-Shot 3 =e One-Shot we Few-Shot (K=64) 25 3â ee 2 10 b) 0 0.18 04B O88 13B 26B 67B 138 1758 Parameters in LM (Billions)
French -> English (Multi-BLEU) ba âeâ Zeto-Shot 35 7°" One-Shot âeâ Few-Shot (K=64) D 25 a 2 3 15 10 5 0 0.1B 04B O88 13B 268 67B 13B 1758 Parameters in LM (Billions)
English -> Romanian (Multi-BLEU) » âeâ Zefo-Shot âeâ One-Shot âeâ Few-Shot (K=64) 15 a te 5 0 0.1B 04B O88 13B 268 67B 138 1758 Parameters in LM (Billions)
Romanian -> English (Multi-BLEU) â0 //- Zero-Shot 35 =e One-Shot âeâ Few-Shot (K=64) D 2 5 2 3 15 10 5 0 0.1B 048 O88 138 268 67B 138 17658 Parameters in LM (Billions)
Figure H.11: All results for all Translation tasks.
67
# References
[ADG+16] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando De Freitas. Learning to learn by gradient descent by gradient descent. In Advances in neural information processing systems, pages 3981â3989, 2016.
[AI19] WeChat AI. Tr-mt (ensemble), December 2019.
[AJF19] Roee Aharoni, Melvin Johnson, and Orhan Firat. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019.
[BBDIW20] Su Lin Blodgett, Solon Barocas, Hal Daum´e III, and Hanna Wallach. Language (technology) is power: A critical survey of âbiasâ in nlp. arXiv preprint arXiv:2005.14050, 2020.
[BCFL13] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1533â1544, 2013.
[BDD+09] Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. The ï¬fth PASCAL recognizing textual entailment challenge. 2009.
[BES10] Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. Sentiwordnet 3.0: an enhanced lexical resource for sentiment analysis and opinion mining. In Lrec, volume 10, pages 2200â2204, 2010.
[BHDD+06] Roy Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The second PASCAL recognising textual entailment challenge. 2006.
[BHT+20] Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, et al. Experience grounds language. arXiv preprint arXiv:2004.10151, 2020.
[BLC13] Yoshua Bengio, Nicholas L´eonard, and Aaron C. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. Arxiv, 2013.
[BZB+19] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. arXiv preprint arXiv:1911.11641, 2019.
[Car97] Rich Caruana. Multitask learning. Machine learning, 28(1), 1997.
[CB78] Susan Carey and Elsa Bartlett. Acquiring a single new word. Proceedings of the Stanford Child Language Conference, 1978.
[CCE+18] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge. ArXiv, abs/1803.05457, 2018.
[CGRS19] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers, 2019.
[CHI+18] Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. Quac : Question answering in context. Arxiv, 2018.
[CLC+19] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difï¬culty of natural yes/no questions. arXiv preprint arXiv:1905.10044, 2019.
[CLY+19] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Learning universal image-text representations. arXiv preprint arXiv:1909.11740, 2019.
[Cra17] Kate Crawford. The trouble with bias. NIPS 2017 Keynote, 2017.
[DCLT18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
68
[DGM06] Ido Dagan, Oren Glickman, and Bernardo Magnini. The PASCAL recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classiï¬cation, and recognising textual entailment, pages 177â190. Springer, 2006.
[DGV+18] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal transformers. Arxiv, 2018.
[DHKH14] Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heaï¬eld. Edinburghâs phrase-based machine translation systems for wmt-14. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 97â104, 2014.
[DL15] Andrew M. Dai and Quoc V. Le. Semi-supervised sequence learning. In Advances in neural information processing systems, 2015.
[DMST19] Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. The CommitmentBank: Investigat- ing projection in naturally occurring discourse. 2019. To appear in proceedings of Sinn und Bedeutung 23. Data can be found at https://github.com/mcdm/CommitmentBank/.
[DSC+16] Yan Duan, John Schulman, Xi Chen, Peter L. Bartlett, Ilya Sutskever, and Pieter Abbeel. Rl2: Fast reinforcement learning via slow reinforcement learning. ArXiv, abs/1611.02779, 2016.
[DWD+19] Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. Drop: A reading comprehension benchmark requiring discrete reasoning over paragraphs. arXiv preprint arXiv:1903.00161, 2019.
[DYY+19] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a ï¬xed-length context. Arxiv, 2019.
[EOAG18] Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381, 2018.
[FAL17] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. ArXiv, abs/1703.03400, 2017.
[Fyo00] Yaroslav Fyodorov. A natural logic inference system, 2000.
[GG19] Hila Gonen and Yoav Goldberg. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862, 2019.
[GLT+20] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. Realm: Retrieval- augmented language model pre-training. arXiv preprint arXiv:2002.08909, 2020.
[GMDD07] Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1â9. Association for Computational Linguistics, 2007.
[Gra16] Alex Graves. Adaptive computation time for recurrent neural networks. Arxiv, 2016.
[GSL+18] Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. Annotation artifacts in natural language inference data. arXiv preprint arXiv:1803.02324, 2018.
[GSR19] Sebastian Gehrmann, Hendrik Strobelt, and Alexander M. Rush. Gltr: Statistical detection and visualiza- tion of generated text. arXiv preprint arXiv: 1906.04043, 2019.
[GWC+18] Jiatao Gu, Yong Wang, Yun Chen, Kyunghyun Cho, and Victor OK Li. Meta-learning for low-resource neural machine translation. arXiv preprint arXiv:1808.08437, 2018.
[HB20] Daniel Hernandez and Tom Brown. Ai and efï¬ciency, May 2020.
[HBFC19] Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. CoRR, abs/1904.09751, 2019.
[HLW+20] Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. Pretrained transformers improve out of distribution robustness. arXiv preprint arXiv:2004.06100, 2020.
69
[HNA+17] Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is predictable, empirically. arXiv preprint arXiv:1712.00409, 2017.
[HR18] Jeremy Howard and Sebastian Ruder. Universal language model ï¬ne-tuning for text classiï¬cation. arXiv preprint arXiv:1801.06146, 2018.
[HVD15] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
[HYC01] Sepp Hochreiter, A Steven Younger, and Peter R Conwell. Learning to Learn Using Gradient Descent. In International Conference on Artiï¬cial Neural Networks, pages 87â94. Springer, 2001.
[HZJ+19] Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, and Pushmeet Kohli. Reducing sentiment bias in language models via counterfactual evaluation. arXiv preprint arXiv:1911.03064, 2019.
[IBGC+14] Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum´e III. A neural network for factoid question answering over paragraphs. In Empirical Methods in Natural Language Processing, 2014.
[IDCBE19] Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, and Douglas Eck. Automatic detection of generated text is easiest when humans are fooled. arXiv preprint arXiv:1911.00650, 2019.
[JCWZ17] Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. arXiv preprint arXiv:1705.03551, 2017.
[JN20] Zheng Junyuan and Gamma Lab NYC. Numeric transformer - albert, March 2020.
[JVS+16] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
[JYS+19] Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. TinyBERT: Distilling BERT for natural language understanding. arXiv preprint arXiv:1909.10351, 2019.
[JZC+19] Ying Ju, Fubang Zhao, Shijie Chen, Bowen Zheng, Xuefeng Yang, and Yunfeng Liu. Technical report on conversational question answering. arXiv preprint arXiv:1909.10772, 2019.
[KCR+18] Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL), 2018.
[KKS+20] Daniel Khashabi, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. Uniï¬edqa: Crossing format boundaries with a single qa system. arXiv preprint arXiv:2005.00700, 2020.
[KMB20] Sarah E. Kreps, Miles McCain, and Miles Brundage. All the news thatâs ï¬t to fabricate: Ai-generated text as a tool of media misinformation, 2020.
[KMH+20] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020.
[KPR+19] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redï¬eld, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural ques- tions: a benchmark for question answering research. Transactions of the Association of Computational Linguistics, 2019.
[KR16] Yoon Kim and Alexander M. Rush. Sequence-level knowledge distillation. Arxiv, 2016.
[LB02] Edward Loper and Steven Bird. Nltk: The natural language toolkit, 2002.
[LC19] Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
70
[LCG+19] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Sori- cut. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.
[LCH+20] Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994, 2020.
[LDL19] Zhongyang Li, Xiao Ding, and Ting Liu. Story ending prediction by transferable bert. arXiv preprint arXiv:1905.07504, 2019.
[LDM12] Hector Levesque, Ernest Davis, and Leora Morgenstern. The Winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012.
[LGG+20] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. Multilingual denoising pre-training for neural machine translation. arXiv preprint arXiv:2001.08210, 2020.
[LGH+15] Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. Representation learning using multi-task deep neural networks for semantic classiï¬cation and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2015.
[LH17] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017.
[LHCG19a] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Improving multi-task deep neural networks via knowledge distillation for natural language understanding. arXiv preprint arXiv:1904.09482, 2019.
[LHCG19b] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504, 2019.
[Lin20] Tal Linzen. How can we accelerate progress towards human-like linguistic generalization? arXiv preprint arXiv:2005.00955, 2020.
[LLG+19] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
[LM17] Ke Li and Jitendra Malik. Learning to optimize neural nets. arXiv preprint arXiv:1703.00441, 2017.
[LOG+19] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[LPP+20] Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K¨uttler, Mike Lewis, Wen-tau Yih, Tim Rockt¨aschel, Sebastian Riedel, and Kiela Douwe. Retrieval-augmented generation for knowledge-intensive nlp tasks. arXiv preprint arXiv:2005.11401, 2020.
[LSP+18] Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating Wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198, 2018.
[LWS+20] Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joseph E. Gonzalez. Train large, then compress: Rethinking model size for efï¬cient training and inference of transformers, 2020.
[LXL+17] Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683, 2017.
[LYN+20] Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, and Jimmy Lin. Tttttackling winogrande schemas. arXiv preprint arXiv:2003.08380, 2020.
[Mac92] David. MacKay. Information-based objective functions for active data selection. Neural Computation, 1992.
71
[MBXS17] Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. Learned in translation: Con- textualized word vectors. In Advances in Neural Information Processing Systems, pages 6294â6305, 2017.
[MCCD13] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efï¬cient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[MCH+16] Nasrin Mostafazadeh, Nathanael Chambers, Xiaodong He, Devi Parikh, Dhruv Batra, Lucy Vanderwende, Pushmeet Kohli, and James Allen. A corpus and evaluation framework for deeper understanding of commonsense stories. arXiv preprint arXiv:1604.01696, 2016.
[MCKS18] Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. ArXiv, abs/1809.02789, 2018.
[MKAT18] Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training, 2018.
[MKM+94] Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. The penn treebank: annotating predicate argument structure. In Proceedings of the workshop on Human Language Technology, pages 114â119. Association for Computational Linguistics, 1994.
[MKXS18] Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730, 2018.
[MPL19] R Thomas McCoy, Ellie Pavlick, and Tal Linzen. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. arXiv preprint arXiv:1902.01007, 2019.
[MWZ+18] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting, 2018.
[NBR20] Moin Nadeem, Anna Bethke, and Siva Reddy. Stereoset: Measuring stereotypical bias in pretrained language models. arXiv preprint arXiv:2004.09456, 2020.
[NK19] Timothy Niven and Hung-Yu Kao. Probing neural network comprehension of natural language arguments. arXiv preprint arXiv:1907.07355, 2019.
[Nor09] Peter Norvig. Natural language corpus data, 2009.
[NvNvdG19] Malvina Nissim, Rik van Noord, and Rob van der Goot. Fair is better than sensational: Man is to doctor as woman is to doctor. arXiv preprint arXiv:1905.09866, 2019.
[NWD+19] Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. Adversarial nli: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599, 2019.
[oR16] University of Regensburg. Fascha, 2016.
[PCC18] Mohammad Taher Pilehvar and Jose Camacho-Collados. WIC: 10,000 example pairs for evaluating context-sensitive representations. arXiv preprint arXiv:1808.09121, 2018.
[PFB18] Jason Phang, Thibault F´evry, and Samuel R. Bowman. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088, 2018.
[PHR+18] Adam Poliak, Aparajita Haldar, Rachel Rudinger, J. Edward Hu, Ellie Pavlick, Aaron Steven White, and Benjamin Van Durme. Collecting diverse natural language inference problems for sentence representation evaluation. In Proceedings of EMNLP, 2018.
[PKL+16] Denis Paperno, Germ´an Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fern´andez. The lambada dataset: Word prediction requiring a broad discourse context. arXiv preprint arXiv:1606.06031, 2016.
[PNZtY18] Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, and Wen tau Yih. Dissecting contextual word embeddings: Architecture and representation, 2018.
[Pos18] Matt Post. A call for clarity in reporting BLEU scores. arXiv preprint arXiv:1804.08771, 2018.
72
[PSM14] Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word In Proceedings of the 2014 conference on empirical methods in natural language representation. processing (EMNLP), 2014.
[QIA20] QIANXIN. Sa-net on albert (ensemble), April 2020.
[QMZH19] Yusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun. Reducing gender bias in word-level language models with a gender-equalizing loss function. arXiv preprint arXiv:1905.12801, 2019.
[RBG11] Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series, 2011.
[RCM19] Siva Reddy, Danqi Chen, and Christopher D Manning. Coqa: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249â266, 2019.
[RCP+17] Scott Reed, Yutian Chen, Thomas Paine, A¨aron van den Oord, SM Eslami, Danilo Rezende, Oriol Vinyals, and Nando de Freitas. Few-shot autoregressive density estimation: Towards learning to learn distributions. arXiv preprint arXiv:1710.10304, 2017.
[RJL18] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you donât know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
[RL16] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. ICLR 2017 (oral), 2016.
[RLL+19] Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, and Zhiyuan Liu. NumNet: Machine reading comprehension with numerical reasoning. In Proceedings of EMNLP, 2019.
[RNLVD18] Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. Gender bias in coreference resolution. arXiv preprint arXiv:1804.09301, 2018.
[RNSS18] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training, 2018.
[Ros12] R.S. Ross. Guide for conducting risk assessments. NIST Special Publication, 2012.
[RRBS19] Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive prediction of the generalization error across scales, 2019.
[RRS20] Adam Roberts, Colin Raffel, and Noam Shazeer. How much knowledge can you pack into the parameters of a language model? arXiv preprint arXiv:2002.08910, 2020.
[RSR+19] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a uniï¬ed text-to-text transformer, 2019.
[RWC+19] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners, 2019.
[SBBC19] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale, 2019.
[SBC+19] Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, Gretchen Krueger, Jong Wook Kim, Sarah Kreps, Miles McCain, Alex Newhouse, Jason Blazakis, Kris McGufï¬e, and Jasmine Wang. Release strategies and the social impacts of language models, 2019.
[SCNP19] Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. The woman worked as a babysitter: On biases in language generation. arXiv preprint arXiv:1909.01326, 2019.
[SDCW19] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
[SDSE19] Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. Green AI. CoRR, abs/1907.10597, 2019.
[SHB15] Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709, 2015.
73
[SMM+17] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017.
[SPP+19] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro.
Megatron-lm: Training multi-billion parameter language models using model parallelism, 2019.
[SS20] Timo Schick and Hinrich Sch¨utze. Exploiting cloze questions for few-shot text classiï¬cation and natural language inference. arXiv preprint arXiv:2001.07676, 2020.
[STQ+19] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. MASS: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450, 2019.
[TFR+17] Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pages 23â30. IEEE, 2017.
[TL05] Peter D. Turney and Michael L. Littman. Corpus-based learning of analogies and semantic relations. CoRR, abs/cs/0508103, 2005.
[TL18] Trieu H. Trinh and Quoc V. Le. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847, 2018.
[TLBS03] Peter D. Turney, Michael L. Littman, Jeffrey Bigham, and Victor Shnayder. Combining independent modules to solve multiple-choice synonym and analogy problems. CoRR, cs.CL/0309035, 2003.
[Tur20] Project Turing. Microsoft research blog, Feb 2020.
[VBL+16] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching Networks for One Shot Learning. In Advances in neural information processing systems, pages 3630â3638, 2016.
[VSP+17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, 2017.
[WPN+19] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understand- ing systems. In Advances in Neural Information Processing Systems, pages 3261â3275, 2019.
[WXH+18] Yiren Wang, Yingce Xia, Tianyu He, Fei Tian, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. Multi-agent dual learning. ICLR 2019, 2018.
[XDH+19] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V. Le. Unsupervised data augmentation for consistency training, 2019.
[YdC+19] Dani Yogatama, Cyprien de Masson dâAutume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Lingpeng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373, 2019.
[YDY+19] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.
[ZHB+19] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really ï¬nish your sentence? arXiv preprint arXiv:1905.07830, 2019.
[ZHR+19] Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. Defending against neural fake news. arXiv preprint arXiv:1905.12616, 2019.
[ZLL+18] Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. arXiv preprint arXiv:1810.12885, 2018.
[ZSW+19a] Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences, 2019.
74
[ZSW+19b] Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Chris- tiano, and Geoffrey Irving. Fine-tuning language models from human preferences. ArXiv, abs/1909.08593, 2019.
75 | {
"id": "2004.09456"
} |
2005.13170 | Chat as Expected: Learning to Manipulate Black-box Neural Dialogue Models | Recently, neural network based dialogue systems have become ubiquitous in our
increasingly digitalized society. However, due to their inherent opaqueness,
some recently raised concerns about using neural models are starting to be
taken seriously. In fact, intentional or unintentional behaviors could lead to
a dialogue system to generate inappropriate responses. Thus, in this paper, we
investigate whether we can learn to craft input sentences that result in a
black-box neural dialogue model being manipulated into having its outputs
contain target words or match target sentences. We propose a reinforcement
learning based model that can generate such desired inputs automatically.
Extensive experiments on a popular well-trained state-of-the-art neural
dialogue model show that our method can successfully seek out desired inputs
that lead to the target outputs in a considerable portion of cases.
Consequently, our work reveals the potential of neural dialogue models to be
manipulated, which inspires and opens the door towards developing strategies to
defend them. | http://arxiv.org/pdf/2005.13170 | Haochen Liu, Zhiwei Wang, Tyler Derr, Jiliang Tang | cs.CL, cs.AI | 10 pages | null | cs.CL | 20200527 | 20200527 | 0 2 0 2
y a M 7 2 ] L C . s c [
1 v 0 7 1 3 1 . 5 0 0 2 : v i X r a
# Chat as Expected: Learning to Manipulate Black-box Neural Dialogue Models
Haochen Liu Michigan State University [email protected]
Zhiwei Wang Michigan State University [email protected]
Tyler Derr Michigan State University [email protected]
Jiliang Tang Michigan State University [email protected]
# Abstract
Recently, neural network based dialogue sys- tems have become ubiquitous in our increas- ingly digitalized society. However, due to their inherent opaqueness, some recently raised con- cerns about using neural models are starting to be taken seriously. In fact, intentional or un- intentional behaviors could lead to a dialogue system to generate inappropriate responses. Thus, in this paper, we investigate whether we can learn to craft input sentences that re- sult in a black-box neural dialogue model be- ing manipulated into having its outputs con- tain target words or match target sentences. We propose a reinforcement learning based model that can generate such desired inputs automatically. Extensive experiments on a popular well-trained state-of-the-art neural di- alogue model show that our method can suc- cessfully seek out desired inputs that lead to the target outputs in a considerable portion of cases. Consequently, our work reveals the po- tential of neural dialogue models to be manip- ulated, which inspires and opens the door to- wards developing strategies to defend them.
# Introduction
In recent years, we have seen an astonishingly fast adoption of dialogue systems being utilized across many domains and we are interacting with them more and more in our everyday lives. From early such as chatbots ELIZA (Weizenbaum, 1966) and ALICE (Wallace, 2001) to later ones like Siri and XiaoIce (Shum et al., 2018), the techniques have evolved from hand-crafted rule-based meth- ods (Weizenbaum, 1966; Goddeau et al., 1996), retrieval-based methods (Lu and Li, 2013; Hu et al., 2014) to learning-based methods (Shang et al., 2015; Vinyals and Le, 2015; Serban et al., 2016). Recently, because of the advent of a series of deep learning models and the appearance of large-scale real dialogue corpora, end-to-end neural generative
dialogue models emerge (Chen et al., 2017). Due to the simple implementation and the strong gen- eralization ability of neural dialogue models, they are one of the most popular techniques to build practical chatbot services (Zhou et al., 2018).
However, with the wide application of neural dialogue models, the ethical challenges they bring are attracting more and more attention (Hender- son et al., 2017; Liu et al., 2019a; Dinan et al., 2019). More recently it has been demonstrated that neural networks suffer from some problems includ- ing vulnerability due to their black-box nature and our lack of truly understanding their inner process- ing (Szegedy et al., 2013; Xu et al., 2019). Thus, as we are integrating these systems into more crit- ical and sensitive domains, it raises some serious concerns (Yuan et al., 2019), such as whether or not they can be manipulated to provide certain de- sired output (He and Glass, 2018; Liu et al., 2019b). More speciï¬cally, if such systems can be manipu- lated it could potentially cause a drastic shift to the current paradigm of how we interact with these sys- tems. For example, Tay, an AI chatbot developed by Microsoft, was shut down shortly after release due to its racism, sexist and obscene speech (Wolf et al., 2017). Online troublemakers found its vul- nerability and tried a variety of inputs to induce it to output inappropriate (malicious or sensitive) responses (Price, 2016). To that end, in this work, we set out to study this fundamental question of whether we can learn to manipulate state-of-the- art black-box dialogue models to produce target outputs by crafting inputs automatically. It goes without saying that if indeed we can manipulate these systems and if we are currently integrating them into our daily lives at such a rapid pace, then this opens the door to a plethora of potential harm- ful attacks that could be performed and potentially result in an almost unimaginable set of possible negative outcomes.
Nevertheless, even having now realized how crit- ical this question is to being answered, the path to discovering an answer has numerous challenges. First, unlike many existing studies in other do- mains such as images (Goodfellow et al., 2014; Chen, 2018), here the input search space is dis- crete and thus the traditional gradient-based opti- mization methodologies cannot be harnessed effec- tively (Zhang et al., 2019; Zhao et al., 2017). Fur- thermore, while seeking to discover if the current dialogue systems can be manipulated, we should not make the unreasonable assumption of having access to the full details/knowledge (i.e., model structure and parameters) of the system. Currently, most developed methodologies have focused on the continuous input space domain and further- more assumed access to the model. Thus, since our problem is deï¬ned with a discrete input do- main and our concern of whether these models can be manipulated is more realistic in the setting of a black-box dialogue model, then existing methods can not be applied and we require the development of novel frameworks to answer this indispensable fundamental question.
To address the above-mentioned challenges, in this paper, we regard the learning to craft input sentences as a sequential decision-making process. To this end, we propose the Target Dialogue Gener- ation Policy Network (TDGPN), which serves as a reinforcement learning (RL) agent to iteratively generate tokens guided towards speciï¬c objectives. The proposed policy networks are optimized by the REINFORCE-style estimators (Williams, 1992), eliminating the needs for standard gradient back- propagation which is largely impaired by the dis- crete nature of the sentence generation process and the assumption of no access to the dialogue model parameters. Our main contributions are summa- rized as follows:
⢠We identify the importance of exploring the potential of a black-box neural dialogue model to be manipulated towards a target output.
⢠We devise an RL-based framework TDGPN to effectively overcome the challenges associated with crafting inputs that enable black-box neural dialogue models to generate target responses.
⢠Experiments on a well-trained black-box neu- ral dialogue model verify the effectiveness of TDGPN that lead to target responses with a con- siderable success rate.
Reward Function ry 1g? Dialogue Model oo
Policy Network
# Environment
Figure 1: Diagram showing the overall reinforcement learning process for TDGPN.
# 2 The Proposed Framework
# 2.1 Problem Deï¬nition
Before detailing the proposed framework, we ï¬rst recap the problem of learning to manipulate neu- ral dialogue models. Suppose we have a neural dialogue model D, which is regarded as a black box and able to output a response sentence SD for a given input sentence. We seek for learning to craft an input sentence S that will lead D to output a response SD := D(S) that satisï¬es spe- ciï¬c requirements. We consider the following two manipulation tasks:
Target word task. Given a target word wT , we aim to learn to craft an input sentence S that will lead the dialogue model to output a response sentence SD that contains the target word, i.e., wT â SD.
Target response task. Given a target response ST , we aim to learn to craft an input sentence S that will lead the dialogue model to output a response sentence SD that is semantically approximate to the target response, i.e., SD â ST .
We build a generative model Gθ parameterized by θ to learn to craft such input sentences. Then the above problems are both essentially an opti- mization problem where we want to ï¬nd optimum parameters of G so that a) the probability of wT being generated in SD, or b) the similarity between D(S) and ST can be maximized. However, it is very challenging to solve this problem with the standard gradient-based back-propagation methods because of two reasons. First, S consists of dis- crete tokens instead of continuous values. Thus, it is difï¬cult to let the gradient back through G. Sec- ond, we do not have access to the dialogue model
Step t-1 Step t Step t+1
Figure 2: Diagram showing how to obtain the inter- mediate reward R(S1:t, at) at step st using the Monte Carlo (MC) method.
parameters, which makes it impossible to compute the gradient w.r.t D. Therefore, in this section, we formulate the problem as an RL problem and repre- sent the generative model Gθ by a policy network Ïθ, which regards the token generation process as a decision making process with the purpose to obtain a maximum reward.
The overall learning diagram is shown in Fig- ure 1. There are two key components, i.e., pol- icy network and environment which consists of a dialogue model and a reward function. The pol- icy network will interact with the environment by generating input sentences S. Then, based on S the environment will provide rewards to the policy network, which will in turn update its parameters towards obtaining maximum rewards. Next, we detail the interaction process.
# 2.2 Sentence Generation Process
In this subsection, we describe the sentence gener- ation process which can be regarded as a decision making process. On a high level, at each step, a policy network observes the state, outputs a token (i.e., makes an action) and receives a reward based on the dialogue model (i.e. environment). The generation process will be terminated when the pol- icy network outputs an end-of-sentence token, after which the policy network will be updated according to the rewards. Next, we describe the state, actions, policy network, reward, and objective function.
# 2.2.1 State and Action
In our framework, we denote the state at step t of the sentence generation process as st and let st = S1:t, where S1:t is a sequence of already generated tokens at step t. More speciï¬cally,
Six = {v1,%2,--+ , a}, where x; ⬠V and V is the vocabulary containing all the possible tokens. In addition, we assume that the state is fully ob- served. The start state is 5) = {xo} which consists of a special token x9 that indicates the start of a sentence. Thus, the state transition is determinis- tic such that P(s; = $).4|5:-1, @g = v) = 1 and P(s¢ = s\s¢_1,a¢ = et) = 0, Vs" Siz, where az is the action (i.e., token) at step ¢.
# 2.2.2 Policy Network
In this work, we represent the policy by a Long Short-Term Memory (LSTM) Network whose in- put is the state and output is selection probabilities p over all the possible actions, which are the tokens in V including a special token indicating the end of the sentence and leading to the terminate state. The weights of the LSTM are the policy parameters denoted by θ. More speciï¬cally, given the current state st = S1:t = {x1, x2, · · · , xt}, where xi â V is the token generated at step t, the LSTM will out- put a sequence of hidden states {h1, h2, · · · , ht} by the following recursive functions:
ht = LSTM(xt, htâ1) (1)
With the hidden states {h1, h2, · · · , ht}, the rep- resentation of current state mt can be obtained through:
mt = g(h1, h2, · · · , ht) (2)
where g(·) is a function that maps the hidden states to a d-dimensional hidden space and there are many choices for g(·) such as using a popular attention mechanism (Bahdanau et al., 2014). In this work, we let g(h1, h2, · · · , ht) = ht, which is commonly used for many RNN-based models and leave the investigation into other choices as a future work. With the representation mt, the probability distri- bution over all the possible actions Ïθ(a|mt) is calculated as:
Ïθ(a|mt) = softmax(Wvmt + bv)
where Wv â R|V|Ãd and bv are the parameters to be learned during the training process when having a vocabulary size of |V|.
2.2.3 Reward Remember that ST is the target sentence and let SD = D(S) be the response produced by the dia- logue model D when given S as input. We deï¬ne
two different reward functions for the two manipu- lation tasks.
In this task we want to learn an input sentence which leads to the dialogue model to output a response containing the target word wT . Thus, we wish that the probability of the target word predicted by the output layer of the dialogue model to be the largest at some timestamp t â [T ]. We deï¬ne the reward of the input sentence S as follows:
R(S) = max(py(w") - amas {i(w")}) (4)
where p(wâ) is the predicted probability of target word wâ at timestamp ¢ and MAX yeyT {p(w?)} indicates the largest probability among all the words other than wâ at this timestamp.
In this task, we wish the dialogue model can output a response that is semantically similar to the target response but not necessarily exactly matching it. We use the average word embedding similarity (Wieting et al., 2016) between ST and SD = D(S) as the reward for the input sentence generated by the policy network. We deï¬ne the reward as follows:
R(S) = Sim(ST , D(S)) (5)
where S is the sentence generated by the policy network and Sim is the similarity measurement.
Eq (4) and (5) only provide the ï¬nal rewards to the whole generated sentence S. In addition, we design the reward R(S1:t, at) at the intermediate state st to be the action-value function. Formally,
R(S1:t, at) = QÏθ (S1:t, at) (6)
where QÏθ (a, St) is the Q-value, which estimates the expected ï¬nal reward for the given action and state pair (Watkins and Dayan, 1992). We further adopt the Monte Carlo methods (Yu et al., 2017; Sutton et al., 2000) to estimate QÏθ (S1:t, at). The way to calculate the intermediate reward is shown in Figure 2. Speciï¬cally, given S1:t and at, we use current policy Ïθ to continuously simulate T â t tokens until the terminate state is reached, where T is the length of the complete simulated sentence. The simulation process will be repeated N r times. We deï¬ne this formally as follows:
{sir}, = Simulationâ °(Si4,a4) (7)
where Si. is a simulated sentence and Si, = Sut for all j ⬠{1,...,.N"}. Now, given {Shr} the estimation of R(S}.4, az) is calculated as: jal
wo Sim(S" , vr R(S12, at) D(Si.7)) (8)
where t < T indicates the intermediate step. We note that when t = T we can directly use Eq (4) and (5) rather than Eq. (8). Objective Function. With the previously deï¬ned reward function, the objective function that the agent aims to maximize can be formulated as fol- lows:
J(θ) = ESâ¼Ïθ(S)R(S) (9)
The accurate value of J(θ) in Eq. (9) is very dif- ï¬cult to obtain in practice. Next, we describe our optimization procedure for TDGPN.
# 2.3 Monte-Carlo Policy Gradient: REINFORCE
To optimize the objective in Eq. (9), we adopt the widely used REINFORCE algorithm (Williams, 1992), where Monte-Carlo sampling is applied to estimate âθJ(θ). Speciï¬cally,
LS \Vi6(S = Beans [R(S) N T 1 _ © 7 DD Vo log r9(ai| Six) R(S i=1 t=1 Vot(0 (10) m79(5)V log m9(S)]
We replace the reward for the whole sentence in Eq. (10) with the intermediate reward to acceler- ate the training speed and effectiveness (Liu et al., 2017). Thus, the estimated gradient of TDGPNâs objective is rewritten as:
N T a vU>d Vo log m6 (ai | S14) R( Sit, a4) i=l t=1 end) Vot(@
(11)
With the obtained gradient âθJ(θ), the parameters of the policy network Ïθ can be updated as follows:
θ = θ + αâθJ(θ) (12)
where α is the learning rate.
2.4 Alternate Learning
At the beginning of the RL training, since the pol- icy network has little knowledge about the environ- ment, it just randomly generates input sentences which hardly lead the dialogue model to output an expected response. As a result, few action- state pairs (.51.¢,q,) can receive positive rewards R(Si:t,a,). This problem impacts training effi- ciency seriously. To solve this issue, we propose to use samples that we know are closer to the de- sired input sentences so far as extra guidance to train the policy network. We call them âpacesetterâ samples. Recall that in each iteration, we sample N input sentences for RL training. The lengths of these sample sentences are {t;}/â.,. To estimate the reward for each state-action pair, we simulate NNâ complete input sentences. So we can get totally nu , N"t; complete input sentences and their cor- responding rewards. We collect the input sentences with the largest top- rewards as the pacesetter samples. Then, we use the pacesetter samples as the training examples to optimize the policy net- work via supervised learning (SL) for once. We update the parameters in the policy network by opti- mizing the maximum likelihood estimation (MLE) objective of the pacesetter samples. In this way, we perform RL and SL alternately to speed up the convergence of the policy network.
The detailed optimization process is shown in Algorithm 1. Here we brieï¬y introduce the algo- rithm. In line 1, we initialize the policy network Ïθ using a pre-trained language model Ïθ0. From line 3 to line 10, we use our proposed algorithm to update the parameters of Ïθ, until it could gen- erate sentences that will make the dialogue model output the target sentence ST or the number of iterations exceed M . Speciï¬cally, in line 5, we randomly sample N sentences from Ïθ, then for each sampled sentence at each step t, we sample N r sentences to estimate the intermediate reward 1:t, ai (i.e., action-value) R(Si t) and compute the gra- dient according to Eq. (11), which is used to update θ by Eq. (12). In lines 14 and 15, we collect the pacesetter samples and update the policy network on them via supervised learning. If the policy net- work cannot ï¬nd a desired sentence Sâ within M iterations, the algorithm will output a âFailureâ.
# Algorithm 1: Optimization for TDGPN
Input: Dialogue model D, a pre-trained language model Toy, target word wâ or target response Ssâ, hyper-parameters including N, M, N", K anda Output: an desired sentence S* or âFailureâ 1 Initialize 79: 79 < 7693 2 iter <+ 1; 3 repeat 4 iter ¢ iter + 1; 5 Generate N sentences {syn from 79 6 for i + 1 to N do 7 for t < 1toT do 8 Sample {sP}r, based on Sj. and 79 9 Compute R(S}.,, a) according to Eq. (8) 10 VoI(0) â VoI(0)+Vologno(ai, Sin) R(Si2, at) u end 12 end 13 0-04 £VoI(8); 14 Collect the top-K pacesetter samples {p;}/&, from {S?i ⬠(1, N],j ⬠[L,N"],t⬠[1 t)}: 15 Update 79 on {pias 6 until iter >= M or find a sentence S* such that D(S*) satisfies the requirement.
10
11
12
13
14
# 3 Experiment
# 3.1 Experimental Settings
In this subsection, we give a description of the experimental settings including the state-of-the-art dialogue model we are trying to manipulate, the implementation details of the policy network, and the training procedure of the proposed framework. The Dialogue Model. The experiments are conducted on a classic Seq2Seq (Sutskever et al., 2014) neural dialogue model. In this Seq2Seq model, both the encoder and the decoder are im- plemented by 3-layer LSTM networks with hidden states of size 1024. The vocabulary size is 30,000. Pre-trained Glove word vectors (Pennington et al., 2014) are used as the word embeddings whose di- mension is 300.
Dataset. A Twitter Conversation Corpus is used to pre-train the policy network, construct target response list, etc. This corpus is different from the one for training the black-box dialogue model. The corpus consists of tweets and replies extracted from Twitter. It contains 1,394,939 single-turn En- glish dialogues in open domains. The dataset is randomly split into training set, validation set, and test set, which consist of 976,457, 139,498 and 278,984 dialogues, respectively.
Implementation Details. In this work, we use a 2-layer LSTM with the hidden size being 1,024
as the policy network, which is implemented with Pytorch (Paszke et al., 2019). Before the manipula- tion experiments, we pre-trained the LSTM on the training set of the Twitter dialogue corpus where every single post or reply is treated as a sentence, resulting in around 1.9 million sentences in total. Speciï¬cally, in the pre-training process, the model is optimized by the standard stochastic gradient descent algorithm with the learning rate of 1.0. In addition, to prevent overï¬tting issues, we apply the dropout with the rate of 0.1 and gradient clipping with clip-value being 0.25. Moreover, the word em- beddings are randomly initialized and ï¬ne-tuned during the pre-training process.
As for the details in relation to Algorithm 1, we set the sample size N to be 5. The number of sim- ulations per generated sentence (i.e., N r) is set to 5. The maximum sequence length is 5 and 10 for target word task and target response task, respec- tively. In addition, the policy network parameters are optimized by Adam (Kingma and Ba, 2014) with the learning rate of 0.001 and 0.05 for the above two tasks, respectively. During RL training, once we ï¬nd a sample that satisï¬es a requirement, the training process will stop and we consider it to be successful. On the other hand, if the model cannot ï¬nd a desired sample within M = 50 it- erations, then we consider it as a failure. For the target word task, the requirement is that the out- put response contains the target word; while for the target response task, the requirement is that the sim- ilarity between the output response and the target response exceeds a threshold.
We note in the target word task, we donât directly adopt the reward deï¬ned in Eq. (4). Instead, for an input sentence S, we use max(R(S) â b, 0), where b is a baseline deï¬ned as the average reward value of a batch of input sentences. By the function max(·, 0), we replace all the negative rewards with 0 and only keep the positive rewards. Additionally, for the target response task, before RL training, we build a reference corpus by feeding 200k utter- ances (from the training set of the Twitter dialogue corpus) into the dialogue model and obtain 200k input-response pairs. Then, in order to improve the efï¬ciency of training, at the beginning of each train- ing session, we search 5 inputs whose responses are most similar to the target response as the ï¬rst batch to train the model.
Table 1: Results of the target word task.
Success Rate Average Iterations Common Malicious 65% 30% 12.64 38.73
Table 2: Case Study of the target word task on Mali- cious target word list. Input indicates the input crafted by TDGPN and output is the corresponding response produced by the dialogue model. Num.Iter represents the number of iterations.
Target word: shit Input: then start to eat Output: i â m not going to eat that shit Num.Iter: 5 Target word: ass Input: fat , iâm too classy Output: i â m not a fat ass Num.Iter: 7 Target word: idiot Input: when he is boring that Output: he â s a fucking idiot Num.Iter: 24
# 3.2 Experimental Results
In this subsection, we present the results of our experiments.
Target word task. In the experiments, we con- struct two lists of words as our target words. We randomly sample 200 words from the most frequent 800 words of the Twitter conversation corpus. They form the Common target word list. Moreover, we manually construct a Malicious target word list containing 50 malicious or dirty words that a dia- logue agent has to avoid to say 1.
Table 1 shows the success rate of our proposed TDGPN as well as the average number of iterations the algorithm performs to achieve a successful ma- nipulation. We can see that our algorithm can ma- nipulate the dialogue model successfully for a con- siderable number of target words. Moreover, we observe that the common words achieve higher suc- cess rate and need fewer iterations, which means that itâs easier to manipulate the common words than the malicious words. We show three cases of successful manipulation on malicious target words in Table 2.
Target response task. For the target response task, we ï¬rst construct two lists of target responses. So called Generated and Real target response lists are used. To construct the generated target response list, we ï¬rst feed 200k human utterances from the
1When doing experiments on the malicious target words, we set N = 10 and M = 100.
test set of the Twitter dialogue corpus into the dia- logue model to get 200k generated responses and then randomly sample 200 responses as the targets in length 1-3, 4-6 and 7-10, respectively. The real target response list is obtained by randomly sam- pling sentences from the rest part of the test set of the Twitter dialogue corpus. And we ï¬lter out some sentences which also occur in the generated target response list, to ensure that there is no over- lap between two lists. The number of real target responses in each length group is also 200.
In Figure 3, we show the success rates of TDGPN for manipulating the Twitter dialogue model on two lists of target responses. The ï¬g- ure shows how success rates vary with different thresholds. For example, in Figure (a), we can suc- cessfully ï¬nd desired inputs that lead to responses whose similarities with the target ones are above the threshold 0.8, for 34.5% generated target re- sponses with length 1-3.
First of all, from the ï¬gures we can see that for both the generated and the real target lists, TDGPN can achieve a success rate 85% â 100% with a threshold of 0.5. Especially for more than around 80% generated targets with length greater than or equal to 4, TDGPN is able to ï¬nd desired inputs that lead to a similarity score above 0.8. One key observation is that the model performs signiï¬cantly better on the generated target list than on the real target response list. Actually, the neural dialogue models suffer from a safe response problem (Li et al., 2015). Such models tend to offer generic re- sponses to diverse inputs, which makes it hard for the model to provide a target speciï¬c response (of- ten seen in real human conversations). In addition, we observe that the success rate of manipulation is closely related to the length of target responses. Except for a few cases, for longer target responses, itâs easier for TDGPN to manipulate the dialogue model to say something similar to them.
The Quality of Crafted Inputs. For each real tar- get response, TDGPN tries to ï¬nd an input whose corresponding output response is most similar to the target one. We also feed the real inputs corre- sponding to the target responses in the corpus into the dialogue model. We aim to check how similar the output responses to the target ones in manipu- lated and original settings. To further demonstrate the effectiveness of the proposed framework in ma- nipulating the dialogue model, we calculate these similarity scores for each real target response and
Table 3: Average similarity scores between the output response and the target response in Real list.
Length 1-3 4-6 7-10 Real Input TDGPN 0.439 0.69 0.518 0.726 0.566 0.748
report the average value in Table 3. From the table, we make the following observations. First, even inputting the real inputs, the similarity scores be- tween the output responses and the target responses are not high. Second, with the generated inputs from the proposed framework, the similarity scores are signiï¬cantly improved. Speciï¬cally, the word embedding similarity increases by 57.2%, 40.2% and 32.2% for the target responses with length 1-3, 4-6 and 7-10, respectively.
Case Study. Table 4 shows ï¬ve examples in the manipulating experiments. The ï¬rst three target responses are from the generated target response list while the other two are from the real response list. Given those target responses, desired inputs are successfully crafted. Each of them leads to an output of the dialogue model similar to the tar- get one, evaluating by the corresponding similarity measurement. Besides, unlike some related works (He and Glass, 2018; Cheng et al., 2018) where crafted text inputs are ungrammatical and meaning- less, the inputs generated by our model are smooth and natural utterances, which is guaranteed by the language model.
# 4 Related Work
Model attack in NLP has been a fast-moving ï¬eld especially across many neural based methodolo- gies (Chakraborty et al., 2018; Xu et al., 2019), since our understanding of their inner workings is quite limited. Chen et al. (Chen, 2018) investigate the ability for small perturbations that could result in the image captioning to contain a randomly se- lected keyword/caption. Although this is similar to our problem in that they are generating target text, the key difference is that they are working with continuous inputs (i.e., images). Similarly, some work has focused on text classiï¬cation (Kuleshov et al., 2018; Liang et al., 2018), but in the white-box setting as compared to our framework is proposed in the black-box setting.
Our work is primarily related to other works focused on building a better understanding of
Table 4: Case Study. The ï¬rst column shows the inputs from TDGPN. The middle column shows the target responses and the outputs by the dialogue model. The third column shows the similarity score.
Inputs the band perry is goin to be at the movies i followed you so you better follow me back so i have a friend in the sea whatâs poppin peeps ? honestly i miss my brother Responses Target: i â m going to be there tomorrow ! Output: i â m going to be there Target: i â m not following you . Output: i â m sorry i â m not following you Target: i â m in the same boat lol Output: i â m in the same boat . Target: nothing much just us chatting shit really Output: nothing much , just chillin Target: me = miss you ( : lol . Output: i miss you too Similarity 0.958 0.952 0.958 0.826 0.876
(a) Generated 1-3 (b) Real 1-3 1.0 + 1.5% 74.5% 74.5% 51.5% 54.5% 0.54 0.5 4 34.5% 13.594 17.0% 0.0 Boo | 1.0 4 Fd ml 0.5 0.6 0.7 0.8 0.9 0.5 0.6 0.7 0.8 0.9
(c) Generated 4-6 (d) Real 4-6 1.0 P75% 95.0% 1.0 fe" Success Rate 0.54 0.5 4 i [ 0.0 0.0 = 0.5 0.6 0.7 0.8 0.9 0.5 0.6 0.7 0.8 0.9 (e) Generated 7-10 (f) Real 7-10 1,0 Pod% 98.0% 98.0% 95 55, 1.0 £23" 93.5% 75.5% 0.5 4 0.5 4 | 35.09 28.5% 0.0 ll. | or 0.5 0.6 0.7 0.8 0.9 0.5 0.6 0.7 0.8 0.9 Threshold Threshold
network is far less practical (Zhang et al., 2019). Then Niu et al. (Niu and Bansal, 2018) focus on using adversarial training to investigate both over- sensitivity and over-stability of dialogue models, where the small changes to the input should or should not change the output of the dialogue sys- tem, respectively. Besides, He et al. (He and Glass, 2018) focus on learning an input to generate a spe- ciï¬c offensive/malicious output of a pre-trained dialogue system. However, our proposed frame- work is based on the black-box setting (unlike their model, which is under the white-box assumption) which raises signiï¬cantly higher levels of com- plexity to develop an optimization process. The work (Liu et al., 2019b) investigates the target re- sponse task on the black-box setting. The authors train a reusable reverse dialogue generator by rein- forcement learning and use it to discover the inputs leading to target outputs through multiple trials.
# 5 Conclusion
Figure 3: Results for the target response task.
sequence-to-sequence based dialog system models, such as their over-sensitivity and over-stability (Niu and Bansal, 2018), robustness (Cheng et al., 2018) and likeliness of generating egregious re- sponses (He and Glass, 2018). In (Cheng et al., 2018) the problem of attacking sequence- to-sequence models is presented to evaluate the robustness of this class of deep neural networks. Unlike our work, they focus on the development of a white-box framework that is built around having the gradient. However, in practice, the assump- tion of accessing the full knowledge of the neural
Currently, the state-of-the-art dialogue systems are harnessing the power of deep neural models, and al- though they are proving to become more and more human-like, recent concerns have been raised for neural models across all domains as to whether they can be manipulated (with most focusing on malicious attacks). Thus, in this work, we investi- gate whether current neural dialogue models can be manipulated and develop a reinforcement learn- ing based sentence generation framework that can learn to craft the input sentences causing dialogue models to output target words and responses. We conduct extensive experiments on a state-of-the-art dialogue neural model and the results show that dialogue systems can indeed be manipulated. In ad- dition, our proposed method is not only able to ma-
nipulate neural dialogue model, but itâs also likely to be applied on black-box dialogue systems based on other methods (e.g. rule-based, retrieval-based, etc.), or even models for other natural language generation tasks (e.g. text summarization, machine translation, etc.). We will leave the investigations on these areas as future works.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly arXiv preprint learning to align and translate. arXiv:1409.0473.
Anirban Chakraborty, Manaar Alam, Vishal Dey, Anu- pam Chattopadhyay, and Debdeep Mukhopadhyay. 2018. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069.
language grounding with adversarial examples: A case study on neural image captioning. In ACL.
Hongshen Chen, Xiaorui Liu, Dawei Yin, and Jiliang Tang. 2017. A survey on dialogue systems: Recent advances and new frontiers. ACM SIGKDD Explo- rations Newsletter, 19(2):25â35.
Minhao Cheng, Jinfeng Yi, Huan Zhang, Pin-Yu Chen, Seq2sick: Evaluat- and Cho-Jui Hsieh. 2018. ing the robustness of sequence-to-sequence mod- arXiv preprint els with adversarial examples. arXiv:1803.01128.
Emily Dinan, Angela Fan, Adina Williams, Jack Ur- banek, Douwe Kiela, and Jason Weston. 2019. too: Mitigating gender Queens are powerful arXiv preprint bias in dialogue generation. arXiv:1911.03842.
Joseph Polifroni, Stephanie Seneff, and Senis Busayapongchai. 1996. A form-based dialogue manager for spoken lan- guage applications. In Proceeding of Fourth Interna- tional Conference on Spoken Language Processing. ICSLPâ96, volume 2, pages 701â704. IEEE.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversar- ial examples. arXiv preprint arXiv:1412.6572.
Detecting egregious responses in neural sequence-to-sequence models. arXiv preprint arXiv:1809.04113.
Peter Henderson, Koustuv Sinha, Nicolas Angelard- Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, and Joelle Pineau. 2017. Ethical challenges arXiv preprint in data-driven dialogue systems. arXiv:1711.09050.
Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architec- tures for matching natural language sentences. In Advances in neural information processing systems, pages 2042â2050.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. Adversarial ex- amples for natural language classiï¬cation problems. Open Review submission OpenReview:r1QZ3zbAZ.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055.
Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classiï¬cation can be fooled. In Proceedings of the Twenty-Seventh International Joint Conference on Artiï¬cial Intelligence, IJCAI-18, pages 4208â4215. International Joint Conferences on Artiï¬cial Intelli- gence Organization.
Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zitao Liu, and Jiliang Tang. 2019a. Does gender matter? towards fairness in dialogue systems. arXiv preprint arXiv:1910.10486.
Haochen Liu, Tyler Derr, Zitao Liu, and Jiliang Tang. i want: Towards the dark arXiv preprint Say what 2019b. side of neural dialogue models. arXiv:1909.06044.
Siqi Liu, Zhenhai Zhu, Ning Ye, Sergio Guadarrama, and Kevin Murphy. 2017. Improved image caption- ing via policy gradient optimization of spider. In Proceedings of the IEEE international conference on computer vision, pages 873â881.
Zhengdong Lu and Hang Li. 2013. A deep architec- ture for matching short texts. In Advances in Neural Information Processing Systems, pages 1367â1375.
Tong Niu and Mohit Bansal. 2018. Adversarial over- sensitivity and over-stability strategies for dialogue In The SIGNLL Conference on Computa- models. tional Natural Language Learning (CoNLL).
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, In Ad- high-performance deep learning library. vances in Neural Information Processing Systems, pages 8024â8035.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532â1543.
Rob Price. 2016. Microsoft is deleting its ai chatbots incredibly racist tweets. Business Insider.
Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron C Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using gener- ative hierarchical neural network models. In AAAI, volume 16, pages 3776â3784.
Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neu- ral responding machine for short-text conversation. arXiv preprint arXiv:1503.02364.
Heung-Yeung Shum, Xiao-dong He, and Di Li. 2018. From eliza to xiaoice: challenges and opportunities with social chatbots. Frontiers of Information Tech- nology & Electronic Engineering, 19(1):10â26.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104â3112.
Richard S Sutton, David A McAllester, Satinder P Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function ap- proximation. In Advances in neural information pro- cessing systems, pages 1057â1063.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. arXiv preprint arXiv:1506.05869.
Richard Wallace. 2001. Artiï¬cial linguistic internet computer entity (alice).
Christopher JCH Watkins and Peter Dayan. 1992. Q- learning. Machine learning, 8(3-4):279â292.
Joseph Weizenbaum. 1966. Elizaa computer program for the study of natural language communication be- tween man and machine. Communications of the ACM, 9(1):36â45.
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sen- tence embeddings. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Pro- ceedings.
Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3-4):229â256.
Marty J Wolf, K Miller, and Frances S Grodzinsky. 2017. Why we should have seen that coming: Com- ments on microsoftâs tay experiment, and wider im- plications. ACM SIGCAS Computers and Society, 47(3):54â64.
Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, and Anil K. Jain. 2019. Adversarial attacks and defenses in images, graphs and text: A review. CoRR, abs/1909.08072.
Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. 2017. Seqgan: Sequence generative adversarial nets In Thirty-First AAAI Confer- with policy gradient. ence on Artiï¬cial Intelligence.
Xiaoyong Yuan, Pan He, Qile Zhu, and Xiaolin Li. 2019. Adversarial examples: Attacks and defenses for deep learning. IEEE transactions on neural net- works and learning systems.
Wei Emma Zhang, Quan Z Sheng, and Ahoud Abdul- rahmn F Alhazmi. 2019. Generating textual adver- sarial examples for deep learning models: A survey. arXiv preprint arXiv:1901.06796.
Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2017. arXiv Generating natural adversarial examples. preprint arXiv:1710.11342.
Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2018. The design and implementation of xiaoice, an empathetic social chatbot. CoRR, abs/1812.08989. | {
"id": "1901.06796"
} |
2005.13239 | MOPO: Model-based Offline Policy Optimization | Offline reinforcement learning (RL) refers to the problem of learning
policies entirely from a large batch of previously collected data. This problem
setting offers the promise of utilizing such datasets to acquire policies
without any costly or dangerous active exploration. However, it is also
challenging, due to the distributional shift between the offline training data
and those states visited by the learned policy. Despite significant recent
progress, the most successful prior methods are model-free and constrain the
policy to the support of data, precluding generalization to unseen states. In
this paper, we first observe that an existing model-based RL algorithm already
produces significant gains in the offline setting compared to model-free
approaches. However, standard model-based RL methods, designed for the online
setting, do not provide an explicit mechanism to avoid the offline setting's
distributional shift issue. Instead, we propose to modify the existing
model-based RL methods by applying them with rewards artificially penalized by
the uncertainty of the dynamics. We theoretically show that the algorithm
maximizes a lower bound of the policy's return under the true MDP. We also
characterize the trade-off between the gain and risk of leaving the support of
the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO),
outperforms standard model-based RL algorithms and prior state-of-the-art
model-free offline RL algorithms on existing offline RL benchmarks and two
challenging continuous control tasks that require generalizing from data
collected for a different task. The code is available at
https://github.com/tianheyu927/mopo. | http://arxiv.org/pdf/2005.13239 | Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, Tengyu Ma | cs.LG, cs.AI, stat.ML | NeurIPS 2020. First two authors contributed equally. Last two authors
advised equally | null | cs.LG | 20200527 | 20201122 | 0 2 0 2
v o N 2 2 ] G L . s c [
6 v 9 3 2 3 1 . 5 0 0 2 : v i X r a
# MOPO: Model-based Ofï¬ine Policy Optimization
Tianhe Yuâ 1, Garrett Thomas*1, Lantao Yu1, Stefano Ermon1, James Zou1, Sergey Levine2, Chelsea Finnâ 1, Tengyu Maâ 1 Stanford University1, UC Berkeley2 {tianheyu,gwthomas}@cs.stanford.edu
# Abstract
Ofï¬ine reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distribu- tional shift between the ofï¬ine training data and those states visited by the learned policy. Despite signiï¬cant recent progress, the most successful prior methods are model-free and constrain the policy to the support of data, precluding generaliza- tion to unseen states. In this paper, we ï¬rst observe that an existing model-based RL algorithm already produces signiï¬cant gains in the ofï¬ine setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, do not provide an explicit mechanism to avoid the ofï¬ine settingâs distributional shift issue. Instead, we propose to modify the existing model-based RL methods by applying them with rewards artiï¬cially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maxi- mizes a lower bound of the policyâs return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Ofï¬ine Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free ofï¬ine RL algorithms on existing ofï¬ine RL benchmarks and two challenging continuous control tasks that require generalizing from data collected for a different task.
# Introduction
Recent advances in machine learning using deep neural networks have shown signiï¬cant successes in scaling to large realistic datasets, such as ImageNet [13] in computer vision, SQuAD [55] in NLP, and RoboNet [10] in robot learning. Reinforcement learning (RL) methods, in contrast, struggle to scale to many real-world applications, e.g., autonomous driving [74] and healthcare [22], because they rely on costly online trial-and-error. However, pre-recorded datasets in domains like these can be large and diverse. Hence, designing RL algorithms that can learn from those diverse, static datasets would both enable more practical RL training in the real world and lead to more effective generalization.
While off-policy RL algorithms [43, 27, 20] can in principle utilize previously collected datasets, they perform poorly without online data collection. These failures are generally caused by large extrapolation error when the Q-function is evaluated on out-of-distribution actions [19, 36], which can lead to unstable learning and divergence. Ofï¬ine RL methods propose to mitigate bootstrapped error by constraining the learned policy to the behavior policy induced by the dataset [19, 36, 72, 30, 49, 52, 58]. While these methods achieve reasonable performances in some settings, their learning is limited to behaviors within the data manifold. Speciï¬cally, these methods estimate error with respect to out-of-distribution actions, but only consider states that lie within the ofï¬ine dataset and do not
âequal contribution. â equal advising. Orders randomized.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
consider those that are out-of-distribution. We argue that it is important for an ofï¬ine RL algorithm to be equipped with the ability to leave the data support to learn a better policy for two reasons: (1) the provided batch dataset is usually sub-optimal in terms of both the states and actions covered by the dataset, and (2) the target task can be different from the tasks performed in the batch data for various reasons, e.g., because data is not available or hard to collect for the target task. Hence, the central question that this work is trying to answer is: can we develop an ofï¬ine RL algorithm that generalizes beyond the state and action support of the ofï¬ine data?
To approach this question, we ï¬rst hypothesize that model-based RL methods [64, 12, 42, 38, 29, 44] make a natural choice for enabling gener- alization, for a number of reasons. First, model- based RL algorithms effectively receive more supervision, since the model is trained on every transition, even in sparse-reward settings. Sec- ond, they are trained with supervised learning, which provides more stable and less noisy gra- dients than bootstrapping. Lastly, uncertainty estimation techniques, such as bootstrap ensem- bles, are well developed for supervised learning methods [40, 35, 60] and are known to perform poorly for value-based RL methods [72]. All of these attributes have the potential to improve or control generalization. As a proof-of-concept experiment, we evaluate two state-of-the-art off-policy model-based and model-free algorithms, MBPO [29] and SAC [27], in Figure 1. Although neither method is designed for the batch setting, we ï¬nd that the model-based method and its variant without ensembles show surprisingly large gains. This ï¬nding corroborates our hypothesis, suggesting that model-based methods are particularly well-suited for the batch setting, motivating their use in this paper.
DARL halfcheetah-mixed halfcheetah-jump 6k buffer mean aK | 0k. âSAC MBPO ho ens. MBPO SAC MBPO no ens. MBPO
Despite these promising preliminary results, we expect signiï¬cant headroom for improvement. In particular, because ofï¬ine model-based algorithms cannot improve the dynamics model using additional experience, we expect that such algorithms require careful use of the model in regions outside of the data support. Quantifying the risk imposed by imperfect dynamics and appropriately trading off that risk with the return is a key ingredient towards building a strong ofï¬ine model-based RL algorithm. To do so, we modify MBPO to incorporate a reward penalty based on an estimate of the model error. Crucially, this estimate is model-dependent, and does not necessarily penalize all out-of-distribution states and actions equally, but rather prescribes penalties based on the estimated magnitude of model error. Further, this estimation is done both on states and actions, allowing generalization to both, in contrast to model-free approaches that only reason about uncertainty with respect to actions.
The primary contribution of this work is an ofï¬ine model-based RL algorithm that optimizes a policy in an uncertainty-penalized MDP, where the reward function is penalized by an estimate of the modelâs error. Under this new MDP, we theoretically show that we maximize a lower bound of the return in the true MDP, and ï¬nd the optimal trade-off between the return and the risk. Based on our analysis, we develop a practical method that estimates model error using the predicted variance of a learned model, uses this uncertainty estimate as a reward penalty, and trains a policy using MBPO in this uncertainty-penalized MDP. We empirically compare this approach, model-based ofï¬ine policy optimization (MOPO), to both MBPO and existing state-of-the-art model-free ofï¬ine RL algorithms. Our results suggest that MOPO substantially outperforms these prior methods on the ofï¬ine RL benchmark D4RL [18] as well as on ofï¬ine RL problems where the agent must generalize to out-of-distribution states in order to succeed.
# 2 Related Work
Reinforcement learning algorithms are well-known for their ability to acquire behaviors through online trial-and-error in the environment [3, 65]. However, such online data collection can incur high sample complexity [46, 56, 57], limit the power of generalization to unseen random initialization [9, 76, 4], and pose risks in safety-critical settings [68]. These requirements often make real-world applications of RL less feasible. To overcome some of these challenges, we study the batch ofï¬ine
2
RL setting [41]. While many off-policy RL algorithms [53, 11, 31, 48, 43, 27, 20, 24, 25] can in principle be applied to a batch ofï¬ine setting, they perform poorly in practice [19].
Model-free Ofï¬ine RL. Many model-free batch RL methods are designed with two main ingredients: (1) constraining the learned policy to be closer to the behavioral policy either explicitly [19, 36, 72, 30, 49] or implicitly [52, 58], and (2) applying uncertainty quantiï¬cation techniques, such as ensembles, to stabilize Q-functions [1, 36, 72]. In contrast, our model-based method does not rely on constraining the policy to the behavioral distribution, allowing the policy to potentially beneï¬t from taking actions outside of it. Furthermore, we utilize uncertainty quantiï¬cation to quantify the risk of leaving the behavioral distribution and trade it off with the gains of exploring diverse states.
Model-based Online RL. Our approach builds upon the wealth of prior work on model-based online RL methods that model the dynamics by Gaussian processes [12], local linear models [42, 38], neural network function approximators [15, 21, 14], and neural video prediction models [16, 32]. Our work is orthogonal to the choice of model. While prior approaches have used these models to select actions using planning [67, 17, 54, 51, 59, 70], we choose to build upon Dyna-style approaches that optimize for a policy [64, 66, 73, 32, 26, 28, 44], speciï¬cally MBPO [29]. See [71] for an empirical evaluation of several model-based RL algorithms. Uncertainty quantiï¬cation, a key ingredient to our approach, is critical to good performance in model-based RL both theoretically [63, 75, 44] and empirically [12, 7, 50, 39, 8], and in optimal control [62, 2, 34]. Unlike these works, we develop and leverage proper uncertainty estimates that particularly suit the ofï¬ine setting.
Concurrent work by Kidambi et al. [33] also develops an ofï¬ine model-based RL algorithm, MOReL. Unlike MOReL, which constructs terminating states based on a hard threshold on uncertainty, MOPO uses a soft reward penalty to incorporate uncertainty. In principle, a potential beneï¬t of a soft penalty is that the policy is allowed to take a few risky actions and then return to the conï¬dent area near the behavioral distribution without being terminated. Moreover, while Kidambi et al. [33] compares to model-free approaches, we make the further observation that even a vanilla model-based RL method outperforms model-free ones in the ofï¬ine setting, opening interesting questions for future investigation. Finally, we evaluate our approach on both standard benchmarks [18] and domains that require out-of-distribution generalization, achieving positive results in both.
# 3 Preliminaries
We consider the standard Markov decision process (MDP) M = (S,.A,T,r, 10, y), where S and A denote the state space and action space respectively, T(sâ | s, a) the transition dynamics, r(s, a) the reward function, jio the initial state distribution, and y ⬠(0,1) the discount factor. The goal in RL is to optimize a policy 7(a| s) that maximizes the expected discounted return nyy(7) := E oe) V7 (se, ae)]- The value function Vy;(s) := E. oe V'7 (se, a) | So = 8] gives the T,4 HO T, expected discounted return under 7 when starting from state s.
E oe) V7 (se, ae)]- The value function Vy;(s) := E. oe V'7 (se, a) | So = 8] gives the T,4 HO T, expected discounted return under 7 when starting from state s. In the offline RL problem, the algorithm only has access to a static dataset Deny = {(s,a,1, 8â) } collected by one or a mixture of behavior policies 7, and cannot interact further with the environment. We refer to the distribution from which Deny was sampled as the behavioral distribution.
We also introduce the following notation for the derivation in Section 4. In the model-based approach we will have a dynamics model T estimated from the transitions in Deny. This estimated dynamics defines a model MDP M = (S,A, Tyr, Ho, 7). Let PS (8) denote the probability of being in state s at time step ¢ if actions are sampled according to 7 and transitions according to T. Let p(s, a) be the discounted occupancy measure of policy 7 under dynamics T: pa(s, a) := 7(a|s) Dee9 7'PF ,(s)- Note that Pas as defined here, is not a properly normalized probability distribution, as it integrates to 1/(1 â y). We will denote (improper) expectations with respect to OF with E, as in n57(7) = Eye [r(s,a)].
We now summarize model-based policy optimization (MBPO) [29], which we build on in this work. MBPO learns a model of the transition distribution Ty (sâ|s, a) parametrized by 6, via supervised learning on the behavorial data Deny. MBPO also learns a model of the reward function in the same manner. During training, MBPO performs k-step rollouts using To(s! |s, a) starting from state
3
s â Denv, adds the generated data to a separate replay buffer Dmodel, and ï¬nally updates the policy Ï(a|s) using data sampled from Denv ⪠Dmodel. When applied in an online setting, MBPO iteratively collects samples from the environment and uses them to further improve both the model and the policy. In our experiments in Table 1, Table 5.2 and Table 1, we observe that MBPO performs surprisingly well on the ofï¬ine RL problem compared to model-free methods. In the next section, we derive MOPO, which builds upon MBPO to further improve performance.
# 4 MOPO: Model-Based Ofï¬ine Policy Optimization
Unlike model-free methods, our goal is to design an ofï¬ine model-based reinforcement learning algorithm that can take actions that are not strictly within the support of the behavioral distribution. Using a model gives us the potential to do so. However, models will become increasingly inaccurate further from the behavioral distribution, and vanilla model-based policy optimization algorithms may exploit these regions where the model is inaccurate. This concern is especially important in the ofï¬ine setting, where mistakes in the dynamics will not be corrected with additional data collection.
For the algorithm to perform reliably, itâs crucial to balance the return and risk: 1. the potential gain in performance by escaping the behavioral distribution and ï¬nding a better policy, and 2. the risk of overï¬tting to the errors of the dynamics at regions far away from the behavioral distribution. To achieve the optimal balance, we ï¬rst bound the return from below by the return of a constructed model MDP penalized by the uncertainty of the dynamics (Section 4.1). Then we maximize the conservative estimation of the return by an off-the-shelf reinforcement learning algorithm, which gives MOPO, a generic model-based off-policy algorithm (Section 4.2). We discuss important practical implementation details in Section 4.3.
# 4.1 Quantifying the uncertainty: from the dynamics to the total return
Our key idea is to build a lower bound for the expected return of a policy 7 under the true dynamics and then maximize the lower bound over 7. A natural estimator for the true return 7,7 (7) is nx7(7), the return under the estimated dynamics. The error of this estimator depends on, potentially in a complex fashion, the error of 17, which may compound over time. In this subsection, we characterize how the error of M influences the uncertainty of the total return. We begin by stating a lemma (adapted from [44]) that gives a precise relationship between the performance of a policy under dynamics T and dynamics Tâ. (All proofs are given in Appendix B.) Lemma 4.1 (Telescoping lemma). Let M and M be two MDPs with the same reward âfunction r, but different dynamics T and T respectively. Let G5(s,a) := ; E Mar(s))] -) gE ; Vile) s/n T(s,a s'XT(s,
Then,
ng) â mln) =, B [G%(s-a)| a) s,a)~p%,
As an immediate corollary, we have
nu(t)= E [r(s.a) - 7G (s,a)| > ( E . [r(s. a) - 1CH(s.a)]] (2) (s,a)~pz sa) <p
Here and throughout the paper, we view T as the real dynamics and T as the learned dynamics. We observe that the quantity GU(s, a) plays a key role linking the estimation error of the dynamics and the estimation error of the return. By definition, we have that GE(s, a) measures the difference between M and M under the test function V⢠â indeed, if M = M, then G=(s,a) = 0. By equation (1), it governs the differences between the performances of 7 in the two MDPs. If we could estimate GB(s, a) or bound it from above, then we could use the RHS of (1) as an upper bound for the estimation error of 7,4 (7). Moreover, equation (2) suggests that a policy that obtains high reward in the estimated MDP while also minimizing Ga will obtain high reward in the real MDP.
However, computing Go remains elusive because it depends on the unknown function V;,. Leverag- ing properties of Vi, we will replace Go by an upper bound that depends solely on the error of the
Vi, we will replace Go
4
dynamics T. We first note that if F is a set of functions mapping S to R that contains V;7,
M , then,
Goal Sop] BGI EVO dFo.a).200),
where d+ is the integral probability metric (IPM) [47] defined by F. IPMs are quite general and contain several other distance measures as special cases [61]. Depending on what we are willing to assume about Vj, there are multiple options to bound Go by some notion of error of T, discussed in greater detail in Appendix A:
(@) If F = {f : ||f loo < 1}, then dz is the total variation distance. Thus, if we assume that the reward function is bounded such that V(s, a), |r(s,@)| < Tmax, we have ||V||oo < 329 7âTmax = a and hence
|OFq(s-4)| < [= Dwv(P(s,a),T(s,0)) (4)
(ii) If F is the set of 1-Lipschitz function w.r.t. to some distance metric, then d+ is the 1-Wasserstein distance w.r.t. the same metric. Thus, if we assume that V7, is L,-Lipschitz with respect to a norm || - ||, it follows that
|G3(s,a)| < L,Wi(P(s,a),T(s,a)) (5)
Note that when 7 and T are both deterministic, then W;(T(s, a), T(s,a)) = ||F(s, a) â T(s,a)|| (here T'(s, a) denotes the deterministic output of the model T).
Approach (ii) has the advantage that it incorporates the geometry of the state space, but at the cost of an additional assumption which is generally impossible to verify in our setting. The assumption in (i), on the other hand, is extremely mild and typically holds in practice. Therefore we will prefer (i) unless we have some prior knowledge about the MDP. We summarize the assumptions and the inequalities in the options above as follows. Assumption 4.2. Assume a scalar c and a function class F such that V Ï
As a direct corollary of Assumption 4.2 and equation (3), we have
|GF(s, a)| < edz(T(s, a), T(s, a). (6)
Concretely, option (i) above corresponds to ¢ = rmax/(1 ây) and F = {f : ||f|]oo < 1}, and option (ii) corresponds to c = L, and F = {f : f is 1-Lipschitz}. We will analyze our framework under the assumption that we have access to an oracle uncertainty quantification module that provides an upper bound on the error of the model. In our implementation, we will estimate the error of the dynamics by heuristics (see sections 4.3 and D). Assumption 4.3. Let F be the function class in Assumption 4.2. We say w : Sx A-â Ris an admissible error estimator for Tâ if d-(T'(s,a),T(s,@)) < u(s, a) forall s ⬠S,a ⬠A?
Given an admissible error estimator, we define the uncertainty-penalized reward F(s,a) := r(s,a) â Au(s,a) where X := yc, and the uncertainty-penalized MDP M = (S,.A,T,7, j10, 7). We observe that / is conservative in that the return under it bounds from below the true return:
mi(m)> EB [r(s,a)âGH(s,@)l] = E Ir(s,a) â Aus, a) (s.a)~p% (s,a)~p% (by equation (2) and (6)) = 15 (7) (7) Vv feo] or =~ % =
# 4.2 Policy optimization on uncertainty-penalized MDPs
Motivated by (7), we optimize the policy on the uncertainty-penalized MDP Min Algorithm 1.
2The deï¬nition here extends the deï¬nition of admissible conï¬dence interval in [63] slightly to the setting of stochastic dynamics.
5
# Algorithm 1 Framework for Model-based Ofï¬ine Policy Optimization (MOPO) with Reward Penalty
Require: Dynamics model T with admissible error estimator u(s, a); constant A. 1: Define 7(s, a) = r(s,a) â Au(s,a). Let M be the MDP with dynamics T and reward 7. 2: Run any RL algorithm on M until convergence to obtain 7 = argmax,,n;7 (7)
(7)
Theoretical Guarantees for MOPO. We will theoretical analyze the algorithm by establishing the optimality of the learned policy 7 among a family of policies. Let 7* be the optimal policy on M and 7® be the policy that generates the batch data. Define â¬,,(77) as
eu(m):= E _[u(s,a)] (8) (s,a)~eF
Note that â¬,, depends on T, but we omit this dependence in the notation for simplicity. We observe that â¬,,(7) characterizes how erroneous the model is along trajectories induced by 7. For example, consider the extreme case when 7 = 7°. Because T is learned on the data generated from 7®, we ~ : B expect T' to be relatively accurate for those (s, a) ~ PRs and thus u(s, a) tends to be small. Thus, we expect â¬,,(7) to be quite small. On the other end of the spectrum, when 7 often visits states out of the batch data distribution in the real MDP, namely p7 is different from ph we expect that pz is even more different from the batch data and therefore the error estimates u(s, a) for those (s,a) ~ p% tend to be large. As a consequence, we have that â¬u(7) will be large.
For 6 > dmin := min; â¬,(7), let 7 be the best policy among those incurring model error at most 6:
w := argmax q(7) (9) mien (WSS
The main theorem provides a performance guarantee on the policy ËÏ produced by MOPO. Theorem 4.4. Under Assumption 4.2 and 4.3, the learned policy ËÏ in MOPO (Algorithm 1) satisï¬es
nu (mt) > sup{na (7) â 2â¬u(z) (10)
In particular, for all δ ⥠δmin,
ηM (ËÏ) ⥠ηM (Ïδ) â 2λδ (11)
Interpretation: One consequence of (10) is that naz(7) > nar(7®) â 2Aâ¬. (7). This suggests that it should perform at least as well as the behavior policy 7®, because, as argued before, â¬,,(7) is expected to be small.
Equation (11) tells us that the learned policy 7 can be as good as any policy 7 with â¬,,(7) < 6, or in other words, any policy that visits states with sufficiently small uncertainty as measured by u(s, a). A special case of note is when 6 = â¬,,(7*), we have n(7) > nar(m*) â 2Ae,,(7*), which suggests that the suboptimality gap between the learned policy 7 and the optimal policy 7* depends on the error â¬,(7*). The closer pe is to the batch data, the more likely the uncertainty u(s, a) will be smaller on those points (s,a) ~ pf - On the other hand, the smaller the uncertainty error of the dynamics is, the smaller â¬,,(7*) is. In the extreme case when u(s, a) = 0 (perfect dynamics and uncertainty quantification), we recover the optimal policy 7â.
Second, by varying the choice of δ to maximize the RHS of Equation (11), we trade off the risk and the return. As δ increases, the return ηM (Ïδ) increases also, since Ïδ can be selected from a larger set of policies. However, the risk factor 2λδ increases also. The optimal choice of δ is achieved when the risk balances the gain from exploring policies far from the behavioral distribution. The exact optimal choice of δ may depend on the particular problem. We note δ is only used in the analysis, and our algorithm automatically achieves the optimal balance because Equation (11) holds for any δ.
# 4.3 Practical implementation
Now we describe a practical implementation of MOPO motivated by the analysis above. The method is summarized in Algorithm 2 in Appendix C, and largely follows MBPO with a few key exceptions.
6
Following MBPO, we model the dynamics using a neural network that outputs a Gaussian distribution over the next state and rewardâ: To 4(si41,7|S1, a0) = N(uo(se, az), Ua(se,a2)). We learn an ensemble of N dynamics models {Ti = (ui, &i,) }i1, with each model trained independently via maximum likelihood.
The most important distinction from MBPO is that we use uncertainty quantification following the analysis above. We aim to design the uncertainty estimator that captures both the epistemic and aleatoric uncertainty of the true dynamics. Bootstrap ensembles have been shown to give a consistent estimate of the population mean in theory [5] and empirically perform well in model-based RL [7]. Meanwhile, the learned variance of a Gaussian probabilistic model can theoretically recover the true aleatoric uncertainty when the model is well-specified. To leverage both, we design our error estimator u(s,a) = max)â, ||5/,(s, @)||r, the maximum standard deviation of the learned models in the ensemble. We use the maximum of the ensemble elements rather than the mean to be more conservative and robust. While this estimator lacks theoretical guarantees, we find that it is sufficiently accurate to achieve good performance in practice.* Hence the practical uncertainty-penalized reward of MOPO is computed as 7(s, a) = f(s,a) â Amax;=1,....N D5,(s, a)||p where 7 is the mean of the predicted reward output by T. We treat the penalty coefficient \ as a user-chosen hyperparameter. Since we do not have a true admissible error estimator, the value of prescribed by the theory may not be an optimal choice in practice; it should be larger if our heuristic u(s, a) underestimates the true error and smaller if u substantially overestimates the true error.
# 5 Experiments
In our experiments, we aim to study the follow questions: (1) How does MOPO perform on standard ofï¬ine RL benchmarks in comparison to prior state-of-the-art approaches? (2) Can MOPO solve tasks that require generalization to out-of-distribution behaviors? (3) How does each component in MOPO affect performance?
Question (2) is particularly relevant for scenarios in which we have logged interactions with the environment but want to use those data to optimize a policy for a different reward function. To study (2) and challenge methods further, we construct two additional continuous control tasks that demand out-of-distribution generalization, as described in Section 5.2. To answer question (3), we conduct a complete ablation study to analyze the effect of each module in MOPO in Appendix D. For more details on the experimental set-up and hyperparameters, see Appendix G. For more details on the experimental set-up and hyperparameters, see Appendix G. The code is available online5.
We compare against several baselines, including the current state-of-the-art model-free ofï¬ine RL algorithms. Bootstrapping error accumulation reduction (BEAR) aims to constrain the policyâs actions to lie in the support of the behavioral distribution [36]. This is implemented as a constraint on the average MMD [23] between Ï(· | s) and a generative model that approximates ÏB(· | s). Behavior- regularized actor critic (BRAC) is a family of algorithms that operate by penalizing the value function by some measure of discrepancy (KL divergence or MMD) between Ï(· | s) and ÏB(· | s) [72]. BRAC- v uses this penalty both when updating the critic and when updating the actor, while BRAC-p uses this penalty only when updating the actor and does not explicitly penalize the critic.
# 5.1 Evaluation on the D4RL benchmark
To answer question (1), we evaluate our method on a large subset of datasets in the D4RL benchmark [18] based on the MuJoCo simulator [69], including three environments (halfcheetah, hopper, and walker2d) and four dataset types (random, medium, mixed, medium-expert), yielding a total of
3Tf the reward function is known, we do not have to estimate the reward. The theory in Sections 4.1 and 4.2 applies to the case where the reward function is known. To extend the theory to an unknown reward function, we can consider the reward as being concatenated onto the state, so that the admissible error estimator bounds the error on (sâ,r), rather than just sâ.
4Designing prediction conï¬dence intervals with strong theoretical guarantees is challenging and beyond the scope of this work, which focuses on using uncertainty quantiï¬cation properly in ofï¬ine RL. 5Code is released at https://github.com/tianheyu927/mopo.
7
Dataset type Environment BC MOPO (ours) MBPO SAC BEAR BRAC-v random random random medium medium medium mixed mixed mixed med-expert med-expert med-expert halfcheetah hopper walker2d halfcheetah hopper walker2d halfcheetah hopper walker2d halfcheetah hopper walker2d 2.1 1.6 9.8 36.1 29.0 6.6 38.4 11.8 11.3 35.8 111.9 6.4 35.4 ± 2.5 11.7 ± 0.4 13.6 ± 2.6 42.3 ± 1.6 28.0 ± 12.4 17.8 ± 19.3 53.1 ± 2.0 67.5 ± 24.7 39.0 ± 9.6 63.3 ±38.0 23.7 ± 6.0 44.6 ± 12.9 30.7 ± 3.9 4.5 ± 6.0 8.6 ± 8.1 28.3 ± 22.7 4.9 ± 3.3 12.7 ± 7.6 47.3 ± 12.6 49.8 ± 30.4 22.2 ± 12.7 9.7 ± 9.5 56.0 ± 34.5 7.6 ± 3.7 30.5 11.3 4.1 -4.3 0.8 0.9 -2.4 1.9 3.5 1.8 1.6 -0.1 25.5 9.5 6.7 38.6 47.6 33.2 36.2 10.8 25.3 51.7 4.0 26.0 28.1 12.0 0.5 45.5 32.3 81.3 45.9 0.9 0.8 45.3 0.8 66.6
Table 1: Results for D4RL datasets. Each number is the normalized score proposed in [18] of the policy at the last iteration of training, averaged over 6 random seeds, ± standard deviation. The scores are undiscounted average returns normalized to roughly lie between 0 and 100, where a score of 0 corresponds to a random policy, and 100 corresponds to an expert. We include the performance of behavior cloning (BC) from the batch data for comparison. Numbers for model-free methods taken from [18], which does not report standard deviation. We omit BRAC-p in this table for space because BRAC-v obtains higher performance in 10 of these 12 tasks and is only slightly weaker on the other two. We bold the highest mean.
12 problem settings. We also perform empirical evaluations on non-MuJoCo environments in Appendix F. The datasets in this benchmark have been generated as follows: random: roll out a randomly initialized policy for 1M steps. medium: partially train a policy using SAC, then roll it out for 1M steps. mixed: train a policy using SAC until a certain (environment-speciï¬c) performance threshold is reached, and take the replay buffer as the batch. medium-expert: combine 1M samples of rollouts from a fully-trained policy with another 1M samples of rollouts from a partially trained policy or a random policy.
Results are given in Table 1. MOPO is the strongest by a signiï¬cant margin on all the mixed datasets and most of the medium-expert datasets, while also achieving strong performance on all of the random datasets. MOPO performs less well on the medium datasets. We hypothesize that the lack of action diversity in the medium datasets make it more difï¬cult to learn a model that generalizes well. Fortunately, this setting is one in which model-free methods can perform well, suggesting that model-based and model-free approaches are able to perform well in complementary settings.
# 5.2 Evaluation on tasks requiring out-of-distribution generalization
To answer question (2), we construct two environments halfcheetah-jump and ant-angle where the agent must solve a task that is different from the purpose of the behavioral policy. The trajectories of the batch data in the these datasets are from policies trained for the original dynamics and reward functions HalfCheetah and Ant in OpenAI Gym [6] which incentivize the cheetach and ant to move forward as fast as possible. Note that for HalfCheetah, we set the maximum velocity to be 3. Concretely, we train SAC for 1M steps and use the entire training replay buffer as the trajectories for the batch data. Then, we assign these trajectories with new rewards that incentivize the cheetach to jump and the ant to run towards the top right corner with a 30 degree angle. Thus, to achieve good performance for the new reward functions, the policy need to leave the observational distribution, as visualized in Figure 2. We include the exact forms of the new reward functions in Appendix G. In these environments, learning the correct behaviors requires leaving the support of the data distribution; optimizing solely within the data manifold will lead to sub-optimal policies.
In Table 2, we show that MOPO signiï¬cantly outperforms the state-of-the-art model-free approaches. In particular, model-free ofï¬ine RL cannot outperform the best trajectory in the batch dataset, whereas MOPO exceeds the batch max by a signiï¬cant margin. This validates that MOPO is able to generalize to out-of-distribution behaviors while existing model-free methods are unable to solve those challenges. Note that vanilla MBPO performs much better than SAC in the two environments, consolidating our claim that vanilla model-based methods can attain better results than model-free methods in the ofï¬ine setting, especially where generalization to out-of-distribution is needed. The visualization in Figure 2 suggests indeed the policy learned MOPO can effectively solve the tasks by reaching to states unseen in the batch data. Furthermore, we test the limit of the generalization abilities of MOPO in these environments and the results are included in Appendix E.
8
Batch data Learned policy x x x ' ; ' ' ' ' â H t-0 t=50 t=100 ' ' t ' ' : ' ' ' { x re Â¥ t=0 t=50) t=100 > >| t=0 t=50 t=100 t=0 t=50 t=100
Figure 2: We visualize the two out-of-distribution generalization environments halfcheetah-jump (bottom row) and ant-angle (top row). We show the training environments that generate the batch data on the left. On the right, we show the test environments where the agents perform behaviors that require the learned policies to leave the data support. In halfcheetah-jump, the agent is asked to run while jumping as high as possible given an training ofï¬ine dataset of halfcheetah running. In ant-angle, the ant is rewarded for running forward in a 30 degree angle and the corresponding training ofï¬ine dataset contains data of the ant running forward directly.
Environment Batch Mean Batch Max MOPO (ours) MBPO SAC BEAR BRAC-p BRAC-v halfcheetah-jump ant-angle -1022.6 866.7 1808.6 2311.9 4016.6±144 2530.9±137 -3588.2±1436 -966.4±778 16.8±60 1658.2±16 1069.9±232 1806.7±265 871±41 2333±139
2971.4±1262 13.6±66 Table 2: Average returns halfcheetah-jump and ant-angle that require out-of-distribution policy. The MOPO results are averaged over 6 random seeds, ± standard deviation, while the results of other methods are averaged over 3 random seeds. We include the mean and max undiscounted return of the episodes in the batch data (under Batch Mean and Batch Max, respectively) for comparison. Note that Batch Mean and Max are signiï¬cantly lower than on-policy SAC, suggesting that the behaviors stored in the buffers are far from optimal and the agent needs to go beyond the data support in order to achieve better performance. As shown in the results, MOPO outperforms all the baselines by a large margin, indicating that MOPO is effective in generalizing to out-of-distribution states where model-free ofï¬ine RL methods struggle.
# 6 Conclusion
In this paper, we studied model-based ofï¬ine RL algorithms. We started with the observation that, in the ofï¬ine setting, existing model-based methods signiï¬cantly outperform vanilla model-free methods, suggesting that model-based methods are more resilient to the overestimation and overï¬tting issues that plague off-policy model-free RL algorithms. This phenomenon implies that model-based RL has the ability to generalize to states outside of the data support and such generalization is conducive for ofï¬ine RL. However, online and ofï¬ine algorithms must act differently when handling out-of-distribution states. Model error on out-of-distribution states that often drives exploration and corrective feedback in the online setting [37] can be detrimental when interaction is not allowed. Using theoretical principles, we develop an algorithm, model-based ofï¬ine policy optimization (MOPO), which maximizes the policy on a MDP that penalizes states with high model uncertainty. MOPO trades off the risk of making mistakes and the beneï¬t of diverse exploration from escaping the behavioral distribution. In our experiments, MOPO outperforms state-of-the-art ofï¬ine RL methods in both standard benchmarks [18] and out-of-distribution generalization environments.
Our work opens up a number of questions and directions for future work. First, an interesting avenue for future research to incorporate the policy regularization ideas of BEAR and BRAC into the reward penalty framework to improve the performance of MOPO on narrow data distributions (such as the âmediumâ datasets in D4RL). Second, itâs an interesting theoretical question to understand why model-based methods appear to be much better suited to the batch setting than model-free methods. Multiple potential factors include a greater supervision from the states (instead of only the reward), more stable and less noisy supervised gradient updates, or ease of uncertainty estimation. Our work suggests that uncertainty estimation plays an important role, particularly in settings that demand generalization. However, uncertainty estimation does not explain the entire difference nor does it explain why model-free methods cannot also enjoy the beneï¬ts of uncertainty estimation. For those domains where learning a model may be very difï¬cult due to complex dynamics, developing better model-free ofï¬ine RL methods may be desirable or imperative. Hence, it is crucial to conduct future research on investigating how to bring model-free ofï¬ine RL methods up to the level of the performance of model-based methods, which would require further understanding where the generalization beneï¬ts come from.
9
# Broader Impact
MOPO achieves signiï¬cant strides in ofï¬ine reinforcement learning, a problem setting that is par- ticularly scalable to real-world settings. Ofï¬ine reinforcement learning has a number of potential application domains, including autonomous driving, healthcare, robotics, and is notably amenable to safety-critical settings where online data collection is costly. For example, in autonomous driving, online interaction with the environment runs the risk of crashing and hurting people; ofï¬ine RL methods can signiï¬cantly reduce that risk by learning from a pre-recorded driving dataset collected by a safe behavioral policy. Moreover, our work opens up the possibility of learning policies ofï¬ine for new tasks for which we do not already have expert data.
However, there are still risks associated with applying learned policies to high-risk domains. We have shown the beneï¬ts of explicitly accounting for error, but without reliable out-of-distribution uncertainty estimation techniques, there is a possibility that the policy will behave unpredictably when given a scenario it has not encountered. There is also the challenge of reward design: although the reward function will typically be under the engineerâs control, it can be difï¬cult to specify a reward function that elicits the desired behavior and is aligned with human objectives. Additionally, parametric models are known to be susceptible to adversarial attacks, and bad actors can potentially exploit this vulnerability. Advances in uncertainty quantiï¬cation, human-computer interaction, and robustness will improve our ability to apply learning-based methods in safety-critical domains.
Supposing we succeed at producing safe and reliable policies, there is still possibility of negative societal impact. An increased ability to automate decision-making processes may reduce companiesâ demand for employees in certain industries (e.g. manufacturing and logistics), thereby affecting job availability. However, historically, advances in technology have also created new jobs that did not previously exist (e.g. software engineering), and it is unclear if the net impact on jobs will be positive or negative.
Despite the aforementioned risks and challenges, we believe that ofï¬ine RL is a promising setting with enormous potential for automating and improving sequential decision-making in highly impactful domains. Currently, much additional work is needed to make ofï¬ine RL sufï¬ciently robust to be applied in safety-critical settings. We encourage the research community to pursue further study in uncertainty estimation, particularly considering the complications that arise in sequential decision problems.
# Acknowledgments and Disclosure of Funding
We thank Michael Janner for help with MBPO and Aviral Kumar for setting up BEAR and D4RL. TY is partially supported by Intel Corporation. CF is a CIFAR Fellow in the Learning in Machines and Brains program. TM and GT are also partially supported by Lam Research, Google Faculty Award, SDSI, and SAIL.
# References
[1] Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. Striving for simplicity in off-policy deep reinforcement learning. arXiv preprint arXiv:1907.04543, 2019.
[2] Andrzej Banaszuk, Vladimir A Fonoberov, Thomas A Frewen, Marin Kobilarov, George Mathew, Igor Mezic, Alessandro Pinto, Tuhin Sahai, Harshad Sane, Alberto Speranzon, et al. Scalable approach to uncertainty quantiï¬cation and robust design of interconnected dynamical systems. Annual Reviews in Control, 35(1):77â98, 2011.
[3] Andrew G Barto, Richard S Sutton, and Charles W Anderson. Neuronlike adaptive elements that can solve difï¬cult learning control problems. IEEE transactions on systems, man, and cybernetics, (5):834â846, 1983.
[4] Emmanuel Bengio, Joelle Pineau, and Doina Precup. Interference and generalization in temporal difference learning. arXiv preprint arXiv:2003.06350, 2020.
[5] Peter J Bickel and David A Freedman. Some asymptotic theory for the bootstrap. The annals of statistics, pages 1196â1217, 1981.
10
[6] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
[7] Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pages 4754â4765, 2018.
[8] Ignasi Clavera, Jonas Rothfuss, John Schulman, Yasuhiro Fujita, Tamim Asfour, and Pieter Abbeel. Model-based reinforcement learning via meta-policy optimization. arXiv preprint arXiv:1809.05214, 2018.
[9] Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, and John Schulman. Quantifying generalization in reinforcement learning. arXiv preprint arXiv:1812.02341, 2018.
[10] Sudeep Dasari, Frederik Ebert, Stephen Tian, Suraj Nair, Bernadette Bucher, Karl Schmeckpeper, Siddharth Singh, Sergey Levine, and Chelsea Finn. Robonet: Large-scale multi-robot learning. arXiv preprint arXiv:1910.11215, 2019.
[11] Thomas Degris, Martha White, and Richard S Sutton. Off-policy actor-critic. arXiv preprint arXiv:1205.4839, 2012.
[12] Marc Deisenroth and Carl E Rasmussen. Pilco: A model-based and data-efï¬cient approach to policy search. In Proceedings of the 28th International Conference on machine learning (ICML-11), pages 465â472, 2011.
[13] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large- scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248â255. Ieee, 2009.
[14] Stefan Depeweg, José Miguel Hernández-Lobato, Finale Doshi-Velez, and Steffen Udluft. Learning and policy search in stochastic dynamical systems with bayesian neural networks. arXiv preprint arXiv:1605.07127, 2016.
[15] Andreas Draeger, Sebastian Engell, and Horst Ranke. Model predictive control using neural networks. IEEE Control Systems Magazine, 15(5):61â66, 1995.
[16] Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine. Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568, 2018.
[17] Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2786â2793. IEEE, 2017.
[18] Justin Fu, Aviral Kumar, Oï¬r Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning, 2020.
[19] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. arXiv preprint arXiv:1812.02900, 2018.
[20] Scott Fujimoto, Herke Van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477, 2018.
[21] Yarin Gal, Rowan McAllister, and Carl Edward Rasmussen. Improving pilco with bayesian In Data-Efï¬cient Machine Learning workshop, ICML, neural network dynamics models. volume 4, page 34, 2016.
[22] Omer Gottesman, Fredrik Johansson, Matthieu Komorowski, Aldo Faisal, David Sontag, Finale Doshi-Velez, and Leo Anthony Celi. Guidelines for reinforcement learning in healthcare. Nat Med, 25(1):16â18, 2019.
[23] Arthur Gretton, Karsten M. Borgwardt, Malte Rasch, Bernhard Scholköpf, and Alexander J. Smola. A kernel approach to comparing distributions. In Proceedings of the 22nd National Conference on Artiï¬cial Intelligence - Volume 2, AAAIâ07, page 1637â1641. AAAI Press, 2007. ISBN 9781577353232.
[24] Shixiang Gu, Timothy Lillicrap, Zoubin Ghahramani, Richard E Turner, and Sergey Levine. Q-prop: Sample-efï¬cient policy gradient with an off-policy critic. arXiv preprint arXiv:1611.02247, 2016.
11
[25] Shixiang Shane Gu, Timothy Lillicrap, Richard E Turner, Zoubin Ghahramani, Bernhard Schölkopf, and Sergey Levine. Interpolated policy gradient: Merging on-policy and off- policy gradient estimation for deep reinforcement learning. In Advances in neural information processing systems, pages 3846â3855, 2017.
[26] David Ha and Jürgen Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018.
[27] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off- policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
[28] G Zacharias Holland, Erin J Talvitie, and Michael Bowling. The effect of planning shape on dyna-style planning in high-dimensional state spaces. arXiv preprint arXiv:1806.01825, 2018.
[29] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, pages 12498â12509, 2019.
[30] Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456, 2019.
[31] Nan Jiang and Lihong Li. Doubly robust off-policy value evaluation for reinforcement learning. arXiv preprint arXiv:1511.03722, 2015.
[32] Lukasz Kaiser, Mohammad Babaeizadeh, Piotr Milos, Blazej Osinski, Roy H Campbell, Konrad Czechowski, Dumitru Erhan, Chelsea Finn, Piotr Kozakowski, Sergey Levine, Afroz Mohiuddin, Ryan Sepassi, George Tucker, and Henryk Michalewski. Model-based reinforcement learning for atari, 2019.
[33] Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. Morel: Model-based ofï¬ine reinforcement learning. arXiv preprint arXiv:2005.05951, 2020.
[34] Kwang-Ki K Kim, Dongying Erin Shen, Zoltan K Nagy, and Richard D Braatz. Wienerâs polynomial chaos for the analysis and control of nonlinear dynamical systems with probabilistic uncertainties [historical perspectives]. IEEE Control Systems Magazine, 33(5):58â67, 2013.
[35] Volodymyr Kuleshov, Nathan Fenner, and Stefano Ermon. Accurate uncertainties for deep learning using calibrated regression. arXiv preprint arXiv:1807.00263, 2018.
[36] Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy q-learning via bootstrapping error reduction. In Advances in Neural Information Processing Systems, pages 11761â11771, 2019.
[37] Aviral Kumar, Abhishek Gupta, and Sergey Levine. Discor: Corrective feedback in reinforce- ment learning via distribution correction. arXiv preprint arXiv:2003.07305, 2020.
[38] Vikash Kumar, Emanuel Todorov, and Sergey Levine. Optimal control with learned local models: Application to dexterous manipulation. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 378â383. IEEE, 2016.
[39] Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592, 2018.
[40] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in neural information processing systems, pages 6402â6413, 2017.
[41] Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. Reinforcement learning, pages 45â73. Springer, 2012. In
[42] Sergey Levine and Vladlen Koltun. Guided policy search. In International Conference on Machine Learning, pages 1â9, 2013.
[43] Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[44] Yuping Luo, Huazhe Xu, Yuanzhi Li, Yuandong Tian, Trevor Darrell, and Tengyu Ma. Algo- rithmic framework for model-based deep reinforcement learning with theoretical guarantees. arXiv preprint arXiv:1807.03858, 2018.
12
[45] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. arXiv preprint arXiv:1802.05957, 2018.
[46] Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lilli- crap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pages 1928â1937, 2016.
[47] Alfred Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29(2):429â443, 1997.
[48] Rémi Munos, Tom Stepleton, Anna Harutyunyan, and Marc Bellemare. Safe and efï¬cient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pages 1054â1062, 2016.
[49] Oï¬r Nachum, Bo Dai, Ilya Kostrikov, Yinlam Chow, Lihong Li, and Dale Schuurmans. Al- gaedice: Policy gradient from arbitrary experience. arXiv preprint arXiv:1912.02074, 2019.
[50] Anusha Nagabandi, Kurt Konoglie, Sergey Levine, and Vikash Kumar. Deep dynamics models for learning dexterous manipulation. arXiv preprint arXiv:1909.11652, 2019.
[51] Junhyuk Oh, Satinder Singh, and Honglak Lee. Value prediction network. In Advances in Neural Information Processing Systems, pages 6118â6128, 2017.
[52] Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019. [53] Doina Precup, Richard S Sutton, and Sanjoy Dasgupta. Off-policy temporal-difference learning
with function approximation. In ICML, pages 417â424, 2001.
[54] Sébastien Racanière, Théophane Weber, David Reichert, Lars Buesing, Arthur Guez, Danilo Jimenez Rezende, Adria Puigdomenech Badia, Oriol Vinyals, Nicolas Heess, Yujia Li, et al. Imagination-augmented agents for deep reinforcement learning. In Advances in neural information processing systems, pages 5690â5701, 2017.
[55] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
[56] John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International conference on machine learning, pages 1889â1897, 2015. [57] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal
policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[58] Noah Y Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, and Martin Riedmiller. Keep doing what worked: Behavioral modelling priors for ofï¬ine reinforcement learning. arXiv preprint arXiv:2002.08396, 2020.
[59] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, et al. The predictron: End-to- end learning and planning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 3191â3199. JMLR. org, 2017.
[60] Jasper Snoek, Yaniv Ovadia, Emily Fertig, Balaji Lakshminarayanan, Sebastian Nowozin, D Sculley, Joshua Dillon, Jie Ren, and Zachary Nado. Can you trust your modelâs uncertainty? In Advances in Neural Information evaluating predictive uncertainty under dataset shift. Processing Systems, pages 13969â13980, 2019.
[61] Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert RG Lanckriet. On integral probability metrics,\phi-divergences and binary classiï¬cation. arXiv preprint arXiv:0901.2698, 2009.
[62] Robert F Stengel. Optimal control and estimation. Courier Corporation, 1994. [63] Alexander L Strehl and Michael L Littman. An analysis of model-based interval estimation for markov decision processes. Journal of Computer and System Sciences, 74(8):1309â1331, 2008. [64] Richard S Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM
Sigart Bulletin, 2(4):160â163, 1991.
[65] Richard S Sutton and Andrew G Barto. Reinforcement learning, 1998.
13
[66] Richard S Sutton, Csaba Szepesvári, Alborz Geramifard, and Michael P Bowling. Dyna- style planning with linear function approximation and prioritized sweeping. arXiv preprint arXiv:1206.3285, 2012.
[67] Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value iteration networks. In Advances in Neural Information Processing Systems, pages 2154â2162, 2016.
[68] Philip S Thomas. Safe reinforcement learning. PhD thesis, University of Massachusetts Libraries, 2015.
[69] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026â5033. IEEE, 2012.
[70] Tingwu Wang and Jimmy Ba. Exploring model-based planning with policy networks. arXiv preprint arXiv:1906.08649, 2019.
[71] Tingwu Wang, Xuchan Bao, Ignasi Clavera, Jerrick Hoang, Yeming Wen, Eric Langlois, Shunshi Zhang, Guodong Zhang, Pieter Abbeel, and Jimmy Ba. Benchmarking model-based reinforcement learning. arXiv preprint arXiv:1907.02057, 2019.
[72] Yifan Wu, George Tucker, and Oï¬r Nachum. Behavior regularized ofï¬ine reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.
[73] Hengshuai Yao, Shalabh Bhatnagar, Dongcui Diao, Richard S Sutton, and Csaba Szepesvári. Multi-step dyna planning for policy evaluation and control. In Advances in neural information processing systems, pages 2187â2195, 2009.
[74] Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687, 2018.
[75] Andrea Zanette and Emma Brunskill. Tighter problem-dependent regret bounds in reinforcement learning without domain knowledge using value function bounds.
[76] Amy Zhang, Nicolas Ballas, and Joelle Pineau. A dissection of overï¬tting and generalization in continuous reinforcement learning. arXiv preprint arXiv:1806.07937, 2018.
14
# Appendix
# A Reminders about integral probability metrics
Let (X , Σ) be a measurable space. The integral probability metric associated with a class F of (measurable) real-valued functions on X is deï¬ned as
[ tar- [100] = 00,8, FOXY - EF) Y~Q (P,Q) = sup SEF
where P and Q are probability measures on X . We note the following special cases:
(i) If F = {f : ||flloo < 1}, then df is the total variation distance
dF (P, Q) = DTV(P, Q) := sup AâΣ |P (A) â Q(A)|
(ii) If F is the set of 1-Lipschitz function w.r.t. to some cost function (metric) c on X , then dF is the 1-Wasserstein distance w.r.t. the same metric:
dF (P, Q) = W1(P, Q) := inf γâÎ(P,Q) X 2 c(x, y) dγ(x, y)
where Î(P, Q) denotes the set of all couplings of P and Q, i.e. joint distributions on X 2 which have marginals P and Q.
(iii) If F = {f : ||fllz2 < 1} where H is a reproducing kernel Hilbert space with kernel k, then d is the maximum mean discrepancy:
dx(P,Q) =MMD(P,Q) := /E|K(X, X)] â 2E[K(X, Y)] + EK. Y)]
# where X,Xâ ~~ PandY,Yâ~Q.
In the context of Section 4.1, we have (at least) the following instantiations of Assumption 4.2:
(i) Assume the reward is bounded by rmax-
# Tmax
IGG(s,a)| S Ty? v(L(s, 4e* and F = {f : ||flloo < 1}.
IGG(s,a)| S Ty? v(L(s, a), T(s, a)
This corresponds to c = rmax
# (ii) Assume V Ï
# M is Lv-Lipschitz. Then
(ii) Assume Vj; is L,-Lipschitz. Then
|G(s,a)| < LyWi(T T(s,a),T(s,a)) This corresponds to c = L,, and F = {f : f is 1-Lipschitz}.
(iii) Assume ||Vx7 ||. < v. Then
# ||. < v. Then
|G(s,@)| < vMMD(Z(s,a),T(s,a))
This corresponds to c = v and F = {f : ||f|la < 1}.
# B Proofs
We provide a proof for Lemma 4.1 for completeness. The proof is essentially the same as that for [44, Lemma 4.3].
Proof. Let W; be the expected return when executing 7 on T for the first j steps, then switching to T for the remainder. That is,
Wi= E Serre 3) ae~m (st) t<j:segi~T(St,02) t>j:se41~T (st,at)
15
Note that Wo = nar(7) and W. = nxz(7), so
oo ng) â 9m) = SO(Wyar â W5) j=0
# η
Write
W=R+ Bo] Bb Pv) s5,aj~7,T |$j+1>T (st ,44) Wjii=Rj+ E sj,aj0n,T E baton 854107 (s1,a2)
where R; is the expected return of the first j time steps, which are taken with respect to T. Then
Wj -Wj=7t' E 8),ajnn,T =y" EB _ fat (sa) 8j,aj~n,T E [Vi(s')|-_ E wa] s'~T(s;,0;) s!\T(8},a;)
Thus
oo ng (â¢) â au (7) = SO(Wi4t â Wj) j=0 oo . =o EB fee (s;,a;) = E Z(s, Voaeps [Ca(s, a)|
# η
as claimed.
Now we prove Theorem 4.2.
Proof. We ï¬rst note that a two-sided bound follows from Lemma 4.1:
nar) âmo( <7, BIGFels al A,B lulsal] =Aealn) 12)
Then we have, for any policy Ï,
nu (*) = naz (*) (by (7) > nay (7) (by definition of 7) = ngg(â¢) â Aâ¬u (7) 2 mu (â¢) â 2râ¬u(â¢) (by (12))
# C MOPO Practical Algorithm Outline
We outline the practical MOPO algorithm in Algorithm 2.
# D Ablation Study
To answer question (3), we conduct a thorough ablation study on MOPO. The main goal of the ablation study is to understand how the choice of reward penalty affects performance. We denote no ens. as a method without model ensembles, ens. pen. as a method that uses model ensemble disagreement as the reward penalty, no pen. as a method without reward penalty, and true pen. as
16
Algorithm 2 MOPO instantiation with regularized probabilistic dynamics and ensemble uncertainty Require: reward penalty coefï¬cient λ rollout horizon h, rollout batch size b. 1: Train on batch data Denv N (µi(s, a), Σi(s, a))}N
i=1. 2: Initialize policy Ï and empty replay buffer Dmodel â â
. 3: for epoch 1, 2, . . . do 4: 5: 6: 7: 8: 9: 10: 11:
3: for epoch 1,2,... do > This for-loop is essentially one outer iteration of MBPO
Sample state s1 from Denv for the initialization of the rollout. for j = 1, 2, . . . , h do
Sample an action a; ~ 7(s;). Randomly pick dynamics T from {tr N, Compute #; = rjâA max, ||E"(s;, Add sample (sj, aj, 7, $;41) to Dmodet-
Randomly pick dynamics T from {tr N, and sample s;41,7j ~ T(s; aj).
||E"(s;, 4;)|lr-
Drawing samples from Denv ⪠Dmodel, use SAC to update Ï.
a method using the true model prediction error ||T(s,a) â T(s, a)|| as the reward penalty. Note that we include true pen. to indicate the upper bound of our approach. Also, note that no ens. measures disagreement among the ensemble: precisely, if the modelsâ mean predictions are denoted }1,---,/4N, We compute the average ji = 1/N ye j, and then take max; ||; â /1|| as the ensemble penalty.
The results of our study are shown in Table 3. For different reward penalty types, reward penalties based on learned variance perform comparably to those based on ensemble disagreement in D4RL environments while outperforming those based on ensemble disagreement in out-of-distribution domains. Both reward penalties achieve signiï¬cantly better performances than no reward penalty, indicating that it is imperative to consider model uncertainty in batch model-based RL. Methods that uses oracle uncertainty obtain slightly better performance than most of our methods. Note that MOPO even attains the best results on halfcheetah-jump. Such results suggest that our uncertainty quantiï¬cation on states is empirically successful, since there is only a small gap. We believe future work on improving uncertainty estimation may be able to bridge this gap further. Note that we do not report the results of methods with oracle uncertainty on walker2d-mixed and ant-angle as we are not able to get the true model error from the simulator based on the pre-recorded dataset.
In general, we ï¬nd that performance differences are much larger for halfcheetah-jump and ant-angle than the D4RL halfcheetah-mixed and walker2d-mixed datasets, likely because halfcheetah-jump and ant-angle requires greater generalization and hence places more demands on the accuracy of the model and uncertainty estimate.
Finally, we perform another ablation study on the choice of the reward penalty. We consider the u⢠(sa) = % ye |X},(s,a)||p, the average standard deviation of the learned models in the ensemble, as the reward penalty instead of the max standard deviation as used in MOPO. We denote the variant of MOPO with the average learned standard deviation as MOPO, avg. var.. We compare MOPO to MOPO, avg. var. in the halfcheetah-jump domain. MOPO achieves 4140.6+88 average return while MOPO, avg. var. achieves 4166.3+228.8 where the results are averaged over 3 random seeds. The two methods did similarly, suggesting that using either mean variance or max variance would be a reasonable choice for penalizing uncertainty.
# E Empirical results on generalization capabilities
We conduct experiments in ant-angle to show the limit of MOPOâs generalization capabilties. As shown in Table 4, we show that MOPO generalizes to Ant running at a 45⦠angle (achieving almost buffer max score), beyond the 30⦠shown in the paper, while failing to generalize to a 60 and 90⦠degree angle. This suggests that if the new task requires to explore states that are completely out of the data support, i.e. the buffer max and buffer mean both fairly bad, MOPO is unable to generalize.
17
Method halfcheetah-mixed walker2d-mixed halfcheetah-jump ant-angle MOPO MOPO, ens. pen. MOPO, no pen. MBPO MBPO, no ens. 6405.8 ± 35 6448.7 ± 115 6409.1 ± 429 5598.4 ± 1285 2247.2 ± 581 1916.4 ± 611 1923.6 ± 752 1421.2 ± 359 1021.8 ± 586 500.3 ± 34 2530.9 ± 137 4016.6 ± 144 2256.0 ± 288 3577.3 ± 461 18.6 ± 49 â980.8 ± 5625 2971.4 ± 1262 13.6 ± 65 â68.7 ± 1936 â720.1 ± 728 MOPO, true pen. 6984.0 ± 148 N/A 3818.6 ± 136 N/A
Table 3: Ablation study on two D4RL tasks halfcheetah-mixed and walker2d-mixed and two out-of- distribution tasks halfcheetah-jump and ant-angle. We use average returns where the results of MOPO and its variants are averaged over 6 random seeds and MBPO results are averaged over 3 random seeds as in Table 2. We observe that different reward penalties can all lead to substantial improvement of the performance and reward penalty based on learned variance is a better choice than that based on ensemble disagreement in out-of-distribution cases. Methods that use oracle uncertainty as the reward penalty achieve marginally better performance than MOPO, implying that MOPO is effective at estimating the uncertainty.
Environment ant-angle-45 ant-angle-60 ant-angle-90 Buffer Max Buffer Mean 3168.7 1953.7 838.8 1105.5 846.7 -901.6 MOPO 2571.3±598.1 840.5±1103.7 -503.2±803.4
# Table 4: Limit of generalization on ant-angle.
# F Experiments on HIV domains
Beyond continous control tasks in MuJoCo, we test MOPO on an HIV treatment simulator slightly modiï¬ed from the one in the whynot package. The task simulates the sequential decision making in HIV treatment, which involves determining the amounts of two anti-HIV drugs to be administered to the patient in order to maximize the immune response and minimize the amount of virus. The agent observes both of those quantities as well as the (log) number of infected and uninfected T cells and macrophages.
We evaluated MOPO with the data generated from the ï¬rst 200k steps of training an online SAC agent on this environment. We show results in Table 5, where MOPO outperforms BEAR and achieves almost the buffer max score.
Buffer Max Buffer Mean 15986.2 6747.2 SAC (online) 25716.3 ± 254.3 BEAR 11709.1±1292.1 MOPO 13484.6 ± 3900.7
Table 5: HIV treatment results, averaged over 3 random seeds.
# G Experiment Details
# G.1 Details of out-of-distribution environments
For halfcheetah-jump, the reward function that we use to train the behavioral policy is r(s, a) = max{v,,3} â 0.1 ||a||3 where v, denotes the velocity along the x-axis. After collecting the offline dataset, we relabel the reward function to r(s,a) = max{v,,3} â 0.1 ||al]3 + 15 * (z â init z) where z denotes the z-position of the half-cheetah and init z denotes the initial z-position.
For ant-angle, the reward function that we use to train the behavioral policy is r(s, a) = vx â control cost. After collecting the ofï¬ine dataset, we relabel the reward function to r(s, a) = vx · cos Ï 6 â control cost where vx, vy denote the velocity along the x, y-axis respectively. For both out-of-distribution environments, instead of sampling actions from the learned policy during the model rollout (line 10 in Algorithm 2), we sample random actions from Unif[â1, 1], which achieves better performance empirically. One potential reason is that using random actions during model rollouts leads to better exploration of the OOD states.
# 6 + vy · sin Ï
18
random random random medium medium medium mixed mixed mixed med-expert med-expert med-expert Table 6: Hyperparameters used in the D4RL datasets.
# G.2 Hyperparameters
Here we list the hyperparameters used in the experiments.
For the D4RL datasets, the rollout length h and penalty coefï¬cient λ are given in Table 6. We search over (h, λ) â {1, 5}2 and report the best ï¬nal performance, averaged over 3 seeds. The only exceptions are halfcheetah-random and walker2d-medium-expert, where other penalty coefï¬cients were found to work better.
For the out-of-generalization tasks, we use rollout length 5 for halfcheetah-jump and 25 for ant-angle, and penalty coefï¬cient 1 for halfcheetah-jump and 2 for ant-angle.
Across all domains, we train an ensemble of 7 models and pick the best 5 models based on their prediction error on a hold-out set of 1000 transitions in the ofï¬ine dataset. Each of the model in the ensemble is parametrized as a 4-layer feedforward neural network with 200 hidden units and after the last hidden layer, the model outputs the mean and variance using a two-head architecture. Spectral normalization [45] is applied to all layers except the head that outputs the model variance.
For the SAC updates, we sample a batch of 256 transitions, 5% of them from Denv and the rest of them from Dmodel. We also perform ablation studies on the percentage of the real data in a batch for MOPO. For simplicity, we use MBPO, which essentially MOPO without reward penalty, for this ablation study. We tried to train MBPO with data all sampled from Dmodel and no data from Denv and compare the performance to MBPO with 5% of data from Denv on all 12 settings in the D4RL benchmark. We ï¬nd that the performances of both methods are not signiï¬cantly distinct: no-real-data MBPO outperforms 5%-real-data MBPO on 6 out of 12 tasks and lies within one SD of 5%-real-data MBPO on 9 out of 12 tasks.
19 | {
"id": "1809.05214"
} |
2005.13013 | English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too | Intermediate-task training---fine-tuning a pretrained model on an
intermediate task before fine-tuning again on the target task---often improves
model performance substantially on language understanding tasks in monolingual
English settings. We investigate whether English intermediate-task training is
still helpful on non-English target tasks. Using nine intermediate
language-understanding tasks, we evaluate intermediate-task transfer in a
zero-shot cross-lingual setting on the XTREME benchmark. We see large
improvements from intermediate training on the BUCC and Tatoeba sentence
retrieval tasks and moderate improvements on question-answering target tasks.
MNLI, SQuAD and HellaSwag achieve the best overall results as intermediate
tasks, while multi-task intermediate offers small additional improvements.
Using our best intermediate-task models for each target task, we obtain a 5.4
point improvement over XLM-R Large on the XTREME benchmark, setting the state
of the art as of June 2020. We also investigate continuing multilingual MLM
during intermediate-task training and using machine-translated
intermediate-task data, but neither consistently outperforms simply performing
English intermediate-task training. | http://arxiv.org/pdf/2005.13013 | Jason Phang, Iacer Calixto, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, Samuel R. Bowman | cs.CL | null | null | cs.CL | 20200526 | 20200930 | 0 2 0 2
p e S 0 3 ] L C . s c [
2 v 3 1 0 3 1 . 5 0 0 2 : v i X r a
# English Intermediate-Task Training Improves Zero-Shot Cross-Lingual Transfer Too
# Jason Phang1,â Iacer Calixto1,2,â Phu Mon Htut1 Yada Pruksachatkun1 Haokun Liu1 Clara Vania1 Katharina Kann3 Samuel R. Bowman1
1New York University 2ILLC, University of Amsterdam 3University of Colorado Boulder {jasonphang,iacer.calixto,bowman}@nyu.edu
# Abstract
Intermediate-task trainingâï¬ne-tuning a pre- trained model on an intermediate task before taskâoften ï¬ne-tuning again on the target improves model performance substantially on language understanding tasks in monolingual English settings. We investigate whether En- glish intermediate-task training is still helpful on non-English target tasks. Using nine in- termediate language-understanding tasks, we evaluate intermediate-task transfer in a zero- shot cross-lingual setting on the XTREME benchmark. We see large improvements from intermediate training on the BUCC and Tatoeba sentence retrieval tasks and moder- ate improvements on question-answering tar- get tasks. MNLI, SQuAD and HellaSwag achieve the best overall results as interme- diate tasks, while multi-task intermediate of- fers small additional improvements. Using our best intermediate-task models for each tar- get task, we obtain a 5.4 point improvement over XLM-R Large on the XTREME bench- mark, setting the state of the art1 as of June 2020. We also investigate continuing multi- lingual MLM during intermediate-task train- ing and using machine-translated intermediate- task data, but neither consistently outperforms simply performing English intermediate-task training.
# Introduction
Zero-shot cross-lingual transfer involves training a model on task data in one set of languages (or language pairs, in the case of translation) and eval- uating the model on the same task in unseen lan- guages (or pairs). In the context of natural language understanding tasks, this is generally done using a pretrained multilingual language-encoding model
ââEqual contribution. 1The state of art on XTREME at the time of ï¬nal publi- cation in September 2020 is held by Fang et al. (2020), who introduce an orthogonal method.
such as mBERT (Devlin et al., 2019a), XLM (Con- neau and Lample, 2019) or XLM-R (Conneau et al., 2020) that has been pretrained with a masked lan- guage modeling (MLM) objective on large corpora of multilingual data, ï¬ne-tune it on task data in one language, and evaluate the tuned model on the same task in other languages.
Intermediate-task training (STILTs; Phang et al., 2018) consists of ï¬ne-tuning a pretrained model on a data-rich intermediate task, before ï¬ne-tuning a second time on the target task. Despite its simplic- ity, this two-phase training setup has been shown to be helpful across a range of Transformer models and target tasks (Wang et al., 2019a; Pruksachatkun et al., 2020), at least within English settings.
In this work, we propose to use intermediate training on English tasks to improve zero-shot cross-lingual transfer performance. Starting with a pretrained multilingual language encoder, we per- form intermediate-task training on one or more English tasks, then ï¬ne-tune on the target task in English, and ï¬nally evaluate zero-shot on the same task in other languages.
Intermediate-task training on English data intro- duces a potential issue: We train the pretrained mul- tilingual model extensively on only English data before evaluating it on non-English target task data, potentially causing the model to lose the knowl- edge of the other languages that was acquired dur- ing pretraining (Kirkpatrick et al., 2017; Yogatama et al., 2019). To mitigate this issue, we experi- ment with mixing in multilingual MLM training updates during the intermediate-task training. In the same vein, we also conduct a case study where we machine-translate intermediate task data from English into three other languages (German, Rus- sian and Swahili) to investigate whether interme- diate training on these languages improves target task performance in the same languages.
Concretely, we use the pretrained XLM-R (Con-
Single-task fine-tuning on task #n Interm. Task: #n EN OR Interm. Task: #n EN & MLM Ke EE XLM-R : ms OR Multi-task fine-tuning Interm. Task: #1 to#N | EN Self-supervised Intermediate task multilingual training pre-training in English EN English 7K Multilingual MLM Masked Language Modeling training Target Task: #m EN Target Task: KK Target Task: #m Target Task: o #M Target task Target task fine-tuning evaluation in English in each language
Figure 1: We investigate the beneï¬t of injecting an additional phase of intermediate-task training on English language task data. We also consider variants using multi-task intermediate-task training, as well as continuing multilingual MLM during intermediate-task training. Best viewed in color.
neau et al., 2020) encoder and perform experi- ments on 9 target tasks from the recently introduced XTREME benchmark (Hu et al., 2020), which aims to evaluate zero-shot cross-lingual transfer perfor- mance across diverse target tasks across up to 40 languages each. We investigate how training on 9 different intermediate tasks, including question answering, sentence tagging, sentence completion, paraphrase detection, and natural language infer- ence impacts zero-shot cross-lingual transfer per- formance. We ï¬nd the following:
⢠Intermediate-task training on SQuAD, MNLI, and HellaSwag yields large target-task im- provements of 8.2, 7.5, and 7.0 points on the development set, respectively. Multi-task intermediate-task training on all 9 tasks per- forms best, improving by 8.7 points.
⢠Applying intermediate-task training to BUCC and Tatoeba, the two sentence retrieval target tasks that have no training data of their own, yields dramatic improvements with almost ev- ery intermediate training conï¬guration. Ty- DiQA shows consistent improvements with many intermediate tasks, whereas XNLI does not see beneï¬ts from intermediate training.
⢠Evaluating our best performing models for each target task on the XTREME benchmark yields an average improvement of 5.4 points, setting the state of the art as of writing.
⢠Training on English intermediate tasks out- performs the more complex alternatives of (i) continuing multilingual MLM during intermediate-task training, and (ii) using machine-translated intermediate-task data.
# 2 Approach
We follow a three-phase approach to training, illus- trated in Figure 1: (i) we use a publicly available model pretrained on raw multilingual text using MLM; (ii) we perform intermediate-task training on one or more English intermediate tasks; and (iii) we ï¬ne-tune the model on English target-task training data, before evaluating it on target-task test data in each target language.
In phase (ii), our intermediate tasks have English input data. In Section 2.4, we investigate an alterna- tive where we machine-translate intermediate-task data to other languages, which we use for training. We experiment with both single- and multi-task training for intermediate-task training. We use tar- get tasks from the recent XTREME benchmark for zero-shot cross-lingual transfer.
# Intermediate Tasks
We study the effect of intermediate-task training (STILTs; Phang et al., 2018) with nine different English intermediate tasks, described in Table 1.
We choose the tasks below based to cover a vari- ety of task formats (classiï¬cation, question answer- ing, and multiple choice) and based on evidence
Name |Train| |Dev| |Test| Task Genre/Source Intermediate tasks ANLI+ MNLI QQP SQuAD v2.0 SQuAD v1.1 HellaSwag CCG Cosmos QA CommonsenseQA 1,104,934 392,702 363,846 130,319 87,599 39,905 38,015 25,588 9,741 22,857 20,000 40,430 11,873 10,570 10,042 5,484 3,000 1,221 â â â â â â â â â natural language inference Misc. natural language inference Misc. paraphrase detection span extraction span extraction sentence completion tagging question answering question answering Quora questions Wikipedia Wikipedia Video captions & Wikihow Wall Street Journal Blogs Crowdsourced responses Target tasks (XTREME Benchmark) XNLI PAWS-X POS NER XQuAD MLQA TyDiQA-GoldP BUCC Tatoeba 392,702 49,401 21,253 20,000 87,599 87,599 3,696 â â 2,490 2,000 3,974 10,000 34,726 34,726 634 â â 5,010 2,000 47â20,436 1,000â10,000 1,190 4,517â11,590 323â2,719 1,896â14,330 1,000 natural language inference Misc. paraphrase detection tagging named entity recognition question answering question answering question answering sentence retrieval sentence retrieval Wiki/Quora Misc. Wikipedia Wikipedia Wikipedia Wikipedia Wiki / news Misc.
Table 1: Overview of the intermediate tasks (top) and target tasks (bottom) in our experiments. For target tasks, Train and Dev correspond to the English training and development sets, while Test shows the range of sizes for the target-language test sets for each task. XQuAD, TyDiQA and Tateoba do not have separate held-out development sets.
of positive transfer from literature. Pruksachatkun et al. (2020) shows that MNLI (of which ANLI+is a superset), CommonsenseQA, Cosmos QA and HellaSwag yield positive transfer to a range of downstream English-language tasks in intermedi- ate training. CCG involves token-wise prediction and is similar to the POS and NER target tasks. Both versions of SQuAD are widely-used question- answering tasks, while QQP is semantically sim- ilar to sentence retrieval target tasks (BUCC and Tatoeba) as well as PAWS-X, another paraphrase- detection task.
CCG CCGbank (Hockenmaier and Steedman, 2007) is a conversion of the Penn Treebank into Combinatory Categorial Grammar (CCG) deriva- tions. The CCG supertagging task that we use consists of assigning lexical categories to individ- ual word tokens, which together roughly determine a full parse.2
CommonsenseQA CommonsenseQA (Talmor et al., 2019) is a multiple-choice QA dataset gener- ated by crowdworkers based on clusters of concepts from ConceptNet (Speer et al., 2017).
ANLI + MNLI + SNLI (ANLI+) The Adver- sarial Natural Language Inference dataset (Nie et al., 2020) is collected using model-in-the-loop crowdsourcing as an extension of the Stanford Nat- ural Language Inference (SNLI; Bowman et al., 2015) and Multi-Genre Natural Language Infer- ence (MNLI; Williams et al., 2018) corpora. We follow Nie et al. (2020) and use the concatenated ANLI, MNLI and SNLI training sets, which we refer to as ANLI+. For all three natural language inference tasks, examples consist of premise and hypothesis sentence pairs, and the task is to classify the relationship between the premise and hypothe- sis as entailment, contradiction, or neutral.
Cosmos QA Cosmos QA is multiple-choice comprehension commonsense-based (Huang et al., 2019b) generated by dataset crowdworkers, with a focus on the causes and effects of events.
HellaSwag HellaSwag (Zellers et al., 2019) is a commonsense reasoning dataset framed as a four- way multiple choice task, where examples consist of an incomplete paragraph and four choices of spans, only one of which is a plausible continuation of the scenario. It is built using adversarial ï¬ltering (Zellers et al., 2018; Le Bras et al., 2020) with BERT.
2If a word is tokenized into sub-word tokens, we use the representation of the ï¬rst token for the tag prediction for that word as in Devlin et al. (2019a).
In additional to the full ANLI+, we also MNLI consider the MNLI task as a standalone interme- diate task because of its already large and diverse training set.
QQP Quora Question Pairs3 is a paraphrase de- tection dataset. Examples in the dataset consist of two questions, labeled for whether they are seman- tically equivalent.
SQuAD Stanford Question Answering Dataset (Rajpurkar et al., 2016, 2018) is a question- answering dataset consisting of passages extracted from Wikipedia articles and crowd-sourced ques- tions and answers. In SQuAD version 1.1, each example consists of a context passage and a ques- tion, and the answer is a text span from the context. SQuAD version 2.0 includes additional questions with no answers, written adversarially by crowd- workers. We use both versions in our experiments.
# 2.2 Target Tasks
We use the 9 target tasks from the XTREME bench- mark, which span 40 different languages (here- after referred to as the target languages): Cross- lingual Question Answering (XQuAD; Artetxe et al., 2020b); Multilingual Question Answer- ing (MLQA; Lewis et al., 2020); Typologically Diverse Question Answering (TyDiQA-GoldP; Clark et al., 2020); Cross-lingual Natural Language Inference (XNLI; Conneau et al., 2018); Cross- lingual Paraphrase Adversaries from Word Scram- bling (PAWS-X; Yang et al., 2019); Universal De- pendencies v2.5 (Nivre et al., 2018) POS tagging; Wikiann NER (Pan et al., 2017); BUCC (Zweigen- baum et al., 2017, 2018), which requires identi- fying parallel sentences from corpora of different languages; and Tatoeba (Artetxe and Schwenk, 2019), which involves aligning pairs of sentences with the same meaning.
Among the 9 tasks, BUCC and Tatoeba are sen- tence retrieval tasks that do not include training sets, and are scored based on the similarity of learned representations (see Appendix A). XQuAD, Ty- DiQA and Tatoeba do not include development sets separate from the test sets.4 For all XTREME tasks, we follow the training and evaluation protocol de- scribed in the benchmark paper (Hu et al., 2020)
# 3http://data.quora.com/
First-Quora-DatasetRelease-Question-Pairs 4UDPOS also does not include development sets for
Kazakh, Thai, Tagalog or Yoruba.
and their sample implementation.5 Intermediate- and target-task statistics are shown in Table 1.
# 2.3 Multilingual Masked Language Modeling
Our setup requires that we train the pretrained mul- tilingual model extensively on English data before using it on a non-English target task, which can lead to the catastrophic forgetting of other lan- guages acquired during pretraining. We investi- gate whether continuing to train on the multilin- gual MLM pretraining objective while ï¬ne-tuning on an English intermediate task can prevent catas- trophic forgetting of the target languages and im- prove downstream transfer performance.
We construct a multilingual corpus across the 40 languages covered by the XTREME benchmark us- ing Wikipedia dumps from April 14, 2020 for each language and the MLM data creation scripts from the jiant 1.3 library (Phang et al., 2020). In total, we use 2 million sentences sampled across all 40 languages using the sampling ratio from Conneau and Lample (2019) with α = 0.3.
# 2.4 Translated Intermediate-Task Training
Large-scale labeled datasets are rarely available in languages other than English for most language- understanding benchmark tasks. Given the avail- ability of increasingly performant machine trans- lation models, we investigate if using machine- translated intermediate-task data can improve same- language transfer performance, compared to using English intermediate task data.
We translate training and validation data of three intermediate tasks: QQP, HellaSwag, and MNLI. We choose these tasks based on the size of the training sets and because their example- level (rather than word-level) labels can be easily mapped onto translated data. To translate QQP and HellaSwag, we use pretrained machine trans- lation models from OPUS-MT (Tiedemann and Thottingal, 2020). These models are trained with Marian-NMT (Junczys-Dowmunt et al., 2018) on OPUS data (Tiedemann, 2012), which integrates several resources depending on the available cor- pora for the language pair. For MNLI, we use the publicly available machine-translated training data of XNLI provided by the XNLI authors.6 We use German, Russian, and Swahili translations of
5https://github.com/google-research/ xtreme
6According to Conneau et al. (2018), these data are trans- lated using a Facebook internal machine translation system.
all three datasets instead of English data for the intermediate-task training.
# 3 Experiments and Results
# 3.1 Models
We use the pretrained XLM-R Large model (Con- neau et al., 2020) as a starting point for all our experiments, as it currently achieves state-of-the- art performance on many zero-shot cross-lingual transfer tasks.7 Details on intermediate- and target- task training can be found in Appendix A.
XLM-R For our baseline, we directly ï¬ne-tune the pretrained XLM-R model on each target taskâs English training data (if available) and evaluate zero-shot on non-English data, closely follow- ing the sample implementation for the XTREME benchmark.
XLM-R + Intermediate Task In our main ap- proach, as described in Figure 1, we include an additional intermediate-task training phase before training and evaluating on the target tasks as de- scribed above.
We also experiment with multi-task training on all available intermediate tasks. We follow Raf- fel et al. (2020) and sample batches of examples for each task with probability r,, = ee. where e,,, is the number of examples in task m and the constant K = 2!â limits the oversampling of data-rich tasks.
XLM-R + Intermediate Task + MLM To in- corporate multilingual MLM into the intermediate- task training, we treat multilingual MLM as an additional task for intermediate training, using the same multi-task sampling strategy as above.
XLM-R + Translated Intermediate Task We translate intermediate-task training and validation data for three tasks and ï¬ne-tune XLM-R on trans- lated intermediate-task data before we train and evaluate on the target tasks.
# 3.2 Software
Experiments were carried out using the jiant (Phang et al., 2020) library (2.0 alpha), based on PyTorch (Paszke et al., 2019) and Transformers (Wolf et al., 2019).
7XLM-R Large (Conneau et al., 2020) is a 550m-parameter variant of the RoBERTa masked language model (Liu et al., 2019b) trained on a cleaned version of CommonCrawl on 100 languages. Notably, Yoruba is used in the POS and NER XTREME tasks but not is not in the set of 100 languages.
# 3.3 Results
We train three versions of each intermediate-task model with different random seeds. For each run, we compute the average target-task performance across languages, and report the median perfor- mance across the three random seeds.
Intermediate-Task Training As shown in Ta- ble 2, no single intermediate task yields positive transfer across all target tasks. The target tasks TyDiQA, BUCC and Tatoeba see consistent gains from most or all intermediate tasks. In particu- lar, BUCC and Tatoeba, the two sentence retrieval tasks with no training data, beneï¬t universally from intermediate-task training. PAWS-X, NER, XQuAD and MLQA also exhibit gains with the additional intermediate-task training on some inter- mediate tasks. On the other hand, we ï¬nd generally no or negative transfer to XNLI and POS.
Among the intermediate tasks, we ï¬nd that MNLI performs best; with meaningful improve- ments across the PAWS-X, TyDiQA, BUCC and Tatoeba tasks. ANLI+, SQuAD v1.1, SQuAD v2.0 and HellaSwag also show strong positive transfer performance: SQuAD v1.1 shows strong positive transfer across all three QA tasks, SQuAD v2.0 shows the most positive transfer to TyDiQA, while HellaSwag shows the most positive transfer to NER and BUCC tasks. ANLI+does not show any im- provement over MNLI (of which it is a superset), even on XNLI for which it offers additional directly relevant training data. This mirrors negative ï¬nd- ings from Nie et al. (2020) on NLI evaluations and Bowman et al. (2020) on transfer within English. QQP signiï¬cantly improves sentence retrieval-task performance, but has broadly negative transfer to the other target tasks.8 CCG also has relatively poor transfer performance, consistent with Pruk- sachatkun et al. (2020).
Among our intermediate tasks, both SQuAD v1.1 and MNLI also serve as training sets for target tasks (for XNLI and XQuAD/MLQA respectively). While both tasks show overall positive transfer, SQuAD v1.1 actually markedly improves the per- formance in XQuAD and MLQA, while MNLI slightly hurts XNLI performance. We hypothe- size that the somewhat surprising improvements to XQuAD and MLQA performance from SQuAD v1.1 arise due to the baseline XQuAD and MLQA
8For QQP, on 2 of the 3 random seeds the NER model performed extremely poorly, leading to the large negative transfer of -45.4.
Target tasks Metric # langs. XNLI acc. 15 PAWS-X POS NER acc. 7 F1 33 F1 40 XQuAD F1 / EM 11 MLQA F1 / EM 7 TyDiQA F1 / EM 9 BUCC Tatoeba Avg. acc. 37 F1 5 â â XLM-R 80.1 86.5 75.7 62.8 76.1 / 60.0 70.1 / 51.5 65.6 / 48.2 71.5 31.0 67.2 M L M t u o h t i W ANLI+ MNLI QQP SQuADv1.1 SQuADv2 HellaSwag CCG CosmosQA CSQA Multi-task - 0.8 - 1.2 - 4.4 - 1.9 - 1.6 - 7.1 - 2.6 - 2.1 - 2.9 - 0.9 - 0.0 + 1.4 - 4.8 + 1.2 + 1.9 + 1.8 - 3.4 - 0.3 - 2.8 + 1.7 - 1.4 - 0.7 - 6.5 - 0.8 - 1.1 - 0.7 - 2.0 - 1.4 - 1.7 - 1.0 - 3.5 + 0.5 -45.4 - 0.4 + 0.8 + 1.6 - 1.5 - 1.5 - 1.6 + 1.8 - 1.1 / - 0.5 - 0.3 / - 0.1 - 3.8 / - 3.8 + 1.8 / + 2.5 - 0.5 / + 0.7 - 0.0 / + 0.5 - 1.5 / - 1.3 - 0.9 / - 1.3 - 1.0 / - 1.8 + 0.3 / + 0.9 - 0.6 / - 0.8 + 0.2 / + 0.2 - 3.9 / - 4.4 + 2.2 / + 2.6 - 0.4 / + 0.1 - 0.1 / + 0.2 - 1.6 / - 1.5 - 1.5 / - 2.0 - 1.0 / - 0.6 + 0.2 / + 0.5 - 0.6 / - 3.0 - 1.0 / - 1.6 -11.1 / -10.2 + 9.7 / +10.8 +10.4 / +11.3 - 0.0 / - 1.0 - 2.8 / - 6.2 + 0.5 / - 0.6 + 3.5 / + 2.9 + 5.8 / + 6.0 +19.9 +20.0 +17.1 +18.9 +19.3 +20.3 +11.7 +19.2 +18.1 +19.6 +48.2 +48.8 +49.5 +41.3 +43.4 +47.6 +41.9 +43.9 +48.6 +49.9 + 6.6 + 7.5 - 1.5 + 8.1 + 8.2 + 7.0 + 4.1 + 6.1 + 6.5 + 8.7 M L M h t i W ANLI+ MNLI QQP SQuADv1.1 SQuADv2 HellaSwag CCG CosmosQA CSQA - 1.1 - 0.7 - 1.3 - 2.6 - 1.7 - 3.3 - 1.0 - 1.0 - 0.5 + 1.4 + 1.6 - 1.1 + 0.3 + 2.1 + 2.0 - 1.3 - 1.0 + 0.3 + 0.0 - 1.6 - 2.4 - 2.0 - 1.4 - 0.7 - 1.2 - 1.6 - 1.0 + 0.4 + 1.0 - 0.9 - 0.9 + 1.0 + 0.8 - 1.9 - 3.8 - 0.7 - 1.9 / - 1.7 - 0.7 / + 0.1 - 0.3 / - 0.2 + 0.2 / + 1.6 - 0.8 / + 0.1 - 0.8 / - 0.0 - 1.9 / - 2.2 - 3.1 / - 3.3 - 0.9 / - 1.0 - 0.7 / - 0.6 + 0.4 / + 0.8 + 0.0 / + 0.2 + 0.1 / + 1.1 - 0.8 / - 0.5 + 0.1 / + 0.6 - 2.1 / - 2.6 - 3.7 / - 4.2 - 0.7 / - 0.6 + 0.9 / + 0.5 - 1.8 / - 3.2 - 1.6 / - 4.2 + 8.5 / + 9.5 + 8.3 / + 8.9 + 0.3 / + 1.0 - 5.5 / - 6.2 - 0.6 / - 3.2 + 2.1 / + 0.4 +18.6 +17.1 +14.4 +16.0 +15.6 + 6.3 + 8.8 +15.5 +11.6 +46.2 +44.3 +39.8 +40.3 +31.3 +22.3 +36.1 +42.7 +17.2 + 7.1 + 6.6 + 5.0 + 6.8 + 6.1 + 3.1 + 3.3 + 4.7 + 2.9 XTREME Benchmark Scoresâ XLM-R (Hu et al., 2020) XLM-R (Ours) Our Best Modelsâ¡ Human (Hu et al., 2020) 79.2 79.5 80.0 92.8 86.4 86.2 87.9 97.5 72.6 74.0 74.4 97.0 65.4 62.6 64.0 - 76.6 / 60.8 76.1 / 60.0 78.7 / 63.3 91.2 / 82.3 71.6 / 53.2 70.2 / 51.2 72.4 / 53.7 91.2 / 82.3 65.1 / 45.0 65.6 / 48.2 76.0 / 59.5 90.1 / - 66.0 64.5 71.9 - 57.3 31.0 81.2 - 68.1 64.8 73.5 -
Table 2: Intermediate-task training results. We compute the average target task performance across all languages, and report the median over 3 separate runs with different random seeds. Multi-task experiments use all intermediate tasks. We underline the best results per target task with and without intermediate MLM co-training, and bold-face the best overall scores for each target task. â : XQuAD, TyDiQA and Tatoeba do not have held-out test data and are scored using development sets in the benchmark. â¡: Results obtained with our best-performing intermediate task conï¬guration for each target task, selected based on the development set. The results for individual languages can be found in Appendix B.
models being under-trained. For all target-task ï¬ne- tuning, we follow the sample implementation for target task training in the XTREME benchmark, which trains on SQuAD for only 2 epochs. This may explain why an additional phase of SQuAD training can improve performance. Conversely, the MNLI-to-XNLI model might be over-trained, given the MNLI training set is approximately 4 times as large as the SQuAD v1.1 training set.
Multi-Task Training Multi-task training on all intermediate tasks attains the best overall average performance on the XTREME tasks, and has the most positive transfer to NER and Tatoeba tasks. However, the overall margin of improvement over the best single intermediate-task model is relatively small (only 0.3, over MNLI), while requiring sig- niï¬cantly more training resources. Many single intermediate-task models also outperform the multi- task model in individual target tasks. Wang et al. (2019b) also found more mixed results from a hav- ing an initial phase of multi-task training, albeit
only among English language tasks across a dif- ferent set of tasks. On the other hand, multi-task training precludes the need to do intermediate-task model selection, and is a useful method for incor- porating multiple, diverse intermediate tasks.
MLM Incorporating MLM during intermediate- task training shows no clear trend. It reduces neg- ative transfer, as seen in the cases of Common- senseQA and QQP, but it also tends to somewhat reduce positive transfer. The reductions in positive transfer are particularly signiï¬cant for the BUCC and Tatoeba tasks, although the impact on TyDiQA is more mixed. On balance, we do not see that in- corporating MLM improves transfer performance.
XTREME Benchmark Results At the bottom of Table 2, we show results obtained by XLM-R on the XTREME benchmark as reported by Hu et al. (2020), results obtained with our re- implementation of XLM-R (i.e. our baseline), and results obtained with our best models, which use intermediate-task conï¬guration selected according
XNLI PAWS-X POS NER XQuAD MLQA TyDiQA BUCC Tatoeba 89.3 - 1.2 - 3.2 - 0.8 93.4 + 1.6 - 0.4 + 1.5 95.9 + 0.3 - 2.2 + 0.6 81.6 + 2.6 - 5.8 + 2.7 86.3 / 74.2 - 2.1 / - 1.6 - 4.0 / - 3.6 - 0.2 / + 1.4 81.6 / 68.6 + 1.1 / + 1.4 - 2.6 / - 2.6 + 1.8 / + 2.3 70.4 / 56.6 + 1.1 / + 1.1 - 6.2 / - 5.0 + 1.7 / + 2.5 â â â â â â â â XLM-R 83.8 88.1 88.6 78.6 77.7 / 61.2 69.1 / 52.0 â 77.7 63.9 MNLIen MNLIde QQPen QQPde HellaSwagen HellaSwagde - 0.8 - 0.4 - 2.2 - 2.6 - 0.3 - 0.2 + 0.9 + 0.5 - 4.2 - 9.1 + 0.3 + 0.2 - 0.1 - 0.3 - 3.2 - 3.2 + 0.1 - 0.4 - 0.8 - 0.9 - 7.3 -22.9 + 0.5 - 0.4 - 0.3 / - 1.0 + 0.2 / - 0.3 - 4.5 / - 4.7 - 6.6 / - 5.9 + 1.0 / + 0.2 + 0.2 / - 0.2 - 1.0 / - 0.2 - 2.4 / - 2.0 - 6.7 / - 6.4 - 7.7 / - 6.6 - 0.3 / + 0.4 - 3.5 / - 2.5 â â â â â â +16.5 +17.0 +16.5 +16.0 +16.9 +16.3 +32.7 +33.7 +32.6 +33.5 +33.8 +33.5 XLM-R 79.2 â 89.5 69.3 77.7 / 59.8 â 65.4 / 43.6 79.2 42.1 MNLIen MNLIru QQPen QQPru HellaSwagen HellaSwagru + 0.3 - 0.6 - 0.7 - 3.0 - 0.9 - 0.3 â â â â â â - 0.0 - 0.3 - 2.9 -10.6 - 0.0 - 0.4 + 0.8 + 1.9 -18.6 -59.1 + 1.4 + 2.8 + 0.1 / + 1.5 - 0.4 / + 1.3 - 3.5 / - 2.4 - 5.2 / - 3.9 + 0.8 / + 2.9 + 0.2 / + 0.2 â â â â â â - 1.5 / - 4.6 +11.2 / +16.1 - 8.1 / - 5.4 -14.4 / -12.1 - 4.0 / -10.6 + 8.5 / +13.2 +14.3 +13.1 +14.1 +13.3 +14.7 -71.6 +47.1 +48.3 +49.5 +46.7 +49.9 -23.5 XLM-R 72.4 â â 69.8 â â 67.2 / 48.7 â 7.9 MNLIen MNLIsw QQPen QQPsw HellaSwagen HellaSwagsw - 3.0 - 1.1 - 2.8 - 7.1 - 0.4 - 9.8 â â â â â â â â â â â â + 0.6 - 2.4 - 4.6 -32.1 + 0.1 + 0.4 â â â â â â â â â â â â - 0.3 / - 0.2 +13.8 / +23.4 -12.7 / -12.2 - 7.0 / - 0.4 - 0.9 / - 0.4 +15.6 / +26.3 â â â â â â +24.9 +47.9 +27.2 +41.8 +27.2 - 0.5
Table 3: Experiments with translated intermediate-task training and validation data evaluated on all XTREME target tasks. In each target language (TL) block, models are evaluated on a single target language. We show results for models trained on original intermediate-task training data (en) and compare it to models trained on translated data {de,ru,sw}. âââ indicates that target task data is not available for that target language.
to development set performance on each target task. Based on the results in Table 2, which reï¬ect the median over 3 runs, we pick the best intermediate- task conï¬guration for each target task, and then choose the best model out of the 3 runs. Scores on the XTREME benchmark are computed based on the respective test sets where available, and based on development sets for target tasks without sep- arate held-out test sets. We are generally able to replicate the best reported XLM-R baseline results, except for Tatoeba, where our implementation sig- niï¬cantly underperforms the reported scores in Hu et al. (2020), and TyDiQA, where our implemen- tation outperforms the reported scores. We also highlight that there is a large margin of difference between development and test set scores for BUCCâ this is likely because BUCC is evaluated based on sentence retrieval over the given set of input sen- tences, and the test sets for BUCC are generally much larger than the development sets.
Our best models show gains in 8 out of the 9 XTREME tasks relative to both baseline implemen- tations, attaining an average score of 73.5 across target tasks, a 5.4 point improvement over the pre-
vious best reported average score of 68.1. We set the state of the art on the XTREME benchmark as of June 2020, though Fang et al. (2020) achieve higher results and hold the state of the art using an orthogonal approach at the time of our ï¬nal publication in September 2020.
Translated Intermediate-Task Training Data In Table 3, we show results for experiments us- ing machine-translated intermediate-training data, and evaluated on the available target-task lan- guages. Surprisingly, even when evaluating in- language, using target-language intermediate-task data does not consistently outperform using En- glish intermediate-task data in any of the interme- diate tasks on average.
In general, cross-lingual transfer to XNLI is neg- ative regardless of the intermediate-task or the tar- get language. In contrast, we observe mostly pos- itive transfer on BUCC, and Tatoeba, with a few notable exceptions where models fail catastroph- ically. TyDiQA exhibits positive transfer where the intermediate- and target-task languages aligned: intermediate training on Russian or German helps TyDiQA performance in that respective language,
whereas intermediate training on English hurts non- English performance somewhat. For the remaining tasks, there appears to be little correlation between performance and the alignment of intermediate- and target-task languages. English language QQP already has mostly negative transfer to all target tasks except for BUCC and Tatoeba (see Table 2), and also shows a similar trend when translated into any of the three target languages.
We note that the quality of translations may af- fect the transfer performance. While validation performance on the translated intermediate tasks (Table 15) for MNLI and QQP is only slightly worse than the original English versions, the per- formance for the Russian and Swahili HellaSwag is much worse and close to chance. Despite this, intermediate-task training on Russian and Swahili HellaSwag improve performance on PAN-X and TyDiQA, while we see generally poor transfer performance from QQP. The interaction between translated intermediate-task data and transfer per- formance continues to be a complex open ques- tion. Artetxe et al. (2020a) found that translating or back-translating training data for a task can im- prove zero-shot cross-lingual performance for tasks such as XNLI depending on how the multilingual datasets are created. In contrast, we train on trans- lated intermediate-task data and then ï¬ne-tune on a target task with English training data (exclud- ing BUCC2018 and Tatoeba). The authors of the XTREME benchmark have also recently released translated versions of all the XTREME task train- ing data, which we hope will prompt further inves- tigation into this matter.
# 4 Related work
Sequential learning using pretrained Transformer-based encoders (Phang et al., 2018) has been shown to be effective for many text clas- siï¬cation tasks. This setup generally involves ï¬ne- tuning on a single task (Pruksachatkun et al., 2020; Vu et al., 2020) or multiple tasks (Liu et al., 2019a; Wang et al., 2019b; Raffel et al., 2020), sometimes referred to as the intermediate task(s), before ï¬ne- tuning on the target task. We build upon this line of work, focusing on intermediate-task training for improving cross-lingual transfer.
Early work on cross-lingual transfer mostly re- lies on the availability of parallel data, where one can perform translation (Mayhew et al., 2017) or project annotations from one language into another
(Hwa et al., 2005; Agi´c et al., 2016). For depen- dency parsing, McDonald et al. (2011) use delexi- calized parsers trained on source languages and la- beled training data for parsing target-language data. Agi´c (2017) proposes a parser selection method to select the single best parser for a target language. For large-scale cross-lingual transfer outside NLU, Johnson et al. (2017) train a single mul- tilingual neural machine translation system with up to 7 languages and perform zero-shot transla- tion without explicit bridging between the source and target languages. Aharoni et al. (2019) ex- pand this approach to cover over 100 languages in a single model. Recent works on extending pretrained Transformer-based encoders to multi- lingual settings show that these models are effec- tive for cross-lingual tasks and competitive with strong monolingual models on the XNLI bench- mark (Devlin et al., 2019b; Conneau and Lample, 2019; Conneau et al., 2020; Huang et al., 2019a). More recently, Artetxe et al. (2020a) showed that cross-lingual transfer performance can be sensitive to translation artifacts arising from a multilingual datasetsâ creation procedure.
Finally, Pfeiffer et al. (2020) propose adapter modules that learn language and task representa- tions for cross-lingual transfer, which allow adap- tation to languages not seen during pretraining.
# 5 Conclusion
We evaluate the impact of intermediate-task train- ing on zero-shot cross-lingual transfer. We investi- gate 9 intermediate tasks and how intermediate-task training impacts the zero-shot cross-lingual transfer to the 9 target tasks in the XTREME benchmark.
intermediate-task training signiï¬- cantly improves the performance on BUCC and Tatoeba, the two sentence retrieval target tasks in the XTREME benchmark, across almost every intermediate-task conï¬guration. Our best mod- els obtain 5.9 and 23.9 point gains on BUCC and Tatoeba, respectively, compared to the best avail- able XLM-R baseline scores (Hu et al., 2020). We also observed gains in question-answering tasks, particularly using SQuAD v1.1 and v2.0 as inter- mediate tasks, with absolute gains of 2.1 F1 for XQuAD, 0.8 F1 for MLQA, and 10.4 for F1 Ty- DiQA, again over the best available baseline scores. We improve over XLM-R by 5.4 points on aver- age on the XTREME benchmark. Additionally, we found multi-task training on all 9 intermedi-
ate tasks to slightly outperform individual inter- mediate training. On the other hand, we found that neither incorporating multilingual MLM into the intermediate-task training phase nor translating intermediate-task data consistently led to improved transfer performance.
While we have explored the extent to which En- glish intermediate-task training can improve cross- lingual transfer, a clear next avenue of investigation for future work is how the choice of intermediate- and target-task languages inï¬uences transfer across different tasks.
# Acknowledgments
This project has beneï¬ted from support to SB by Eric and Wendy Schmidt (made by recommenda- tion of the Schmidt Futures program), by Sam- sung Research (under the project Improving Deep Learning using Latent Structure), by Intuit, Inc., by NVIDIA Corporation (with the donation of a Titan V GPU), by Google (with the donation of Google Cloud credits). IC has received funding from the European Unions Horizon 2020 research and inno- vation program under the Marie SkÅodowska-Curie grant agreement No 838188. This project has ben- eï¬ted from direct support by the NYU IT High Performance Computing Center. This material is based upon work supported by the National Sci- ence Foundation under Grant No. 1922658. Any opinions, ï¬ndings, and conclusions or recommen- dations expressed in this material are those of the author(s) and do not necessarily reï¬ect the views of the National Science Foundation.
References ËZeljko Agi´c. 2017. Cross-lingual parser selection In Proceedings of the for low-resource languages. NoDaLiDa 2017 Workshop on Universal Dependen- cies (UDW 2017), pages 1â10, Gothenburg, Sweden. Association for Computational Linguistics.
ËZeljko Agi´c, Anders Johannsen, Barbara Plank, H´ector Mart´ınez Alonso, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. Transactions of the Association for Computational Linguistics, 4:301â 312.
Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, Volume 1 (Long and Short Papers), pages
3874â3884, Minneapolis, Minnesota. Association for Computational Linguistics.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2020a. Translation artifacts in cross-lingual transfer learning. arXiv preprint arXiv:2004.04721.
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020b. On the cross-lingual transferability of mono- lingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 4623â4637, Online. Asso- ciation for Computational Linguistics.
Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. Transac- tions of the Association for Computational Linguis- tics, 7:597â610.
Samuel R. Bowman, Gabor Angeli, Potts, and Christo- pher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceed- ings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Associ- ation for Computational Linguistics.
Samuel R Bowman, Jennimaria Palomaki, Livio Bal- dini Soares, and Emily Pitler. 2020. Collecting en- tailment data for pretraining: New protocols and negative results. arXiv preprint arXiv:2004.11997.
Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. Tydi qa: A benchmark for information-seeking question answering in typo- logically diverse languages. Transactions of the As- sociation for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440â 8451, Online. Association for Computational Lin- guistics.
Alexis Conneau and Guillaume Lample. 2019. Cross- lingual language model pretraining. In H. Wal- lach, H. Larochelle, A. Beygelzimer, F. d'Alch´e-Buc, E. Fox, and R. Garnett, editors, Advances in Neu- ral Information Processing Systems 32 (NeurIPS), pages 7059â7069. Curran Associates, Inc.
Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475â2485, Brussels, Belgium. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Yuwei Fang, Shuohang Wang, Zhe Gan, Siqi Sun, and Jingjing Liu. 2020. FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding. arXiv e-prints, page arXiv:2009.05166.
Julia Hockenmaier and Mark Steedman. 2007. CCG- bank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank. Com- putational Linguistics, 33(3):355â396.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generaliza- tion. Proceedings of the 37th International Confer- ence on Machine Learning.
Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019a. Unicoder: A universal language encoder by pre- In Pro- training with multiple cross-lingual tasks. ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2485â2494, Hong Kong, China. Association for Computational Linguistics.
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019b. Cosmos QA: Machine reading comprehension with contextual commonsense rea- soning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2391â2401, Hong Kong, China. Association for Computational Linguistics.
Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Nat. Lang. Eng., 11(3):311325.
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat,
Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Googleâs multilingual neural machine translation system: En- abling zero-shot translation. Transactions of the As- sociation for Computational Linguistics, 5:339â351.
Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heaï¬eld, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, Andr´e F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116â 121, Melbourne, Australia. Association for Compu- tational Linguistics.
James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Ag- nieszka Grabska-Barwinska, Demis Hassabis, Clau- dia Clopath, Dharshan Kumaran, and Raia Hadsell. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521â3526.
Ronan Le Bras, Swabha Swayamdipta, Chandra Bha- gavatula, Rowan Zellers, Matthew E Peters, Ashish Sabharwal, and Yejin Choi. 2020. Adversarial ï¬lters of dataset biases. Proceedings of the 37th Interna- tional Conference on Machine Learning.
Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020. MLQA: Evalu- ating cross-lingual extractive question answering. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 7315â 7330, Online. Association for Computational Lin- guistics.
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487â4496, Flo- rence, Italy. Association for Computational Linguis- tics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 2536â2545, Copenhagen, Denmark. As- sociation for Computational Linguistics.
Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 62â72, Edinburgh, Scotland, UK. Association for Computational Linguistics.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Ad- versarial NLI: A new benchmark for natural lan- guage understanding. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics. Association for Computational Linguis- tics.
ËZeljko Agi´c, Lars Ahrenberg, Lene Antonsen, Maria Jesus Aranzabe, Gashaw Arutie, Masayuki Asahara, Luma Ateyah, Mohammed Attia, and et al. 2018. Universal Depen- dencies 2.2. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles Uni- versity.
Xiaoman Pan, Boliang Zhang, Jonathan May, Joel Nothman, Kevin Knight, and Heng Ji. 2017. Cross- lingual name tagging and linking for 282 languages. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1946â1958, Vancouver, Canada. Association for Computational Linguistics.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Te- jani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Py- torch: An imperative style, high-performance deep In H. Wallach, H. Larochelle, learning library. A. Beygelzimer, F. dâ Alch´e-Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 8024â8035. Curran Asso- ciates, Inc.
Jonas Pfeiffer, Ivan Vuli´c, Iryna Gurevych, and Sebas- tian Ruder. 2020. MAD-X: An adapter-based frame- In Pro- work for multi-task cross-lingual transfer. ceedings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing(EMNLP).
Jason Phang, Thibault F´evry, and Samuel R. Bowman. 2018. Sentence Encoders on STILTs: Supplemen- tary Training on Intermediate Labeled-data Tasks. Unpublished manuscript available on arXiv.
Jason Phang, Phil Yeres, Jesse Swanson, Haokun Liu, Alex Wang, Ian F. Tenney, Yada Pruksachatkun, Phu Mon Htut, , Katherin Yu, Jan Hula, Patrick Xia, Raghu Pappagari, Shuning Jin, R. Thomas McCoy, Roma Patel, Yinghui Huang, Edouard Grave, Na- joung Kim, Thibault F´evry, Berlin Chen, Nikita Nan- gia, Anhad Mohananey, Katharina Kann, Shikha Bordia, Nicolas Patry, David Benton, Ellie Pavlick, and Samuel R. Bowman. 2020. jiant 2.0: A soft- ware toolkit for research on general-purpose text un- derstanding models. http://jiant.info/.
Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R.
Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why In Proceedings of the 58th Annual does it work? Meeting of the Association for Computational Lin- guistics, pages 5231â5247, Online. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a uniï¬ed text-to- text transformer. Journal of Machine Learning Re- search, 21(140):1â67.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- In Proceedings of the 56th An- tions for SQuAD. nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784â 789, Melbourne, Australia. Association for Compu- tational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383â2392, Austin, Texas. Association for Computational Linguistics.
Robert Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In Thirty-First AAAI Conference on Artiï¬cial Intelligence.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A ques- tion answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149â4158, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
J¨org Tiedemann. 2012. Parallel Data, Tools and In- In Proceedings of the Eight In- terfaces in OPUS. ternational Conference on Language Resources and Evaluation (LRECâ12), Istanbul, Turkey. European Language Resources Association (ELRA).
J¨org Tiedemann and Santhosh Thottingal. 2020. OPUS-MT Building open translation services for the World. In Proceedings of the 22nd Annual Con- ferenec of the European Association for Machine Translation (EAMT), Lisbon, Portugal.
Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessan- dro Sordoni, Adam Trischler, Andrew Mattarella- Micke, Subhransu Maji, and Mohit Iyyer. 2020. Ex- ploring and predicting transferability across NLP tasks.
Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pap- pagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu,
Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bow- man. 2019a. Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4465â4476, Florence, Italy. Association for Computational Linguistics.
Alex Wang, Jan Hula, Patrick Xia, Raghavendra Pap- pagari, R. Thomas McCoy, Roma Patel, Najoung Kim, Ian Tenney, Yinghui Huang, Katherin Yu, Shuning Jin, Berlin Chen, Benjamin Van Durme, Edouard Grave, Ellie Pavlick, and Samuel R. Bow- man. 2019b. Can you tell me how to get past sesame street? sentence-level pretraining beyond language modeling. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4465â4476, Florence, Italy. Association for Computational Linguistics.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112â1122. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Râemi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingfaceâs trans- formers: State-of-the-art natural language process- ing. Unpublished manuscript available on arXiv.
Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-x: A cross-lingual adversar- ial dataset for paraphrase identiï¬cation. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 3687â3692, Hong Kong, China. Association for Computational Lin- guistics.
Dani Yogatama, Cyprien de Masson dâAutume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Ling- peng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, et al. 2019. Learning and evaluat- ing general linguistic intelligence. arXiv preprint arXiv:1901.11373.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversar- ial dataset for grounded commonsense inference. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 93â 104, Brussels, Belgium. Association for Computa- tional Linguistics.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can
In Pro- a machine really ï¬nish your sentence? ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4791â 4800, Florence, Italy. Association for Computational Linguistics.
Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2017. Overview of the second BUCC shared task: Spotting parallel sentences in comparable cor- pora. In Proceedings of the 10th Workshop on Build- ing and Using Comparable Corpora, pages 60â67, Vancouver, Canada. Association for Computational Linguistics.
Pierre Zweigenbaum, Serge Sharoff, and Reinhard Rapp. 2018. Overview of the third BUCC shared task:spotting parallel sentences in comparable cor- pora. In Proceedings of 11th Workshopon Building and Using Comparable Corpora, pages 39â42.
# A Implementation Details
# Intermediate Tasks
For intermediate-task training, we use a learning rate of 1e-5 without MLM, and 5e-6 with MLM. Hyperparameters in the Table 4 were chosen based on intermediate task validation performance in an preliminary search. We use a warmup of 10% of the total number of steps, and perform early stopping based on the ï¬rst 500 development set examples of each task with a patience of 30. For CCG, where tags are assigned for each word, we use the repre- sentation of ï¬rst sub-word token of each word for prediction.
Task Batch size # Epochs ANLI+ MNLI CCG CommonsenseQA Cosmos QA HellaSwag QQP SQuAD MLM Multi-task 24 24 24 4 4 24 24 8 8 Mixed 2 2 15 10 15 7 3 3 - 3
Table 4: Intermediate-task training conï¬guration.
# A.2 XTREME Benchmark Target Tasks
We follow the sample implementation for the XTREME benchmark unless otherwise stated. We use a learning rate of 3e-6, and use the same opti- mization procedure as for intermediate tasks. Hy- perparameters in the Table 5 follow the sample im- plementation. For POS and NER, we use the same strategy as for CCG for matching tags to tokens. For BUCC and Tatoeba, we extract the represen- tations for each token from the 13th self-attention layer, and use the mean-pooled representation as the embedding for that example, as in the sample implementation. Similarly, we follow the sample implementation and set an optimal threshold for each language sub-task for BUCC as a similarity score cut-off for extracting parallel sentences based on the development set and applied to the test set. We randomly initialize the corresponding output heads for each task, regardless of the similarity between intermediate and target tasks (e.g. even if both the intermediate and target tasks train on SQuAD, we randomly initialize the output head in between phases).
Task Batch size # Epochs XNLI (MNLI) PAWS-X XQuAD (SQuAD) MLQA (SQuAD) TyDiQA POS NER BUCC Tatoeba 4 32 16 16 16 32 32 - - 2 5 2 2 2 10 10 - -
Table 5: Target-task training conï¬guration.
# B Per-Language Results
M L M t u o h t i W M L M h t i W XLM-R ANLI+ MNLI QQP SQuAD v2.0 SQuAD v1.1 HellaSwag CCG Cosmos QA CSQA Multi-task ANLI+ MNLI QQP SQuAD v2.0 SQuAD v1.1 HellaSwag CCG Cosmos QA CSQA ar 79.8 77.5 78.4 77.1 77.9 77.1 78.6 77.3 77.1 77.3 76.9 78.5 78.0 78.0 77.5 77.9 79.3 77.9 78.1 79.0 bg de el en es fr hi ru 82.7 83.8 81.3 89.3 84.4 83.7 77.3 79.2 82.5 82.8 81.0 81.3 82.1 82.6 81.9 81.1 80.8 82.2 82.3 83.0 81.6 81.7 81.8 83.5 81.7 81.7 81.9 82.9 80.8 81.3 81.6 79.9 79.9 80.6 79.8 80.1 80.0 81.0 87.6 88.2 86.1 85.6 87.1 88.5 88.1 87.4 87.5 88.5 83.5 84.0 83.6 83.5 82.8 83.7 82.9 83.2 83.5 84.4 83.6 83.6 82.0 81.8 82.7 83.1 83.2 81.7 82.5 82.5 76.5 77.2 75.4 75.5 75.5 77.4 75.4 74.3 76.3 75.8 79.1 79.5 78.5 78.5 78.6 78.2 78.8 77.7 78.4 79.1 82.8 82.9 81.7 82.8 81.7 83.5 82.5 82.7 83.4 83.8 83.1 83.3 83.3 82.2 83.7 82.4 82.7 83.7 81.5 81.1 80.8 80.4 79.7 81.8 80.8 80.4 81.2 89.2 88.8 88.6 88.8 87.0 89.6 87.1 87.6 89.0 84.1 84.3 84.5 83.6 82.8 84.5 83.8 83.9 83.8 82.5 83.4 82.9 82.7 82.1 84.1 82.6 82.9 83.3 76.5 76.7 75.9 76.0 74.4 78.2 76.6 76.2 76.9 79.2 80.3 78.3 79.6 78.4 79.9 78.9 79.5 79.9 sw 72.4 70.4 69.4 69.6 70.6 71.3 72.0 69.9 72.0 70.6 71.1 72.7 72.2 72.2 71.6 71.2 72.9 72.0 73.7 72.3 th 77.1 77.3 77.6 76.9 77.2 76.3 77.4 76.5 75.2 76.3 77.1 77.4 78.4 77.7 77.0 76.6 78.1 76.7 77.8 78.0 tr 78.9 78.0 77.9 77.1 77.2 77.3 78.7 76.9 76.7 77.5 79.1 78.6 79.3 78.6 78.7 78.1 80.1 78.2 79.0 79.1 ur 72.6 73.5 73.2 72.7 73.7 71.2 73.5 71.4 71.1 72.5 72.0 72.7 73.4 72.7 72.9 71.3 74.5 72.2 72.7 73.3 vi 80.0 79.2 79.8 79.2 78.9 79.2 80.0 79.7 78.3 79.6 79.6 80.7 80.5 79.9 79.9 79.0 81.3 80.2 80.4 80.4 zh 79.6 79.3 79.1 78.6 79.6 78.6 79.4 78.6 78.4 78.5 79.2 80.1 80.2 78.9 78.9 78.6 80.7 78.4 79.6 80.6 Avg 80.1 79.4 79.7 78.7 78.9 78.8 79.8 78.8 78.4 78.9 79.4 80.0 80.2 79.6 79.6 78.7 80.8 79.4 79.8 80.2
Table 6: Full XNLI Results
M L M t u o h t i W M L M h t i W de en es fr ja XLM-R 88.1 93.4 89.2 89.3 81.8 ANLI+ 88.0 89.0 MNLI 83.9 QQP 88.9 SQuADv2.0 89.4 SQuADv1.1 88.4 HellaSwag CCG 83.5 Cosmos QA 88.4 85.9 CSQA 89.0 Multi-task 94.1 95.0 93.0 95.2 94.2 95.0 92.3 93.8 93.7 95.0 89.6 90.7 87.7 91.7 91.1 90.2 86.5 90.4 88.6 90.2 90.7 90.9 88.7 91.3 91.1 91.1 88.1 90.3 89.8 91.1 82.0 82.9 79.2 84.7 83.8 84.8 78.0 84.3 81.7 83.8 ANLI+ 88.1 90.1 MNLI 88.6 QQP 88.9 SQuADv2.0 89.0 SQuADv1.1 90.3 HellaSwag CCG 87.5 Cosmos QA 88.1 88.7 CSQA 94.5 95.5 94.3 95.0 93.8 95.0 93.3 94.0 94.1 90.1 91.3 89.8 91.7 90.3 91.0 88.3 89.4 89.1 90.4 91.3 90.6 92.0 88.9 90.5 88.4 90.0 89.8 84.0 84.4 81.7 85.2 82.7 84.9 81.5 82.5 82.5 ko 81.8 82.2 83.8 78.6 84.5 83.5 84.6 77.0 84.3 80.4 83.5 84.2 84.1 82.8 83.9 82.2 85.9 81.2 82.4 82.9 zh 82.0 81.9 84.2 79.7 85.4 83.9 84.5 78.6 85.0 81.5 85.5 84.2 84.5 82.3 84.7 82.2 84.8 81.3 82.3 82.2 Avg 86.5 87.0 88.1 84.4 88.8 88.1 88.4 83.5 88.1 86.0 88.3 87.9 88.7 87.1 88.8 87.0 88.9 85.9 87.0 87.0
Table 7: Full PAWS-X Results
M L M t u o h t i W M L M h t i W M L M t u o h t i W M L M h t i W af ar bg de el en es et eu fa ï¬ fr he XLM-R 87.7 56.3 87.9 88.6 85.6 95.9 89.8 87.6 72.8 70.0 84.9 65.5 68.1 ANLI+ 87.9 87.9 MNLI 83.9 QQP 87.5 SQuADv2.0 87.7 SQuADv1.1 88.3 HellaSwag CCG 88.2 Cosmos QA 88.4 87.1 CSQA 87.7 Multi-task 57.6 56.6 52.6 58.0 58.1 57.3 56.2 56.4 55.7 58.5 88.3 87.8 86.0 88.0 88.6 88.5 86.5 86.2 87.6 89.7 88.8 88.5 85.3 87.9 88.4 88.7 89.4 88.0 87.8 88.8 85.6 84.6 81.7 83.6 85.8 85.6 85.9 84.4 85.8 85.2 95.7 96.2 93.7 96.2 95.7 96.5 95.8 95.9 95.4 96.3 89.4 88.9 87.7 88.7 89.4 89.2 87.8 88.9 88.6 89.4 87.3 86.9 82.1 86.6 87.2 87.6 87.9 87.1 87.3 87.1 73.4 70.4 70.1 69.9 73.4 72.6 73.7 73.5 76.4 67.7 72.0 69.5 66.7 69.1 70.1 69.5 69.1 71.2 69.3 71.6 84.9 84.1 79.3 83.9 84.3 84.7 85.6 84.5 84.7 84.7 65.4 51.8 62.5 51.8 65.1 52.5 53.5 65.3 64.6 52.7 70.9 70.1 61.1 71.3 70.9 69.6 68.8 67.5 65.3 71.0 ANLI+ 87.9 89.1 MNLI 87.7 QQP 88.5 SQuADv2.0 88.0 SQuADv1.1 88.3 HellaSwag CCG 88.1 Cosmos QA 87.5 87.6 CSQA 58.4 57.2 56.3 57.8 55.1 58.0 54.5 57.8 55.9 88.3 87.6 87.6 87.8 88.6 87.8 86.7 87.7 87.4 88.9 88.6 88.6 88.5 88.9 88.3 89.2 88.6 88.7 86.3 85.1 84.2 85.8 85.3 85.7 86.3 85.5 85.1 95.8 96.2 95.9 96.2 95.7 96.4 95.9 95.8 95.6 90.3 88.8 89.6 89.0 89.7 87.2 87.5 89.5 88.5 87.8 88.0 88.1 86.1 85.7 86.8 87.6 88.1 87.2 76.4 73.4 76.3 74.7 73.5 74.0 77.2 71.7 76.4 72.5 69.5 71.2 71.0 70.2 70.2 71.4 70.1 70.4 85.1 85.1 84.5 84.6 83.5 84.3 84.0 84.9 84.2 53.3 52.7 59.7 49.1 64.5 51.5 64.4 64.4 65.1 69.0 68.0 67.5 68.2 66.7 70.9 66.3 68.9 68.2 ja kk ko mr nl pt ru ta te th tl tr ur XLM-R 31.9 - 50.4 80.0 90.1 90.2 89.5 67.1 90.0 - - 76.0 65.6 ANLI+ 19.4 MNLI 38.1 QQP 6.2 39.4 SQuADv2.0 SQuADv1.1 30.9 HellaSwag 31.1 17.8 CCG Cosmos QA 16.4 32.4 CSQA 36.4 Multi-task - - - - - - - - - - 50.7 50.7 45.9 50.8 49.7 50.5 50.3 50.3 49.3 50.7 79.6 79.1 73.5 80.5 78.7 83.7 81.0 77.7 82.8 79.6 90.1 90.4 88.4 90.3 90.5 90.1 90.1 89.9 89.4 90.0 89.7 89.7 88.2 90.1 89.7 89.8 88.0 89.7 88.5 89.8 90.0 89.4 86.6 89.1 89.3 89.5 88.9 89.4 88.5 88.9 69.2 69.4 65.1 68.5 66.8 69.7 66.8 67.9 66.9 68.4 86.6 86.7 81.7 86.1 84.9 86.2 88.4 88.1 86.3 86.2 - - - - - - - - - - - - - - - - - - - - 75.0 74.8 71.5 74.1 74.4 74.2 75.9 76.5 74.5 74.4 66.2 67.6 59.1 60.6 65.4 67.4 70.7 69.2 63.5 62.2 ANLI+ 39.0 30.1 MNLI 27.6 QQP 35.3 SQuADv2.0 16.3 SQuADv1.1 35.4 HellaSwag CCG 25.7 Cosmos QA 16.5 30.8 CSQA - - - - - - - - - 51.2 51.0 50.8 51.0 49.7 50.9 50.7 51.0 51.8 80.7 80.1 81.0 80.2 79.4 78.4 86.1 80.9 80.5 90.2 90.0 90.1 89.9 90.2 90.0 89.8 89.7 90.5 90.0 88.8 89.5 88.1 90.0 87.9 88.8 88.9 89.6 89.8 89.1 89.4 89.3 89.2 89.3 88.4 89.0 89.0 68.7 68.8 67.2 67.1 68.0 68.7 68.0 67.4 66.8 87.6 85.5 88.0 84.3 83.3 86.4 86.6 87.9 86.5 - - - - - - - - - - - - - - - - - - 76.4 75.1 76.2 75.5 75.8 75.4 76.2 76.3 74.8 66.2 69.6 70.3 68.8 64.6 69.3 68.2 70.1 61.9 hi 73.2 70.1 72.4 62.5 69.7 72.2 74.8 75.1 75.6 67.6 68.2 72.5 76.9 78.0 73.2 74.4 74.8 76.7 76.6 68.3 vi 56.4 55.3 54.4 54.5 54.1 56.2 54.5 55.5 56.3 56.0 55.5 56.7 55.4 56.5 56.9 57.3 54.8 55.5 56.0 56.3 hu 81.3 82.9 81.2 78.3 82.6 81.8 81.6 81.8 81.1 81.2 81.5 82.4 80.6 81.8 81.4 79.7 79.9 81.1 81.0 81.6 yo - - - - - - - - - - - - - - - - - - - - id 81.7 81.0 81.1 79.2 81.0 81.3 81.1 80.8 81.0 80.9 80.7 80.7 80.4 81.2 80.8 81.5 81.0 81.4 80.0 81.2 zh 40.9 27.2 48.6 12.0 45.3 37.7 35.1 23.1 23.2 29.6 44.3 45.7 38.4 34.0 39.0 19.0 43.6 23.9 19.6 31.3 it
Table 8: Full POS Results. kk, th, tl and yo do not have development set data.
M L M t u o h t i W M L M h t i W M L M t u o h t i W M L M h t i W af ar XLM-R 77.7 47.1 ANLI+ 75.4 76.9 MNLI 73.8 QQP 76.0 SQuADv2.0 79.1 SQuADv1.1 77.0 HellaSwag CCG 77.4 Cosmos QA 76.6 77.6 CSQA 78.5 Multi-task 52.7 48.3 40.9 48.0 52.6 54.9 51.5 49.3 46.1 49.2 ANLI+ 76.4 78.0 MNLI 77.1 QQP 78.0 SQuADv2.0 77.7 SQuADv1.1 78.7 HellaSwag CCG 74.5 Cosmos QA 78.2 77.4 CSQA 51.5 52.3 46.7 46.5 58.0 47.0 46.4 39.1 48.8 kk ko XLM-R 48.7 54.5 ANLI+ 50.2 MNLI 51.7 QQP 50.4 SQuADv2.0 49.9 SQuADv1.1 51.8 HellaSwag 50.5 52.4 CCG Cosmos QA 48.4 49.7 CSQA 53.2 Multi-task 52.6 58.8 40.1 58.1 57.1 58.4 52.7 52.4 52.0 57.8 ANLI+ 52.9 54.7 MNLI 49.9 QQP 52.1 SQuADv2.0 51.6 SQuADv1.1 53.6 HellaSwag CCG 54.6 Cosmos QA 49.7 52.2 CSQA 56.8 57.5 54.5 60.8 57.7 58.9 53.5 52.5 54.4 bg 81.9 78.1 80.5 75.5 81.1 80.1 82.7 78.7 79.2 78.9 82.0 80.7 81.7 79.0 82.8 81.4 81.8 76.7 80.0 78.9 ml 58.8 61.2 64.8 51.2 61.6 61.7 56.6 57.7 60.3 59.1 60.8 60.0 63.5 63.3 65.1 62.7 62.5 60.6 55.7 60.4 bn de 74.9 78.6 72.7 72.8 66.0 71.8 75.5 76.6 72.5 76.0 75.4 73.3 76.4 77.7 71.3 78.4 77.8 79.1 78.4 77.8 78.4 78.9 73.3 73.0 72.9 71.7 75.2 73.8 74.5 73.8 73.9 79.2 79.6 79.4 79.0 78.0 79.7 76.9 79.0 78.8 mr ms 61.8 54.1 63.0 61.3 51.4 62.5 59.8 66.6 59.6 62.1 62.9 61.0 66.8 69.8 61.4 72.1 50.4 72.8 52.3 56.9 62.4 69.3 61.1 63.3 64.6 63.2 60.2 63.2 62.8 60.2 61.1 75.4 66.3 54.7 54.7 62.2 72.4 69.1 52.1 52.9 el en es et eu fa ï¬ 76.3 81.6 74.7 77.2 61.2 58.2 78.3 76.3 77.9 71.6 78.2 78.1 78.9 76.2 76.1 76.2 80.1 80.9 84.2 75.8 84.3 80.8 84.3 80.8 81.2 81.3 84.5 71.6 76.9 65.5 74.7 75.3 77.8 73.0 73.2 77.3 76.6 72.8 78.5 69.3 78.4 76.7 78.0 78.0 76.6 75.2 78.5 52.2 62.1 55.5 53.9 54.3 58.8 56.9 59.8 59.8 59.4 60.7 58.3 49.9 56.9 61.9 65.0 62.1 55.8 61.9 49.4 75.8 78.7 73.1 78.9 78.7 77.5 78.2 77.8 78.0 79.1 77.8 78.1 76.3 77.3 77.4 78.2 75.7 77.2 76.3 84.3 84.4 81.9 84.2 82.1 84.8 80.5 81.4 81.9 75.4 77.2 74.2 74.8 69.6 73.6 72.6 70.3 75.2 78.0 79.4 78.7 79.0 76.1 79.2 77.7 78.8 79.5 57.7 59.6 61.8 61.6 54.1 55.8 58.9 65.4 66.7 49.7 60.6 66.0 63.3 58.4 55.6 59.6 48.9 58.6 77.6 79.2 78.3 79.5 77.5 78.2 77.7 78.7 79.6 my nl pt ru sw ta te 53.7 83.2 80.7 69.3 69.8 58.2 50.8 46.5 54.9 32.5 50.0 52.2 59.4 50.0 50.2 46.1 54.2 81.8 83.0 78.2 83.1 83.3 83.2 82.5 82.8 82.5 83.8 78.7 80.8 73.0 82.3 80.8 82.5 79.0 79.5 80.3 80.8 67.0 70.2 50.8 70.8 69.8 70.8 67.1 67.4 65.4 69.4 66.9 70.3 65.1 65.4 69.2 69.9 67.0 67.8 69.0 70.6 55.0 59.3 47.3 62.6 58.3 63.7 55.3 57.2 57.1 58.9 52.1 55.4 41.4 53.6 49.5 53.0 49.1 51.4 51.2 53.7 49.5 49.6 49.0 54.8 52.9 54.7 41.6 48.1 47.8 83.4 83.4 82.9 83.4 81.8 82.8 80.7 82.9 83.4 80.9 81.1 78.9 80.9 77.7 80.9 78.1 78.9 80.7 68.3 70.3 68.7 71.6 71.4 71.3 65.4 67.1 68.5 71.0 72.2 70.9 72.6 68.5 70.6 68.1 66.6 69.0 57.2 57.0 58.0 63.0 59.7 59.5 55.1 55.3 57.9 49.8 53.5 50.7 54.1 49.9 52.0 51.6 47.7 50.1 fr he 78.3 50.2 77.4 81.1 72.8 82.5 78.4 80.3 77.3 77.0 78.2 81.2 49.1 55.1 42.6 56.0 52.8 57.0 48.6 46.8 48.9 56.4 80.1 81.4 78.0 80.0 78.7 79.4 77.0 77.7 78.5 54.8 55.1 50.4 57.6 54.8 55.0 48.1 48.3 47.7 th tl 2.2 73.2 2.5 1.0 1.6 0.6 0.8 1.1 2.6 1.3 1.8 2.2 71.2 74.8 67.4 74.8 71.6 75.1 70.0 74.6 73.1 75.2 0.9 1.1 1.1 0.4 1.5 2.4 1.3 0.9 1.4 74.5 74.1 74.0 75.3 72.9 73.6 68.7 74.7 73.6 hi hu 68.7 80.6 69.6 69.0 59.8 68.9 65.6 71.2 67.3 67.8 67.6 70.6 79.6 81.1 74.3 79.8 80.3 81.8 79.7 79.4 79.6 81.0 68.9 68.6 69.1 67.5 67.5 69.8 66.3 68.0 68.2 80.8 81.0 81.6 81.9 78.8 81.3 80.1 80.8 81.0 tr ur 81.1 67.0 78.0 80.5 72.3 80.0 79.1 78.0 81.0 80.7 80.2 77.2 67.3 56.9 57.2 63.2 58.6 70.0 65.3 60.8 73.3 57.7 79.0 80.9 82.3 80.4 78.1 80.1 79.8 80.8 81.5 59.8 61.1 70.2 59.8 54.2 58.4 61.9 59.5 63.2 id 53.7 52.7 55.7 49.2 56.4 54.6 54.3 54.9 53.2 55.6 57.0 54.8 51.3 53.2 62.0 49.9 54.1 53.4 55.1 55.3 vi 74.9 73.9 78.1 67.9 78.9 76.3 75.0 74.2 74.9 73.5 75.6 76.3 75.1 77.1 77.6 71.5 78.3 68.8 74.0 74.0 it ja 80.8 15.6 78.9 80.8 75.9 80.8 80.8 81.4 79.9 80.0 80.1 80.7 13.1 16.4 5.7 18.1 18.7 19.6 15.9 14.1 11.6 20.7 80.5 81.0 80.1 80.7 79.5 81.3 78.7 81.2 81.3 14.4 14.0 15.1 20.0 14.5 18.5 13.8 13.2 12.2 yo zh 33.2 23.6 43.3 38.9 43.9 41.2 47.5 42.1 37.6 34.8 35.3 46.1 18.9 25.2 8.6 22.5 26.2 29.7 23.3 19.5 19.3 30.4 31.7 43.4 40.3 33.6 34.3 36.8 37.9 34.9 43.6 22.5 22.8 24.9 28.0 22.4 24.9 19.8 19.3 19.3 jv ka 56.2 54.3 54.2 54.4 61.8 52.1 56.9 60.3 55.5 53.8 64.7 54.9 62.0 62.6 62.3 55.9 58.1 57.1 58.9 60.4 Avg 62.8 - 62.0 64.2 55.9 64.1 63.0 65.5 62.1 61.8 62.3 64.7 - - - - - - - - - - 63.2 64.3 63.5 64.7 62.6 64.2 61.6 61.3 62.9 - - - - - - - - -
Table 9: Full NER Results
M L M t u o h t i W M L M h t i W ar XLM-R 72.5 / 53.4 ANLI+ 72.9 / 55.0 70.7 / 53.2 MNLI 68.4 / 50.4 QQP 73.8 / 56.0 SQuADv2.0 75.9 / 59.9 SQuADv1.1 73.9 / 56.9 HellaSwag CCG 71.5 / 54.2 Cosmos QA 73.2 / 53.8 72.6 / 53.4 CSQA 73.2 / 56.4 Multi-task ANLI+ 72.1 / 52.4 72.5 / 54.8 MNLI 72.8 / 55.3 QQP 72.3 / 55.0 SQuADv2.0 73.3 / 56.1 SQuADv1.1 73.3 / 56.2 HellaSwag CCG 71.8 / 53.2 Cosmos QA 72.5 / 53.9 73.0 / 54.0 CSQA de 77.7 / 61.2 77.2 / 60.7 77.4 / 60.2 73.2 / 56.5 79.5 / 62.0 80.3 / 63.6 78.7 / 61.3 76.3 / 58.5 78.1 / 62.2 79.5 / 62.4 79.1 / 61.8 77.3 / 59.8 78.4 / 60.7 78.8 / 61.6 79.0 / 63.3 79.0 / 62.9 77.4 / 59.7 77.4 / 60.5 77.2 / 61.2 77.6 / 60.7 el 77.6 / 59.2 75.8 / 58.3 76.8 / 59.1 73.3 / 55.9 78.6 / 60.6 80.3 / 62.1 77.9 / 58.8 75.9 / 58.2 77.3 / 58.3 78.3 / 59.4 78.3 / 60.0 76.1 / 57.6 77.8 / 60.4 76.9 / 58.8 76.9 / 58.6 78.8 / 60.5 78.0 / 58.7 75.7 / 56.9 76.9 / 59.1 77.4 / 58.7 en es hi ru 86.3 / 74.2 80.0 / 61.0 73.7 / 57.5 77.7 / 59.8 84.9 / 73.1 84.2 / 72.6 82.3 / 70.6 86.7 / 75.5 88.3 / 77.4 86.1 / 75.6 84.2 / 72.3 86.7 / 75.4 87.1 / 76.1 85.5 / 74.2 78.4 / 59.5 80.3 / 62.5 75.4 / 57.3 81.5 / 63.6 81.8 / 63.2 79.6 / 60.1 79.0 / 60.1 79.9 / 61.9 81.0 / 62.9 81.1 / 62.9 73.1 / 56.9 72.2 / 55.9 68.5 / 52.5 72.7 / 56.2 76.1 / 59.2 74.3 / 57.5 72.3 / 54.9 74.2 / 57.7 74.9 / 58.5 74.0 / 56.5 76.8 / 59.9 77.8 / 61.3 74.2 / 57.5 79.2 / 61.8 80.0 / 64.1 78.5 / 62.8 76.7 / 60.0 77.9 / 59.4 77.6 / 60.3 77.7 / 61.7 85.8 / 74.1 86.4 / 75.5 85.9 / 74.4 85.3 / 73.9 86.6 / 75.5 85.1 / 73.6 84.8 / 72.9 85.1 / 72.9 86.2 / 74.5 78.7 / 58.8 80.4 / 61.3 79.8 / 61.2 80.3 / 61.9 80.7 / 62.4 79.8 / 61.2 79.3 / 60.1 79.2 / 60.6 80.3 / 61.1 72.9 / 55.3 73.6 / 56.6 73.9 / 56.3 73.1 / 56.9 74.6 / 57.2 74.7 / 57.6 73.1 / 55.8 73.4 / 57.5 73.1 / 57.3 76.9 / 59.4 78.2 / 61.7 78.1 / 61.3 77.8 / 61.7 79.2 / 62.8 77.9 / 61.0 75.8 / 57.1 76.4 / 57.7 77.8 / 59.9 th 72.8 / 62.3 73.0 / 63.3 72.9 / 63.5 68.6 / 60.2 71.0 / 56.8 75.6 / 65.5 73.6 / 64.5 71.2 / 60.9 72.3 / 61.5 69.7 / 58.9 71.6 / 61.8 73.0 / 63.4 73.9 / 64.5 72.0 / 61.0 72.5 / 61.1 71.2 / 58.9 72.7 / 61.8 70.3 / 58.3 72.0 / 61.7 71.4 / 59.6 tr 72.6 / 54.8 72.1 / 55.0 71.9 / 56.3 68.3 / 51.4 75.0 / 59.1 75.8 / 59.2 73.5 / 56.6 71.7 / 55.3 73.3 / 55.6 73.4 / 56.5 73.7 / 57.6 72.3 / 55.3 72.5 / 57.5 73.4 / 57.7 72.8 / 55.8 73.8 / 56.3 73.2 / 57.6 71.7 / 55.6 72.1 / 55.1 72.1 / 55.0 vi 77.6 / 58.0 78.0 / 57.6 78.1 / 59.7 72.9 / 53.4 78.6 / 58.9 80.5 / 61.2 78.8 / 59.1 76.4 / 56.9 78.2 / 58.0 78.2 / 58.1 78.8 / 59.1 78.5 / 57.8 79.0 / 60.3 78.2 / 59.0 77.8 / 58.2 79.4 / 60.6 77.8 / 58.8 77.2 / 57.0 77.4 / 57.6 77.9 / 58.7 zh 68.7 / 58.2 68.3 / 59.0 68.0 / 60.0 66.3 / 58.0 68.8 / 57.6 70.8 / 61.3 69.2 / 59.4 67.9 / 58.2 68.3 / 58.5 67.5 / 57.3 68.1 / 57.0 70.9 / 61.0 69.0 / 59.7 67.6 / 57.2 68.4 / 58.6 69.3 / 59.6 67.7 / 58.3 66.9 / 57.4 68.6 / 59.0 71.2 / 60.7 Avg 76.1 / 60.0 75.5 / 59.8 75.5 / 60.4 72.0 / 56.7 76.9 / 60.7 78.7 / 63.3 76.7 / 61.1 74.8 / 59.0 76.3 / 60.2 76.3 / 60.3 76.5 / 60.8 75.9 / 59.5 76.5 / 61.2 76.1 / 60.4 76.0 / 60.4 76.9 / 61.2 76.1 / 60.4 74.9 / 58.6 75.5 / 59.6 76.2 / 60.0
Table 10: Full XQuAD Results
M L M t u o h t i W M L M h t i W ar de en es hi vi zh XLM-R 62.7 / 42.4 69.1 / 52.0 81.6 / 68.6 72.2 / 53.0 68.0 / 50.7 69.5 / 47.6 67.9 / 46.2 ANLI+ 64.1 / 43.9 64.2 / 43.5 MNLI 60.5 / 39.7 QQP 66.1 / 45.3 SQuADv2.0 67.4 / 46.4 SQuADv1.1 64.2 / 43.1 HellaSwag CCG 62.7 / 41.6 Cosmos QA 63.8 / 43.9 64.0 / 43.9 CSQA 65.1 / 44.1 Multi-task 66.8 / 49.8 68.1 / 51.8 62.4 / 45.5 68.2 / 50.2 69.6 / 52.9 68.8 / 52.3 67.5 / 50.4 68.2 / 50.4 68.8 / 52.0 70.2 / 54.9 82.5 / 69.4 82.7 / 70.0 79.0 / 66.0 83.5 / 71.1 84.1 / 70.8 83.5 / 70.9 82.9 / 70.0 82.2 / 69.0 83.4 / 70.6 82.9 / 69.4 71.9 / 52.6 73.7 / 54.8 70.7 / 51.6 73.6 / 55.4 75.3 / 56.8 73.0 / 53.6 72.9 / 54.6 72.9 / 54.2 75.2 / 55.0 75.2 / 56.4 69.2 / 50.5 70.3 / 52.7 62.9 / 45.4 68.5 / 51.5 72.5 / 54.8 69.2 / 51.7 66.1 / 50.1 69.4 / 51.7 69.1 / 51.5 70.1 / 52.3 70.5 / 49.7 68.9 / 49.5 67.0 / 47.6 71.7 / 52.4 70.9 / 51.7 69.8 / 48.7 68.9 / 48.9 70.8 / 50.1 72.6 / 52.1 72.0 / 51.7 66.9 / 44.8 67.1 / 46.0 63.5 / 41.1 68.2 / 46.4 69.4 / 47.0 68.5 / 46.2 66.4 / 45.6 66.6 / 44.4 69.2 / 46.6 68.6 / 46.2 ANLI+ 62.7 / 41.8 62.9 / 41.0 MNLI 64.6 / 44.9 QQP 64.7 / 43.9 SQuADv2.0 64.4 / 43.3 SQuADv1.1 64.7 / 44.3 HellaSwag CCG 60.4 / 41.4 Cosmos QA 63.4 / 43.1 64.3 / 43.7 CSQA 68.5 / 51.4 69.2 / 53.5 68.1 / 51.2 66.6 / 51.0 68.0 / 50.0 68.4 / 52.3 66.5 / 50.8 69.0 / 51.0 69.5 / 51.8 82.1 / 69.0 82.6 / 69.4 83.2 / 70.4 82.1 / 69.6 83.1 / 70.0 83.3 / 70.4 81.8 / 68.6 81.9 / 68.9 82.6 / 69.4 73.6 / 54.2 74.3 / 54.4 74.0 / 55.6 73.1 / 55.2 75.2 / 56.2 73.9 / 55.0 72.8 / 54.2 72.3 / 53.6 73.4 / 54.4 66.7 / 48.7 68.0 / 50.7 70.4 / 53.1 70.2 / 53.1 68.5 / 51.9 69.5 / 52.1 66.2 / 48.7 66.3 / 48.9 68.0 / 50.7 69.5 / 49.3 70.5 / 50.5 69.1 / 49.3 69.0 / 51.1 71.2 / 51.9 69.9 / 47.9 67.7 / 46.2 69.1 / 47.6 70.9 / 48.7 66.2 / 44.2 68.0 / 45.8 68.3 / 45.6 68.6 / 47.2 66.8 / 44.6 67.7 / 44.8 64.5 / 44.6 66.0 / 45.2 67.7 / 45.8 Avg 70.1 / 51.5 70.3 / 51.5 70.7 / 52.6 66.6 / 48.1 71.4 / 53.2 72.8 / 54.4 71.0 / 52.4 69.6 / 51.6 70.6 / 52.0 71.8 / 53.1 72.0 / 53.6 69.9 / 51.2 70.8 / 52.2 71.1 / 52.9 70.6 / 53.0 71.0 / 52.6 71.1 / 52.4 68.6 / 50.7 69.7 / 51.2 70.9 / 52.1
Table 11: Full MLQA Results
M L M t u o h t i W M L M h t i W ar bn en ï¬ id ko ru sw te XLM-R 64.5 / 46.9 59.5 / 41.6 70.4 / 56.6 64.9 / 49.2 75.1 / 59.8 54.7 / 39.5 65.4 / 43.6 67.2 / 48.7 68.8 / 48.3 ANLI+ 67.3 / 47.8 67.8 / 49.7 MNLI 63.2 / 44.4 QQP 76.5 / 59.8 SQuADv2.0 76.1 / 60.0 SQuADv1.1 69.9 / 49.4 HellaSwag CCG 63.6 / 41.8 Cosmos QA 71.7 / 51.9 70.9 / 52.1 CSQA 73.3 / 52.3 Multi-task 54.9 / 37.2 60.6 / 40.7 43.8 / 26.5 77.7 / 63.7 75.6 / 61.9 60.6 / 42.5 54.1 / 37.2 65.9 / 48.7 67.8 / 49.6 66.7 / 48.7 71.0 / 57.3 71.6 / 57.7 64.4 / 52.7 76.1 / 63.2 77.6 / 66.6 72.2 / 59.1 68.5 / 55.9 73.3 / 61.6 74.6 / 60.9 75.6 / 63.6 64.7 / 47.8 66.5 / 48.6 56.3 / 39.9 78.3 / 64.3 76.0 / 61.3 63.0 / 44.1 59.6 / 41.7 66.7 / 50.9 69.6 / 52.6 74.7 / 59.6 74.9 / 57.5 76.6 / 61.9 71.6 / 57.0 83.1 / 69.9 82.5 / 68.3 76.7 / 60.4 73.2 / 57.5 78.5 / 63.4 77.0 / 60.2 81.7 / 67.3 54.5 / 41.3 55.3 / 42.4 47.5 / 32.6 68.1 / 56.5 63.7 / 51.4 54.7 / 39.1 50.8 / 37.7 52.6 / 36.6 60.8 / 46.4 60.2 / 46.4 62.4 / 33.0 63.9 / 39.0 57.4 / 38.2 73.0 / 51.5 71.1 / 44.7 61.4 / 33.0 60.2 / 33.4 66.2 / 44.1 63.6 / 36.0 71.0 / 43.0 67.2 / 47.3 66.9 / 48.5 54.5 / 36.5 79.1 / 67.1 76.5 / 63.5 66.3 / 48.3 66.8 / 49.7 68.0 / 51.3 70.8 / 53.5 76.0 / 64.3 68.2 / 46.9 71.0 / 51.4 45.5 / 26.2 79.2 / 61.1 79.0 / 61.6 70.6 / 47.8 66.2 / 43.8 74.5 / 54.7 73.3 / 54.7 77.2 / 58.4 ANLI+ 67.1 / 48.9 67.3 / 49.7 MNLI 67.8 / 49.0 QQP 76.9 / 60.5 SQuADv2.0 77.0 / 59.3 SQuADv1.1 68.8 / 50.4 HellaSwag CCG 68.1 / 49.1 Cosmos QA 66.6 / 46.6 68.8 / 50.4 CSQA 59.5 / 42.5 60.0 / 41.6 55.7 / 37.2 70.1 / 54.9 68.5 / 51.3 62.6 / 47.8 57.5 / 39.8 56.8 / 37.2 60.2 / 43.4 72.2 / 58.9 71.2 / 59.3 69.8 / 56.1 76.6 / 64.5 75.4 / 64.3 70.9 / 56.8 69.0 / 55.9 71.5 / 58.0 71.3 / 59.1 67.2 / 51.4 66.8 / 50.4 64.1 / 47.1 74.4 / 59.6 77.2 / 63.4 64.0 / 48.6 65.9 / 48.6 64.2 / 45.0 67.6 / 50.5 76.8 / 60.7 78.1 / 62.1 74.2 / 58.6 83.4 / 69.7 83.3 / 71.0 77.4 / 61.8 76.5 / 61.9 75.0 / 57.0 76.9 / 59.8 54.9 / 42.0 56.4 / 42.0 49.0 / 34.4 61.6 / 48.6 63.7 / 51.8 54.6 / 40.9 55.0 / 39.9 56.3 / 41.3 54.0 / 41.3 62.4 / 35.3 62.2 / 33.9 60.0 / 34.5 71.3 / 45.2 71.7 / 47.9 61.2 / 31.7 61.6 / 31.9 63.6 / 39.0 63.5 / 38.1 70.3 / 52.1 68.5 / 50.7 64.5 / 45.7 74.0 / 61.5 73.1 / 56.5 68.2 / 49.5 67.5 / 49.3 69.0 / 51.1 69.5 / 52.9 70.4 / 53.1 70.0 / 48.4 70.1 / 45.6 76.7 / 59.3 76.4 / 59.0 71.4 / 50.5 56.3 / 30.3 63.6 / 46.3 72.8 / 54.1 Avg 65.6 / 48.2 65.0 / 46.2 66.7 / 48.9 56.0 / 39.3 76.8 / 61.9 75.3 / 59.9 66.1 / 47.1 62.6 / 44.3 68.6 / 51.5 69.8 / 51.8 72.9 / 56.0 66.8 / 49.4 66.7 / 48.7 63.9 / 45.3 73.9 / 58.2 74.0 / 58.3 66.6 / 48.7 64.2 / 45.2 65.2 / 46.8 67.2 / 49.9
Table 12: Full TyDiQA Results
M L M t u o h t i W M L M h t i W de fr ru XLM-R 77.7 62.7 79.2 ANLI+ 94.6 94.2 MNLI 94.2 QQP 94.0 SQuADv2.0 94.2 SQuADv1.1 94.6 HellaSwag CCG 88.3 Cosmos QA 94.1 95.1 CSQA 94.3 Multi-task 89.8 90.2 91.0 89.8 90.5 91.9 82.9 90.2 90.6 90.4 93.5 93.5 93.3 93.0 93.1 93.9 86.6 93.2 93.5 93.4 ANLI+ 93.4 92.7 MNLI 90.8 QQP 92.8 SQuADv2.0 92.9 SQuADv1.1 92.6 HellaSwag 87.6 CCG Cosmos QA 91.8 86.1 CSQA 88.0 89.0 86.9 87.0 89.5 87.5 78.5 86.9 80.8 92.9 93.2 90.6 91.4 92.7 91.4 87.6 91.7 87.9 zh 66.5 88.6 89.9 88.5 89.9 87.0 88.9 78.0 88.6 89.1 87.0 86.5 86.1 83.6 85.8 85.3 86.6 75.7 88.4 81.6 Avg 71.5 91.6 92.0 91.8 91.7 91.2 92.3 83.9 91.5 92.1 91.3 90.2 90.3 88.0 89.2 90.1 89.5 82.4 89.7 84.1
Table 13: Full BUCC Results
M L M t u o h t i W M L M h t i W M L M t u o h t i W M L M h t i W af XLM-R 30.5 ANLI+ 78.8 79.6 MNLI 80.4 QQP 73.7 SQuADv2.0 76.9 SQuADv1.1 78.9 HellaSwag CCG 71.9 Cosmos QA 78.6 79.5 CSQA 81.2 Multi-task ANLI+ 78.6 77.3 MNLI 74.4 QQP 70.8 SQuADv2.0 79.2 SQuADv1.1 57.1 HellaSwag CCG 71.9 Cosmos QA 69.7 54.3 CSQA ka XLM-R 11.8 ANLI+ 76.9 77.9 MNLI 78.7 QQP 67.0 SQuADv2.0 70.9 SQuADv1.1 80.8 HellaSwag CCG 65.1 Cosmos QA 75.7 80.8 CSQA 78.7 Multi-task ANLI+ 70.6 67.7 MNLI 66.0 QQP 53.8 SQuADv2.0 73.2 SQuADv1.1 38.5 HellaSwag CCG 58.3 Cosmos QA 63.3 33.4 CSQA ar 20.4 74.0 70.7 74.9 67.7 68.9 75.4 59.1 70.6 74.5 71.9 65.2 65.2 61.3 57.6 67.7 45.2 52.3 63.7 45.3 kk 17.4 67.3 67.7 69.4 63.0 63.7 72.0 56.9 69.9 70.3 68.2 64.7 63.3 64.2 54.8 66.8 43.1 51.3 56.0 36.2 bg 39.0 88.0 84.8 87.3 84.2 85.7 89.9 82.1 86.6 87.7 88.0 86.6 83.8 83.7 80.9 86.5 69.4 80.4 84.0 63.6 ko 35.5 84.6 84.3 86.4 80.8 83.3 86.5 76.8 83.6 85.5 85.0 83.6 81.8 80.2 77.5 83.9 70.5 74.6 80.7 65.9 bn 13.3 72.3 71.2 74.3 63.2 65.7 75.4 62.5 71.0 74.0 73.6 67.8 64.9 64.6 52.7 71.4 40.4 51.0 58.8 33.5 ml 19.4 90.8 89.8 92.9 82.8 87.3 92.1 82.5 90.1 91.7 91.4 88.9 84.3 82.0 72.5 89.8 63.2 76.3 79.0 47.0 de el es et eu fa ï¬ 63.9 18.9 48.0 25.8 19.9 42.0 41.5 97.4 96.6 96.5 96.0 96.4 97.7 95.5 96.4 96.9 97.1 82.4 82.5 84.1 74.3 76.3 84.8 74.4 80.5 83.6 82.9 91.2 93.1 93.8 89.2 89.5 93.1 87.0 91.8 92.9 92.6 70.9 74.3 74.7 70.5 76.9 79.8 67.3 77.6 79.1 73.1 53.3 59.2 60.2 54.0 58.4 64.8 49.0 60.7 65.8 58.6 91.5 90.0 91.0 87.9 88.0 91.8 84.7 89.8 90.0 90.4 88.6 89.0 90.3 85.5 88.5 92.0 82.6 91.3 92.0 89.6 97.0 97.2 96.2 96.6 96.7 89.7 95.0 95.1 87.0 78.2 76.1 75.7 63.4 80.4 57.8 72.6 74.2 50.5 90.2 92.1 88.1 84.5 91.6 73.4 86.0 84.6 70.0 79.1 77.7 76.7 71.5 83.1 64.0 73.5 76.5 58.8 59.3 57.3 59.4 47.4 66.3 42.2 51.0 58.6 35.7 89.3 88.1 86.3 85.4 90.8 77.1 83.3 85.7 74.1 89.1 88.8 87.0 86.9 91.1 76.4 84.1 85.2 71.0 mr nl pt ru sw ta te 15.2 52.6 47.2 42.1 7.9 9.1 19.7 80.5 80.4 82.9 71.6 74.7 81.1 70.3 78.7 82.7 80.4 93.6 92.5 93.3 89.7 91.7 93.2 88.9 92.0 93.3 92.1 91.0 91.3 92.5 90.4 90.2 91.9 88.8 91.3 91.4 92.0 90.5 89.2 91.6 86.9 89.1 92.0 84.5 89.7 90.4 90.2 30.8 32.8 35.1 27.7 31.5 35.1 24.9 34.1 35.9 34.4 76.5 70.0 81.4 60.9 60.6 79.2 60.3 72.3 73.3 68.7 85.5 78.2 90.6 74.4 77.8 87.2 65.4 84.6 84.6 83.8 75.6 75.0 70.6 61.5 78.9 39.7 58.4 63.1 30.9 92.0 90.8 89.4 90.0 93.0 79.1 89.0 89.4 76.6 91.0 90.5 89.8 87.0 90.4 78.4 86.9 87.2 74.7 88.1 87.8 86.7 87.2 89.7 80.0 82.9 86.1 75.5 29.0 29.7 30.5 20.3 33.8 19.2 23.3 26.2 19.0 70.0 62.2 60.9 41.7 76.2 30.9 46.9 55.7 28.3 76.9 73.5 76.1 51.7 85.0 55.6 60.3 71.8 49.6 fr 48.1 89.8 89.6 89.9 87.1 88.5 92.2 84.4 89.4 90.7 89.6 90.4 87.5 86.9 85.1 89.8 76.5 81.8 84.5 70.7 th 27.4 91.2 86.7 90.0 80.7 82.3 89.6 72.8 89.1 89.4 89.1 84.7 85.2 83.6 80.5 90.0 66.6 72.6 80.5 64.1 he 28.0 82.1 81.8 86.0 77.1 77.3 84.9 77.2 83.0 83.1 84.1 78.7 81.0 76.6 71.9 77.5 62.6 71.3 76.2 58.2 tl 10.3 59.9 60.9 64.6 54.2 59.3 64.5 53.3 59.7 65.4 62.3 51.6 53.4 52.3 38.0 54.5 33.1 40.9 44.6 26.0 hi 38.3 92.8 91.7 93.3 88.0 89.9 93.4 85.4 91.5 92.2 92.6 89.3 89.0 85.9 85.2 92.3 75.1 79.1 87.1 70.2 tr 37.8 87.9 88.8 91.4 85.9 88.3 90.6 82.6 89.6 90.2 88.9 88.0 87.6 84.9 81.8 90.0 71.5 82.5 83.0 64.1 hu 42.5 86.2 86.0 88.4 83.5 84.0 89.5 80.7 87.7 88.4 87.2 86.5 87.1 84.2 83.9 87.4 76.2 81.6 84.7 72.5 ur 22.5 79.7 74.5 81.7 70.6 68.3 82.4 64.7 79.8 77.1 77.6 71.7 71.2 72.7 63.3 78.6 49.8 55.8 63.7 53.0 id 47.0 92.1 91.7 92.1 89.5 90.4 92.1 87.2 91.4 91.8 92.6 91.0 90.5 89.8 90.4 91.8 82.5 87.2 88.5 80.4 vi 38.3 94.6 92.5 95.0 92.5 92.8 95.1 89.7 93.3 94.8 95.0 93.6 93.3 90.5 90.6 93.6 80.4 87.9 91.0 78.4 it 42.3 82.6 86.3 86.3 80.2 83.0 86.7 79.1 83.7 85.4 83.9 84.6 82.6 79.8 78.1 84.6 69.7 78.7 81.4 64.2 zh 41.2 93.0 91.2 92.3 89.3 90.8 92.6 84.8 90.9 92.9 92.8 91.6 88.5 88.0 89.1 90.9 77.7 80.3 85.1 75.1 ja 41.8 88.7 89.5 89.9 86.4 88.7 91.6 78.7 88.2 88.9 91.0 87.0 85.6 84.0 83.2 87.4 77.5 76.2 85.5 75.5 Avg 31.0 80.8 80.2 82.7 76.1 77.9 83.3 72.9 80.9 82.2 81.2 78.5 77.4 76.0 70.4 80.6 61.4 69.4 73.8 56.9 jv 10.2 31.7 30.7 35.6 32.2 30.2 37.1 24.9 37.1 33.7 34.1 26.3 27.3 28.8 16.1 26.3 22.0 12.7 24.9 16.6 - - - - - - - - - - - - - - - - - - - -
Table 14: Full Tatoeba Results
MNLI QQP HellaSwag en Translated to de Translated to ru Translated to sw 87.1 82.2 70.1 70.8 88.0 84.6 83.8 79.3 71.6 55.1 27.4 25.1
Table 15: Intermediate task performance on trained and evaluated on translated data. We report the median result for English (original) task data. | {
"id": "2004.11997"
} |
2005.12729 | Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO | We study the roots of algorithmic progress in deep policy gradient algorithms
through a case study on two popular algorithms: Proximal Policy Optimization
(PPO) and Trust Region Policy Optimization (TRPO). Specifically, we investigate
the consequences of "code-level optimizations:" algorithm augmentations found
only in implementations or described as auxiliary details to the core
algorithm. Seemingly of secondary importance, such optimizations turn out to
have a major impact on agent behavior. Our results show that they (a) are
responsible for most of PPO's gain in cumulative reward over TRPO, and (b)
fundamentally change how RL methods function. These insights show the
difficulty and importance of attributing performance gains in deep
reinforcement learning. Code for reproducing our results is available at
https://github.com/MadryLab/implementation-matters . | http://arxiv.org/pdf/2005.12729 | Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, Aleksander Madry | cs.LG, cs.RO, stat.ML | ICLR 2020 version. arXiv admin note: text overlap with
arXiv:1811.02553 | null | cs.LG | 20200525 | 20200525 | 0 2 0 2
y a M 5 2 ] G L . s c [
1 v 9 2 7 2 1 . 5 0 0 2 : v i X r a
Published as a conference paper at ICLR 2020
# IMPLEMENTATION MATTERS IN DEEP POLICY GRADIENTS: A CASE STUDY ON PPO AND TRPO
Logan Engstrom1*, Andrew Ilyas1*, Shibani Santurkar1, Dimitris Tsipras1, Firdaus Janoos2, Larry Rudolph1,2, and Aleksander M Ëadry1
# 1MIT 2Two Sigma {engstrom,ailyas,shibani,tsipras,madry}@mit.edu [email protected], [email protected]
# ABSTRACT
We study the roots of algorithmic progress in deep policy gradient algorithms through a case study on two popular algorithms: Proximal Policy Optimization (PPO) and Trust Region Policy Optimization (TRPO). Speciï¬cally, we investigate the consequences of âcode-level optimizations:â algorithm augmentations found only in implementations or described as auxiliary details to the core algorithm. Seemingly of secondary importance, such optimizations turn out to have a major impact on agent behavior. Our results show that they (a) are responsible for most of PPOâs gain in cumulative reward over TRPO, and (b) fundamentally change how RL methods function. These insights show the difï¬culty and importance of attributing performance gains in deep reinforcement learning.
# INTRODUCTION
Deep reinforcement learning (RL) algorithms have fueled many of the most publicized achievements in modern machine learning (Silver et al., 2017; OpenAI, 2018; Abbeel & Schulman, 2016; Mnih et al., 2013). However, despite these accomplishments, deep RL methods still are not nearly as reliable as their (deep) supervised learning counterparts. Indeed, recent research found the existing deep RL methods to be brittle (Henderson et al., 2017; Zhang et al., 2018), hard to reproduce (Hen- derson et al., 2017; Tucker et al., 2018), unreliable across runs (Henderson et al., 2017; 2018), and sometimes outperformed by simple baselines (Mania et al., 2018).
The prevalence of these issues points to a broader problem: we do not understand how the parts comprising deep RL algorithms impact agent training, either separately or as a whole. This unsat- isfactory understanding suggests that we should re-evaluate the inner workings of our algorithms. Indeed, the overall question motivating our work is: how do the multitude of mechanisms used in deep RL training algorithms impact agent behavior?
Our contributions. We analyze the underpinnings of agent behaviorâboth through the traditional metric of cumulative reward, and by measuring more ï¬ne-grained algorithmic properties. As a ï¬rst step, we conduct a case study of two of the most popular deep policy-gradient methods: Trust Region Policy Optimization (TRPO) (Schulman et al., 2015a) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). These two methods are closely related: PPO was originally devel- oped as a reï¬nement of TRPO.
We ï¬nd that much of the observed improvement in reward brought by PPO may come from seem- ingly small modiï¬cations to the core algorithm which we call code-level optimizations. These op- timizations are either found only in implementations of PPO, or are described as auxiliary details and are not present in the corresponding TRPO baselines1. We pinpoint these modiï¬cations, and perform an ablation study demonstrating that they are instrumental to the PPOâs performance.
Equal contribution. Work done in part while interning at Two Sigma. 1Note that these code-level optimizations are separate from âimplementation choicesâ like the choice of
PyTorch versus TensorFlow in that they intentionally change the training algorithmâs operation.
1
Published as a conference paper at ICLR 2020
This observation prompts us to study how code-level optimizations change agent training dynamics, and whether we can truly think of these optimizations as merely auxiliary improvements. Our results indicate that these optimizations fundamentally change algorithmsâ operation, and go even beyond improvements in agent reward. We ï¬nd that they majorly impact a key algorithmic principle behind TRPO and PPOâs operations: trust region enforcement.
Ultimately, we discover that the PPO code-optimizations are more important in terms of ï¬nal re- ward achieved than the choice of general training algorithm (TRPO vs. PPO). This result is in stark contrast to the previous view that the central PPO clipping method drives the gains seen in Schulman et al. (2017). In doing so, we demonstrate that the algorithmic changes imposed by such optimiza- tions make rigorous comparisons of algorithms difï¬cult. Without a rigorous understanding of the full impact of code-level optimizations, we cannot hope to gain any reliable insight from comparing algorithms on benchmark tasks.
Our results emphasize the importance of building RL methods in a modular manner. To progress towards more performant and reliable algorithms, we need to understand each componentâs impact on agentsâ behavior and performanceâboth individually, and as part of a whole.
Code for all the results shown in this work is available at https://github.com/MadryLab/ implementation-matters.
# 2 RELATED WORK
The idea of using gradient estimates to update neural networkâbased RL agents dates back at least to the work of Williams (1992), who proposed the REINFORCE algorithm. Later, Sutton et al. (1999) established a unifying framework that casts the previous algorithms as instances of the policy gradient method.
Our work focuses on proximal policy optimization (PPO) (Schulman et al., 2017) and trust region policy optimization (TRPO) (Schulman et al., 2015a), which are two of the most prominent policy gradient algorithms used in deep RL. Much of the original inspiration for the usage of the trust regions stems from the conservative policy update of Kakade (2001). This policy update, similarly to TRPO, uses a natural gradient descent-based greedy policy update. TRPO also bears similarity to the relative policy entropy search method of Peters et al. (2010), which constrains the distance between marginal action distributions (whereas TRPO constrains the conditionals of such action distributions).
Notably, Henderson et al. (2017) points out a number of brittleness, reproducibility, and experimen- tal practice issues in deep RL algorithms. Importantly, we build on the observation of Henderson et al. (2017) that ï¬nal reward for a given algorithm is greatly inï¬uenced depending on the code base used. Rajeswaran et al. (2017) and Mania et al. (2018) also demonstrate that on many of the benchmark tasks, the performance of PPO and TRPO can be matched by fairly elementary random- ized search approaches. Additionally, Tucker et al. (2018) showed that one of the recently proposed extensions of the policy gradient framework, i.e., the usage of baseline functions that are also action- dependent (in addition to being state-dependent), might not lead to better policies after all.
# 3 ATTRIBUTING SUCCESS IN PROXIMAL POLICY OPTIMIZATION
Our overarching goal is to better understand the underpinnings of the behavior of deep policy gra- dient methods. We thus perform a careful study of two tightly linked algorithms: TRPO and PPO (recall that PPO is motivated as TRPO with a different trust region enforcement mechanism). To better understand these methods, we start by thoroughly investigating their implementations in prac- tice. We ï¬nd that in comparison to TRPO, the PPO implementation contains many non-trivial op- timizations that are not (or only barely) described in its corresponding paper. Indeed, the standard implementation of PPO2 contains the following additional optimizations:
2From the OpenAI baselines GitHub repository: https://github.com/openai/baselines
2
Published as a conference paper at ICLR 2020
1. Value function clipping: Schulman et al. (2017) originally suggest ï¬tting the value net- work via regression to target values:
LV = (Vθt â Vtarg)2, but the standard implementation instead ï¬ts the value network with a PPO-like objective:
LY = max |(Vo, ~ Viarg)â + (Clip (Vo, Von. ~ & Vows +2) ~Viarg)â |
where Vθ is clipped around the previous value estimates (and ε is ï¬xed to the same value as the value used to clip probability ratios in the PPO loss function (cf. Eq. (2) in Section 4). 2. Reward scaling: Rather than feeding the rewards directly from the environment into the objective, the PPO implementation performs a certain discount-based scaling scheme. In this scheme, the rewards are divided through by the standard deviation of a rolling dis- counted sum of the rewards (without subtracting and re-adding the mean)âsee Algorithm 1 in Appendix A.2.
3. Orthogonal initialization and layer scaling: Instead of using the default weight initial- ization scheme for the policy and value networks, the implementation uses an orthogonal initialization scheme with scaling that varies from layer to layer.
4. Adam learning rate annealing: Depending on the task, the implementation sometimes anneals the learning rate of Adam (Kingma & Ba, 2014) (an already adaptive method) for optimization.
5. Reward Clipping: The implementation also clips the rewards within a preset range (usu- ally [â5, 5] or [â10, 10]).
6. Observation Normalization: In a similar manner to the rewards, the raw states are also not fed into the optimizer. Instead, the states are ï¬rst normalized to mean-zero, variance-one vectors.
7. Observation Clipping: Analagously to rewards, the observations are also clipped within a range, usually [â10, 10].
8. Hyperbolic tan activations: As observed by Henderson et al. (2017), implementations of policy gradient algorithms also use hyperbolic tangent function activations between layers in the policy and value networks.
9. Global Gradient Clipping: After computing the gradient with respect to the policy and the value networks, the implementation clips the gradients such the âglobal £2 normâ (i.e. the norm of the concatenated gradients of all parameters) does not exceed 0.5.
These optimizations may appear as merely surface-level or insigniï¬cant algorithmic changes to the core policy gradient method at hand. However, we ï¬nd that they dramatically impact the perfor- mance of PPO. Speciï¬cally, we perform a full ablation study on the four optimizations mentioned above3. Figure 1 shows a histogram of the ï¬nal rewards of agents trained with every possible con- ï¬guration of the above optimizationsâfor each conï¬guration, a grid search for the optimal learning rate is performed, and we measure the reward of random agents trained using the identiï¬ed learning rate. Our ï¬ndings suggest that many code-level optimizations are necessary for PPO to attain its claimed performance.
The above ï¬ndings show that our ability to understand PPO from an algorithmic perspective hinges on the ability to distill out its fundamental principles from such algorithm-independent (in the sense that these optimizations can be implemented for any policy gradient method) optimizations. We thus consider a variant of PPO called PPO-MINIMAL (PPO-M) which implements only the core of the algorithm. PPO-M uses the standard value network loss, no reward scaling, the default network initialization, and Adam with a ï¬xed learning rate. Importantly, PPO-M ignores all the code-level optimizations listed at the beginning of Section 3. We explore PPO-M alongside PPO and TRPO. We list all the algorithms we study and their deï¬ning properties in Table 1.
Overall, our results on the importance of these optimizations both corroborate results demonstrating the brittleness of deep policy gradient methods, and demonstrate that even beyond environmental brittleness, the algorithms themselves exhibit high sensitivity to implementation choices 4.
3Due to restrictions on computational resources, we could only perform a full ablation on the ï¬rst four of the identiï¬ed optimizations.
4This might also explain the difference between different codebases observed in Henderson et al. (2017)
3
Published as a conference paper at ICLR 2020
Humanoid-v2 Walker2d-v2
Figure 1: An ablation study on the ï¬rst four optimizations described in Section 3 (value clipping, reward scaling, network initialization, and learning rate annealing). For each of the 24 possible con- ï¬gurations of optimizations, we train a Humanoid-v2 (top) and Walker2d-v2 (bottom) agent using PPO with ï¬ve random seeds and a grid of learning rates, and choose the learning rate which gives the best average reward (averaged over the random seeds). We then consider all rewards from the âbest learning rateâ runs (a total of 5 à 24 agents), and plot histograms in which agents are parti- tioned based on whether each optimization is on or off. Our results show that reward normalization, Adam annealing, and network initialization each signiï¬cantly impact the rewards landscape with re- spect to hyperparameters, and were necessary for attaining the highest PPO reward within the tested hyperparameter grid. We detail our experimental setup in Appendix A.1.
Table 1: An overview of the algorithms studied in this work. Step method refers to the method used to build each training step, PPO clipping refers to the use of clipping in the step (as in Equation (2)), and PPO optimizations refer to the optimizations listed in Section 3.
Algorithm Section Step method Uses PPO clipping? Uses PPO optimizations? PPO â PPO v As in (Dhariwal et al. PPO-M Sec. PPO v PPO-NOCLIP _ Sec. | PPO x Found via grid search TRPO â TRPO x TRPO+ Se TRPO Found via grid search
# 4 CODE-LEVEL OPTIMIZATIONS HAVE ALGORITHMIC EFFECTS
The seemingly disproportionate effect of code-level optimizations identiï¬ed in our ablation study may lead us to ask: how do these seemingly superï¬cial code-level optimizations impact underlying agent behavior? In this section, we demonstrate that the code-level optimizations fundamentally alter agent behavior. Rather than merely improving ultimate cumulative award, such optimizations directly impact the principles motivating the core algorithms.
Trust Region Optimization. A key property of policy gradient algorithms is that update steps computed at any speciï¬c policy Ïθt are only guaranteed predictiveness in a neighborhood around θt. Thus, to ensure that the update steps we derive remain predictive, many policy gradient algo- rithms ensure that these steps stay in the vicinity of the current policy. The resulting âtrust regionâ methods (Kakade, 2001; Schulman et al., 2015a; 2017) try to constrain the local variation of the parameters in policy-space by restricting the distributional distance between successive policies.
4
Published as a conference paper at ICLR 2020
A popular method in this class is trust region policy optimization (TRPO) Schulman et al. (2015a). TRPO constrains the KL divergence between successive policies on the optimization trajectory, leading to the following problem:
. T9(ai|8¢) > max E(s¢,ar)or rails, AT at) st. Dix(mo(- | s)|(- | 8)) <6, Vs. (1)
In practice, we maximize this objective with a second-order approximation of the KL divergence and natural gradient descent, and replace the worst-case KL constraints over all possible states with an approximation of the mean KL based on the states observed in the current trajectory.
Proximal policy optimization. One disadvantage of the TRPO algorithm is that it can be compu- tationally costlyâthe step direction is estimated with nonlinear conjugate gradients, which requires the computation of multiple Hessian-vector products. To address this issue, Schulman et al. (2017) propose proximal policy optimization (PPO), which tries to enforce a trust region with a different objective that does not require computing a projection. Concretely, PPO proposes replacing the KL-constrained objective (1) of TRPO by clipping the objective function directly as:
max E(s,,a,)00 [min (clip (pr, ââ¬,1 +2) An (st, a0), prAe(sr,a)) (2)
where
Ït = Ïθ(at|st) Ï(at|st) . (3)
Note that this objective can be optimized without an explicit projection step, leading to a simpler parameter update during training. In addition to its simplicity, PPO is intended to be faster and more sample-efï¬cient than TRPO (Schulman et al., 2017).
Trust regions in TRPO and PPO. Enforcing a trust region is a core algorithmic property of different policy gradient methods. However, whether or not a trust region is enforced is not directly observable from the ï¬nal rewards. So, how does this algorithmic property vary across state-of-the-art policy gradient methods?
In Figure 2 we measure the mean KL divergence between successive policies in a training run of both TRPO and PPO-M (PPO without code-level optimizations). Recall that TRPO is designed speciï¬cally to constrain this KL-based trust region, while the clipping mechanism of PPO attempts to approximate it. Indeed, we ï¬nd that TRPO precisely enforces this trust region (this is unsuprising, and nearly by construction).
We thus turn our attention to the trust regions induced by training with PPO and PPO-M. First, we consider mathematically the contribution of a single state-action pair to the gradient of the PPO objective, which is given by
xo (als) _ c VoLppo = VoLe atals) «⬠[l-âe,1+<¢lor Ly < Lo 0 otherwise
,
Tl als where Lo = E(s,ayernn [Ae An(s.0) ; LS = Eieayernn [atip (a sl-e,1+ :) A,(s, a]
# and
are respectively the standard and clipped versions of the surrogate objective. As a result, since we initialize Ïθ as Ï (and thus the ratios start all equal to one) the ï¬rst step we take is identical to a maximization step over the unclipped surrogate objective. It thus stands to reason that the nature of the trust region enforced is heavily dependent on the method with which the clipped PPO objective is optimized, rather than on the objective itself. Therefore, the size of the step we take is determined solely by the steepness of the surrogate landscape (i.e. Lipschitz constant of the optimization prob- lem we solve), and we can end up moving arbitrarily far from the trust region. We hypothesize that
5
Published as a conference paper at ICLR 2020
800 â TRpO 175 _ tRpo â Treo = pro iso â PPO 0.06 = PPo 2 = Fpom = PPoM = Prom 600 gs 0.05 g E100 e < 400 * 0.04 o 2 75 3 E = 5.0 0.03 200 2S a a a ee 0.02 ° 100 200 = 300400 ° oo = 200300400 ° 100 200 = 300400 # Iterations # Iterations # Iterations
Figure 2: Per step mean reward, maximum ratio (c.f. (2)), mean KL, and mean KL for agents trained to solve the MuJoCo Humanoid-v2 task. The quantities are measured over the state-action pairs collected in the training step. Each line represents a training curve from a separate agent. The black dotted line represents the 1 + ⬠ratio constraint in the PPO algorithm, and we measure each quantity every twenty five steps. We take mean and max over the KL divergences between the conditional action distributions of successive policies. In the left plot, the reward for each trained agent. In the middle plot, the PPO variantsâ maximum ratios consistently violate the ratio âtrust region.â In the right plot, both PPO and PPO-M constraint the KL well (compared to the TRPO bound of 0.07). The two methods exhibit different behavior. We measure the quantities over a heldout set of state-action pairs and find little qualitative difference in the results (seen in Figure/4]in the appendix), suggesting that TRPO enforces a mean KL trust region. We show plots for additional tasks in the Appendix in Figure[3] We detail our experimental setup in Appendix
this dependence of PPO on properties of the optimizer rather than on the optimization objective con- tributes to the brittleness of the algorithm to hyperparameters such as learning rate and momentum, as observed by Henderson et al. (2018) and others.
The results we observe (shown in Figure 2) corroborate this intuition. For agents trained with op- timal parameters, all three algorithms are able to maintain a KL-based trust region. First, we note that all three algorithms fail to maintain a ratio-based trust region, despite PPO and PPO-M being trained directly with a ratio-clipping objective. Furthermore, the nature of the KL trust region en- forced differs between PPO and PPO-M, despite the fact that the core algorithm remains constant between the two methods; while PPO-M KL trends up as the number of iterations increases, PPO KL peaks halfway through training before trending down again.
The ï¬ndings from this experiment and the corresponding calculations demonstrate that perhaps a key factor in the behavior of PPO-trained agents even from an algorithmic viewpoint comes from auxiliary optimizations, rather than the core methodology.
# IDENTIFYING ROOTS OF ALGORITHMIC PROGRESS
State-of-the-art deep policy gradient methods are comprised of many interacting components. At what is generally described as their core, these methods incorporate mechanisms like trust region- enforcing steps, time-dependent value predictors, and advantage estimation methods for controlling the exploitation/exploration trade-off (Schulman et al., 2015b). However, these algorithms also incorporate many less oft-discussed optimizations (cf. Section 3) that ultimately dictate much of agent behavior (cf. Section 4). Given the need to improve on these algorithms, the fact that such optimizations are so important begs the question: how do we identify the true roots of algorithmic progress in deep policy gradient methods?
Unfortunately, answering this question is not easy. Going back to our study of PPO and TRPO, it is widely believed (and claimed) that the key innovation of PPO responsible for its improved perfor- mance over the baseline of TRPO is the ratio clipping mechanism discussed in Section 4. However, we have already shown that this clipping mechanism is insufï¬cient theoretically to maintain a trust region, and also that the method by which the objective is optimized appears to have signiï¬cant effect on the resulting trust region. If code-level optimizations are thus partially responsible for algorithmic properties of PPO, is it possible that they are also a key factor in PPOâs improved performance?
To address this question, we set out to further disentangle the impact of PPOâs core clipping mecha- nism from its code-level optimizations by once again considering variations on the PPO and TRPO
6
Published as a conference paper at ICLR 2020
Table 2: Full ablation of step choices (PPO or TRPO) and presence of code-level optimizations measuring agent performance on benchmark tasks. TRPO+ is a variant of TRPO that uses PPO inspired code-level optimizations, and PPO-M is a variant of PPO that does not use PPOâs code- level optimizations (cf. Section 3). Varying the use of code-level optimizations impacts performance signiï¬cantly more than varying whether the PPO or TRPO step is used. We detail our experimental setup in Appendix A.1. We train at least 80 agents for each estimate (more for some high-variance cases). We present 95% conï¬dence intervals computed via a 1000-sample bootstrap. We also present the AAI and ACLI metrics discussed in Section 5, which attempt to quantify the relative contribution of algorithmic choice vs. use of code-level optimizations respectively.
STEP WALKER2D-V2 MUJOCO TASK HOPPER-V2 HUMANOID-V2 PPO 3292 [3157, 3426] PPO-M 2735 [2602, 2866] 2791 [2709, 2873] TRPO 3050 [2976, 3126] TRPO+ 2513 [2391, 2632] 2142 [2008, 2279] 2043 [1948, 2136] 2466 [2381, 2549] 806 [785, 827] 674 [656, 695] 586 [576, 596] 1030 [979, 1083] AAI ACLI 242 557 99 421 224 444
algorithms. Speciï¬cally, we examine how employing the core PPO and TRPO steps changes model performance while controlling for the effect of code-level optimizations identiï¬ed in standard im- plementations of PPO (in particular, we focus on those covered in Section 3). These code-level optimizations are largely algorithm-independent, and so they can be straightforwardly applied or lightly adapted to any policy gradient method. The previously introduced PPO-M algorithm cor- responds to PPO without these optimizations. To further account for their effects, we study an additional algorithm which we denote as TRPO+, consisting of the core algorithmic contribution of TRPO in combination with PPOâs code-level optimizations as identiï¬ed in Section 3 5. We note that TRPO+ together with the other three algorithms introduced (PPO, PPO-M, and TRPO; all listed in Table 1) now capture all combinations of core algorithms and code-level optimizations, allowing us to study the impact of each in a ï¬ne-grained manner.
As our results show in Table 2, it turns out that code-level optimizations contribute to algorithmsâ increased performance often signiï¬cantly more than the choice of algorithm (i.e., using PPO vs. TRPO). For example, on Hopper-v2, PPO and TRPO see 17% and 21% improvements (respectively) when equipped with code-level optimizations. At the same time, for all tasks after ï¬xing the choice to use or not use optimizations, the core algorithm employed does not seem to have a signiï¬cant impact on reward. In Table 2, we quantify this contrast through the following two metrics, which we denote average algorithmic improvement (AAI) and average code-level improvement (ACLI):
AAI = max{|PPO â TRPO+|, |PPO-M â TRPO|},
ACLI = max{|PPO â PPO-M|, |TRPO+ â TRPO|}.
In short, AAI measures the maximal effect of switching step algorithms, whereas ACLI measures the maximal effect of adding in code-level optimizations for a ï¬xed choice of step algorithm.
PPO without clipping. Given the relative insigniï¬cance of the step mechanism compared to the use of code-level optimizations, we are prompted to ask: to what extent is the clipping mechanism of PPO actually responsible for the algorithmâs success? In Table 3, we assess this by considering a PPO-NOCLIP algorithm which makes use of common code-level optimizations (by gridding over the best possible combination of such optimizations) but does not employ a clipping mechanism (this is the same algorithm we studied in Section 4 in the context of trust region enforcement)ârecall that we list all the algorithms studied in Table 1.
It turns out that the clipping mechanism is not necessary to achieve high performanceâwe ï¬nd that PPO-NOCLIP performs uniformly better than PPO-M, despite the latter employing the core PPO
5We also add a new code-level optimization, a KL decay, inapplicable to PPO but meant to serve as the analog of Adam learning rate annealing.
7
Published as a conference paper at ICLR 2020
Table 3: Comparison of PPO performance to PPO without clipping. We ï¬nd that there is little dif- ference between the rewards attained by the two algorithms on the benchmark tasks. Note that both algorithms use code-level optimizations; our results indicate that the clipping mechanism is often of comparable or lesser importance to the use of code-level optimizations. We detail our experimental setup in Appendix A.1. We train at least 80 agents for each estimate (for some high-variance cases, more agents were used). We present bootstrapped 95% conï¬dence intervals computed with 1000 samples. We also present results from the OpenAI baselines (Dhariwal et al., 2017) where available.
WALKER2D-V2 HOPPER-V2 HUMANOID-V2 PPO PPO ( B A S E L I N E S) PPO-M PPO-NOCLIP 3292 [3157, 3426] 3424 2735 [2602, 2866] 2867 [2701, 3024] 2513 [2391, 2632] 2316 2142 [2008, 2279] 2371 [2316, 2424] 806 [785, 827] â 674 [656, 695] 831 [798, 869]
clipping mechanism. Moreover, introducing code-level optimizations seems to outweigh even the core PPO algorithm in terms of effect on rewards. In fact, we ï¬nd that with sufï¬cient hyperparameter tuning, PPO-NOCLIP often matches the performance of standard PPO, which includes a standard conï¬guration of code-level optimizations6. We also include benchmark PPO numbers from the OpenAI baselines repository (Dhariwal et al., 2017) (where available) to put results into context.
Our results suggest that it is difï¬cult to attribute success to different aspects of policy gradient algorithms without careful analysis.
# 6 CONCLUSION
In this work, we take a ï¬rst step in examining how the mechanisms powering deep policy gradi- ent methods impact agents both in terms of achieved reward and underlying algorithmic behavior. Wanting to understand agent operation from the ground up, we take a deep dive into the operation of two of the most popular deep policy gradient methods: TRPO and PPO. In doing so, we identify a number of âcode-level optimizationsââalgorithm augmentations found only in algorithmsâ imple- mentations or described as auxiliary details in their presentationâand ï¬nd that these optimizations have a drastic effect on agent performance.
In fact, these seemingly unimportant optimizations fundamentally change algorithm operation in ways unpredicted by the conceptual policy gradient framework. Indeed, the optimizations often dictate the nature of the trust region enforced by policy gradient algorithms, even controlling for the surrogate objective being optimized. We go on to test the importance of code-level optimizations in agent performance, and ï¬nd that PPOâs marked improvement over TRPO (and even stochastic gradient descent) can be largely attributed to these optimizations.
Overall, our results highlight the necessity of designing deep RL methods in a modular manner. When building algorithms, we should understand precisely how each component impacts agent trainingâboth in terms of overall performance and underlying algorithmic behavior. It is impos- sible to properly attribute successes and failures in the complicated systems that make up deep RL methods without such diligence. More broadly, our ï¬ndings suggest that developing an RL toolkit will require moving beyond the current benchmark-driven evaluation model to a more ï¬ne-grained understanding of deep RL methods.
# 7 ACKNOWLEDGEMENTS
We would like to thank Chloe Hsu for identifying a bug in our initial implementation of PPO and TPRO. Work supported in part by the NSF grants CCF-1553428, CNS-1815221, the Google PhD Fellowship, the Open Phil AI Fellowship, and the Microsoft Corporation.
6Note that it is possible that further reï¬nement on the code-level optimizations could be added on top of PPO to perhaps improve its performance to an even greater extent (after all, PPO-NOCLIP can only express a subset the training algorithms covered by PPO, as the latter leaves the clipping severity ε to be free parameter)
8
Published as a conference paper at ICLR 2020
# REFERENCES
Pieter Abbeel and John Schulman. Deep reinforcement learning through policy optimization. Tuto- rial at Neural Information Processing Systems, 2016.
Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, Yuhuai Wu, and Peter Zhokhov. Openai baselines. https: //github.com/openai/baselines, 2017.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560, 2017.
Peter Henderson, Joshua Romoff, and Joelle Pineau. Where did my optimum go?: An empirical analysis of gradient descent optimization in policy gradient methods, 2018.
Sham M. Kakade. A natural policy gradient. In NIPS, 2001.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.
Horia Mania, Aurelia Guy, and Benjamin Recht. Simple random search provides a competitive approach to reinforcement learning. CoRR, abs/1803.07055, 2018.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wier- stra, and Martin Riedmiller. Playing atari with deep reinforcement learning. In NeurIPS Deep Learning Workshop, 2013.
# OpenAI. Openai ï¬ve. https://blog.openai.com/openai-five/, 2018.
Jan Peters, Katharina Mülling, and Yasemin Altun. Relative entropy policy search. In AAAI, 2010.
Aravind Rajeswaran, Kendall Lowrey, Emanuel Todorov, and Sham M. Kakade. Towards general- ization and simplicity in continuous control. In NIPS, 2017.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889â1897, 2015a.
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High- arXiv preprint dimensional continuous control using generalized advantage estimation. arXiv:1506.02438, 2015b.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017.
Richard S. Sutton, David A. McAllester, Satinder P. Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. In NIPS, 1999.
George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E. Turner, Zoubin Ghahramani, and Sergey Levine. The mirage of action-dependent baselines in reinforcement learning. In ICML, 2018.
Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229â256, 1992.
Amy Zhang, Yuxin Wu, and Joelle Pineau. Natural environment benchmarks for reinforcement learning, 2018.
9
Published as a conference paper at ICLR 2020
# A APPENDIX
A.1 EXPERIMENTAL SETUP
All the hyperparameters used in this paper were obtained through grid searches. For PPO the ex- act code-level optimizations and their associated hyperparameters (e.g. coefï¬cients for entropy regularization, reward clipping, etc.) were taken from the OpenAI baselines repository 7, and gridding is performed over the value function learning rate, the clipping constant, and the learn- In TRPO, we grid over the same parameters (replacing learning rate sched- ing rate schedule. ule with the KL constraint), but omit the code-level optimizations. For PPO-NoClip, we grid over the same parameters as PPO, in addition to the conï¬guration of code-level optimizations (since we lack a good reference for what the optimal conï¬guration of these optimizations is). For TRPO+ we also grid over the code-level optimizations, and also implement a âKL sched- uleâ whereby the KL constraint can change over training (analogous to the learning rate anneal- ing optimization in PPO). Finally, for PPO-M, we grid over the same parameters as PPO (just learning rate schedules), without any code-level optimizations. The ï¬nal parameters for each al- gorithm are given below, and a more detailed account is available in our code release: https: //github.com/MadryLab/implementation-matters.
Table 4: Hyperparameters for all algorithms for Walker2d-v2.
PPO TRPO = =PPO-NoClip PPO-M TRPO+ Timesteps per iteration 2048 2048 2048 2048 2048 Discount factor (7) 0.99 0.99 0.99 0.99 0.99 GAE discount (\) 0.95 0.95 0.85 0.95 0.95 Value network LR 0.0003 0.0003 0.0006 0.0002 0.0001 Value network num. epochs 10 10 10 10 10 Policy network hidden layers [64, 64] [64,64] [64, 64] [64,64] [64, 64] Value network hidden layers [64, 64] [64,64] [64, 64] [64,64] [64, 64] KL constraint (4) 0.04 0.07 Fisher estimation fraction 0.1 0.1 Conjugate gradient steps 10 10 Conjugate gradient damping 0.1 0.1 Backtracking steps 10 10 Policy LR (Adam) 0.0004 7.25e-05 0.0001 Policy epochs 10 10 10 PPO Clipping ⬠0.2 le+32 0.2 Entropy coeff. 0 0 -0.01 0 0 Reward clipping [-10.0, 10.0] - [-30, 30] - [-10.0, 10.0] Gradient clipping (â¬, norm) = -1 -1 0.1 -1 1 Reward normalization returns none rewards none returns State clipping [-10.0, 10.0] - [-30, 30] - [-10.0, 10.0]
All error bars we plot are 95% conï¬dence intervals, obtained via bootstrapped sampling.
# 7https://github.com/openai/baselines
10
Published as a conference paper at ICLR 2020
Table 5: Hyperparameters for all algorithms for Humanoid-v2.
PPO TRPO = =PPO-NoClip PPO-M TRPO+ Timesteps per iteration 2048 2048 2048 2048 2048 Discount factor (7) 0.99 0.99 0.99 0.99 0.99 GAE discount (\) 0.95 0.95 0.85 0.95 0.85 Value network LR 0.0001 0.0003 â 5e-05 0.0004 â 5e-05 Value network num. epochs 10 10 10 10 10 Policy network hidden layers [64, 64] [64,64] [64, 64] [64,64] [64, 64] Value network hidden layers [64, 64] [64,64] [64, 64] [64,64] [64, 64] KL constraint (4) 0.07 0.1 Fisher estimation fraction 0.1 0.1 Conjugate gradient steps 10 10 Conjugate gradient damping 0.1 0.1 Backtracking steps 10 10 Policy LR (Adam) 0.00015 2e-05 9e-05 Policy epochs 10 10 10 PPO Clipping ⬠0.2 le+32 0.2 Entropy coeff. 0 0 0.005 0 0 Reward clipping [-10.0, 10.0] - [-10.0, 10.0] - [-10.0, 10.0] Gradient clipping (â¬2 norm) = -1 -1 0.5 -1 0.5 Reward normalization returns none returns none returns State clipping [-10.0, 10.0] [-10.0, 10.0] [-10.0, 10.0]
2048 0.99 0.85 5e-05 10 [64, 64] [64, 64] 0.1 0.1 10 0.1 10 N/A N/A N/A 0 [-10.0, 10.0] 0.5 returns [-10.0, 10.0]
Table 6: Hyperparameters for all algorithms for Hopper-v2.
PPO TRPO = PPO-NoClip PPO-M TRPO+ Timesteps per iteration 2048 2048 2048 2048 2048 Discount factor (7) 0.99 0.99 0.99 0.99 0.99 GAE discount (A) 0.95 0.95 0.925 0.95 0.95 Value network LR 0.00025 0.0002 0.0004 0.0004 â 0.0002 Value network num. epochs 10 10 10 10 10 Policy network hidden layers [64, 64] [64,64] [64, 64] [64,64] [64, 64] Value network hidden layers â_[64, 64] [64,64] [64, 64] [64,64] [64, 64] KL constraint (4) 0.13 0.04 Fisher estimation fraction 0.1 0.1 Conjugate gradient steps 10 10 Conjugate gradient damping 0.1 0.1 Backtracking steps 10 10 Policy LR (Adam) 0.0003 6e-05 0.00017 Policy epochs 10 10 10 PPO Clipping ⬠0.2 le+32 0.2 Entropy coeff. 0 0 -0.005 0 0 Reward clipping [-10.0, 10.0] - [-2.5, 2.5] - [-10.0, 10.0] Gradient clipping (¢2 norm) = -1 -1 4 -1 1 Reward normalization returns none rewards none returns State clipping [-10.0, 10.0] - [-2.5, 2.5] - [-10.0, 10.0]
2048 0.99 0.95 0.0004 10 [64, 64] [64, 64] N/A N/A N/A N/A N/A 0.00017 N/A N/A 10 N/A 0.2 0 0 [-10.0, 10.0] â 1 -1 returns none [-10.0, 10.0] â
11
Published as a conference paper at ICLR 2020
A.2 PPO CODE-LEVEL OPTIMIZATIONS
Algorithm 1 PPO scaling optimization. 1: procedure INITIALIZE-SCALING() 2: Ro â 0 3 RS = RUNNINGSTATISTICS() > New running stats class that tracks mean, standard deviation 4: procedure SCALE-OBSERVATION(T;) > Input: a reward r; 5: Re yRi-atr > + is the reward discount 6: ApDD(RS, R;) 7: return r;/STANDARD-DEVIATION(RS) > Returns scaled reward
12
Published as a conference paper at ICLR 2020
A.3 TRUST REGION OPTIMIZATION
(a) Walker2d-v2 (train)
(b) Hopper-v2 (train)
Figure 3: Per step mean reward, maximum ratio (c.f. , mean KL, and maximum versus mean KL for agents trained to solve the MuJoCo Humanoid task. The quantities are measured over the state-action pairs collected in the training step. Each line represents a training curve from a separate agent. The black dotted line represents the 1 + ⬠ratio constraint in the PPO algorithm, and we measure each quantity every twenty five steps. Compare the results here with Figure qualitatively nearly identical.
13
Published as a conference paper at ICLR 2020
(a) Humanoid-v2 (heldout) (b) Walker2d-v2 (heldout)
(c) Hopper-v2 (heldout)
Figure 4: Per step mean reward, maximum ratio (c.f. Q)). mean KL, and maximum versus mean KL for agents trained to solve the MuJoCo Humanoid task. The quantities are measured over state- action pairs collected from heldout trajectories. Each line represents a curve from a separate agent. The black dotted line represents the 1 + ⬠ratio constraint in the PPO algorithm, and we measure each quantity every twenty five steps. See that the mean KL for TRPO nearly always stays within the desired mean KL trust region (at 0.06).
14 | {
"id": "1709.06560"
} |
2005.12246 | Demoting Racial Bias in Hate Speech Detection | In current hate speech datasets, there exists a high correlation between
annotators' perceptions of toxicity and signals of African American English
(AAE). This bias in annotated training data and the tendency of machine
learning models to amplify it cause AAE text to often be mislabeled as
abusive/offensive/hate speech with a high false positive rate by current hate
speech classifiers. In this paper, we use adversarial training to mitigate this
bias, introducing a hate speech classifier that learns to detect toxic
sentences while demoting confounds corresponding to AAE texts. Experimental
results on a hate speech dataset and an AAE dataset suggest that our method is
able to substantially reduce the false positive rate for AAE text while only
minimally affecting the performance of hate speech classification. | http://arxiv.org/pdf/2005.12246 | Mengzhou Xia, Anjalie Field, Yulia Tsvetkov | cs.CL | Accepted at SocialNLP Workshop @ACL 2020 | null | cs.CL | 20200525 | 20200525 | 0 2 0 2
y a M 5 2 ] L C . s c [
1 v 6 4 2 2 1 . 5 0 0 2 : v i X r a
# Demoting Racial Bias in Hate Speech Detection
Mengzhou Xia Anjalie Field Yulia Tsvetkov Language Technologies Institute Carnegie Mellon University {mengzhox,anjalief,ytsvetko}@cs.cmu.edu
# Abstract
In current hate speech datasets, there exists a high correlation between annotatorsâ per- ceptions of toxicity and signals of African American English (AAE). This bias in anno- tated training data and the tendency of ma- chine learning models to amplify it cause to often be mislabeled as abu- AAE text sive/offensive/hate speech with a high false positive rate by current hate speech classiï¬ers. In this paper, we use adversarial training to mitigate this bias, introducing a hate speech classiï¬er that learns to detect toxic sentences while demoting confounds corresponding to AAE texts. Experimental results on a hate speech dataset and an AAE dataset suggest that our method is able to substantially reduce the false positive rate for AAE text while only minimally affecting the performance of hate speech classiï¬cation.
# Introduction
of annotator bias, which can in turn lead to the marginalization of racial minorities. More speciï¬- cally, annotators are more likely to label comments as abusive if they are written in African American English (AAE). These comments are assumed to be incorrectly labelled, as annotators do not mark them as abusive if they are properly primed with dialect and race information (Sap et al., 2019).
These biases in annotations are absorbed and am- pliï¬ed by automated classiï¬ers. Classiï¬ers trained on biased annotations are more likely to incor- rectly label AAE text as abusive than non-AAE text: the false positive rate (FPR) is higher for AAE text, which risks further suppressing an al- ready marginalized community. More formally, the disparity in FPR between groups is a violation of the Equality of Opportunity criterion, a commonly used metric of algorithmic fairness whose viola- tion indicates discrimination (Hardt et al., 2016). According to Sap et al. (2019), the false positive rate for hate speech/abusive language of the AAE dialect can reach as high as 46%.
The prevalence of toxic comments on social media and the mental toll on human moderators has gener- ated much interest in automated systems for detect- ing hate speech and abusive language (Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018), espe- cially language that targets particular social groups (Silva et al., 2016; Mondal et al., 2017; Mathew et al., 2019). However, deploying these systems without careful consideration of social context can increase bias, marginalization, and exclusion (Ben- der and Friedman, 2018; Waseem and Hovy, 2016). Most datasets currently used to train hate speech classiï¬ers were collected through crowdsourced annotations (Davidson et al., 2017; Founta et al., 2018), despite the risk of annotator bias. Waseem (2016) show that non-experts are more likely to label text as abusive than expert annotators, and Sap et al. (2019) show how lack of social con- text in annotation tasks further increases the risk
Thus, Sap et al. (2019) reveal two related issues in the task of hate speech classiï¬cation: the ï¬rst is biases in existing annotations, and the second is model tendencies to absorb and even amplify bi- ases from spurious correlations present in datasets (Zhao et al., 2017; Lloyd, 2018). While current datasets can be re-annotated, this process is time- consuming and expensive. Furthermore, even with perfect annotations, current hate speech detection models may still learn and amplify spurious corre- lations between AAE and abusive language (Zhao et al., 2017; Lloyd, 2018).
In this work, we present an adversarial approach to mitigating the risk of racial bias in hate speech classiï¬ers, even when there might be annotation bias in the underlying training data. In §2, we de- scribe our methodology in general terms, as it can be useful in any text classiï¬cation task that seeks
to predict a target attribute (here, toxicity) without basing predictions on a protected attribute (here, AAE). Although we aim at preserving the utility of classiï¬cation models, our primary goal is not to improve the raw performance over predicting the target attribute (hate speech detection), but rather to reduce the inï¬uence of the protected attribute.
In §3 and §4, we evaluate how well our approach reduces the risk of racial bias in hate speech classiï¬- cation by measuring the FPR of AAE text, i.e., how often the model incorrectly labels AAE text as abu- sive. We evaluate our methodology using two types of data: (1) a dataset inferred to be AAE using de- mographic information (Blodgett et al., 2016), and (2) datasets annotated for hate speech (Davidson et al., 2017; Founta et al., 2018) where we automat- ically infer AAE dialect and then demote indicators of AAE in corresponding hate speech classiï¬ers. Overall, our approach decreases the dialectal infor- mation encoded by the hate speech model, leading to a 2.2â3.2 percent reduction in FPR for AAE text, without sacriï¬cing the utility of hate speech classiï¬cation.
# 2 Methodology
Our goal is to train a model that can predict a tar- get attribute (abusive or not abusive language), but that does not base decisions off of confounds in data that result from protected attributes (e.g., AAE dialect). In order to achieve this, we use an adver- sarial objective, which discourages the model from encoding information about the protected attribute. Adversarial training is widely known for success- fully adapting models to learn representations that are invariant to undesired attributes, such as demo- graphics and topics, though they rarely disentangle attributes completely (Li et al., 2018; Elazar and Goldberg, 2018; Kumar et al., 2019; Lample et al., 2019; Landeiro et al., 2019).
Model Architecture Our demotion model con- sists of three parts: 1) An encoder H that encodes the text into a high dimensional space; 2) A binary classiï¬er C that predicts the target attribute from the input text; 3) An adversary D that predicts the protected attribute from the input text. We used a single-layer bidirectional LSTM encoder with an attention mechanism. Both classiï¬ers are two-layer MLPs with a tanh activation function.
Training Procedure Each data point in our train- ing set is a triplet {(xi, yi, zi); i â 1 . . . N }, where
xi is the input text, yi is the label for the target attribute and zi is label of the protected attribute. The (xi, yi) tuples are used to train the classiï¬er C, and the (xi, zi) tuple is used to train the adversary D.
We adapt a two-phase training procedure from Kumar et al. (2019). We use this procedure be- cause Kumar et al. (2019) show that their model is more effective than alternatives in a setting similar to ours, where the lexical indicators of the target and protected attributes are closely connected (e.g., words that are common in non-abusive AAE and are also common in abusive language datasets). In the ï¬rst phase (pre-training), we use the standard supervised training objective to update encoder H and classiï¬er C:
N min S> £L(C(H (a2) Yi) (1) i=1 C.H
After pre-training, the encoder should encode all relevant information that is useful for predicting the target attribute, including information predictive of the protected attribute.
In the second phase, starting from the best- performing checkpoint in the pre-training phase, we alternate training the adversary D with Equa- tion 2 and the other two models (H and C) with Equation 3:
N 1 min NW » L(D(H(2;)), zi) (2)
N HONS 4 (CCB) ui) + (3) (1â a) - L(D(H(a;)), 0.5)
Unlike Kumar et al. (2019), we introduce a hyper-parameter α, which controls the balance be- tween the two loss terms in Equation 3. We ï¬nd that α is crucial for correctly training the model (we detail this in §3).
We ï¬rst train the adversary to predict the pro- tected attribute from the text representations out- putted by the encoder. We then train the encoder to âfoolâ the adversary by generating representa- tions that will cause the adversary to output random guesses, rather than accurate predictions. At the same time, we train the classiï¬er to predict the target attribute from the encoder output.
Dataset Example Founta et al. (2018) I am hungry and I am dirty as hell bruh, need dat shower and dem calories Blodgett et al. (2016) so much energy and time wasted hatin on someone when alla that coulda been put towards makin yourself better.... a https://t.co/awCg1nCt8t
Table 1: Example from Founta et al. (2018) and Blodgett et al. (2016) where the state-of-the-art model misclassiï¬es innocuous tweets (inferred to be AAE) as abusive language. Our model correctly classiï¬es these tweets as non- toxic.
# 3 Experiments
# 3.1 Dataset
To the best of our knowledge, there are no datasets that are annotated both for toxicity and for AAE dialect. Instead, we use two toxicity datasets and one English dialect dataset that are all from the same domain (Twitter):
DWMW17 (Davidson et al., 2017) A Twitter dataset that contains 25K tweets annotated as hate speech, offensive, or none. The authors deï¬ne hate speech as language that is used to expresses ha- tred towards a targeted group or is intended to be derogatory, to humiliate, or to insult the members of the group, and offensive language as language that contains offensive terms which are not neces- sarily inappropriate.
FDCL18 (Founta et al., 2018) A Twitter dataset that contains 100K tweets annotated as hateful, abu- sive, spam or none. This labeling scheme was de- termined by conducting multiple rounds of crowd- sourcing to understand how crowdworkers use dif- ferent labels. Strongly impolite, rude, or hurtful language is considered abusive, and the deï¬nition of hate speech is the same as in DWMW17.
BROD16 (Blodgett et al., 2016) A 20K sample out of a 1.15M English tweet corpus that is demo- graphically associated with African American twit- ter users. Further analysis shows that the dataset contains signiï¬cant linguistic features of African American English.
the DWMW17 and FDCL18, we use an off-the-shelf demographically-aligned ensemble model (Blod- gett et al., 2016) which learns a posterior topic distribution (topics corresponding to African Amer- ican, Hispanic, White and Other) at a user, message, and word level. Blodgett et al. (2016) generate a AAE-aligned corpus comprising tweets from users labelled with at least 80% posterior probability as
using AAE-associated terms. Similarly, following Sap et al. (2019), we assign AAE label to tweets with at least 80% posterior probability of contain- ing AAE-associated terms at the message level and consider all other tweets as Non-AAE.
In order to obtain toxicity labels for the BROD16 dataset, we consider all tweets in this dataset to be non-toxic. This is a reasonable assumption since hate speech is relatively rare compared to the large amount of non-abusive language on social media (Founta et al., 2018).1
# 3.2 Training Parameters
In the pre-training phase, we train the model until convergence and pick the best-performing check- point for ï¬ne-tuning. In the ï¬ne-tuning phase, we alternate training one single adversary and the clas- siï¬cation model each for two epochs in one round and train for 10 rounds in total.
We additionally tuned the α parameter used to weight the loss terms in Equation 3 over validation sets. We found that the value of α is important for obtaining text representations containing less dialectal information. A large α easily leads to over-ï¬tting and a drastic drop in validation accu- racy for hate speech classiï¬cation. However, a near zero α severely reduces both training and valida- tion accuracy. We ultimately set α = 0.05.
We use the same architecture as Sap et al. (2019) as a baseline model, which does not contain an ad- versarial objective. For both of this baseline model and our model, because of the goal of demoting the inï¬uence of AAE markers, we select the model with the lowest false positive rate on validation set. We train models on both DWMW17 and FDCL18 datasets, which we split into train/dev/test subsets following Sap et al. (2019).
1We additionally did a simple check for abusive terms using a list of 20 hate speech words, randomly selected from Hatebase.org. We found that the percentage of sentences containing these words is much lower in AAE dataset (â 2%) than hate speech datasets (â 20%).
Dataset Accuracy F1 base ours base ours DWMW17 FDCL18 91.90 81.18 90.68 80.27 75.15 66.15 76.05 66.80
Table 2: Accuracy and F1 scores for detecting abu- sive language. F1 values are macro-averaged across all classiï¬cation categories (e.g. hate, offensive, none for DWMW17). Our model achieves an accuracy and F1 on par with the baseline model.
Offensive Hate base ours base ours FDCL18-AAE 20.94 16.44 BROD16 17.69 14.29 3.23 5.03 2.60 4.52
Table 3: False positive rates (FPR), indicating how of- ten AAE text is incorrectly classiï¬ed as hateful or abu- sive, when training with the FDCL18 dataset. Our model consistently improves FPR for offensiveness, and performs slightly better than the baseline for hate speech detection.
# 4 Results and Analysis
Table 2 reports accuracy and F1 scores over the hate speech classiï¬cation task. Despite the adversarial component in our model, which makes this task more difï¬cult, our model achieves comparable ac- curacy as the baseline and even improves F1 score. Furthermore, the results of our baseline model are on par with those reported in Sap et al. (2019), which veriï¬es the validity of our implementation. Next, we assess how well our demotion model reduces the false positive rate in AAE text in two ways: (1) we use our trained hate speech detec- tion model to classify text inferred as AAE in BROD16 dataset, in which we assume there is no hateful or offensive speech and (2) we use our trained hate speech detection model to classify the test partitions of the DWMW17 and FDCL18 datasets, which are annotated for hateful and offen- sive speech and for which we use an off-the-shelf model to infer dialect, as described in §3. Thus, for both evaluation criteria, we have or infer AAE labels and toxicity labels, and we can compute how often text inferred as AAE is misclassiï¬ed as hate- ful, abusive, or offensive.
Notably, Sap et al. (2019) show that datasets that annotate text for hate speech without sufï¬cient contextâlike DWMW17 and FDCL18âmay suf- fer from inaccurate annotations, in that annotators
Offensive Hate base ours base ours DWMW17-AAE 38.27 23.68 BROD16 42.59 24.34 0.70 0.28 2.06 0.83
Table 4: False positive rates (FPR), indicating how of- ten AAE text is incorrectly classiï¬ed as hateful or of- fensive, when training with DWMW17 dataset. Our model fails to improve FPR over the baseline, since 97% of AAE-labeled instances in the dataset are also labeled as toxic.
are more likely to label non-abusive AAE text as abusive. However, despite the risk of inaccurate annotations, we can still use these datasets to eval- uate racial bias in toxicity detection because of our focus on FPR. In particular, to analyze false posi- tives, we need to analyze the classiï¬erâs predictions of the text as toxic, when annotators labeled it as non-toxic. Sap et al. (2019) suggest that annotators over-estimate the toxicity in AAE text, meaning FPRs over the DWMW17 and FDCL18 test sets are actually lower-bounds, and the true FPR is could be even higher. Furthermore, if we assume that the DWMW17 and FDCL18 training sets contain bi- ased annotations, as suggested by Sap et al. (2019), then a high FPR over the corresponding test sets suggests that the classiï¬cation model ampliï¬es bias in the training data, and labels non-toxic AAE text as toxic even when annotators did not.
Table 3 reports results for both evaluation criteria when we train the model on the FDCL18 data. In both cases, our model successfully reduces FPR. For abusive language detection in the FDCL18 test set, the reduction in FPR is > 3; for hate speech detection, the FPR of our model is also reduced by 0.6 compared to the baseline model. We can also observe a 2.2 and 0.5 reduction in FPR for abusive speech and hate speech respectively when evaluating on BROD16 data.
Table 4 reports results when we train the model on the DWMW17 dataset. Unlike Table 3, unfor- tunately, our model fails to reduce the FPR rate for both offensive and hate speech of DWMW17 data. We also notice that our model trained with DWMW17 performs much worse than the model trained with FDCL18 data.
To understand the poor performance of our model when trained and evaluated on DWMW17 data, we investigated the data distribution in the test set and found that the vast majority of tweets
Accuracy 0.81 0.80 0.79 0 2 4 6 8 10 False Abusive Rate 0.20 0.15 0 2 4 6 8 10 False Hateful Rate 0.04 I 0.02 0 2 4 6 8 10 Epoch
Figure 1: Accuracy of the entire development set of FDCL18 (top), and FPR rate for abusive (middle) and hate (bottom) speech detection for tweets inferred as AAE in the development set. X axis denotes the num- ber of epochs. 0th epoch is the best checkpoint for pre- training step, which is also the baseline model.
labeled as AAE by the dialect classiï¬er were also annotated as toxic (97%). Thus, the subset of the data over which our model might improve FPR consists of merely < 3% of the AAE portion of the test set (49 tweets). In comparison, 70.98% of the tweets in the FDCL18 test set that were labeled as AAE were also annotated as toxic. Thus, we hy- pothesize that the performance of our model over the DWMW17 test set is not a representative esti- mate of how well our model reduces bias, because the improvable set in the DWMW17 is too small. In Table 1, we provide two examples of tweets that the baseline classiï¬er misclassiï¬es abu- sive/offensive, but our model, correctly classiï¬es as non-toxic. Both examples are drawn from a toxicity dataset and are classiï¬ed as AAE by the dialectal prediction model.
Trade-off between FPR and Accuracy In order to better understand model performance, we ex- plored the accuracy and FPR of our model through- out the entire training process. We evaluate the best checkpoint of the pre-trained model (0th epoch) and checkpoints of each epoch during adversarial train- ing and show the results in Figure 1. While the baseline model (0th epoch, before any adversar- ial training) achieves high accuracy, it also has a high FPR rate, particularly over abusive language. After adversarial training, the FPR rate decreases with only minor changes in accuracy. However, checkpoints with lower FPR rates also often have lower accuracy. While Tables 2 and 3 suggest that our model does achieve a balance between these
metrics, Figure 1 shows the difï¬culty of this task; that is, it is difï¬cult to disentangle these attributes completely.
Eliminatation of protected attribute In Fig- ure 2, we plot the validation accuracy of the ad- versary through the entire training process in order to verify that our model does learn a text represen- tation at least partially free of dialectal information. Further, we compare using one adversary during training with using multiple adversaries (Kumar et al., 2019). Through the course of training, the validation accuracy of AAE prediction decreases by about 6â10 and 2â5 points for both datasets, indicating that dialectal information is gradually removed from the encoded representation. How- ever, after a certain training threshold (6 epochs for DWMW17 and 8 epochs for FDCL18), the accu- racy of the classiï¬er (not shown) also drops drasti- cally, indicating that dialectal information cannot be completely eliminated from the text representa- tion without also decreasing the accuracy of hate- speech classiï¬cation. Multiple adversaries gener- ally cause a greater decrease in AAE prediction than a single adversary, but do not necessarily lead to a lower FPR and a higher classiï¬cation accuracy. We attribute this to the difference in experimental setups: in our settings, we focus on one attribute to demote, whereas Kumar et al. (2019) had to de- mote ten latent attributes and thus required multiple adversaries to stabilize the demotion model. Thus, unlike in (Kumar et al., 2019), our settings do not require multiple adversaries, and indeed, we do not see improvements from using multiple adversaries.
# 5 Related Work
Preventing neural models from absorbing or even amplifying unwanted artifacts present in datasets is indispensable towards building machine learning systems without unwanted biases.
One thread of work focuses on removing bias at the data level, through reducing annotator bias (Sap et al., 2019) and augmenting imbalanced datasets (Jurgens et al., 2017). Dixon et al. (2018) propose an unsupervised method based on balancing the training set and employing a proposed measure- ment for mitigating unintended bias in text clas- siï¬cation models. Webster et al. (2018) present a gender-balanced dataset with ambiguous name-pair pronouns to provide diversity coverage for real- world data. In addition to annotator bias, sampling
âeâ single adversary ââ multiple adversaries DWMW17 > 0.80 fo 2 5 g 0.75 w 2 0.70 0 5 10 15 20 FDCL18 i 8 0.92 5 & 0.90 < < <= 0.88 0 5 10 15 20 Epochs
Figure 2: Validation accuracy on AAE prediction of the adversary in the whole training process. The green line denotes the training setting of one adversary and the orange line denotes the training setting of multiple adversaries.
strategies also result in topic and author bias in datasets of abusive language detection, leading to decreased classiï¬cation performance when testing in more realistic settings, necessitating the adoption of cross-domain evaluation for fairness (Wiegand et al., 2019).
A related thread of work on debiasing focuses at the model level (Zhao et al., 2019). Adversarial training has been used to remove protected features from word embeddings (Xie et al., 2017; Zhang et al., 2018) and intermediate representations for both texts (Elazar and Goldberg, 2018; Zhang et al., 2018) and images (Edwards and Storkey, 2015; Wang et al., 2018). Though previous works have documented that adversarial training fails to oblit- erate protected features, Kumar et al. (2019) show that using multiple adversaries more effectively forces the removal.
Along similar lines, multitask learning has been adopted for learning task-invariant representations. Vaidya et al. (2019) show that multitask training on a related task e.g., identity prediction, allows the model to shift focus to toxic-related elements in hate speech detection.
# 6 Conclusion
In this work, we use adversarial training to demote a protected attribute (AAE dialect) when training a classiï¬er to predict a target attribute (toxicity). While we focus on AAE dialect and toxicity, our methodology readily generalizes to other settings, such as reducing bias related to age, gender, or
income-level in any other text classiï¬cation task. Overall, our approach has the potential to improve fairness and reduce bias in NLP models.
# 7 Acknowledgements
We gratefully thank anonymous reviewers, Maarten Sap, and Dallas Card for their help with this work. The second author of this work is supported by the NSF Graduate Research Fellowship Program under Grant No. DGE1745016. Any opinions, ï¬ndings, and conclusions or recommendations expressed in this material are those of the authors and do not nec- essarily reï¬ect the views of the NSF. We also grate- fully acknowledge Public Interest Technology Uni- versity Network Grant No. NVF-PITU-Carnegie Mellon University-Subgrant-009246-2019-10-01 for supporting this research.
# References
Emily M Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587â604.
Su Lin Blodgett, Lisa Green, and Brendan OConnor. 2016. Demographic dialectal variation in social media: A case study of african-american english. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 1119â1130.
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Eleventh international aaai conference on web and social media.
Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- In Pro- ing unintended bias in text classiï¬cation. ceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pages 67â73.
Harrison Edwards and Amos Storkey. 2015. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897.
Yanai Elazar and Yoav Goldberg. 2018. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 11â 21.
Paula Fortuna and S´ergio Nunes. 2018. A survey on au- tomatic detection of hate speech in text. ACM Com- puting Surveys (CSUR), 51(4):85.
Antigoni Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of twitter In Twelfth International AAAI abusive behavior. Conference on Web and Social Media.
Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of opportunity in supervised learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pages 3323â3331.
David Jurgens, Yulia Tsvetkov, and Dan Jurafsky. 2017. Incorporating dialectal variability for socially equi- table language identiï¬cation. In Proceedings of the 55th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), vol- ume 2, pages 51â57.
Sachin Kumar, Shuly Wintner, Noah A Smith, and Yu- lia Tsvetkov. 2019. Topics to avoid: Demoting la- tent confounds in text classiï¬cation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 4144â4154.
Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, MarcâAurelio Ranzato, and Y- Lan Boureau. 2019. Multiple-attribute text rewrit- ing. In International Conference on Learning Rep- resentations.
Virgile Landeiro, Tuan Tran, and Aron Culotta. 2019. Discovering and controlling for latent confounds in text classiï¬cation using adversarial domain adapta- tion. In Proceedings of the 2019 SIAM International Conference on Data Mining, pages 298â305. SIAM.
Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text represen- In Proceedings of the 56th Annual Meet- tations. ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 25â30, Melbourne, Australia. Association for Computational Linguis- tics.
Kirsten Lloyd. 2018. Bias ampliï¬cation in artiï¬cial in- telligence systems. CoRR, abs/1809.07842.
Binny Mathew, Ritam Dutt, Pawan Goyal, and Ani- mesh Mukherjee. 2019. Spread of hate speech in on- line social media. In Proceedings of the 10th ACM Conference on Web Science, pages 173â182. ACM.
Mainack Mondal, Leandro Ara´ujo Silva, and Fabr´ıcio Benevenuto. 2017. A measurement study of hate speech in social media. In Proceedings of the 28th ACM Conference on Hypertext and Social Media, pages 85â94. ACM.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019. The risk of racial bias In Proceedings of the in hate speech detection.
57th Annual Meeting of the Association for Compu- tational Linguistics, pages 1668â1678.
Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- In Proceedings of the Fifth International cessing. Workshop on Natural Language Processing for So- cial Media, pages 1â10, Valencia, Spain. Associa- tion for Computational Linguistics.
Leandro Silva, Mainack Mondal, Denzil Correa, Fabr´ıcio Benevenuto, and Ingmar Weber. 2016. An- alyzing the targets of hate in online social media. In Tenth International AAAI Conference on Web and Social Media.
Ameya Vaidya, Feng Mai, and Yue Ning. 2019. Em- pirical analysis of multi-task learning for reducing arXiv model bias in toxic comment detection. preprint arXiv:1909.09758.
Tianlu Wang, Jieyu Zhao, Kai-Wei Chang, Mark Yatskar, and Vicente Ordonez. 2018. Adversarial re- moval of gender from deep image representations. arXiv preprint arXiv:1811.08489.
Zeerak Waseem. 2016. Are you a racist or am i seeing things? annotator inï¬uence on hate speech detection on twitter. In Proceedings of the ï¬rst workshop on NLP and computational social science, pages 138â 142.
Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? predictive features for hate In Proceedings of the speech detection on twitter. NAACL student research workshop, pages 88â93.
Kellie Webster, Marta Recasens, Vera Axelrod, and Ja- son Baldridge. 2018. Mind the gap: A balanced corpus of gendered ambiguous pronouns. Transac- tions of the Association for Computational Linguis- tics, 6:605â617.
Michael Wiegand, Josef Ruppenhofer, and Thomas Kleinbauer. 2019. Detection of abusive language: In Proceedings of the problem of biased datasets. the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 602â608.
Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In Proceedings of the 31st International Conference on Neural Infor- mation Processing Systems, pages 585â596.
Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with In Proceedings of the 2018 adversarial learning. AAAI/ACM Conference on AI, Ethics, and Society, pages 335â340.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot- terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629â634.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias ampliï¬cation using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2979â2989. | {
"id": "1811.08489"
} |
2005.11401 | Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks | Large pre-trained language models have been shown to store factual knowledge
in their parameters, and achieve state-of-the-art results when fine-tuned on
downstream NLP tasks. However, their ability to access and precisely manipulate
knowledge is still limited, and hence on knowledge-intensive tasks, their
performance lags behind task-specific architectures. Additionally, providing
provenance for their decisions and updating their world knowledge remain open
research problems. Pre-trained models with a differentiable access mechanism to
explicit non-parametric memory can overcome this issue, but have so far been
only investigated for extractive downstream tasks. We explore a general-purpose
fine-tuning recipe for retrieval-augmented generation (RAG) -- models which
combine pre-trained parametric and non-parametric memory for language
generation. We introduce RAG models where the parametric memory is a
pre-trained seq2seq model and the non-parametric memory is a dense vector index
of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG
formulations, one which conditions on the same retrieved passages across the
whole generated sequence, the other can use different passages per token. We
fine-tune and evaluate our models on a wide range of knowledge-intensive NLP
tasks and set the state-of-the-art on three open domain QA tasks, outperforming
parametric seq2seq models and task-specific retrieve-and-extract architectures.
For language generation tasks, we find that RAG models generate more specific,
diverse and factual language than a state-of-the-art parametric-only seq2seq
baseline. | http://arxiv.org/pdf/2005.11401 | Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela | cs.CL, cs.LG | Accepted at NeurIPS 2020 | null | cs.CL | 20200522 | 20210412 | 1 2 0 2
r p A 2 1 ] L C . s c [
4 v 1 0 4 1 1 . 5 0 0 2 : v i X r a
# Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Patrick Lewis't, Ethan Perez*,
Aleksandra Piktusâ , Fabio Petroniâ , Vladimir Karpukhinâ , Naman Goyalâ , Heinrich Küttlerâ ,
Mike Lewisâ , Wen-tau Yihâ , Tim Rocktäschelâ â¡, Sebastian Riedelâ â¡, Douwe Kielaâ
# tRacebook AI Research; âUniversity College London; *New York University; [email protected]
# Abstract
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when ï¬ne-tuned on down- stream NLP tasks. However, their ability to access and precisely manipulate knowl- edge is still limited, and hence on knowledge-intensive tasks, their performance lags behind task-speciï¬c architectures. Additionally, providing provenance for their decisions and updating their world knowledge remain open research problems. Pre- trained models with a differentiable access mechanism to explicit non-parametric memory have so far been only investigated for extractive downstream tasks. We explore a general-purpose ï¬ne-tuning recipe for retrieval-augmented generation (RAG) â models which combine pre-trained parametric and non-parametric mem- ory for language generation. We introduce RAG models where the parametric memory is a pre-trained seq2seq model and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We com- pare two RAG formulations, one which conditions on the same retrieved passages across the whole generated sequence, and another which can use different passages per token. We ï¬ne-tune and evaluate our models on a wide range of knowledge- intensive NLP tasks and set the state of the art on three open domain QA tasks, outperforming parametric seq2seq models and task-speciï¬c retrieve-and-extract architectures. For language generation tasks, we ï¬nd that RAG models generate more speciï¬c, diverse and factual language than a state-of-the-art parametric-only seq2seq baseline.
# 1 Introduction
Pre-trained neural language models have been shown to learn a substantial amount of in-depth knowl- edge from data [47]. They can do so without any access to an external memory, as a parameterized implicit knowledge base [51, 52]. While this development is exciting, such models do have down- sides: They cannot easily expand or revise their memory, canât straightforwardly provide insight into their predictions, and may produce âhallucinationsâ [38]. Hybrid models that combine parametric memory with non-parametric (i.e., retrieval-based) memories [20, 26, 48] can address some of these issues because knowledge can be directly revised and expanded, and accessed knowledge can be inspected and interpreted. REALM [20] and ORQA [31], two recently introduced models that combine masked language models [8] with a differentiable retriever, have shown promising results,
@â¬--- ee ee ee ee ee ee eee ee ee eee The middle ear includes End-to-End Backprop through q and pe the tympanic cavity and the three ossicles. (y) Define "middle ear" (x) Question Answering: Question Query Retriever py Document Generator pg âower Generation index (Non-Parametric) (Parametric) d(z) supports (y) Barack Obama was born in Hawaii. (x) q(x) Fact Verification: Fact Query Fact Verification: Label Generation The Divine This 14th century work Comedy (x) is divided into 3 Jeopardy Question Generation: Answer Query sections: "Inferno", "purgatorio" & "Paradiso" @) Question Generation
Figure 1: Overview of our approach. We combine a pre-trained retriever (Query Encoder + Document Index) with a pre-trained seq2seq model (Generator) and ï¬ne-tune end-to-end. For query x, we use Maximum Inner Product Search (MIPS) to ï¬nd the top-K documents zi. For ï¬nal prediction y, we treat z as a latent variable and marginalize over seq2seq predictions given different documents.
but have only explored open-domain extractive question answering. Here, we bring hybrid parametric and non-parametric memory to the âworkhorse of NLP,â i.e. sequence-to-sequence (seq2seq) models.
We endow pre-trained, parametric-memory generation models with a non-parametric memory through a general-purpose ï¬ne-tuning approach which we refer to as retrieval-augmented generation (RAG). We build RAG models where the parametric memory is a pre-trained seq2seq transformer, and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever. We combine these components in a probabilistic model trained end-to-end (Fig. 1). The retriever (Dense Passage Retriever [26], henceforth DPR) provides latent documents conditioned on the input, and the seq2seq model (BART [32]) then conditions on these latent documents together with the input to generate the output. We marginalize the latent documents with a top-K approximation, either on a per-output basis (assuming the same document is responsible for all tokens) or a per-token basis (where different documents are responsible for different tokens). Like T5 [51] or BART, RAG can be ï¬ne-tuned on any seq2seq task, whereby both the generator and retriever are jointly learned.
There has been extensive previous work proposing architectures to enrich systems with non-parametric memory which are trained from scratch for speciï¬c tasks, e.g. memory networks [64, 55], stack- augmented networks [25] and memory layers [30]. In contrast, we explore a setting where both parametric and non-parametric memory components are pre-trained and pre-loaded with extensive knowledge. Crucially, by using pre-trained access mechanisms, the ability to access knowledge is present without additional training.
Our results highlight the beneï¬ts of combining parametric and non-parametric memory with genera- tion for knowledge-intensive tasksâtasks that humans could not reasonably be expected to perform without access to an external knowledge source. Our RAG models achieve state-of-the-art results on open Natural Questions [29], WebQuestions [3] and CuratedTrec [2] and strongly outperform recent approaches that use specialised pre-training objectives on TriviaQA [24]. Despite these being extractive tasks, we ï¬nd that unconstrained generation outperforms previous extractive approaches. For knowledge-intensive generation, we experiment with MS-MARCO [1] and Jeopardy question generation, and we ï¬nd that our models generate responses that are more factual, speciï¬c, and diverse than a BART baseline. For FEVER [56] fact veriï¬cation, we achieve results within 4.3% of state-of-the-art pipeline models which use strong retrieval supervision. Finally, we demonstrate that the non-parametric memory can be replaced to update the modelsâ knowledge as the world changes.1
# 2 Methods
We explore RAG models, which use the input sequence x to retrieve text documents z and use them as additional context when generating the target sequence y. As shown in Figure 1, our models leverage two components: (i) a retriever pη(z|x) with parameters η that returns (top-K truncated) distributions over text passages given a query x and (ii) a generator pθ(yi|x, z, y1:iâ1) parametrized
1Code to run experiments with RAG has been open-sourced as part of the HuggingFace Transform- ers Library [66] and can be found at https://github.com/huggingface/transformers/blob/master/ examples/rag/. An interactive demo of RAG models can be found at https://huggingface.co/rag/
2
by θ that generates a current token based on a context of the previous i â 1 tokens y1:iâ1, the original input x and a retrieved passage z.
To train the retriever and generator end-to-end, we treat the retrieved document as a latent variable. We propose two models that marginalize over the latent documents in different ways to produce a distribution over generated text. In one approach, RAG-Sequence, the model uses the same document to predict each target token. The second approach, RAG-Token, can predict each target token based on a different document. In the following, we formally introduce both models and then describe the pη and pθ components, as well as the training and decoding procedure.
# 2.1 Models
RAG-Sequence Model The RAG-Sequence model uses the same retrieved document to generate the complete sequence. Technically, it treats the retrieved document as a single latent variable that is marginalized to get the seq2seq probability p(y|x) via a top-K approximation. Concretely, the top K documents are retrieved using the retriever, and the generator produces the output sequence probability for each document, which are then marginalized,
N Pracsequence(Ylt) & D> py(zlx)po(yle,2) = Yo pn(zle) T] polyile, 2, yreâ1) 2â¬top-k(p(-|x)) 2â¬top-k(p(-|x)) i
RAG-Token Model In the RAG-Token model we can draw a different latent document for each target token and marginalize accordingly. This allows the generator to choose content from several documents when producing an answer. Concretely, the top K documents are retrieved using the retriever, and then the generator produces a distribution for the next output token for each document, before marginalizing, and repeating the process with the following output token, Formally, we deï¬ne:
N Prac-token(y|%) Il > Dn (2|x)po (yilx, Z. Yriâ1) i 2â¬top-k(p(-|a))
Finally, we note that RAG can be used for sequence classiï¬cation tasks by considering the target class as a target sequence of length one, in which case RAG-Sequence and RAG-Token are equivalent.
# 2.2 Retriever: DPR
The retrieval component pη(z|x) is based on DPR [26]. DPR follows a bi-encoder architecture:
# py(z|x) x exp(d(z)'q())
d(z) = BERTd(z), q(x) = BERTq(x)
where d(z) is a dense representation of a document produced by a BERTBASE document encoder [8], and q(x) a query representation produced by a query encoder, also based on BERTBASE. Calculating top-k(pη(·|x)), the list of k documents z with highest prior probability pη(z|x), is a Maximum Inner Product Search (MIPS) problem, which can be approximately solved in sub-linear time [23]. We use a pre-trained bi-encoder from DPR to initialize our retriever and to build the document index. This retriever was trained to retrieve documents which contain answers to TriviaQA [24] questions and Natural Questions [29]. We refer to the document index as the non-parametric memory.
# 2.3 Generator: BART
The generator component pθ(yi|x, z, y1:iâ1) could be modelled using any encoder-decoder. We use BART-large [32], a pre-trained seq2seq transformer [58] with 400M parameters. To combine the input x with the retrieved content z when generating from BART, we simply concatenate them. BART was pre-trained using a denoising objective and a variety of different noising functions. It has obtained state-of-the-art results on a diverse set of generation tasks and outperforms comparably-sized T5 models [32]. We refer to the BART generator parameters θ as the parametric memory henceforth.
# 2.4 Training
We jointly train the retriever and generator components without any direct supervision on what document should be retrieved. Given a ï¬ne-tuning training corpus of input/output pairs (xj, yj), we
3
minimize the negative marginal log-likelihood of each target, y) â log p(y;|2;) using stochastic gradient descent with Adam [28]. Updating the document encoder BERT, during training is costly as it requires the document index to be periodically updated as REALM does during pre-training [20]. We do not find this step necessary for strong performance, and keep the document encoder (and index) fixed, only fine-tuning the query encoder BERT, and the BART generator.
# 2.5 Decoding
At test time, RAG-Sequence and RAG-Token require different ways to approximate arg maxy p(y|x).
RAG-Token The RAG-Token model can be seen as a standard, autoregressive seq2seq genera- tor with transition probability: pj (yi|z, yiiâ1) = De zetop-k(p(-l2)) Pn (Zi|@)po(Yil@, Zi, Y14â-1) To decode, we can plug Po(yi |x, y1iâ1) into a standard beam decoder.
RAG-Sequence For RAG-Sequence, the likelihood p(y|x) does not break into a conventional per- token likelihood, hence we cannot solve it with a single beam search. Instead, we run beam search for each document z, scoring each hypothesis using pθ(yi|x, z, y1:iâ1). This yields a set of hypotheses Y , some of which may not have appeared in the beams of all documents. To estimate the probability of an hypothesis y we run an additional forward pass for each document z for which y does not appear in the beam, multiply generator probability with pη(z|x) and then sum the probabilities across beams for the marginals. We refer to this decoding procedure as âThorough Decoding.â For longer output sequences, |Y | can become large, requiring many forward passes. For more efï¬cient decoding, we can make a further approximation that pθ(y|x, zi) â 0 where y was not generated during beam search from x, zi. This avoids the need to run additional forward passes once the candidate set Y has been generated. We refer to this decoding procedure as âFast Decoding.â
# 3 Experiments
We experiment with RAG in a wide range of knowledge-intensive tasks. For all experiments, we use a single Wikipedia dump for our non-parametric knowledge source. Following Lee et al. [31] and Karpukhin et al. [26], we use the December 2018 dump. Each Wikipedia article is split into disjoint 100-word chunks, to make a total of 21M documents. We use the document encoder to compute an embedding for each document, and build a single MIPS index using FAISS [23] with a Hierarchical Navigable Small World approximation for fast retrieval [37]. During training, we retrieve the top k documents for each query. We consider k â {5, 10} for training and set k for test time using dev data. We now discuss experimental details for each task.
# 3.1 Open-domain Question Answering
Open-domain question answering (QA) is an important real-world application and common testbed for knowledge-intensive tasks [20]. We treat questions and answers as input-output text pairs (x, y) and train RAG by directly minimizing the negative log-likelihood of answers. We compare RAG to the popular extractive QA paradigm [5, 7, 31, 26], where answers are extracted spans from retrieved documents, relying primarily on non-parametric knowledge. We also compare to âClosed-Book QAâ approaches [52], which, like RAG, generate answers, but which do not exploit retrieval, instead relying purely on parametric knowledge. We consider four popular open-domain QA datasets: Natural Questions (NQ) [29], TriviaQA (TQA) [24]. WebQuestions (WQ) [3] and CuratedTrec (CT) [2]. As CT and WQ are small, we follow DPR [26] by initializing CT and WQ models with our NQ RAG model. We use the same train/dev/test splits as prior work [31, 26] and report Exact Match (EM) scores. For TQA, to compare with T5 [52], we also evaluate on the TQA Wiki test set.
# 3.2 Abstractive Question Answering
RAG models can go beyond simple extractive QA and answer questions with free-form, abstractive text generation. To test RAGâs natural language generation (NLG) in a knowledge-intensive setting, we use the MSMARCO NLG task v2.1 [43]. The task consists of questions, ten gold passages retrieved from a search engine for each question, and a full sentence answer annotated from the retrieved passages. We do not use the supplied passages, only the questions and answers, to treat
4
MSMARCO as an open-domain abstractive QA task. MSMARCO has some questions that cannot be answered in a way that matches the reference answer without access to the gold passages, such as âWhat is the weather in Volcano, CA?â so performance will be lower without using gold passages. We also note that some MSMARCO questions cannot be answered using Wikipedia alone. Here, RAG can rely on parametric knowledge to generate reasonable responses.
# Jeopardy Question Generation
To evaluate RAGâs generation abilities in a non-QA setting, we study open-domain question gen- eration. Rather than use questions from standard open-domain QA tasks, which typically consist of short, simple questions, we propose the more demanding task of generating Jeopardy questions. Jeopardy is an unusual format that consists of trying to guess an entity from a fact about that entity. For example, âThe World Cupâ is the answer to the question âIn 1986 Mexico scored as the ï¬rst country to host this international sports competition twice.â As Jeopardy questions are precise, factual statements, generating Jeopardy questions conditioned on their answer entities constitutes a challenging knowledge-intensive generation task.
We use the splits from SearchQA [10], with 100K train, 14K dev, and 27K test examples. As this is a new task, we train a BART model for comparison. Following [67], we evaluate using the SQuAD-tuned Q-BLEU-1 metric [42]. Q-BLEU is a variant of BLEU with a higher weight for matching entities and has higher correlation with human judgment for question generation than standard metrics. We also perform two human evaluations, one to assess generation factuality, and one for speciï¬city. We deï¬ne factuality as whether a statement can be corroborated by trusted external sources, and speciï¬city as high mutual dependence between the input and output [33]. We follow best practice and use pairwise comparative evaluation [34]. Evaluators are shown an answer and two generated questions, one from BART and one from RAG. They are then asked to pick one of four optionsâquuestion A is better, question B is better, both are good, or neither is good.
# 3.4 Fact Veriï¬cation
FEVER [56] requires classifying whether a natural language claim is supported or refuted by Wikipedia, or whether there is not enough information to decide. The task requires retrieving evidence from Wikipedia relating to the claim and then reasoning over this evidence to classify whether the claim is true, false, or unveriï¬able from Wikipedia alone. FEVER is a retrieval problem coupled with an challenging entailment reasoning task. It also provides an appropriate testbed for exploring the RAG modelsâ ability to handle classiï¬cation rather than generation. We map FEVER class labels (supports, refutes, or not enough info) to single output tokens and directly train with claim-class pairs. Crucially, unlike most other approaches to FEVER, we do not use supervision on retrieved evidence. In many real-world applications, retrieval supervision signals arenât available, and models that do not require such supervision will be applicable to a wider range of tasks. We explore two variants: the standard 3-way classiï¬cation task (supports/refutes/not enough info) and the 2-way (supports/refutes) task studied in Thorne and Vlachos [57]. In both cases we report label accuracy.
# 4 Results
# 4.1 Open-domain Question Answering
Table 1 shows results for RAG along with state-of-the-art models. On all four open-domain QA tasks, RAG sets a new state of the art (only on the T5-comparable split for TQA). RAG combines the generation ï¬exibility of the âclosed-bookâ (parametric only) approaches and the performance of "open-book" retrieval-based approaches. Unlike REALM and T5+SSM, RAG enjoys strong results without expensive, specialized âsalient span maskingâ pre-training [20]. It is worth noting that RAGâs retriever is initialized using DPRâs retriever, which uses retrieval supervision on Natural Questions and TriviaQA. RAG compares favourably to the DPR QA system, which uses a BERT-based âcross- encoderâ to re-rank documents, along with an extractive reader. RAG demonstrates that neither a re-ranker nor extractive reader is necessary for state-of-the-art performance.
There are several advantages to generating answers even when it is possible to extract them. Docu- ments with clues about the answer but do not contain the answer verbatim can still contribute towards a correct answer being generated, which is not possible with standard extractive approaches, leading
5
Table 1: Open-Domain QA Test Scores. For TQA, left column uses the standard test set for Open- Domain QA, right column uses the TQA-Wiki test set. See Appendix D for further details. Table 2: Generation and classiï¬cation Test Scores. MS-MARCO SotA is [4], FEVER-3 is [68] and FEVER-2 is [57] *Uses gold context/evidence. Best model without gold access underlined.
Model NQ TQA WQ CT Closed Book T5-11B [52] 34.5 T5-11B+SSM[52] 36.6 - - /50.1 37.4 /60.5 44.7 - - Model Jeopardy MSMARCO FVR3 FVR2 B-1 QB-1 R-L B-1 Label Acc. Open Book REALM [20] DPR [26] 40.4 / 41.5 57.9/ - - - 40.7 46.8 41.1 50.6 SotA BART - - 15.1 19.7 49.8* 49.9* 38.2 41.6 76.8 64.0 92.2* 81.1 RAG-Token RAG-Seq. 44.1 55.2/66.1 45.5 50.0 44.5 56.8/68.0 45.2 52.2 RAG-Tok. 17.3 22.2 RAG-Seq. 14.7 21.4 40.1 40.8 41.5 44.2 72.5 89.5
to more effective marginalization over documents. Furthermore, RAG can generate correct answers even when the correct answer is not in any retrieved document, achieving 11.8% accuracy in such cases for NQ, where an extractive model would score 0%.
# 4.2 Abstractive Question Answering
As shown in Table 2, RAG-Sequence outperforms BART on Open MS-MARCO NLG by 2.6 Bleu points and 2.6 Rouge-L points. RAG approaches state-of-the-art model performance, which is impressive given that (i) those models access gold passages with speciï¬c information required to generate the reference answer , (ii) many questions are unanswerable without the gold passages, and (iii) not all questions are answerable from Wikipedia alone. Table 3 shows some generated answers from our models. Qualitatively, we ï¬nd that RAG models hallucinate less and generate factually correct text more often than BART. Later, we also show that RAG generations are more diverse than BART generations (see §4.5).
# Jeopardy Question Generation
Table 2 shows that RAG-Token performs better than RAG-Sequence on Jeopardy question generation, with both models outperforming BART on Q-BLEU-1. 4 shows human evaluation results, over 452 pairs of generations from BART and RAG-Token. Evaluators indicated that BART was more factual than RAG in only 7.1% of cases, while RAG was more factual in 42.7% of cases, and both RAG and BART were factual in a further 17% of cases, clearly demonstrating the effectiveness of RAG on the task over a state-of-the-art generation model. Evaluators also ï¬nd RAG generations to be more speciï¬c by a large margin. Table 3 shows typical generations from each model.
Jeopardy questions often contain two separate pieces of information, and RAG-Token may perform best because it can generate responses that combine content from several documents. Figure 2 shows an example. When generating âSunâ, the posterior is high for document 2 which mentions âThe Sun Also Risesâ. Similarly, document 1 dominates the posterior when âA Farewell to Armsâ is generated. Intriguingly, after the ï¬rst token of each book is generated, the document posterior ï¬attens. This observation suggests that the generator can complete the titles without depending on speciï¬c documents. In other words, the modelâs parametric knowledge is sufï¬cient to complete the titles. We ï¬nd evidence for this hypothesis by feeding the BART-only baseline with the partial decoding "The Sun. BART completes the generation "The Sun Also Rises" is a novel by this author of "The Sun Also Rises" indicating the title "The Sun Also Rises" is stored in BARTâs parameters. Similarly, BART will complete the partial decoding "The Sun Also Rises" is a novel by this author of "A with "The Sun Also Rises" is a novel by this author of "A Farewell to Arms". This example shows how parametric and non-parametric memories work togetherâthe non-parametric component helps to guide the generation, drawing out speciï¬c knowledge stored in the parametric memory.
# 4.4 Fact Veriï¬cation
Table 2 shows our results on FEVER. For 3-way classiï¬cation, RAG scores are within 4.3% of state-of-the-art models, which are complex pipeline systems with domain-speciï¬c architectures and substantial engineering, trained using intermediate retrieval supervision, which RAG does not require.
6
Document 1: his works are considered classics of American Doc 1 | | literature ... His wartime experiences formed the basis for his novel poe 2 | | âA Farewell to Armsâ (1929) ... Doc 3 Document 2: ... artists of the 1920s âLost Generationâ expatriate Doe 4 community. His debut novel, "The Sun Also Risesâ, was published °° in 1926. Doc 5 & , es, ee £ tg ° es & ss . TESS SK
Figure 2: RAG-Token document posterior p(zi|x, yi, yâi) for each generated token for input âHem- ingway" for Jeopardy generation with 5 retrieved documents. The posterior for document 1 is high when generating âA Farewell to Arms" and for document 2 when generating âThe Sun Also Rises".
Table 3: Examples from generation tasks. RAG models generate more speciï¬c and factually accurate responses. â?â indicates factually incorrect responses, * indicates partially correct responses.
Task Input Model Generation BART ?The middle ear is the part of the ear between the middle ear and the nose. RAG-T The middle ear is the portion of the ear internal to the eardrum. RAG-S The middle ear includes the tympanic cavity and the three ossicles. deï¬ne middle ear MS- MARCO what currency needed in scotland BART The currency needed in Scotland is Pound sterling. RAG-T Pound is the currency needed in Scotland. RAG-S The currency needed in Scotland is the pound sterling. BART ?This state has the largest number of counties in the U.S. RAG-T Itâs the only U.S. state named for a U.S. president RAG-S Itâs the state where youâll ï¬nd Mount Rainier National Park Jeopardy Question Gener -ation Washington BART *This epic poem by Dante is divided into 3 parts: the Inferno, the Purgatorio & the Purgatorio RAG-T Danteâs "Inferno" is the ï¬rst part of this epic poem RAG-S This 14th century work is divided into 3 sections: "Inferno", "Purgatorio" & "Paradiso" The Divine Comedy
For 2-way classiï¬cation, we compare against Thorne and Vlachos [57], who train RoBERTa [35] to classify the claim as true or false given the gold evidence sentence. RAG achieves an accuracy within 2.7% of this model, despite being supplied with only the claim and retrieving its own evidence. We also analyze whether documents retrieved by RAG correspond to documents annotated as gold evidence in FEVER. We calculate the overlap in article titles between the top k documents retrieved by RAG and gold evidence annotations. We ï¬nd that the top retrieved document is from a gold article in 71% of cases, and a gold article is present in the top 10 retrieved articles in 90% of cases.
# 4.5 Additional Results
Generation Diversity Section 4.3 shows that RAG models are more factual and speciï¬c than BART for Jeopardy question generation. Following recent work on diversity-promoting decoding [33, 59, 39], we also investigate generation diversity by calculating the ratio of distinct ngrams to total ngrams generated by different models. Table 5 shows that RAG-Sequenceâs generations are more diverse than RAG-Tokenâs, and both are signiï¬cantly more diverse than BART without needing any diversity-promoting decoding.
Retrieval Ablations A key feature of RAG is learning to retrieve relevant information for the task. To assess the effectiveness of the retrieval mechanism, we run ablations where we freeze the retriever during training. As shown in Table 6, learned retrieval improves results for all tasks.
We compare RAGâs dense retriever to a word overlap-based BM25 retriever [53]. Here, we replace RAGâs retriever with a ï¬xed BM25 system, and use BM25 retrieval scores as logits when calculating p(z|x). Table 6 shows the results. For FEVER, BM25 performs best, perhaps since FEVER claims are heavily entity-centric and thus well-suited for word overlap-based retrieval. Differentiable retrieval improves results on all other tasks, especially for Open-Domain QA, where it is crucial.
Index hot-swapping An advantage of non-parametric memory models like RAG is that knowledge can be easily updated at test time. Parametric-only models like T5 or BART need further training to update their behavior as the world changes. To demonstrate, we build an index using the DrQA [5] Wikipedia dump from December 2016 and compare outputs from RAG using this index to the newer index from our main results (December 2018). We prepare a list of 82 world leaders who had changed
7
Table 4: Human assessments for the Jeopardy Question Generation Task.
# Table 5: Ratio of distinct to total tri-grams for generation tasks.
Factuality Speciï¬city MSMARCO Jeopardy QGen BART better RAG better Both good Both poor No majority 7.1% 42.7% 11.7% 17.7% 20.8% 16.8% 37.4% 11.8% 6.9% 20.1% Gold BART RAG-Token RAG-Seq. 89.6% 70.7% 77.8% 83.5% 90.0% 32.4% 46.8% 53.8%
Table 6: Ablations on the dev set. As FEVER is a classiï¬cation task, both RAG models are equivalent.
Model NQ TQA WQ Exact Match CT Jeopardy-QGen MSMarco B-1 B-1 QB-1 R-L RAG-Token-BM25 RAG-Sequence-BM25 29.7 31.8 41.5 44.1 32.1 36.6 33.1 33.8 17.5 11.1 22.3 19.5 55.5 56.5 48.4 46.9 75.1 91.6 RAG-Token-Frozen RAG-Sequence-Frozen 37.8 41.2 50.1 52.1 37.1 41.8 51.1 52.6 16.7 11.8 21.7 19.6 55.9 56.7 49.4 47.3 72.9 89.4 RAG-Token RAG-Sequence 43.5 44.0 54.8 55.8 46.5 44.9 51.9 53.4 17.9 15.3 22.6 21.5 56.2 57.2 49.4 47.5 74.5 90.6
between these dates and use a template âWho is {position}?â (e.g. âWho is the President of Peru?â) to query our NQ RAG model with each index. RAG answers 70% correctly using the 2016 index for 2016 world leaders and 68% using the 2018 index for 2018 world leaders. Accuracy with mismatched indices is low (12% with the 2018 index and 2016 leaders, 4% with the 2016 index and 2018 leaders). This shows we can update RAGâs world knowledge by simply replacing its non-parametric memory.
Effect of Retrieving more documents Models are trained with either 5 or 10 retrieved latent documents, and we do not observe signiï¬cant differences in performance between them. We have the ï¬exibility to adjust the number of retrieved documents at test time, which can affect performance and runtime. Figure 3 (left) shows that retrieving more documents at test time monotonically improves Open-domain QA results for RAG-Sequence, but performance peaks for RAG-Token at 10 retrieved documents. Figure 3 (right) shows that retrieving more documents leads to higher Rouge-L for RAG-Token at the expense of Bleu-1, but the effect is less pronounced for RAG-Sequence.
| % 80 De ranananaEnEEREnEEEREE EERE g6 g / 2 fr = 70 Z| / 3 â RAG TORRE 2 nf |! g == RAG-Tok B-1 Eom EB 60 ==- RAG-Seq RL Fs F a 2, == RAGSeq B41 Q 50 > 50 ZO â reactor | & 3 soft === RAGSeq | Z 40 24s rr rr a a) rr a a rr a K Retrieved Docs K Retrieved Docs K Retrieved Docs
Figure 3: Left: NQ performance as more documents are retrieved. Center: Retrieval recall perfor- mance in NQ. Right: MS-MARCO Bleu-1 and Rouge-L as more documents are retrieved.
# 5 Related Work
Single-Task Retrieval Prior work has shown that retrieval improves performance across a variety of NLP tasks when considered in isolation. Such tasks include open-domain question answering [5, 29], fact checking [56], fact completion [48], long-form question answering [12], Wikipedia article generation [36], dialogue [41, 65, 9, 13], translation [17], and language modeling [19, 27]. Our work uniï¬es previous successes in incorporating retrieval into individual tasks, showing that a single retrieval-based architecture is capable of achieving strong performance across several tasks.
8
General-Purpose Architectures for NLP Prior work on general-purpose architectures for NLP tasks has shown great success without the use of retrieval. A single, pre-trained language model has been shown to achieve strong performance on various classiï¬cation tasks in the GLUE bench- marks [60, 61] after ï¬ne-tuning [49, 8]. GPT-2 [50] later showed that a single, left-to-right, pre-trained language model could achieve strong performance across both discriminative and generative tasks. For further improvement, BART [32] and T5 [51, 52] propose a single, pre-trained encoder-decoder model that leverages bi-directional attention to achieve stronger performance on discriminative and generative tasks. Our work aims to expand the space of possible tasks with a single, uniï¬ed architecture, by learning a retrieval module to augment pre-trained, generative language models.
Learned Retrieval There is signiï¬cant work on learning to retrieve documents in information retrieval, more recently with pre-trained, neural language models [44, 26] similar to ours. Some work optimizes the retrieval module to aid in a speciï¬c, downstream task such as question answering, using search [46], reinforcement learning [6, 63, 62], or a latent variable approach [31, 20] as in our work. These successes leverage different retrieval-based architectures and optimization techniques to achieve strong performance on a single task, while we show that a single retrieval-based architecture can be ï¬ne-tuned for strong performance on a variety of tasks.
Memory-based Architectures Our document index can be seen as a large external memory for neural networks to attend to, analogous to memory networks [64, 55]. Concurrent work [14] learns to retrieve a trained embedding for each entity in the input, rather than to retrieve raw text as in our work. Other work improves the ability of dialog models to generate factual text by attending over fact embeddings [15, 13]. A key feature of our memory is that it is comprised of raw text rather distributed representations, which makes the memory both (i) human-readable, lending a form of interpretability to our model, and (ii) human-writable, enabling us to dynamically update the modelâs memory by editing the document index. This approach has also been used in knowledge-intensive dialog, where generators have been conditioned on retrieved text directly, albeit obtained via TF-IDF rather than end-to-end learnt retrieval [9].
Retrieve-and-Edit approaches Our method shares some similarities with retrieve-and-edit style approaches, where a similar training input-output pair is retrieved for a given input, and then edited to provide a ï¬nal output. These approaches have proved successful in a number of domains including Machine Translation [18, 22] and Semantic Parsing [21]. Our approach does have several differences, including less of emphasis on lightly editing a retrieved item, but on aggregating content from several pieces of retrieved content, as well as learning latent retrieval, and retrieving evidence documents rather than related training pairs. This said, RAG techniques may work well in these settings, and could represent promising future work.
# 6 Discussion
In this work, we presented hybrid generation models with access to parametric and non-parametric memory. We showed that our RAG models obtain state of the art results on open-domain QA. We found that people prefer RAGâs generation over purely parametric BART, ï¬nding RAG more factual and speciï¬c. We conducted an thorough investigation of the learned retrieval component, validating its effectiveness, and we illustrated how the retrieval index can be hot-swapped to update the model without requiring any retraining. In future work, it may be fruitful to investigate if the two components can be jointly pre-trained from scratch, either with a denoising objective similar to BART or some another objective. Our work opens up new research directions on how parametric and non-parametric memories interact and how to most effectively combine them, showing promise in being applied to a wide variety of NLP tasks.
9
# Broader Impact
This work offers several positive societal beneï¬ts over previous work: the fact that it is more strongly grounded in real factual knowledge (in this case Wikipedia) makes it âhallucinateâ less with generations that are more factual, and offers more control and interpretability. RAG could be employed in a wide variety of scenarios with direct beneï¬t to society, for example by endowing it with a medical index and asking it open-domain questions on that topic, or by helping people be more effective at their jobs.
With these advantages also come potential downsides: Wikipedia, or any potential external knowledge source, will probably never be entirely factual and completely devoid of bias. Since RAG can be employed as a language model, similar concerns as for GPT-2 [50] are valid here, although arguably to a lesser extent, including that it might be used to generate abuse, faked or misleading content in the news or on social media; to impersonate others; or to automate the production of spam/phishing content [54]. Advanced language models may also lead to the automation of various jobs in the coming decades [16]. In order to mitigate these risks, AI systems could be employed to ï¬ght against misleading content and automated spam/phishing.
# Acknowledgments
The authors would like to thank the reviewers for their thoughtful and constructive feedback on this paper, as well as HuggingFace for their help in open-sourcing code to run RAG models. The authors would also like to thank Kyunghyun Cho and Sewon Min for productive discussions and advice. EP thanks supports from the NSF Graduate Research Fellowship. PL is supported by the FAIR PhD program.
# References
[1] Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv:1611.09268 [cs], November 2016. URL http: //arxiv.org/abs/1611.09268. arXiv: 1611.09268.
[2] Petr BaudiÅ¡ and Jan Å ediv`y. Modeling of the question answering task in the yodaqa system. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 222â228. Springer, 2015. URL https://link.springer.com/chapter/10.1007% 2F978-3-319-24027-5_20.
[3] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic Parsing on Freebase from Question-Answer Pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533â1544, Seattle, Washington, USA, October 2013. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/ D13-1160.
[4] Bin Bi, Chenliang Li, Chen Wu, Ming Yan, and Wei Wang. Palm: Pre-training an autoencod- ing&autoregressive language model for context-conditioned generation. ArXiv, abs/2004.07159, 2020. URL https://arxiv.org/abs/2004.07159.
[5] Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to Answer Open-Domain Questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870â1879, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1171. URL https://www.aclweb.org/anthology/P17-1171.
[6] Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. Coarse-to-ï¬ne question answering for long documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 209â220, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1020. URL https://www.aclweb.org/anthology/P17-1020.
10
[7] Christopher Clark and Matt Gardner. Simple and Effective Multi-Paragraph Reading Compre- hension. arXiv:1710.10723 [cs], October 2017. URL http://arxiv.org/abs/1710.10723. arXiv: 1710.10723.
[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Con- ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://www.aclweb.org/anthology/N19-1423.
[9] Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. Wiz- ard of wikipedia: Knowledge-powered conversational agents. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=r1l73iRqKm.
[10] Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur Guney, Volkan Cirik, and Kyunghyun Cho. SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine. arXiv:1704.05179 [cs], April 2017. URL http://arxiv.org/abs/1704.05179. arXiv: 1704.05179.
[11] Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889â898, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1082. URL https://www.aclweb.org/anthology/ P18-1082.
[12] Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. ELI5: Long form question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3558â3567, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1346. URL https://www.aclweb.org/ anthology/P19-1346.
[13] Angela Fan, Claire Gardent, Chloe Braud, and Antoine Bordes. Augmenting transformers with KNN-based composite memory, 2020. URL https://openreview.net/forum?id= H1gx1CNKPH.
[14] Thibault Févry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. Entities as experts: Sparse memory access with entity supervision. ArXiv, abs/2004.07202, 2020. URL https://arxiv.org/abs/2004.07202.
[15] Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen tau Yih, and Michel Galley. A knowledge-grounded neural conversation model. In AAAI Conference on Artiï¬cial Intelligence, 2018. URL https://www.aaai.org/ocs/index.php/ AAAI/AAAI18/paper/view/16710.
[16] Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. When will AI exceed human performance? evidence from AI experts. CoRR, abs/1705.08807, 2017. URL http://arxiv.org/abs/1705.08807.
[17] Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O.K. Li. Search engine guided neural In AAAI Conference on Artiï¬cial Intelligence, 2018. URL https: machine translation. //www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17282.
[18] Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O.K. Li. Search engine guided neural machine translation. In 32nd AAAI Conference on Artiï¬cial Intelligence, AAAI 2018, 32nd AAAI Conference on Artiï¬cial Intelligence, AAAI 2018, pages 5133â5140. AAAI press, 2018. 32nd AAAI Conference on Artiï¬cial Intelligence, AAAI 2018 ; Conference date: 02-02-2018 Through 07-02-2018.
[19] Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437â450, 2018. doi: 10.1162/tacl_a_00030. URL https://www.aclweb.org/anthology/Q18-1031.
11
[20] Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. REALM: Retrieval-augmented language model pre-training. ArXiv, abs/2002.08909, 2020. URL https: //arxiv.org/abs/2002.08909.
A In S. Bengio, retrieve-and-edit H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, ed- itors, Advances pages 10052â 10062. Curran Associates, URL http://papers.nips.cc/paper/ 8209-a-retrieve-and-edit-framework-for-predicting-structured-outputs. pdf.
[22] Nabil Hossain, Marjan Ghazvininejad, and Luke Zettlemoyer. Simple and effective retrieve- edit-rerank text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2532â2538, Online, July 2020. Association for Computa- tional Linguistics. doi: 10.18653/v1/2020.acl-main.228. URL https://www.aclweb.org/ anthology/2020.acl-main.228.
[23] Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with gpus. arXiv preprint arXiv:1702.08734, 2017. URL https://arxiv.org/abs/1702.08734.
[24] Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601â1611, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1147. URL https://www.aclweb.org/anthology/P17-1147.
Inferring algorithmic patterns with stack- the 28th International Conference on augmented recurrent nets. Neural Information Processing Systems - Volume 1, NIPSâ15, page 190â198, Cam- bridge, MA, USA, 2015. MIT Press. URL https://papers.nips.cc/paper/ 5857-inferring-algorithmic-patterns-with-stack-augmented-recurrent-nets.
[26] Vladimir Karpukhin, Barlas Oguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906, 2020. URL https://arxiv.org/abs/2004.04906.
[27] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Generaliza- tion through memorization: Nearest neighbor language models. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HklBjCEKvH.
[28] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.
[29] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redï¬eld, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Ken- ton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural Questions: a Benchmark for Ques- the Association of Computational Lin- tion Answering Research. guistics, 2019. URL https://tomkwiat.users.x20web.corp.google.com/papers/ natural-questions/main-1455-kwiatkowski.pdf.
[30] Guillaume Lample, Alexandre Sablayrolles, Marcâ Aurelio Ranzato, Ludovic Denoyer, and Herve Jegou. Large memory layers with product keys. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dâ Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural In- formation Processing Systems 32, pages 8548â8559. Curran Associates, Inc., 2019. URL http: //papers.nips.cc/paper/9061-large-memory-layers-with-product-keys.pdf.
[31] Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association
12
for Computational Linguistics, pages 6086â6096, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1612. URL https://www.aclweb.org/ anthology/P19-1612.
[32] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019. URL https://arxiv.org/abs/1910.13461.
[33] Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110â119, San Diego, California, June 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-1014. URL https://www.aclweb.org/anthology/ N16-1014.
[34] Margaret Li, Jason Weston, and Stephen Roller. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. ArXiv, abs/1909.03087, 2019. URL https://arxiv.org/abs/1909.03087.
[35] Hairong Liu, Mingbo Ma, Liang Huang, Hao Xiong, and Zhongjun He. Robust neural machine translation with joint textual and phonetic embedding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3044â3049, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1291. URL https://www.aclweb.org/anthology/P19-1291.
[36] Peter J. Liu*, Mohammad Saleh*, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating wikipedia by summarizing long sequences. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum? id=Hyg0vbWC-.
[37] Yury A. Malkov and D. A. Yashunin. Efï¬cient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42:824â836, 2016. URL https://arxiv.org/abs/1603.09320.
[38] Gary Marcus. The next decade in ai: four steps towards robust artiï¬cial intelligence. arXiv preprint arXiv:2002.06177, 2020. URL https://arxiv.org/abs/2002.06177.
[39] Luca Massarelli, Fabio Petroni, Aleksandra Piktus, Myle Ott, Tim Rocktäschel, Vassilis Plachouras, Fabrizio Silvestri, and Sebastian Riedel. How decoding strategies affect the arXiv preprint arXiv:1911.03587, 2019. URL https: veriï¬ability of generated text. //arxiv.org/abs/1911.03587.
[40] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed precision training. In ICLR, 2018. URL https://openreview.net/forum?id=r1gs9JgRZ.
[41] Nikita Moghe, Siddhartha Arora, Suman Banerjee, and Mitesh M. Khapra. Towards exploit- ing background knowledge for building conversation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2322â2332, Brus- sels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1255. URL https://www.aclweb.org/anthology/D18-1255.
[42] Preksha Nema and Mitesh M. Khapra. Towards a better metric for evaluating question generation systems. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3950â3959, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1429. URL https://www.aclweb.org/ anthology/D18-1429.
[43] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. MS MARCO: A human generated machine reading comprehension dataset. In Tarek Richard Besold, Antoine Bordes, Artur S. dâAvila Garcez, and Greg Wayne, editors, Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic
13
approaches 2016 co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, December 9, 2016, volume 1773 of CEUR Workshop Proceedings. CEUR-WS.org, 2016. URL http://ceur-ws.org/Vol-1773/CoCoNIPS_ 2016_paper9.pdf.
[44] Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with BERT. arXiv preprint arXiv:1901.04085, 2019. URL https://arxiv.org/abs/1901.04085.
[45] Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 48â53, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-4009. URL https://www.aclweb. org/anthology/N19-4009.
[46] Ethan Perez, Siddharth Karamcheti, Rob Fergus, Jason Weston, Douwe Kiela, and Kyunghyun Cho. Finding generalizable evidence by learning to convince q&a models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2402â2411, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1244. URL https://www.aclweb.org/anthology/D19-1244.
[47] Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463â2473, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/ D19-1250. URL https://www.aclweb.org/anthology/D19-1250.
[48] Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. How context affects language modelsâ factual predictions. In Automated Knowledge Base Construction, 2020. URL https://openreview.net/forum? id=025X0zPfn.
[49] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. proving Language Understanding https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/ language-unsupervised/language_understanding_paper.pdf. by Generative Pre-Training, 2018. Im- URL
[50] Alec Radford, Sutskever. https://d4mucfpksywv.cloudfront.net/better-language-models/language_ models_are_unsupervised_multitask_learners.pdf.
[51] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a uniï¬ed text-to-text transformer. arXiv e-prints, 2019. URL https://arxiv.org/abs/1910.10683.
[52] Adam Roberts, Colin Raffel, and Noam Shazeer. How much knowledge can you pack into the parameters of a language model? arXiv e-prints, 2020. URL https://arxiv.org/abs/ 2002.08910.
[53] Stephen Robertson and Hugo Zaragoza. The probabilistic relevance framework: Bm25 and beyond. Found. Trends Inf. Retr., 3(4):333â389, April 2009. ISSN 1554-0669. doi: 10.1561/ 1500000019. URL https://doi.org/10.1561/1500000019.
[54] Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, and Jian-Bing Wang. Release strategies and the social impacts of language models. ArXiv, abs/1908.09203, 2019.
[55] Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory net- works. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2440â2448. Curran Associates, Inc., 2015. URL http://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf.
14
[56] James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. FEVER: a large-scale dataset for fact extraction and VERiï¬cation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809â819, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/N18-1074. URL https://www.aclweb.org/anthology/N18-1074.
[57] James H. Thorne and Andreas Vlachos. Avoiding catastrophic forgetting in mitigating model biases in sentence-pair classiï¬cation with elastic weight consolidation. ArXiv, abs/2004.14366, 2020. URL https://arxiv.org/abs/2004.14366.
[58] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998â6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf.
[59] Ashwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. Diverse beam search for improved description of complex scenes. AAAI Conference on Artiï¬cial Intelligence, 2018. URL https://www.aaai.org/ocs/index. php/AAAI/AAAI18/paper/view/17329.
[60] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353â355, Brussels, Belgium, November 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5446. URL https://www.aclweb.org/ anthology/W18-5446.
[61] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. SuperGLUE: A Stickier Benchmark for General- Purpose Language Understanding Systems. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d extquotesingle Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 3261â3275. Curran Associates, Inc., 2019. URL https:// arxiv.org/abs/1905.00537.
[62] Shuohang Wang, Mo Yu, Xiaoxiao Guo, Zhiguo Wang, Tim Klinger, Wei Zhang, Shiyu Chang, Gerry Tesauro, Bowen Zhou, and Jing Jiang. R3: Reinforced ranker-reader for open-domain question answering. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proceedings of the Thirty-Second AAAI Conference on Artiï¬cial Intelligence, (AAAI-18), the 30th innovative Applications of Artiï¬cial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artiï¬cial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5981â5988. AAAI Press, 2018. URL https://www.aaai.org/ocs/index. php/AAAI/AAAI18/paper/view/16712.
[63] Shuohang Wang, Mo Yu, Jing Jiang, Wei Zhang, Xiaoxiao Guo, Shiyu Chang, Zhiguo Wang, Tim Klinger, Gerald Tesauro, and Murray Campbell. Evidence aggregation for answer re- ranking in open-domain question answering. In ICLR, 2018. URL https://openreview. net/forum?id=rJl3yM-Ab.
[64] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1410.3916.
[65] Jason Weston, Emily Dinan, and Alexander Miller. Retrieve and reï¬ne: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 87â92, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-5713. URL https://www.aclweb.org/anthology/W18-5713.
15
[66] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. Huggingfaceâs transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771, 2019.
[67] Shiyue Zhang and Mohit Bansal. Addressing semantic drift in question generation for semi- supervised question answering. In Proceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2495â2509, Hong Kong, China, Novem- ber 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1253. URL https://www.aclweb.org/anthology/D19-1253.
[68] Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. Reasoning over semantic-level graph for fact checking. ArXiv, abs/1909.03745, 2019. URL https://arxiv.org/abs/1909.03745.
16
# Appendices for Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
# A Implementation Details
For Open-domain QA we report test numbers using 15 retrieved documents for RAG-Token models. For RAG-Sequence models, we report test results using 50 retrieved documents, and we use the Thorough Decoding approach since answers are generally short. We use greedy decoding for QA as we did not ï¬nd beam search improved results. For Open-MSMarco and Jeopardy question generation, we report test numbers using ten retrieved documents for both RAG-Token and RAG-Sequence, and we also train a BART-large model as a baseline. We use a beam size of four, and use the Fast Decoding approach for RAG-Sequence models, as Thorough Decoding did not improve performance.
# B Human Evaluation
View full instructions Which sentence is more factually true? View tool guide Select an option Subject: Hemingway a Note: Some questions are Sentence Ais more 1 control questions. We require Sentence A : "The Sun Also Rises" is a novel by this author of "A true good accuracy on our control Farewell to Arms" Sentence Bismore 2 questions to accept true responses. Sentence B : This author of "The Sun Also Rises" was born in Both sentences are 8 Havana, Cuba, the son of Spanish immigrants âtrue Indicate which one of the P 3 following sentences is more Both sentences are factually true with respect to completely untrue the subject. Using the internet to check whether the sentences are true is encouraged.
Figure 4: Annotation interface for human evaluation of factuality. A pop-out for detailed instructions and a worked example appear when clicking "view tool guide".
Figure 4 shows the user interface for human evaluation. To avoid any biases for screen position, which model corresponded to sentence A and sentence B was randomly selected for each example. Annotators were encouraged to research the topic using the internet, and were given detailed instruc- tions and worked examples in a full instructions tab. We included some gold sentences in order to assess the accuracy of the annotators. Two annotators did not perform well on these examples and their annotations were removed from the results.
# C Training setup Details
We train all RAG models and BART baselines using Fairseq [45].2 We train with mixed precision ï¬oating point arithmetic [40], distributing training across 8, 32GB NVIDIA V100 GPUs, though training and inference can be run on one GPU. We ï¬nd that doing Maximum Inner Product Search with FAISS is sufï¬ciently fast on CPU, so we store document index vectors on CPU, requiring â¼ 100 GB of CPU memory for all of Wikipedia. After submission, We have ported our code to HuggingFace Transformers [66]3, which achieves equivalent performance to the previous version but is a cleaner and easier to use implementation. This version is also open-sourced. We also compress the document index using FAISSâs compression tools, reducing the CPU memory requirement to 36GB. Scripts to run experiments with RAG can be found at https://github.com/huggingface/transformers/ blob/master/examples/rag/README.md and an interactive demo of a RAG model can be found at https://huggingface.co/rag/
2https://github.com/pytorch/fairseq 3https://github.com/huggingface/transformers
17
# D Further Details on Open-Domain QA
For open-domain QA, multiple answer annotations are often available for a given question. These answer annotations are exploited by extractive models during training as typically all the answer annotations are used to ï¬nd matches within documents when preparing training data. For RAG, we also make use of multiple annotation examples for Natural Questions and WebQuestions by training the model with each (q, a) pair separately, leading to a small increase in accuracy. For TriviaQA, there are often many valid answers to a given question, some of which are not suitable training targets, such as emoji or spelling variants. For TriviaQA, we ï¬lter out answer candidates if they do not occur in top 1000 documents for the query.
CuratedTrec preprocessing The answers for CuratedTrec are given in the form of regular expres- sions, which has been suggested as a reason why it is unsuitable for answer-generation models [20]. To overcome this, we use a pre-processing step where we ï¬rst retrieve the top 1000 documents for each query, and use the answer that most frequently matches the regex pattern as the supervision target. If no matches are found, we resort to a simple heuristic: generate all possible permutations for each regex, replacing non-deterministic symbols in the regex nested tree structure with a whitespace.
TriviaQA Evaluation setups The open-domain QA community customarily uses public develop- ment datasets as test datasets, as test data for QA datasets is often restricted and dedicated to reading compehension purposes. We report our results using the datasets splits used in DPR [26], which are consistent with common practice in Open-domain QA. For TriviaQA, this test dataset is the public TriviaQA Web Development split. Roberts et al. [52] used the TriviaQA ofï¬cial Wikipedia test set instead. Févry et al. [14] follow this convention in order to compare with Roberts et al. [52] (See appendix of [14]). We report results on both test sets to enable fair comparison to both approaches. We ï¬nd that our performance is much higher using the ofï¬cial Wiki test set, rather than the more conventional open-domain test set, which we attribute to the ofï¬cial Wiki test set questions being simpler to answer from Wikipedia.
# E Further Details on FEVER
For FEVER classiï¬cation, we follow the practice from [32], and ï¬rst re-generate the claim, and then classify using the representation of the ï¬nal hidden state, before ï¬nally marginalizing across documents to obtain the class probabilities. The FEVER task traditionally has two sub-tasks. The ï¬rst is to classify the claim as either "Supported", "Refuted" or "Not Enough Info", which is the task we explore in the main paper. FEVERâs other sub-task involves extracting sentences from Wikipedia as evidence supporting the classiï¬cation prediction. As FEVER uses a different Wikipedia dump to us, directly tackling this task is not straightforward. We hope to address this in future work.
# F Null Document Probabilities
We experimented with adding "Null document" mechanism to RAG, similar to REALM [20] in order to model cases where no useful information could be retrieved for a given input. Here, if k documents were retrieved, we would additionally "retrieve" an empty document and predict a logit for the null document, before marginalizing over k + 1 predictions. We explored modelling this null document logit by learning (i) a document embedding for the null document, (ii) a static learnt bias term, or (iii) a neural network to predict the logit. We did not ï¬nd that these improved performance, so in the interests of simplicity, we omit them. For Open MS-MARCO, where useful retrieved documents cannot always be retrieved, we observe that the model learns to always retrieve a particular set of documents for questions that are less likely to beneï¬t from retrieval, suggesting that null document mechanisms may not be necessary for RAG.
# G Parameters
Our RAG models contain the trainable parameters for the BERT-base query and document encoder of DPR, with 110M parameters each (although we do not train the document encoder ourselves) and 406M trainable parameters from BART-large, 406M parameters, making a total of 626M trainable
18
Table 7: Number of instances in the datasets used. *A hidden subset of this data is used for evaluation
Task Train Development Test Natural Questions TriviaQA WebQuestions CuratedTrec Jeopardy Question Generation MS-MARCO FEVER-3-way FEVER-2-way 79169 78786 3418 635 97392 153726 145450 96966 8758 8838 362 134 13714 12468 10000 6666 3611 11314 2033 635 26849 101093* 10000 6666
parameters. The best performing "closed-book" (parametric only) open-domain QA model is T5-11B with 11 Billion trainable parameters. The T5 model with the closest number of parameters to our models is T5-large (770M parameters), which achieves a score of 28.9 EM on Natural Questions [52], substantially below the 44.5 that RAG-Sequence achieves, indicating that hybrid parametric/non- parametric models require far fewer trainable parameters for strong open-domain QA performance. The non-parametric memory index does not consist of trainable parameters, but does consists of 21M 728 dimensional vectors, consisting of 15.3B values. These can be easily be stored at 8-bit ï¬oating point precision to manage memory and disk footprints.
# H Retrieval Collapse
In preliminary experiments, we observed that for some tasks such as story generation [11], the retrieval component would âcollapseâ and learn to retrieve the same documents regardless of the input. In these cases, once retrieval had collapsed, the generator would learn to ignore the documents, and the RAG model would perform equivalently to BART. The collapse could be due to a less-explicit requirement for factual knowledge in some tasks, or the longer target sequences, which could result in less informative gradients for the retriever. Perez et al. [46] also found spurious retrieval results when optimizing a retrieval component in order to improve performance on downstream tasks.
# I Number of instances per dataset
The number of training, development and test datapoints in each of our datasets is shown in Table 7.
19 | {
"id": "1702.08734"
} |
2005.09904 | BiQGEMM: Matrix Multiplication with Lookup Table For Binary-Coding-based Quantized DNNs | The number of parameters in deep neural networks (DNNs) is rapidly increasing
to support complicated tasks and to improve model accuracy. Correspondingly,
the amount of computations and required memory footprint increase as well.
Quantization is an efficient method to address such concerns by compressing
DNNs such that computations can be simplified while required storage footprint
is significantly reduced. Unfortunately, commercial CPUs and GPUs do not fully
support quantization because only fixed data transfers (such as 32 bits) are
allowed. As a result, even if weights are quantized into a few bits, CPUs and
GPUs cannot access multiple quantized weights without memory bandwidth waste.
Success of quantization in practice, hence, relies on an efficient computation
engine design, especially for matrix multiplication that is a basic computation
engine in most DNNs. In this paper, we propose a novel matrix multiplication
method, called BiQGEMM, dedicated to quantized DNNs. BiQGEMM can access
multiple quantized weights simultaneously in one instruction. In addition,
BiQGEMM pre-computes intermediate results that are highly redundant when
quantization leads to limited available computation space. Since pre-computed
values are stored in lookup tables and reused, BiQGEMM achieves lower amount of
overall computations. Our extensive experimental results show that BiQGEMM
presents higher performance than conventional schemes when DNNs are quantized. | http://arxiv.org/pdf/2005.09904 | Yongkweon Jeon, Baeseong Park, Se Jung Kwon, Byeongwook Kim, Jeongin Yun, Dongsoo Lee | cs.LG, stat.ML | 13 pages, 12 figures | null | cs.LG | 20200520 | 20200831 | 0 2 0 2
g u A 1 3
] G L . s c [ 2 v 4 0 9 9 0 . 5 0 0 2 : v i X r a
# BiQGEMM: Matrix Multiplication with Lookup Table For Binary-Coding-based Quantized DNNs
Yongkweon Jeon*, Baeseong Park*, Se Jung Kwon, Byeongwook Kim, Jeongin Yun, and Dongsoo Lee Samsung Research, Seoul, Republic of Korea {dragwon.jeon, bpbs.park, sejung0.kwon, byeonguk.kim, ji6373.yun, dongsoo3.lee}@samsung.com
AbstractâThe number of parameters in deep neu- ral networks (DNNs) is rapidly increasing to support complicated tasks and to improve model accuracy. Correspondingly, the amount of computations and re- quired memory footprint increase as well. Quantiza- tion is an eï¬cient method to address such concerns by compressing DNNs such that computations can be simpliï¬ed while required storage footprint is sig- niï¬cantly reduced. Unfortunately, commercial CPUs and GPUs do not fully support quantization because only ï¬xed data transfers (such as 32 bits) are al- lowed. As a result, even if weights are quantized (by a non-uniform quantization scheme) into a few bits, CPUs and GPUs may not access multiple quantized weights without memory bandwidth waste. Success of quantization in practice, hence, relies on an eï¬- cient computation engine design, especially for matrix multiplication that is a basic computation engine in most DNNs. In this paper, we propose a novel matrix multiplication method, called BiQGEMM, dedicated to quantized DNNs. BiQGEMM can access multiple quantized weights simultaneously in one instruction. In addition, BiQGEMM pre-computes intermediate results that are highly redundant when quantization leads to limited available computation space. Since pre-computed values are stored in lookup tables and reused, BiQGEMM achieves lower amount of overall computations. Our extensive experimental results show that BiQGEMM presents higher performance than conventional schemes when DNNs are quantized.
Index TermsâModel Compression, Deep Learn- ing, Machine Learning, AI Inference, Quantization, GEMM, GEMV
I. Introduction As the number of parameters in DNNs increases to improve model accuracy with various tasks, reducing in- ference latency is becoming more challenging. Reducing response time becomes highly critical when real-time services are demanded (e.g., autonomous driving, auto- matic speech recognition, and neural machine translation). Note that most of response time is usually consumed by general matrix-to-matrix multiplication (GEMM) or general matrix-to-vector multiplication (GEMV) of high- order time complexity (see Fig. 1). Eï¬cient computation of matrix multiplication, therefore, directly corresponds to response time reduction. Previously in order to accelerate GEMM operations, both hardware- and software-based approaches have been introduced [1]â[6].
0 0 0 1 1 0 00 01 02 1 10 11 12 1 2 2 2 = 20 21 22 3 30 31 32 2 3 3 4 40 41 42 layer â 1 4 4 layer + 1 = â
layer â
0 1 2
Fig. 1. An example showing GEMV operations for layer l (a fully connected layer). Given an input vector x â Rn, which is the activa- tion of the previous layer, and a weight matrix W â {â1, 1}mÃn, an output vector y â Rm can be computed by using a GEMV routine, where m and n are output (hidden) size and input size, respectively. Activation a is the output of the activation function f (e.g., sigmoid, tanh, and ReLU) with y as an input. In this manuscript, âoutputâ refers to y, not a, unless speciï¬ed otherwise.
As an eï¬ort to reduce latency, few-batch multiplica- tions1 are strongly preferred for DNN inference at the cost of reduced weight reuse. Note that if GEMV is conducted to support single batch inference, weight matrix data is ac- cessed only once. Such a streaming-like operation is highly memory-bound in modern computing systems based on von Nuemann architecture, where main memory (DRAM) is separated from the processing unit [7]. Moreover, if weight matrix size becomes larger, then the portion of memory-bound operations is also larger. Execution work- loads with little data reuse on computing systems, there- fore, would prevent an eï¬cient utilization of computing resources because of the problem of memory-wall (also known as von Neumann bottleneck) [8]. To alleviate a memory-access bottleneck from hardware perspectives, in- memory computing (in which computational operations are performed within the memory unit) has been widely studied [9]. In other words, for DNNs, combating the memory bottleneck is desperate enough to request a new hardware architecture design paradigm.
As a practical solution at algorithm level, model com- pression is an eï¬ective technique to achieve lower end-to- end latency. Model compression reduces not only oï¬-chip
*Both authors contributed equally to this work.
1In this paper, we refer to either GEMV or GEMM as few-batch multiplication for convenience.
memory accesses (and hence, low power consumption) on mobile but also main memory bandwidth requirements by shrinking memory footprint with negligible accuracy drop [3], [4]. Thus, model compression is being widely studied to accelerate inference computations. Popular model compression techniques include pruning [10]â[12], low-rank approximation [13], [14], and quantization [15]â [17].
In this work, we consider quantization because of its simple structure and high compression ratio [15]â[17]. The rationale behind quantization for DNNs is that we can reduce the number of bits to represent each parameter without noticeable model accuracy drop because DNNs include a lot of redundancy. Note that weights and activa- tions need to be quantized at diï¬erent times. Weights are ï¬xed during inference, and hence, weight quantization is performed in advance before performing inference. On the other hand, activation quantization should be conducted on-the-ï¬y with additional computations (for quantization) during inference. If the quantization algorithm is compli- cated, then the cost of dynamic quantization might be larger than the gain from quantization eï¬ects. In addition, activation compression may result in a serious accuracy degradation if training is not aware of a quantized struc- ture of activations. In this manuscript, thus, we study weight quantization only that is enough to accelerate matrix multiplication as we demonstrate later.
In this paper, we propose a novel matrix multiplica- tion algorithm dedicated to quantized DNNs that can be performed on modern computer (von Neumann) architec- tures. Even though quantization obviously reduces stor- age requirements on oï¬-chip memories, achieving higher performance with quantized DNNs on CPUs or GPUs is challenging. In particular, because data transfer on commercial processors is performed with a ï¬xed width (such as 32 bits) while weights can be quantized with an arbitrary number of bits, accessing multiple (non- uniformly) quantized weights may cause some waste in bandwidth utilization. In order not to waste memory bandwidth, bit-packing is essential and unpacking process must be followed to multiply the sign (represented by each bit) and the scaling factor (each bit has a diï¬erent scaling factor (See Fig. 2)), which is an overhead. In addition, decoding non-uniformly quantized weights may induce additional instructions. In other words, for binary-coding- based quantization, existing CPUs or GPUs may waste memory bandwidth or require additional special hardware. Our proposed method, called BiQGEMM2, addresses such concerns using lookup tables that accept quantized weights as indices. BiQGEMM is built on the observation that quantization leads to a lot of redundant computations. The key idea is that for any real-number vector (of acti- vations) v â Rn, the number of possible outcomes from
2non-GEneral Matrix to Matrix multiplication for Binary-coding- based Quantized neural networks
0101, =5 ay [foi a2 eee -âo tH MM Binary-coding-based aif) see INT4 Quantization 22 ee : eo . Bo ! rr Say, ay Fay, = O31 Hq, aa 31 32 : Bs | on ; om: nn m7 . . . 7 : *. ° By | aa <Uniform Quantization (INT4)> <Binary-coding-based Quantization>
1
2
# <Uniform Quantization (INT4)>
# <Binary-coding-based Quantization>
Fig. 2. A comparison on placement of each quantization bit between INT4 and Binary-coding-based quantization. Suppose that an ele- ment of a weight matrix is quantized by using 4 bits. Those 4 bits in a weight are continuously placed in INT4. On the other hand, for binary-coding-based quantization, quantized bits of one weight are distributed into 4 binary matrices. In INT4, each 4-bit vector represents a ï¬xed-point number to be multiplied by a scaling factor while each bit in binary-coding quantization indicates the sign of a corresponding scaling factor. As such, when a weight is quantized as 01012, how to dequantize such a binary number becomes vastly diï¬erent depending on the selected quantization method.
a dot product of v and a binary vector b â {â1, 1}n (of quantized weights) is limited to be 2n, all of which can be pre-computed and stored in lookup tables that can be reused. We show that by replacing a majority of arithmetic operations with table lookups, BiQGEMM calculates matrix multiplications with high performance and improved bandwidth utilization.
# II. Background
A. Quantization
DNNs intentionally involve a lot of redundancy to ex- pedite searching for an optimal local minimum. Thus, the model size of DNNs has a potential to be signiï¬cantly reduced by various compression algorithms. Quantization is gaining increasing popularity as an eï¬ective model compression technique. There exist various quantization formats and dequantized weights can be represented either by ï¬xed-point numbers (based on uniform quantization) or by ï¬oating-point numbers (based on codebook lookups or binary-coding quantization).
Note that codebook-based quantization presents high compression ratios for various models with ignorable model accuracy degradation because expected values af- ter quantization are still maintained to be ï¬oating-point numbers [18]. Even though codebook-based quantization is highly eï¬cient to reduce oï¬-chip memory footprint, computational complexity is not reduced at all after de- quantization. Fixed-point quantization, on the other hand, reduces both storage requirement and computational com- plexity. Since INT8 quantization associated with addi-
Fig. 3. An illustration of binary-coding-based quantization when the number of quantizaton bits is 3.
B, | [y;] lx} | lef = | by- 0 Fel [Bow | bel [Â¥]=2y a, ⬠Râ¢,B, ⬠{-1,1}7,x ER, y, E Râ¢, y = Ly; âoâ denotes element-wise multiplication. sum reduction
Fig. 4. GEMV with multi-bit quantized weight matrix (when quan- tized weights follow the structure of the binary codes).
tional techniques is introduced to be able to maintain the model accuracy of some well-known CNN models, INT8 has been adopted by various commercial tools [16]. Note that operations other than GEMV or GEMM need to be re-designed to function in INT8 while INT8-aware re- training may be necessary not to degrade model accuracy seriously. For example, layer normalization and softmax operations for attention blocks for the Transformer de- mand ï¬oating-point computations [16]. Accordingly, the overhead due to frequent conversions between ï¬xed-point formats and ï¬oating-point formats might be close to the cost of quantized matrix multiplication [16].
As an eï¬ort to reduce both computations and footprint signiï¬cantly, binary-coding-based quantization has been proposed [15], [17], [19]. Since expected values after binary- coding quantization remain to be ï¬oating-point numbers, accuracy degradation can be ignored even when only about 3 bits are used for quantization [15], [17]. Despite a possibility to highly simplify computations, binary-coding- based quantization has not been useful in practice because a computing system should allow bit-level memory access. In this manuscript, hence, we consider binary-coding- based quantization as a baseline to enable practical usages in commercialized computing systems.
Note that in the case of INT8, activations should be also quantized in order to allow ï¬xed-point GEMV or GEMM, while such activation quantization with ï¬oating- point-based quantization is optional. Activation quanti-
zation inherently demands dynamic quantization process during inference. Even though inference operations can be a lot more eï¬cient by previously proposed methods such as method of 4 Russian [20] or popcount- and XOR- logic [19], activation quantization requires 1) modiï¬ca- tions to training algorithm to severely restrict the range of activation values and 2) computational overhead for format conversions [15], [16], [19]. In this paper, we show that BiQGEMM presents high performance even when activations are maintained to be ï¬oating-point numbers.
B. Binary-coding-based quantization
When a real-number vector w â Rp is quantized into q bits by binary-coding-based quantization method, w is mapped into scaling factors αi â R and binary vectors bi â {â1, +1}p (1 ⤠i ⤠q). Then, w is approximated as q i=1 αibi where scaling factors are shared by multiple P elements in w. Scaling factors and binary vectors are obtained as follows:
# q
arg min αi,bi ||w â X i=1 αibi||2 (1)
such that quantization error is minimized. Since there is no analytical solution to minimize such quantization error, numerous heuristic approaches have been proposed [15], [17], [21]â[23].
The same principle of binary-coding-based vector quan- tization can be applied to matrices where quantization can be independently performed for each row or column. For a weight matrix quantized into binary matrices Bi with scaling factor vectors αi, multiplication with a real- number vector x produces an output vector y as follows:
# βw
y = X i=1 (αi ⦠(Bi · x)) (2)
where operation ⦠denotes element-wise multiplication (i.e., Hadamard product) and βw is the number of qua- tization bits for weights. Fig. 4 illustrates how to perform multiplication of multi-bit quantized weight matrices by a real-number vector. Note that for convenience, binary weight matrices Bis can be concatenated in vertical di- rection and multiplied by a vector x. Then, element-wise multiplication by scaling factor αi produces an intermedi- ate partial output yi. Finally, sum of vectors of yi yields the ï¬nal output y.
Consider that a real-number activation vector x is also quantized by using βa bits into sj â {â1, 1}n with scaling factors γj (1 ⤠j ⤠βa), the previous output y now can be computed as follows:
βw βa y = X i=1 (αi ⦠(Bi · X j=1 γjsj)). (3)
Eq. 3 suggests that activation quantization would increase the number of computations compared to Eq. 2, even though most computations are simple as bit-wise logic. It should be noted that without sophisticated hardware de- sign support for bit-wise logic incurred by binary-coding- quantization, activation quantization may degrade matrix multiplication performance.
C. Natural Language Processing
In order to set a range of parameters (such as matrix size) to be used for our experiments and to take into ac- count the impact of the proposed algorithm, we investigate natural language processing (NLP) as a representative application of BiQGEMM.
RNNs [24] and the Transformer [25] are being widely accepted as time-series data analysis tools to process natu- ral language tasks. Long short-term memory (LSTM) [26], compared with conventional RNNs, introduces additional gates in a unit to overcome long-term dependency and gradient vanishing problem in vanilla RNNs. As such, most recently proposed RNN-based networks employ LSTM as a basic unit to improve model accuracy on multiple benchmark language models. The Transformer presented another noticeable advance in NLP. By breaking the recurrent structure and fully exploiting attention mech- anism [27], the Transformer better ï¬gure out the rele- vance between words in the sentences. Correspondingly, the Transformer has been initially proposed for neural machine translation (NMT), and then extended to a wide range of NLP tasks, including BERT [28], with impressive results on GLUE [29] and SQUAD [30].
The structure of the Transformer can be divided into encoder layers and decoder layers. An encoder layer in- cludes one attention block structured as four (n à n) weight matrices and a feed-forward block with (n à 4n) and (4n à n) matrices, where n is the hidden size. Also, a decoder layer presents two attention blocks and a feed- forward block while the structure of each block is the same as that of the encoder. The number of encoder layers is chosen to be 6 (6) and n is selected to be 512 (1024) for the base (big) model. Weight matrices are fed into matrix multiplication operations and the weight matrix size is rapidly increasing to support various complicated tasks with increased model accuracy goals. For example, for NMT, most models that show excellent performance are based on the big model version of the Transformer [31]â [34]. T5, another variant of the Transformer, increases the number of weights to 11 billion and the number of layers to 24 [35].
BERT [28] is a pre-training-based model for applications that require only the encoder part of the Transformer. BERT models are known to continuously set new records on model accuracy with high number of encoder layers and hidden size (such as 24 and 1024, respectively). Associated with new training algorithms based on the large model of BERT, various advanced models, such as XLNet [36],
TABLE I Quantization Comparison on the Transformer
Data format (bits) (weight / activation) 32 / 32 8 / 8 32 / 32 8 / 8 6 / 6 4 / 4 32 / 32 4 / 32 3 / 32 2 / 32 1 / 32 English-to-German BLEU 27.68 27.30 (-0.22) 26.46 26.38 (-0.08) 26.98 (+0.52) 18.32 (-8.14) 25.8 25.5 (-0.3) 25.3 (-0.5) 23.9 (-1.9) 0.4 (-25.4) Models Baseline Uniform Baseline Ref [16] Ref [47] Uniform Baseline Binary- Coding (Greedy) Ref [48]
TABLE II Memory Usage with Different Number of Quantization Bits (Weights: 512-by-512 matrix, batch size: 18)
Memory (MB) Data format (bits) W A W A O O total 32 32 32 1.049 0.037 0.037 1.122 8 8 32 0.262 0.009 0.037 0.308 6 6 32 0.197 0.007 0.037 0.240 4 4 32 0.131 0.005 0.037 0.173 4 32 32 0.131 0.037 0.037 0.205 3 32 32 0.098 0.037 0.037 0.172 2 32 32 0.066 0.037 0.037 0.139
W: weights, A: activations (inputs), O: outputs.
RoBERTa [37], and ERNIE [38], [39], are being devel- oped. Ever increasing requests of higher accuracy demands only larger weight matrices. For instance, the biggest weight matrix size in xx-large model of ALBERT [40] is (4K Ã 16K), which requires 256 MB (with FP32) of mem- ory footprint. Such large weight matrices cannot avoid frequent DRAM accesses even if the same parameters are repeatedly reused over the whole network.
As for automatic speech recognition (ASR), similarly, the number of parameters is also increasing to accomplish higher model accuracy [41]â[46]. To illustrate, LAS is an end-to-end ASR DNN model based on bi-directional LSTM using six encoder layers with (2.5K Ã 5K) weight matrix structure and two decoder layers with (1.2K Ã 1.2K) weight matrix structure [45].
In sum, fast matrix multiplication with a matrix size of (at least) a few thousands is essential to realize DNNs of NLP tasks. Such high-performance matrix multiplication needs to assume that DNNs are compressed because of increasing number of parameters.
D. Quantizing the Transformer
Now we estimate the number of quantization bits using the Transformer that are being widely applied to vari- ous NLP tasks. Table I lists quantization results of the (base model) Transformer (designed to perform English to German translation) using uniform quantization [16], [47]
and binary-coding-based quantization with greedy approx- imation [21]. For uniform quantization results, we refer to the numbers from [16], [47]. For binary-coding-based quantization based on greedy approximation method (to reduce quantization error), we retrain the model using quantization-aware training algorithm introduced in [48] using WMT13 data set. When retraining the model, all hyper-parameters are the same as in the Transformer [25] except large initial learning rate by 2à and additional hyper-parameter of distortion step (introduced in [48]) that is set to be 2000. The baselines results for each quantization case are inherently diï¬erent due to diï¬erent initialization conditions and test set, and the number of quantization bits and translation quality (given as BLEU score) are described in Table I.
Table II shows the memory usage when weights and activations are quantized into diï¬erent number of bits while a matrix size is ï¬xed to be 512-by-512 (that is the size of an attention layer of the base Transformer). The number of sub-words in the test data set is 18 on average, and thus, batch size is 18. Note that because of relatively small dimension of activations, activation quantization does not reduce memory footprint as much as weight quantization, while more bits for weight quantization need to be assigned given a target model accuracy as shown in Table I. Such observation is consistent with other matrix sizes. Combining Table I and Table II, for BiQGEMM design considerations, we quantize only weights while we are mainly interested in a few bits for quantization (e.g., 1 to 3 quantization bits).
III. Methodology
A. Motivation and Deï¬nitions
Deï¬nition 1. LUT-unit µ is the length of a sub-vector to be used as an input argument of a table lookup function.
Deï¬nition 2. Given an m-by-n matrix denoted by A, Ar µ is a µ-by- mÃn µ matrix reshaped from A while maintaining column-wise traversal.
Deï¬nition 3. Given an m-by-n matrix denoted by A, A[i; j] is a sub-matrix of A formed by i-to-j columns when i ⤠j < n.
Deï¬nition 4. Given a column vector v of length n, v[i; j] is a sub-vector of v comprised of i-to-j rows when i ⤠j < n. Deï¬nition 5. Mµ â {â1, 1}2µÃµ denotes a matrix con- structed by concatenating all possible (non-redundant) 2µ binary vectors of µ length. We assume that a binary weight matrix B â {â1, 1}mÃn and an input matrix X â RnÃb are given, where m, n, and b are output size, input size, and batch size, respectively. Fig. 5 shows an example of a quantized (binary) weight matrix and an input matrix. In Fig. 5, each matrix is equally divided into three parts along with LUT-unit µ
Quantized weight matrix â â1, 1 Ã 1 â1 1 â1 â1 â1 1 â1 â1 â1 â1 1 1 â1 1 1 â1 1 â1 â1 1 1 1 â1 â1 1 â1 â1 1 â1 â1 â1 1 1 1 â1 1 â1 1 â1 1 1 â1 1 1 1 1 â1 1 â1 â1 1 1 1 â1 â1 1 â1 1 1 â1 1 1 â1 â1 â1 â1 1 1 â1 1 â1 â1 â1 1 1 â1 1 1 1 â1 1 1 â1 1 1 1 â1 â1 â1 1 1 1 1 â1 1 â
I n p u t m a t r i x â â Ã
Fig. 5. An example of quantized weight matrix B and input matrix X composed of two input vectors x0 and x1.
of 4. Considering a shaded part in Fig. 5, a row vector (having 4 binary digits) in B[4; 7] is one of 2µ possible combinations. Correspondingly, each row after a product of B[4; 7] and x0[4; 7] is also limited to be one of 2µ possible vectors. As an attempt to exploit a strictly limited space of available outputs, the product of Mµ and reshaped input matrix X r µ is pre-computed and stored in lookup tables. Then, pre-computed values are retrieved from lookup ta- bles using µ-bit binary digits in a weight matrix as a key (or an index). Note that when output size is large enough (i.e., 2µ ⪠m), computing eï¬ciency is enhanced because most arithmetic operations can be replaced by retrieval operations.
B. Algorithm Description
When LUT-unit µ is given as a parameter, each column in the product of Mµ and X r µ becomes entries of a sepa- rate lookup table. Fig. 6 shows an exemplary process of building lookup tables when µ is 4, where we deï¬ne a sub- vector xβ α corresponding to xβ
xβ α â = xα[µβ; µβ + µ â 1] (4)
qβ α â = Mµ · xβ α (5)
when 0 ⤠α < b and 0 ⤠β < n µ , where b, n, and µ are the batch size, the input size, and the LUT-unit, respectively. Then, the product of a sub-matrix B[µβ; µβ + µ + 1] and a sub-vector xβ α, instead of performing GEMV. In other words, partial products of GEMM are replaced with table lookups in BiQGEMM.
As shown in Fig. 6(b), building a lookup table can be optimized by using dynamic programming technique. Speciï¬cally, while constructing a lookup table q0 0, dy- namic programming reduces redundant arithmetic oper- ations (described as right-sided equations in Fig. 6(b)) compared to the case when GEMV using Mµ=4 and x0 0 is performed (described as left-sided equations in Fig. 6(b)). Algorithm 1 presents the pseudo code of building a lookup table with dynamic programming. In Fig. 6(b), each equa- tion is annotated with the corresponding line numbers in Algorithm 1. Note that every n/µ sub-vector per input
ec I 8 â1 -1 -1 -1 i Q 1-1-1 1 é ! -1 -1 1-1 i c Po aot Reta . Pi I ioâ -1 1-1-1 i > 1 1-1 1 |. a -1 1 1-1 [- : 1 111 | . 1-1 -1 -1/° toot rt ate TB, dm | m 1-1 4-1 ! ca 1-1 1. 1) xf xh x} x? xt x? 0] To i oR 14-1 -1 (8) 1) Tw. er 1 1-1 1 xpeR <n | M2 ! 11 1-1 ws] 13 = My â¬{-1,1} hrs i 3B 9.43.95 4? at a
Ã
(a) Construction of lookup tables. Each column vector qj sents a lookup table corresponding to xj i .
# in Q repre-
â
0 0 = 0 â® 3 0 0 = 0 = 1 â® 15 â 0 â 1 â 2 â 3 = 0 â 0 â 1 â 2 + 3 = 1 = 0 + 2 Ã 3 â 0 â 1 + 2 â 3 = 2 0 + 2 Ã 2 â 0 â 1 + 2 + 3 = 3 = 1 + 2 Ã 2 â 0 + 1 â 2 â 3 = 4 = 0 + 2 Ã 1 â 0 + 1 â 2 + 3 = 5 = 1 + 2 Ã 1 â 0 + 1 + 2 â 3 = 6 = 2 + 2 Ã 1 â 0 + 1 + 2 + 3 = 7 = 3 + 2 Ã 1 + 0 â 1 â 2 â 3 = 8 = 0 + 2 Ã 0 = â 7 + 0 â 1 â 2 + 3 = 9 = 1 + 2 Ã 0 = â 6 + 0 â 1 + 2 â 3 = 10 = 2 + 2 Ã 0 = â 5 + 0 â 1 + 2 + 3 = 11 = 3 + 2 Ã 0 = â 4 + 0 + 1 â 2 â 3 = 12 = 4 + 2 Ã 0 = â 3 + 0 + 1 â 2 + 3 = 13 = 5 + 2 Ã 0 = â 2 + 0 + 1 + 2 â 3 = 14 = 6 + 2 0 = â 1 L n e s 2 - 3 i L n e s 4 - 7 i L n e s 8 - 9 i A g o r i t l h m 1 . B u i l d L o o k U P ( 00 , = 4 ) 0 1 2 3 15 7 0 0
(b) Dynamic programming method to build a lookup table.
Fig. 6. when LUT-unit µ is 4. Illustration of two diï¬erent methods to build lookup tables
Algorithm 1 Build a lookup table with dynamic programming Input: LUT-unit µ â N (µ ⪠m), where m is output size Input: a sub-vector xβ Output: A lookup table (qβ α) 1: procedure BuildLookUP(xβ for i â 0 to µ-1 do 2: r0 â r0 + xi 3: 4: 5: 6: 7: 8: 9:
induces a distinct lookup table of 2µ entries, and hence, the time complexity of the proposed dynamic programming scheme Tc,dp is calculated as follows:
Tc,dp = O((2µ + µ â 1) · n µ · b) â O(2µ · n µ · b). (6)
Tc,dp obtained by our proposed technique is µ times less than Tc,mm = O(2µ · µ · nÃb µ ) that is time complexity
à â â x i r t a m t u p t u O 00 10 20 30 40 50 60 70 01 11 21 31 41 51 61 71 00 = + + 01 = + + â â® Key matrix â ⤠0 3 6 10 9 14 9 9 13 7 8 11 11 11 12 12 13 9 13 4 15 1 6 12 à -1, 1, 1, -1 = 01102 = 6 bit-packing â1 1 1 â1 â1 â1 1 â1 â1 â1 â1 1 1 â1 1 1 â1 1 â1 â1 1 â1 â1 1 1 1 â1 â1 1 â1 â1 â1 1 1 1 â1 1 â1 1 â1 1 1 â1 1 1 1 1 â1 1 â1 â1 1 1 1 â1 â1 1 â1 1 1 â1 1 1 â1 â1 â1 â1 1 1 â1 1 â1 â1 â1 1 1 â1 1 1 1 â1 1 1 â1 1 1 1 â1 â1 â1 1 1 1 1 â1 1 Quantized weight matrix 0 1 Lookup tables 0 0 1 0 2 0 1 1 2 1 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Ã
â â
Fig. 7. Illustrations of retrieving partial results from lookup tables.
of GEMM-based lookup table construction method (see Fig. 6(a)). Suppose our dynamic programming scheme is combined with multi-threading, each thread is responsible for constructing one or more lookup tables (i.e., one lookup table cannot be implemented by coordinating more than two threads). Another level of parallelism is achieved by vectorizing independent equations in Fig. 6(b) to uti- lize SIMD instructions. Note that because of dependency among equations in the case of dynamic programming, however, conventional GEMV or GEMM schemes might be a better choice to ï¬ll up lookup table entries if a computing system embeds a sizable number of simple computation units (e.g., GPU). In other words, depending on the characteristics of a processor to run BiQGEMM, a choice of appropriate scheme to implement lookup tables would be diï¬erent.
Fig. 7 shows an illustrated process of querying partial results from lookup tables. Consecutive LUT-unit µ binary data in a quantized weight matrix B are grouped into a sub-vector that can be converted into an integer number k where 0 ⤠k < 2µ (e.g., {â1, 1, 1, â1} is converted into an integer 6 when µ is 4). In other words, the key matrix K in Fig. 7 is a µ-bit-packed version of B, and each element in K serves as an index of table lookups3. Partial results retrieved from lookup table entries are accumulated for each sub-vector of length µ per input vector, and the BiQGEMM is completed.
All keys in the key matrix are used b (=the number of input vectors) times. For example, all of the leftmost lookup tables corresponding to every input (i.e., q0 0 and q0 1 in Fig. 7) are commonly accessed by the ï¬rst column of key matrix. In other words, diï¬erent lookup tables are accessed by a shared key. By continuously placing lookup table entries associated with the same key as shown in Fig. 8, SIMD operations in CPU are encouraged, and bank conï¬icts in GPU can be mitigated. As for GPU, scratchpad
3Note that matrix K instead of B can be loaded in advance into the system, since the weight matrices are ï¬xed during inference.
Row-major order 0 3 6 10 9 14 9 9 13 7 8 11 11 11 12 12 13 9 13 4 15 1 6 12 0 1 2 3 4 5 6 7
Fig. 8. Lookup tables arrangement considering SIMD operations when the batch size b is 4.
^ Algo. 1 â
^
Fig. 9. A tiling strategy for LUT-stationary BiQGEMM. Each cell in K and Y represents a scalar, while each cell in ËX and Q implies a sub- vector xj i , respectively, where ËX is a reshaped one from an input matrix X (i.e., 3-dimensional representation of Xr
memory or software controlled caches (i.e., shared memory in CUDA) can store lookup tables. Then, even if the memory accesses are irregular, multiple threads can fetch multiple data in parallel unless multiple addresses within the same memory bank are accessed. Thus, a penalty of irregular access to a lookup table on GPU is not as critical as that of CPU.
Algorithm 2 LUT-based inference with LUT- stationary tiling input: Key Matrix (K â Zmà n output: Output Matrix(Y â RmÃb) 1: procedure QueryLUT( ËX, K) 2: 3: 4: 5: 6:
A tiling approach for BiQGEMM is illustrated in Fig. 9 and described in Algorithm 2. To build a lookup ta- ble (LUT) without redundancy, BiQGEMM adopts an LUT-stationary tiling scheme. Two input arguments of BiQGEMM are given as a key matrix K and an input tensor ËX reshaped from an input matrix (while the re- shaped form is determined by a user-deï¬ned parameter µ). For tiling, tile width tw and height th need to be speciï¬ed. Each tile in LUT is operated by one thread (that is assigned to one SM in the case of GPU), and hence, multiple accesses to lookup tables in a block can be SIMDiï¬ed (by multiple CUDA cores in the case of GPU). Lookup tables in TQ â Q are not constructed in advance (i.e., not taken from DRAM), instead, implemented on- the-ï¬y by Algorithm 1 with sub-vectors xj k as inputs (Line 3 in Algorithm 2). After implementing lookup tables in TQ, pairs of tiles in Q and K corresponding to TQ are loaded successively and individually, and computed by us- ing TQ (Lines 4â6 in Algorithm 2). With LUT-stationary tiling, partial output results obtained by multiple threads are processed through either sum reduction or atomic additions to obtain the ï¬nal output values. Since each of m · n µ keys is utilized b times, the worst-case time complexity required to retrieve Tr is given as follows:
Tr = O(m · n µ · b). (7)
To process multi-bit quantization of weight matrices, BiQGEMM assumes that multiple binary weight matrices are concatenated as described in Fig. 4. Note that such concatenation does not increase the number of lookup tables, and thus for BiQGEMM, only an amount of lookup table retrieving operations increases as the number of quantization bits increases. In short, for multi-bit quan- tized weight matrices, Tr becomes O(m · n µ · b · β), where β is the number of quantization bits.
C. Complexity Analysis
Given an input matrix X â RnÃb and a quantized binary weight matrix B â {â1, 1}mÃn, a matrix multipli- cation Y = B ·X performed by GEMM yields O(m·n·b) as time complexity, where m, n, and b are output size, input size, and batch size, respectively (Fig. 1 shows an example when b is 1). Time complexity analysis on a matrix multi- plication performed by BiQGEMM, on the other hand, is divided into the following two parts: i) constructing lookup tables (Eq. 6) and ii) retrieving lookup table entries (Eq. 7). Correspondingly, time complexity of BiQGEMM T is presented as follows:
n µ n µ 2µ + m m · µ T = Tc,dp + Tr = O(2µ · · b + m · = O(m · n · b · ( · b) )) (8) (9)
If 2µ ⪠m is satisï¬ed, then T can be approximated as
T â O( m · n · b µ ). (10)
Note that as long as 2µ ⪠m, Eq. 10 holds regardless of a choice of algorithm to build lookup tables (i.e., irrespective of a selection between Tc,dp or Tc,mm). Then, by using BiQGEMM instead of conventional GEMM, time complexity of a matrix multiplication is reduced by µ. For multi-bit quantized weights, time complexity of both BiQGEMM and GEMM increases linearly with the num- ber of quantization bits.
Since the underlying principles of BiQGEMM are funda- mentally diï¬erent compared to GEMM, rethinking hard- ware designs is necessary. First, while performance of FMA units is directly related to GEMM performance, the usage of FMAs for BiQGEMM is limited to constructing lookup tables. Second, while cache design is useful for GEMM to utilize spatial locality in SRAM when loading a matrix by accessing successive data, BiQGEMM cannot eï¬ciently facilitate such a locality because accessing en- tries of lookup tables would be non-sequential in general (note that, nonetheless, such degraded locality is not fatal if BiQGEMM is associated with multi-batch inference on CPU or with scratchpad on GPU (see Fig. 8)). In addition, because BiQGEMM is desired to produce lookup tables (that are usually larger than an input matrix) to be placed in SRAM, an available range of tile size would be highly constrained compared to GEMM. Now, let us explain why BiQGEMM is designed to be eï¬ciently operated in CPUs or GPUs with quantized weights despite such issues (i.e., low utilization of FMAs and low data access locality). Note that with quantized weights, GEMM needs to decom- press bit-packed quantized weight data by 1) performing two-step operations to extract quantized weights bitwise from a much wider data container (such as INT32) and 2) conducting two-step arithmetic operations to convert data of the form of {0, 1} into the form of {â1, 1} (see Algorithm 3 and Fig. 2). On the other hand, BiQGEMM directly accesses and utilizes the bit-packed weight data as keys (or indices) of lookup tables without such additional decompressing steps. It should be noted that for quantized weights, overhead by decompression can outweigh the gain by the reduced memory footprint as we demonstrate in the next section.
Algorithm 3 Unpacking for GEMM input: packed data x, output: unpacked weight w â {â1, 1}32 1: procedure unpacking(x) for i â 0 to 31 do 2: 3: wi â ((((x>> i)&(1)) â 2) â 1
Consideration of existing hardware architecture de- signs is one of the keys to understanding the impacts of BiQGEMM on the system performance. For example, even though large tile size for BiQGEMM would result
TABLE III Machine configurations used in Section IV
Mobile PC GPGPU Cortex-A76 4 64KB/core 4/core 8GB 31.8 19.36G Ã 4 LLVM 8.0.7 Android 9 (4.14.78) Processor # of Cores L1D-cache SIMD lane DRAM * GB/s FLOPS Tesla v100 i7-7700 80 (SMs) 4 128 KB/SM 32 KB/core 16*4/SM 8/core 16 GB 16 GB 900 35.76 181.87G Ã 4 57.6G Ã 4 gcc 5.4.0 nvcc 10.2 ubuntu 16.04.6 (4.15.0)
Maximum memory bandwidth
in improved data reuse, current CPU or GPU designs allow limited tile size of BiQGEMM such that large batch size (i.e., compute-bound workload) might be less favor- able to BiQGEMM. New hardware design dedicated to BiQGEMM is, therefore, suggested as an important future work.
IV. Experimental Results
A. Setup
We implemented our proposed algorithm BiQGEMM in C++/CUDA with various compilers targeting diï¬erent processor architectures. Table III presents descriptions of systems where tests are performed. As for tests with CPUs, performance of BiQGEMM is compared with Intel MKL (mkl) [49], Eigen (eigen) [50], FBGEMM (int8f) [51], [52], QNNPACK (int8q) [53], [54] and an algorithm introduced in [55] (kCpu). Additionally BiQGEMM is also run by GPU and compared with cuBLAS (cublas) [56], a kernel code in CUDA samples (kGpu) [57], and XNOR- popcout (xnor) [22]. Note that we included int8f, int8q, and xnor for the purpose of comparison on performance even though model accuracy and required hardware de- signs are all diï¬erent. We generated synthetic matrices ï¬lled by random numbers as data sets for tests.
Our proposed algorithm accepts LUT-unit µ as a user- deï¬ned parameter that can aï¬ect system performance. Let us explain how LUT-unit µ is optimized in practice. µ determines physical space allocated for lookup tables. If µ increases, then the number of lookup tables decreases while the number of entries in each lookup table increases exponentially (see Fig. 6(a) and Eq. 6) (i.e., there exists a trade-oï¬ between the number of LUTs and the number of entries inside each LUT). Combined with output size m, LUT-unit µ speciï¬es a relative performance gain of BiQGEMM over GEMM. Speciï¬cally from Eq. 9, for a 2µ+m m·µ . Note given output size m, we can ï¬nd µ by arg minµ that in practical situations, hardware resources may limit the maximum µ (due to internal SRAM size), and thus, restrict tile size as well. Thus, theoretically optimized µ should be veriï¬ed empirically throughout extensive exper- iments. We use µ = 8 (for our entire tests) that turns out to be close to the value optimized in theory .
100 100 80 80 ) ) % ( n o i t r o p o r P 60 40 20 query build replace % ( n o i t r o p o r P 60 40 20 query build replace 0 512 1024 2048 Output Size 4096 8192 0 512 1024 2048 Output Size 4096 8192 (a) n = 1K, b = 32 (b) n = 2K, b = 32
Fig. 10. Runtime proï¬ling of BiQGEMM. As output size m increases, the portion of retrieving (query) operations also increases. For all matrix sizes selected, retrieving operations are dominating the entire performance.
) c e s m ( e m i t n u R 400 200 w/o unpack sGEMM w/ unpack ) c e s u ( e m i t n u R 400 200 w/o unpack sGEMM w/ unpack 0 32 64 1KÃ1K 128 32 64 2KÃ2K 128 0 32 64 1KÃ1K 128 32 64 2KÃ2K 128 (a) on CPU (b) on GPU
Fig. 11. The plots representing overhead incurred by unpacking bits when weights are 1-bit quantized, and kCpu [55] and kGpu [57] are used. The weight matrices used in these experiments are square matrices of order m(= n) with batch size 32, 64, and 128. To ensure fair comparison, the same compiler optimization is applied on the codes.
# B. BiQGEMM Runtime Proï¬ling
Fig. 10 represents the runtime portion of each operation when running BiQGEMM on a CPU with various output size m. Operations are mainly categorized into 1) lookup tables construction (build), 2) retrieving values (query), and 3) memory replacement for tiling (replace). As dis- cussed in Section III-C, increasing output size induces a larger proportion in the process of retrieving values, and correspondingly, more arithmetic operations in GEMM to be replaced with retrieval operations in BiQGEMM. Note that even when more quantization bits are assigned to each weight, BiQGEMM increases the retrieving op- erations only which are relatively inexpensive among 3 operations (see Fig. 4). As such, when weights are quan- tized, BiQGEMM is better implemented (for performance compared with a GEMM-based scheme) when output size is larger and weights are quantized with more bits.
C. GEMM with Quantized and Bit-packed Weights
To reduce memory footprint, bit-packing is essential for quantized models to be densely stored in a general data type, such as INT32. Through bit-packing with few batch
multiplication, memory-bound algorithms are accelerated by reduced memory bandwidth requirements. However, unpacking is required to be performed prior to running GEMM operations with packed data. Since unpacking fundamentally requires bit-level manipulations, unpacking on CPUs and GPUs may cause a large computational overhead. Indeed, Fig. 11 demonstrates such concerns. As- suming that weights are 1-bit quantized, Fig. 11 compares runtime of 3 diï¬erent scenarios (w/o unpack, sGEMM, and w/ unpack) depending on how the binary vectors are processed: âsGEMMâ indicates a case where only one bit is stored in a 32-bit container while âw/ unpackâ means multiplying bit-packed data after extracting bits through unpacking process. Since âsGEMMâ does not assume bit- packed data formats, quantization would not aï¬ect per- formance (i.e., performance would be the same as that of full-precision weights). Note that âw/o unpackâ measures runtime when bit-packed data is multiplied by a real vector without unpacking (i.e., products of a 32-bit packed data (a scalar) and a vector of length 32), which will produce incorrect result, but is useful to identify performance gain by decreased memory access latency. Runtime gap between âw/o unpackâ and âsGEMMâ implies performance gain by reduced memory footprint, whereas the diï¬erence between âw/o unpackâ and âw/ packâ runtime indicates latency overhead by unpacking operations. Fig. 11 conï¬rms that GEMM with quantized weight is ineï¬cient in terms of the response time even though quantization reduces DRAM memory access latency.
D. Comparison with others
Even though small batch size is preferred for inference to reduce response time, recently developed DNNs demand batch size to be larger than 1. For example, an input (in the form of a sequence) of Transformersâ encoder contains several sub-words (tokens) to detect hidden relationships between sub-words. Because all of the sub-words in the in- put are multiplied by the same weights concurrently, those sub-words are processed in a group manner. The number of sub-words, thus, is equivalent to batch size in terms of computation. Accordingly, we conduct experiments using various batch sizes ranging from 1 to 256 considering the number of sub-words used for the Transformer and its variants.
Since unpacking process adds signiï¬cant overhead to run GEMM-based schemes with quantized bit-packed weights as shown in Section IV-C, âsGEMMâ version (that stores only one bit in a 32-bit containers without packing) introduced in the previous subsection is selected to be compared with BiQGEMM. âsGEMMâ version does not beneï¬t from quantization, and therefore, 1-bit quantized weights and full-precision weights result in the same per- formance measured when using MKL (mkl) and Eigen (eigen) (thus, we do not specify whether weights are quantized in Fiq. 12 for eigen and mkl). Performance of BiQGEMM is measured when weights are quantized into
BiQGEMM 3-bit BiQGEMM 2-bit BiQGEMM 1-bit Eigen MKL FBGEMM (INT8) 5 0 1K 2K 1-Batch 4K 1K 2K 8-Batch 4K 1K 2K 16-Batch 4K 1K 2K 32-Batch 4K 1K 2K 128-Batch 4K 1K 2K 256-Batch 4K (a) PC (i7-7700) QNNPACK (INT8) Eigen BiQGEMM 3-bit BiQGEMM 2-bit BiQGEMM 1-bit 10 5 0 1K 2K 1-Batch 4K 1K 2K 4-Batch 4K 1K 2K 8-Batch 4K 1K 2K 16-Batch 4K 1K 2K 32-Batch 4K 1K 2K 128-Batch 4K 1K 2K 256-Batch 4K (b) Mobile (Cortex-A76)
# p U d e e p S
# p U d e e p S
Fig. 12. Speedup over eigen using 1-thread. Matrix size is given as m-by-1K. Output size m and batch size are annotated along the horizontal axis.
1, 2, or 3 bits. Note that the runtime of both BiQGEMM and GEMM with quantized weights linearly increases as the number of quantization bits increases. Then, combined with an observation that BiQGEMM 1-bit (BiQGEMM with 1-bit quantization) shows highest performance in Fig. 12, BiQGEMM is always faster than GEMM given the same quantization bits. Moreover, if batch size is small enough, then BiQGEMM performance with 2- or 3-bit quantization outperforms not only GEMM with full- precision but also GEMM with INT84. Thus, even if the latency is of the top priority in the inference system design (at the expense of increased memory footprint without quantization), BiQGEMM can be the selection when the number of quantization bits is small enough with allowable model accuracy degradation.
hardware resources as discussed in Section III. If batch size increases and data reuse is improved correspond- ingly, then overall computational eï¬ciency improvement of mkl, eigen, and int8f can be higher than that of BiQGEMM. For example, when batch size exceeds 128 in Fig. 12(a), eigen and mkl are faster than BiQGEMM with 3-bit quantization. Speciï¬c batch size determining whether BiQGEMM can be faster than GEMM depends on the system conï¬guration. For example, in the case of mobile CPU with low computational power (see Table III), BiQGEMM outperforms not only full-precision GEMM (eigen) but also INT8 GEMM (int8q) even when batch size becomes larger compared to the case of PC as de- scribed in Fig. 12(b).
Even though experimental results in Fig. 12 assumed only one thread, multi-threading linearly improves per- formance of both BiQGEMM and GEMM that can be parallelized by tiling techniques.
Fig. 12 shows that when input size is ï¬xed, larger output size enhances speedup of BiQGEMM signiï¬cantly because of higher reuse rate of lookup tables and correspondingly increased number of arithmetic operations to be replaced by simple table lookups. Large batch size, on the other hand, may have adverse eï¬ects on BiQGEMM performed by CPUs or GPUs. Fig. 12 describes that BiQGEMM can be slower than the other GEMM kernels if batch size and the number of quantization bits are beyond a certain threshold value. In theory, since time complexity of BiQGEMM is given as O( β·m·n·b ) and µ is empirically optimized to be 8, BiQGEMM with under 8-bit quan- tization is supposed to be always faster than GEMM (of full-precision) regardless of batch size. However, in reality, we need to consider limiting factors due to available
E. Experiments with GPU
BiQGEMM with compare kGpu, cuBLAS, and xnor. Both cublas and kGpu assume that only 1 bit is occupied in 32-bit containers (with unnecessary storage of 31 bits) for each quantized considered because weight In the case of unpacking is as slow as sGEMM). such that xnor, activations are quantized as well matrix multiplications by XNOR and popcount operations without unpacking procedure. Assuming weights and activations are βw- and βa-bit quantized, xnor shows time complexity of O(βw · βa · (m · n 32 · b)), where m, n, and b are output size, input size, and batch size, respectively. Although
4Unlike weight matrix, quantization and packing process of activa- tions (inputs) need to be performed during inference time, and thus, included in the elapsed times of int8f and int8p.
TABLE IV Runtime comparison on GPGPU
runtime (µsec) weights (n-by-n) batch size cublas 12 20 25 26 14 27 45 64 31 52 109 179 90 130 339 594 BiQGEMM kGpu xnor* 18 18 19 19 18 19 21 24 19 23 29 40 23 34 64 109 4 11 30 58 4 20 70 135 5 47 175 330 7 130 528 1005 1 32 128 256 1 32 128 256 1 32 128 256 1 32 128 256 22 24 39 63 36 57 120 204 93 153 366 661 213 614 1396 2516 512 1024 2048 4096
It includes the packing cost for inputs (activations), but not the quantization cost for input.
computations can activation further, training quantization algorithm modiï¬cations and computational overhead during inference as discussed in Section II at the cost of model accuracy drop.
cublas is provided as a library form developed by a chip vendor such that we select kGpu as a baseline that we modify to implement BiQGEMM (for xnor, we use a publicly available code that is also based on kGpu). Table IV shows runtime with various matrix sizes and batch sizes when each weight is 1-bit quantized (for xnor, activations are also 1-bit quantized). Diï¬erence in perfor- mance between BiQGEMM and kGpu represents the gain by reduced bandwidth requirement and by improved com- putational principles. As shown in Table IV, BiQGEMM is faster than kGpu by 1.08â¼30.42 times (as weight matrix size increases and batch size decreases, BiQGEMM becomes relatively faster). Note that if batch size is small enough (to be memory-bound), even compared with xnor, BiQGEMM presents the best performance.
# V. Discussion
Let C (m à n) be the product of two matrices A (m à k) and B (k à n). BiQGEMM presents high performance in accelerating such a matrix multiplication operation especially when m is large and n is small (when m and n correspond to an output size and a batch size, respectively, in this manuscript). Note that for well known DNNs performing NLP tasks (including Transformers and LSTM), most layers can be represented as matrices (to be computed by GEMM) along with large m and small n. In convolutional neural networks (CNN), operations for convolutions can be transformed into GEMM routines as follows: When a batch size, an output feature map size, a ï¬lter size, a number of input channels, and a number
of output channels are given as b, ho à wo, hf à wf , ci, and co, respectively, then m, k, and n correspond to (co), (ci à hf à wf ), and (b à ho à wo), respectively [58]. Correspondingly, compared to NLP tasks, CNNs usually yield relatively small m and relatively large n. As such, BiQGEMM would not be the best choice for CNNs if latency is the major concern. In addition, activations are also required to be quantized in CNNs to reduce memory footprint because the amount of activations are usually larger than that of weights. For CNNs, thus, it would be necessary to consider INT8 operation or XNOR-popcnt accompanied by compressing both weights and activations. There are a variety of issues (with diï¬erent priority) on inference implementation that we need to consider to decide the optimal compression technique. For instance, lowering end-to-end latency may be of the utmost im- portance while reducing memory footprint can be critical. Also, an acceptable degree of accuracy drop highly aï¬ects the choice of a particular model compression method. Some networks are very sensitive to activation quanti- zation, but others may not. As such, a large spectrum of model compression techniques is demanded to support optimizing various aspects of inference implementation. In this regard, BiQGEMM is able to enlarge such a spectrum. For DNNs associated with NLP tasks, BiQGEMM based on the binary-coding-based quantization can be a reason- able solution to accelerate computations while shrinking memory footprint even without the need to compress activations.
# VI. Conclusion
We proposed an eï¬cient matrix-to-matrix multiplica- tion technique dedicated to quantized neural networks. When weights are quantized, available output space of computational results is quite limited such that for a large matrix multiplication, a lot of computations be- come redundant. BiQGEMM removes such redundancy by replacing multiplications with table lookups. More- over, because commercial processors enable only a ï¬xed data transfer width, a lot of memory bandwidth may be wasted when weights are non-uniformly quantized into a few bits. BiQGEMM provides a way to access mul- tiple quantized weights simultaneously regardless of the number of quantization bits. Hence, memory bandwidth utilization is enhanced by BiQGEMM signiï¬cantly while required memory bandwidth is reduced by quantization. We demonstrated that BiQGEMM is a lot faster than previous matrix multiplication schemes especially when matrix size is large and batch size is small.
# References
[1] E. Li, L. Zeng, Z. Zhou, and X. Chen, âEdge AI: On-demand accelerating deep neural network inference via edge computing,â IEEE Transactions on Wireless Communications, 2019.
[2] A. Reuther, P. Michaleas, M. Jones, V. Gadepally, S. Samsi, and J. Kepner, âSurvey and benchmarking of machine learning accelerators,â arXiv preprint arXiv:1908.11348, 2019.
[3] T. Choudhary, V. Mishra, A. Goswami, and J. Sarangapani, âA comprehensive survey on model compression and acceleration,â Artiï¬cial Intelligence Review, pp. 1â43, 2020.
[4] Y. Cheng, D. Wang, P. Zhou, and T. Zhang, âA survey of model compression and acceleration for deep neural networks,â arXiv preprint arXiv:1710.09282, 2017.
[5] Z. Zhou, X. Chen, E. Li, L. Zeng, K. Luo, and J. Zhang, âEdge intelligence: Paving the last mile of artiï¬cial intelligence with edge computing,â Proceedings of the IEEE, vol. 107, no. 8, pp. 1738â1762, 2019.
[6] D. He, Z. Wang, and J. Liu, âA survey to predict the trend of AI-able server evolution in the cloud,â IEEE Access, vol. 6, pp. 10591â10602, 2018.
[7] J. Von Neumann, âFirst draft of a report on the EDVAC,â IEEE Annals of the History of Computing, vol. 15, no. 4, pp. 27â75, 1993.
[8] J. L. Hennessy and D. A. Patterson, Computer architecture: a quantitative approach. Elsevier, 2011.
[9] E. Eleftheriou, ââin-memory computingâ: Accelerating ai ap- plications,â in 2018 48th European Solid-State Device Research Conference (ESSDERC), pp. 4â5, IEEE, 2018.
[10] S. Han, H. Mao, and W. J. Dally, âDeep compression: Compress- ing deep neural networks with pruning, trained quantization and huï¬man coding,â arXiv preprint arXiv:1510.00149, 2015. [11] Z. Liu, M. Sun, T. Zhou, G. Huang, and T. Darrell, âRethinking the value of network pruning,â arXiv preprint arXiv:1810.05270, 2018.
[12] D. Lee, S. J. Kwon, B. Kim, P. Kapoor, and G.-Y. Wei, âNetwork pruning for low-rank binary indexing,â arXiv preprint arXiv:1905.05686, 2019.
[13] T. N. Sainath, B. Kingsbury, V. Sindhwani, E. Arisoy, and B. Ramabhadran, âLow-rank matrix factorization for deep neu- ral network training with high-dimensional output targets,â in 2013 IEEE international conference on acoustics, speech and signal processing, pp. 6655â6659, IEEE, 2013.
[14] Y. Li, Y. Liang, and A. Risteski, âRecovery guarantee of weighted low-rank approximation via alternating minimiza- tion,â in International Conference on Machine Learning, pp. 2358â2367, 2016.
[15] C. Xu, J. Yao, Z. Lin, W. Ou, Y. Cao, Z. Wang, and H. Zha, âAl- ternating multi-bit quantization for recurrent neural networks,â arXiv preprint arXiv:1802.00150, 2018.
[16] A. Bhandare, V. Sripathi, D. Karkada, V. Menon, S. Choi, K. Datta, and V. Saletore, âEï¬cient 8-bit quantization of Transformer neural machine language translation model,â arXiv preprint arXiv:1906.00532, 2019.
[17] D. Zhang, J. Yang, D. Ye, and G. Hua, âLQ-Nets: Learned quan- tization for highly accurate and compact deep neural networks,â in Proceedings of the European conference on computer vision (ECCV), pp. 365â382, 2018.
[18] P. Stock, A. Joulin, R. Gribonval, B. Graham, and H. Jégou, âAnd the bit goes down: Revisiting the quantization of neural networks,â arXiv preprint arXiv:1907.05686, 2019.
[19] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, âXNOR- Net: Imagenet classiï¬cation using binary convolutional neural networks,â in European conference on computer vision, pp. 525â 542, Springer, 2016.
[20] A. V. Aho and J. E. Hopcroft, The design and analysis of computer algorithms. Pearson Education India, 1974.
[21] Y. Guo, A. Yao, H. Zhao, and Y. Chen, âNetwork sketching: Exploiting binary structure in deep CNNs,â in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5955â5963, 2017.
[22] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Ben- gio, âBinarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1,â arXiv preprint arXiv:1602.02830, 2016.
[23] Y. He and S. Han, âADC: automated deep compres- sion and acceleration with reinforcement learning,â CoRR, vol. abs/1802.03494, 2018.
[24] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, âLearning representations by back-propagating errors,â nature, vol. 323, no. 6088, pp. 533â536, 1986.
[25] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Å. Kaiser, and I. Polosukhin, âAttention is all you need,â in Advances in neural information processing systems, pp. 5998â 6008, 2017.
[26] S. Hochreiter and J. Schmidhuber, âLong short-term memory,â Neural computation, vol. 9, no. 8, pp. 1735â1780, 1997.
[27] D. Bahdanau, K. Cho, and Y. Bengio, âNeural machine trans- lation by jointly learning to align and translate,â arXiv preprint arXiv:1409.0473, 2014.
[28] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, âBERT: Pre-training of deep bidirectional transformers for language understanding,â arXiv preprint arXiv:1810.04805, 2018. [29] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman, âGLUE: A multi-task benchmark and analysis plat- language understanding,â arXiv preprint form for natural arXiv:1804.07461, 2018.
[30] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, âSQuAD: 100,000+ questions for machine comprehension of text,â arXiv preprint arXiv:1606.05250, 2016.
[31] K. Ahmed, N. S. Keskar, and R. Socher, âWeighted Trans- former network for machine translation,â arXiv preprint arXiv:1711.02132, 2017.
[32] P. Shaw, J. Uszkoreit, and A. Vaswani, âSelf-attention preprint with relative arXiv:1803.02155, 2018. position representations,â arXiv
[33] M. Ott, S. Edunov, D. Grangier, and M. Auli, âScaling neural machine translation,â arXiv preprint arXiv:1806.00187, 2018.
[34] S. Edunov, M. Ott, M. Auli, and D. Grangier, âUnderstanding back-translation at scale,â arXiv preprint arXiv:1808.09381, 2018.
[35] C. Raï¬el, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, âExploring the limits of transfer learning with a uniï¬ed text-to-text Transformer,â ArXiv, vol. abs/1910.10683, 2019.
[36] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le, âXLNet: Generalized autoregressive pretraining for language understanding,â in Advances in neural information processing systems, pp. 5754â5764, 2019.
[37] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, âRoBERTa: A robustly optimized BERT pretraining approach,â arXiv preprint arXiv:1907.11692, 2019.
[38] Z. Zhang, X. Han, Z. Liu, X. Jiang, M. Sun, and Q. Liu, âERNIE: Enhanced language representation with informative entities,â arXiv preprint arXiv:1905.07129, 2019.
[39] Y. Sun, S. Wang, Y. Li, S. Feng, H. Tian, H. Wu, and H. Wang, âERNIE 2.0: A continual pre-training framework for language understanding,â arXiv preprint arXiv:1907.12412, 2019. [40] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, âALBERT: A lite BERT for self-supervised learning of language representations,â arXiv preprint arXiv:1909.11942, 2019.
[41] S. Karita, N. Chen, T. Hayashi, T. Hori, H. Inaguma, Z. Jiang, M. Someki, N. E. Y. Soplin, R. Yamamoto, X. Wang, et al., âA comparative study on Transformer vs RNN in speech applica- tions,â arXiv preprint arXiv:1909.06317, 2019.
[42] S. Karita, N. E. Y. Soplin, S. Watanabe, M. Delcroix, A. Ogawa, and T. Nakatani, âImproving Transformer-based end-to-end speech recognition with connectionist temporal classiï¬cation and language model integration,â Proc. Interspeech 2019, pp. 1408â1412, 2019.
[43] K. Irie, R. Prabhavalkar, A. Kannan, A. Bruguier, D. Rybach, and P. Nguyen, âOn the choice of modeling unit for sequence-to- sequence speech recognition,â Proc. Interspeech 2019, pp. 3800â 3804, 2019.
[44] C. Lüscher, E. Beck, K. Irie, M. Kitza, W. Michel, A. Zeyer, R. Schlüter, and H. Ney, âRWTH ASR systems for LibriSpeech: Hybrid vs attention-w/o data augmentation,â arXiv preprint arXiv:1905.03072, 2019.
[45] D. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le, âSpecAugment: A simple data augmenta- tion method for automatic speech recognition,â arXiv preprint arXiv:1904.08779, 2019.
[46] K. J. Han, R. Prieto, K. Wu, and T. Ma, âState-of-the-art speech recognition using multi-stream self-attention with dilated 1D convolutions,â arXiv preprint arXiv:1910.00716, 2019.
[47] G. Prato, E. Charlaix, and M. Rezagholizadeh, âFully quan- tized Transformer for improved translation,â arXiv preprint arXiv:1910.10485, 2019.
[48] D. Lee, P. Kapoor, and B. Kim, âDeeptwist: Learning model compression via occasional weight distortion,â arXiv preprint arXiv:1810.12823, 2018.
[49] E. Wang, Q. Zhang, B. Shen, G. Zhang, X. Lu, Q. Wu, and Y. Wang, âIntel math kernel library,â in High-Performance Computing on the Intel® Xeon Phiâ¢, pp. 167â188, Springer, 2014.
[50] G. Guennebaud, B. Jacob, et al., âEigen v3.â
http://eigen.tuxfamily.org, 2010. [51] D. Khudia, P. Basu, S. Deng,
J. Huang, H. Liu, J. Park, and M. Smelyanskiy, âFacebook GEMM library.â https://github.com/pytorch/fbgemm, 2018.
[52] D. Khudia, P. Basu, and S. Deng, âOpen-sourcing inference..â state-of-the-art server-side
FBGEMM for https//engineering.fb.com/ml-applications/fbgemm/, 2018. [53] M. Dukhan, Y. Wu, H. Lu, and B. Maher, âQNNPACK: Quan- tized neural netwrok package.â https://engineering.fb.com/ml- applications/qnnpack/, 2018.
[54] M. Dukhan, Y. Wu, and H. Lu, âQNNPACK: Open learning.â source https://engineering.fb.com/ml-applications/qnnpack/, 2018. library for optimized mobile deep
[55] E. W. Weisstein, âMatrix multiplication, resource.â from http://mathworld.wolfram.com/MatrixMultiplication.html/. MathWorldâa wolfram web
# NVIDIA
[56] C. Nvidia, âcuBLAS library,â NVIDIA Corporation, Santa Clara, California, vol. 15, no. 27, p. 31, 2008.
[57] V. Volkov and J. W. Demmel, âBenchmarking GPUs to tune dense linear algebra,â in SCâ08: Proceedings of the 2008 ACM/IEEE conference on Supercomputing, pp. 1â11, IEEE, 2008.
[58] S. Chetlur, C. Woolley, P. Vandermersch, J. Cohen, J. Tran, B. Catanzaro, and E. Shelhamer, âcuDNN: Eï¬cient primitives for deep learning,â arXiv preprint arXiv:1410.0759, 2014. | {
"id": "1904.08779"
} |
2005.10243 | What Makes for Good Views for Contrastive Learning? | Contrastive learning between multiple views of the data has recently achieved
state of the art performance in the field of self-supervised representation
learning. Despite its success, the influence of different view choices has been
less studied. In this paper, we use theoretical and empirical analysis to
better understand the importance of view selection, and argue that we should
reduce the mutual information (MI) between views while keeping task-relevant
information intact. To verify this hypothesis, we devise unsupervised and
semi-supervised frameworks that learn effective views by aiming to reduce their
MI. We also consider data augmentation as a way to reduce MI, and show that
increasing data augmentation indeed leads to decreasing MI and improves
downstream classification accuracy. As a by-product, we achieve a new
state-of-the-art accuracy on unsupervised pre-training for ImageNet
classification ($73\%$ top-1 linear readout with a ResNet-50). In addition,
transferring our models to PASCAL VOC object detection and COCO instance
segmentation consistently outperforms supervised pre-training.
Code:http://github.com/HobbitLong/PyContrast | http://arxiv.org/pdf/2005.10243 | Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola | cs.CV, cs.LG | NeurIPS 2020. Project page: https://hobbitlong.github.io/InfoMin/ | null | cs.CV | 20200520 | 20201218 | 0 2 0 2 c e D 8 1
] V C . s c [
3 v 3 4 2 0 1 . 5 0 0 2 : v i X r a
# What Makes for Good Views for Contrastive Learning?
Yonglong Tian MIT CSAIL
Chen Sun Google, Brown University
Ben Poole Google Research
Dilip Krishnan Google Research
Cordelia Schmid Google Research
Phillip Isola MIT CSAIL
# Abstract
Contrastive learning between multiple views of the data has recently achieved state of the art performance in the ï¬eld of self-supervised representation learning. Despite its success, the inï¬uence of different view choices has been less studied. In this paper, we use theoretical and empirical analysis to better understand the impor- tance of view selection, and argue that we should reduce the mutual information (MI) between views while keeping task-relevant information intact. To verify this hypothesis, we devise unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their MI. We also consider data augmenta- tion as a way to reduce MI, and show that increasing data augmentation indeed leads to decreasing MI and improves downstream classiï¬cation accuracy. As a by- product, we achieve a new state-of-the-art accuracy on unsupervised pre-training for ImageNet classiï¬cation (73% top-1 linear readout with a ResNet-50)1.
# Introduction
It is commonsense that how you look at an object does not change its identity. Nonetheless, Jorge Luis Borges imagined the alternative. In his short story on Funes the Memorious, the titular character becomes bothered that a âdog at three fourteen (seen from the side) should have the same name as the dog at three ï¬fteen (seen from the front)" [8]. The curse of Funes is that he has a perfect memory, and every new way he looks at the world reveals a percept minutely distinct from anything he has seen before. He cannot collate the disparate experiences.
Most of us, fortunately, do not suffer from this curse. We build mental representations of identity that discard nuisances like time of day and viewing angle. The ability to build up view-invariant representations is central to a rich body of research on multiview learning. These methods seek representations of the world that are invariant to a family of viewing conditions. Currently, a popular paradigm is contrastive multiview learning, where two views of the same scene are brought together in representation space, and two views of different scenes are pushed apart.
This is a natural and powerful idea but it leaves open an important question: âwhich viewing conditions should we be invariant to?" Itâs possible to go too far: if our task is to classify the time of day then we certainly should not use a representation that is invariant to time. Or, like Funes, we could go not far enough: representing each speciï¬c viewing angle independently would cripple our ability to track a dog as it moves about a scene.
We therefore seek representations with enough invariance to be robust to inconsequential variations but not so much as to discard information required by downstream tasks. In contrastive learning,
# 1Project page: http://hobbitlong.github.io/InfoMin
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
the choice of âviews" is what controls the information the representation captures, as the framework results in representations that focus on the shared information between views [53]. Views are commonly different sensory signals, like photos and sounds [3], or different image channels [66] or slices in time [69], but may also be different âaugmented" versions of the same data tensor [10]. If the shared information is small, then the learned representation can discard more information about the input and achieve a greater degree of invariance against nuisance variables. How can we ï¬nd the right balance of views that share just the information we need, no more and no less?
We investigate this question in two ways: 1) we demonstrate that the optimal choice of views depends critically on the downstream task. If you know the task, it is often possible to design effective views. 2) We empirically demonstrate that for many common ways of generating views, there is a sweet spot in terms of downstream performance where the mutual information (MI) between views is neither too high nor too low.
Our analysis suggests an âInfoMin principle". A good set of views are those that share the minimal information necessary to perform well at the downstream task. This idea is related to the idea of minimal sufï¬cient statistics [61] and the Information Bottleneck theory [68, 2], which have been previously articulated in the representation learning literature. This principle also complements the already popular âInfoMax principle" [45] , which states that a goal in representation learning is to capture as much information as possible about the stimulus. We argue that maximizing information is only useful in so far as that information is task-relevant. Beyond that point, learning representations that throw out information about nuisance variables is preferable as it can improve generalization and decrease sample complexity on downstream tasks [61].
Based on our ï¬ndings, we also introduce a semi-supervised method to learn views that are effective for learning good representations when the downstream task is known. We additionally demonstrate that the InfoMin principle can be practically applied by simply seeking stronger data augmentation to further reduce mutual information toward the sweet spot. This effort results in state of the art accuracy on a standard benchmark.
Our contributions include:
Demonstrating that optimal views for contrastive representation learning are task-dependent. ⢠Empirically ï¬nding a U-shaped relationship between an estimate of mutual information and
representation quality in a variety of settings.
A new semi-supervised method to learn effective views for a given task. ⢠Applying our understanding to achieve state of the art accuracy of 73.0% on the ImageNet linear
readout benchmark with a ResNet-50.
# 2 Related Work
Recently the most competitive methods for learning representations without labels have been self- supervised contrastive representation learning [53, 32, 73, 66, 62, 10]. These methods learn represen- tations by a âcontrastiveâ loss which pushes apart dissimilar data pairs while pulling together similar pairs, an idea similar to exemplar learning [21]. Models based on contrastive losses have signiï¬cantly outperformed other approaches [80, 38, 54, 66, 20, 52, 19, 24, 78].
One of the major design choices in contrastive learning is how to select the similar (or positive) and dissimilar (or negative) pairs. The standard approach for generating positive pairs without additional annotations is to create multiple views of each datapoint. For example: luminance and chrominance decomposition [66], randomly augmenting an image twice [73, 10, 6, 28, 76, 63, 81, 83], using different time-steps of videos [53, 82, 59, 27, 26], patches of the same image [34, 53, 32], multiple sensory data [50, 12, 55], text and its context [48, 75, 46, 41], or representations of student and teacher models [67]. Negative pairs can be randomly chosen images/videos/texts. Theoretically, we can think of the positive pairs as coming from a joint distribution over views p(v1, v2), and the negative pairs from a product of marginals p(v1)p(v2). The contrastive learning objective InfoNCE [53] (or Deep InfoMax [32]) is developed to maximize a lower bound on the mutual information between the two views I(v1; v2). Such connection has been discussed further in [57, 70].
Leveraging labeled data in contrastive representation learning has been shown to guide representations towards task-relevant features that improve performance [77, 31, 36, 72]. Here we use labeled data
2
to learn better views, but still perform contrastive learning using only unlabeled data. Future work could combine these approaches to leverage labels for both view learning and representation learning. Besides, previous work [4] has studied the effects of augmentation with different amount of images.
# 3 What Are the Optimal Views for Contrastive Learning?
In this section, we ï¬rst introduce the standard multiview contrastive representation learning formula- tion, and then investigate what would be the optimal views for contrastive learning.
# 3.1 Multiview Contrastive Learning
Given two random variables v1 and v2, the goal of contrastive learning is to learn a parametric function to discriminate between samples from the empirical joint distribution p(v1)p(v2|v1) and samples from the product of marginals p(v1)p(v2). The resulting function is an estimator of the mutual information between v1 and v2, and the InfoNCE loss [53] has been shown to maximize a lower bound on I(v1; v2). In practice, given an anchor point v1,i, the InfoNCE loss is optimized to score the correct positive v2,i â¼ p(v2|v1,i) higher compared to a set of K distractors v2,j â¼ p(v2):
elv1ive2i) KB oh(vi3,v2,5 Vju1e (vatevag) Lyce = âE | log dd)
Minimizing this loss equivalently maximizes a lower bound (a.k.a. INCE(v1; v2)) on I(v1; v2), i.e., I(v1; v2) ⥠log(K) â LNCE = INCE(v1; v2). In practice, v1 and v2 are two views of the data x, such as different augmentations of the same image [73, 6, 28, 11, 10], different image channels [66], or video and text pairs [65, 47, 42]. The score function h(·, ·) typically consists of two encoders (f1 for v1 and f2 for v2), which may or may not share parameters depending on whether v1 and v2 are from the same domain. The resulting representations are z1 = f1(v1) and z2 = f2(v2) (see Fig. 1a). Deï¬nition 1. (Sufï¬cient Encoder) The encoder f1 of v1 is sufï¬cient in the contrastive learning framework if and only if I(v1; v2) = I(f1(v1); v2).
Intuitively, the encoder f1 is sufï¬cient if the amount of information in v1 about v2 is lossless during the encoding procedure. In other words, z1 has kept all the information that the contrastive learning objective requires. Symmetrically, f2 is sufï¬cient if I(v1; v2) = I(v1; f2(v2)). Deï¬nition 2. (Minimal Sufï¬cient Encoder) A sufï¬cient encoder f1 of v1 is minimal if and only if I(f1(v1); v1) ⤠I(f (v1); v1), â f that is sufï¬cient.
Among those encoders which are sufï¬cient, the minimal ones only extract relevant information of the contrastive task and throw away other irrelevant information. This is appealing in cases where the views are constructed in a way that all the information we care about is shared between them.
The representations learned in the contrastive framework are typically used in a separate downstream task. To characterize what representations are good for a downstream task, we deï¬ne the optimality of representations. To make notation simple, we use z to mean it can be either z1 or z2. Deï¬nition 3. (Optimal Representation of a Task) For a task T whose goal is to predict a semantic label y from the input data x, the optimal representation zâ encoded from x is the minimal sufï¬cient statistic with respect to y.
This says a model built on top of zâ has all the information necessary to predict y as accurately as if it were to access x. Furthermore, zâ maintains the smallest complexity, i.e., containing no other information besides that about y, which makes it more generalizable [61]. We refer the reader to [61] for a more in depth discussion about optimal visual representations and minimal sufï¬cient statistics.
# 3.2 Three Regimes of Information Captured
As our representations z1, z2 are built from our views and learned by the contrastive objective with the assumption of minimal sufï¬cient encoders, the amount and type of information shared between v1 and v2 (i.e., I(v1; v2)) determines how well we perform on downstream tasks. As in information
3
transfer b ) # bits I(vi;V2) performance Sweet Spot not enough too much \ hypothesis signal noise I(x; y) 5 Twva) 1059) T(vy:vg) Twava)-T069) T(vz:va)
Figure 1: (a) Schematic of multiview contrastive representation learning, where an image is split into two views, and passed through two encoders to learn an embedding where the views are close relative to views from other images. (b) When we have views that maximize I(v1; y) and I(v2; y) (how much task-relevant information is contained) while minimizing I(v1; v2) (information shared between views, including both task-relevant and irrelevant information), there are three regimes: missing information which leads to degraded performance due to I(v1; v2) < I(x; y); excess noise which worsens generalization due to additional noise; sweet spot where the only information shared between v1 and v2 is task-relevant and such information is complete.
bottleneck [68], we can trace out a tradeoff between how much information our views share about the input, and how well our learned representation performs at predicting y for a task. Depending on how our views are constructed, we may ï¬nd that we are keeping too many irrelevant variables while discarding relevant variables, leading to suboptimal performance on the information plane. Alternatively, we can ï¬nd the views that maximize I(v1; y) and I(v2; y) (how much information is contained about the task label) while minimizing I(v1; v2) (how much information is shared about the input, including both task-relevant and irrelevant information). Even in the case of these optimal traces, there are three regimes of performance we can consider that are depicted in Fig. 1b, and have been discussed previously in information bottleneck literature [68, 2, 23]:
1. Missing information: When I(v1; v2) < I(x; y), there is information about the task-relevant variable that is discarded by the view, degrading performance.
2. Sweet spot: When I(v1; y) = I(v2; y) = I(v1; v2) = I(x; y), the only information shared between v1 and v2 is task-relevant, and there is no irrelevant noise.
3. Excess noise: As we increase the amount of information shared in the views beyond I(x; y), we begin to include additional information that is irrelevant for the downstream task. This can lead to worse generalization on the downstream task [2, 60].
We hypothesize that the best performing views will be close to the sweet spot: containing as much task-relevant information while discarding as much irrelevant information in the input as possible. More formally, the following InfoMin proposition articulates which views are optimal supposing that we know the speciï¬c downstream task T in advance. The proof is in Section A.2 of the Appendix. Proposition 3.1. Suppose f1 and f2 are minimal sufï¬cient encoders. Given a downstream task T â) = arg minv1,v2 I(v1; v2), with label y, the optimal views created from the data x are (v1 1 (or zâ subject to I(v1; y) = I(v2; y) = I(x; y). Given v1 2) learned by contrastive learning is optimal for T (Def 3), thanks to the minimality and sufï¬ciency of f1 and f2.
Unlike in information bottleneck, for contrastive learning we often do not have access to a fully- labeled training set that speciï¬es the downstream task in advance, and thus evaluating how much task-relevant information is contained in the views and representation at training time is challenging. Instead, the construction of views has typically been guided by domain knowledge that alters the input while preserving the task-relevant variable.
# 3.3 View Selection Inï¬uences Mutual Information and Accuracy
The above analysis suggests that transfer performance will be upper-bounded by a reverse-U shaped curve (Fig. 1b, right), with the sweet spot at the top of the curve. In theory, when the mutual information between views is changed, information about the downstream task and nuisance variables can be selectively included or excluded, biasing the learned representation, as shown in Fig. 2. The upper-bound reverse-U might not be reached if views are selected that share noise rather than signal. But practically, a recent study [66] suggests that the reverse-U shape is quite common. Here we show
4
Transfer performance task-relevant info nuisance info Transfer performance task-relevant info nuisance info PI (vaiva) 09 @ @ @ @ (a) (b)
Figure 2: As the mutual information between views is changed, information about the downstream task (green) and nuisance variables (red) can be selectively included or excluded, biasing the learned representation. (a) depicts a scenario where views are chosen to preserve downstream task information between views while throwing out nuisance information, while in (b) reducing MI always throws out information relevant for the task leading to decreasing performance as MI is reduced.
several examples where reducing I(v1; v2) improves downstream accuracy. We use INCE as a neural proxy for I, and note it depends on network architectures. Therefore for each plot in this paper, we only vary the input views while keeping other settings the same, to make the results comparable.
Zsp | © patch distance ve g @ patch distance ve > 92 Fo ee 8 51 ots os § g92 360 a 0 53 350 2se°e < 224 $ os? S52 8 1497 @84 Pa 2 z Fd 382 G 48 4 57] esa 4 2.0 2.5 3.0 3.5 4.0 45 5.0 Ince 2.0 4.0 4.5 5.0 Ince 25 3.0 35 (a) STL-10 classification (b) CIFAR-10 classification
Figure 3: We create views by using pairs of image patches at various offsets from each other. As IN CE is reduced, the downstream task accuracy ï¬rstly increases and then decreases, leading to a reverse-U shape.
Example 1: Reducing I(v1; v2) with spatial distance. We create views by randomly cropping two patches of size 64x64 from the same image with various offsets. Namely, one patch starts at position (x, y) while the other starts at (x + d, y + d), with (x, y) randomly generated. We increase d from 64 to 384, and sample patches from inside high resolution images in the DIV2K dataset [1]. After contrastive training stage, we evaluate on STL-10 and CIFAR-10 by freezing the encoder and training a linear classiï¬er. The plots in Fig. 3 shows the Mutual Information v.s. Accuracy. The results show that the reverse-U curve is consistent across both STL-10 and CIFAR-10. We can identify the sweet spot at d = 128. More details are provided in Appendix.
18 wDebr a6 Lab 4 uy Cys Zea z a guED 2332 Se [email protected] SOE AGREE) g RBGEEIED Bar 2° aaa S 28 gE ste Eos eave) B30 engi 764/ © color space 9% 29 | Le coler space oe 34 56 58 6.0 62 64 Ince 34 5.6 5.8 6.0 62 6.4 INCE (a) STL-10 classification (b) NYU-v2 Segmentation
Figure 4: We build views by splitting channels of different color spaces. As INCE decreases, the accuracy on downstream tasks (STL-10 classiï¬cation, NYU-v2 segmentation) improves.
Example 2: Reducing I(v1; v2) with different color spaces. The correlation between channels may vary signiï¬cantly across different color spaces. We follow [66, 80] to split each color space into two views, such as {Y, DbDr} and {R, GB}. We perform contrastive learning on STL-10, and measure the representation quality by linear classiï¬cation accuracy on the STL-10 and segmentation performance on NYU-V2 [51] images. As shown in Fig. 4, the downstream performance keeps increasing as INCE decreases for both classiï¬cation and segmentation. Here we do not observe the the left half of the reverse U-shape, but in Sec. 4.2 we will show a learning method that generates color spaces which reveal the full shape and touch the sweet spot.
5
oao aero g eid £0.25 we on = 63 oa âhe 7s aS â0.5 = 0.15 £03 63.04 2,520 4 5 im C04 § 624 £0.08 62.5 < 3 62.0 2 ao 61.54 ~~ Color Jittering âx-b.125 g -oi RandomResizedCrop âcl 5 5.6 5.7 5.8 5.9 60 'nce 5.7 5.8 5.9 6.0 6.1 62 nce (a) Color Jittering (b) Random Resized Crop
# Zz z
5 g <t 3% Z & £
Figure 5: The reverse U-shape traced out by parameters of individual augmentation functions.
# 3.4 Data Augmentation to Reduce Mutual Information between Views
Multiple views can also be generated through augmenting an input in different ways. We can unify several recent contrastive learning methods through the perspective of view generation: despite differences in architecture, objective, and engineering tricks, all recent contrastive learning methods create two views v1 and v2 that implicitly follow the InfoMin principle. Below, we consider several recent works in this framework:
InstDis [73] and MoCo [28]. These two methods create views by applying a stochastic data augmentation function twice to the same input: (1) sample an image X from the empirical distribution p(x); (2) sample two independent transformations t1, t2 from a distribution of data augmentation functions T; (3) let v1 = t1(X) and v2 = t2(X). CMC [66]. CMC further split images across color channels such that vcmc is the ï¬rst color channel of 1 v1, and vcmc ; vcmc ) ⤠I(v1; v2) is theoretically 2 guaranteed, and we observe that CMC performs better than InstDis. PIRL [49]. PIRL keeps vpirl to get vpirl
SimCLR [10]. Despite other engineering techniques and tricks, SimCLR uses a stronger class of augmentations Tâ, which leads to smaller mutual information between the two views than InstDis.
CPC [53]. Different from the above methods that create views at the image level, CPC gets views vcpc 1 , vcpc from local patches with strong data augmentation (e.g., RA [15]) which results in smaller I(vcpc 2 ). As in Sec. 3.3, cropping views from disjoint patches also reduces I(vcpc
Table 1: Single-crop ImageNet accuracies (%) of linear classiï¬ers [79] trained on representations learned with different contrastive methods using ResNet-50 [30]. InfoMin Aug. refers to data augmentation using RandomResizedCrop, Color Jittering, Gaussian Blur, RandAugment, Color Dropping, and a JigSaw branch as in PIRL [49]. * indicates splitting the network into two halves.
Method Architecture Param. Head Epochs Top-1 Top-5 InstDis [73] Local Agg. [83] CMC [66] MoCo [28] PIRL [49] CPC v2 [31] SimCLR [10] ResNet-50 ResNet-50 ResNet-50* ResNet-50 ResNet-50 ResNet-50 ResNet-50 24 24 12 24 24 24 24 Linear Linear Linear Linear Linear - MLP 200 200 240 200 800 - 1000 56.5 58.8 60.0 60.6 63.6 63.8 69.3 - - 82.3 - - 85.3 89.0 InfoMin Aug. (Ours) ResNet-50 InfoMin Aug. (Ours) ResNet-50 24 24 MLP MLP 200 800 70.1 73.0 89.4 91.1
Besides, we also analyze how changing the magnitude parameter of individual augmentation functions trances out reverse-U shapes. We consider RandomResizedCrop and Color Jittering. For the former, a parameter c sets a low-area cropping bound, and smaller c indicates stronger augmentation. For the latter, a parameter x is adopted to control the strengths. The plots on ImageNet [16] are shown in Fig. 5, where we identify a sweet spot at 1.0 for Color Jittering and 0.2 for RandomResizedCrop.
6
Table 2: Results of object detection and instance segmentation ï¬ne-tuned on COCO. We adopt Mask R-CNN R50-FPN, and report the bounding box AP and mask AP on val2017. In the brackets are the gaps to the ImageNet supervised pre-training counterpart. For fair comparison, InstDis [73], PIRL [49], MoCo [28], and InfoMin are all pre-trained for 200 epochs. In green are the gaps of at least +0.5 point.
pre-train random init. supervised InstDis [73] PIRL [49] MoCo [28] MoCo v2 [11] InfoMin Aug. APbb 32.8 39.7 38.8 (â0.9) 38.6 (â1.1) 39.4 (â0.3) 40.1 (+0.4) 40.6 (+0.9) APbb 50 50.9 59.5 58.4 (â1.1) 58.2 (â1.3) 59.1 (â0.4) 59.8 (+0.3) 60.6 (+1.1) APbb 75 35.3 43.3 42.5 (â0.8) 42.1 (â1.2) 42.9 (â0.4) 44.1 (+0.8) 44.6 (+1.3) APmk 29.9 35.9 35.2 (â0.7) 35.1 (â0.8) 35.6 (â0.3) 36.3 (+0.4) 36.7 (+0.8) APmk 50 47.9 56.6 55.8 (â0.8) 55.5 (â1.1) 56.2 (â0.4) 56.9 (+0.3) 57.7 (+1.1) APmk 75 32.0 38.6 37.8 (â0.8) 37.7 (â0.9) 38.0 (â0.6) 39.1 (+0.5) 39.4 (+0.8)
(b) Mask R-CNN, R50-FPN, 2x schedule
pre-train random init. supervised InstDis [73] PIRL [49] MoCo [28] MoCo v2 [11] InfoMin Aug. APbb 38.4 41.6 41.3 (â0.3) 41.2 (â0.4) 41.7 (+0.1) 41.7 (+0.1) 42.5 (+0.9) APbb 50 57.5 61.7 61.0 (â0.7) 61.2 (â0.5) 61.4 (â0.3) 61.6 (â0.1) 62.7 (+1.0) APbb 75 42.0 45.3 45.3 (+0.0) 45.2 (â0.1) 45.7 (+0.4) 45.6 (+0.3) 46.8 (+1.5) APmk 34.7 37.6 37.3 (â0.3) 37.4 (â0.2) 37.5 (â0.1) 37.6 (+0.0) 38.4( (+0.8) APmk 50 54.8 58.7 58.3 (â0.4) 58.5 (â0.2) 58.6 (â0.1) 58.7 (+0.0) 59.7 (+1.0) APmk 75 37.2 40.4 39.9 (â0.5) 40.3 (â0.1) 40.5 (+0.1) 40.5 (+0.1) 41.4 (+1.0)
Table 3: Pascal VOC object detection. All contrastive models are pretrained for 200 epochs on ImageNet for fair comparison. We use Faster R-CNN R50-C4 architecture for object detection. APs are reported using the average of 5 runs. * we use numbers from [28] since the setting is exactly the same.
pre-train random init.* supervised* InstDis PIRL MoCo* MoCo v2 InfoMin Aug. (ours) AP50 60.2 81.3 80.9 (-0.4) 81.0 (-0.3) 81.5 (+0.2) 82.4 (+1.1) 82.7 (+1.4) AP 33.8 53.5 55.2 (+1.7) 55.5 (+2.0) 55.9 (+2.4) 57.0 (+3.5) 57.6 (+4.1) AP75 33.1 58.8 61.2 (+2.4) 61.3 (+2.5) 62.6 (+3.8) 63.6 (+4.8) 64.6 (+5.8) ImageNet Acc(%) - 76.1 59.5 61.7 60.6 67.5 70.1
Motivated by the InfoMin principle, we propose a new set of data augmentation, called InfoMin Aug. In combination of the JigSaw strategy proposed in PIRL [49], our InfoMin Aug achieves 73.0% top-1 accuracy on ImageNet linear readout benchmark with ResNet-50, outperforming SimCLR [10] by nearly 4%, as shown in Table 1. Besides, we also found that transferring our unsupervisedly pre-trained models to PASCAL VOC object detection and COCO instance segmentation consistently outperforms supervised ImageNet pre-training. More details and results are in Appendix. One goal of unsupervised pre-training is to learn transferable representations that are beneï¬cial for downstream tasks. The rapid progress of many vision tasks in past years can be ascribed to the paradigm of ï¬ne-tuning models that are initialized from supervised pre-training on ImageNet. When transferring to PASCAL VOC [22] and COCO [44], we found our InfoMin pre-training consistently outperforms supervised pre-training as well as other unsupervised pre-training methods.
COCO Object Detection/Segmentation. Feature normalization has been shown to be important during ï¬ne-tuning [28]. Therefore, we ï¬ne-tune the backbone with Synchronized BN (SyncBN [56]) and add SyncBN to newly initialized layers (e.g., FPN [43]). Table 2 reports the bounding box AP and mask AP on val2017 on COCO, using the Mask R-CNN [29] R50-FPN pipeline. All results are reported on Detectron2 [71]. We have tried different popular detection frameworks with various backbones, extended the ï¬ne-tuning schedule (e.g., 6x schedule), and compared InfoMin ResNeXt-152 [74] trained on ImageNet-1k with supervised ResNeXt-152 trained on ImageNet-5k (6 times larger than ImageNet-1k). In all cases, InfoMin consistently outperforms supervised pre-training. Please see Section D for more detailed comparisons.
Pascal VOC Object Detection. We strictly follow the setting introduced in [28]. Speciï¬cally, We use Faster R-CNN [58] with R50-C4 architecture. We ï¬ne-tune all layers with 24000 iterations, each consisting of 16 images. The results are reported in Table 3.
7
vy v2 movi ; 2 digit + FE aoa
Figure 6: Illustration of the Colorful-Moving-MNIST dataset. In this example, the ï¬rst view v1 is a sequence of frames containing the moving digit, e.g., v1 = x1:k. The matched second view v+ 2 share some factor with xt that v1 can predict, while the unmatched view vâ
Table 4: We study how information shared by views I(v1; v2) would affect the representation quality, by evaluating on three downstream tasks: digit classiï¬cation, localization, and background (STL-10) classiï¬cation. Evaluation for contrastive methods is performed by freezing the backbone and training a linear task-speciï¬c head
Single Factor Multiple Factors I(v1; v2) digit bkgd pos bkgd, digit, pos bkgd, digit bkgd, pos digit, pos Supervised 16.8 88.6 57.9 88.8 88.2 88.8 14.5 3.4 13.6 16.1 3.95 16.2 16.3 15.9 13.7 0.93
# 4 Learning views for contrastive learning
Hand-designed data augmentation is an effective method for generating views that have reduced mutual information and strong transfer performance for images. However, as contrastive learning is applied to new domains, generating views through careful construction of data augmentation strategies may prove ineffective. Furthermore, the types of views that are useful depend on the downstream task. Here we show the task-dependence of optimal views on a simple toy problem, and propose an unsupervised and semi-supervised learning method to learn views from data.
# 4.1 Optimal Views Depend on the Downstream Task
To understand how the choice of views impact the representations learned by contrastive learning, we construct a toy dataset that mixes three tasks. We build our toy dataset by combining Moving- MNIST [64] (consisting of videos where digits move inside a black canvas with constant speed and bounce off of image boundaries), with a ï¬xed background image sampled from the STL-10 dataset [13]. We call this dataset Colorful Moving-MNIST, which consists of three factors of variation in each frame: the class of the digit, the position of the digit, and the class of background image (see Appendix for more details). Here we analyze how the choice of views impacts which of these factors are extracted by contrastive learning.
Setup. We ï¬x view v1 as the sequence of past frames x1:k. For simplicity, we consider v2 as a single image, and construct it by referring to frame xt(t>k). One example of visualization is shown in Fig. 10, and please refer to Appendix for more details. We consider 3 downstream tasks for an image: (1) predict the digit class; (2) localize the digit; (3) classify the background image (10 classes from STL-10). This is performed by freezing the backbone and training a linear task-speciï¬c head. We also provide a âsupervisedâ baseline that is trained end-to-end for comparison.
Single Factor Shared. We consider the case that v1 and v2 only share one of the three factors: digit, position, or background. We synthesize v2 by setting one of the three factors the same as xt but randomly picking the other two. In such cases, the mutual information I(v1; v2) is either about digit, position, or background. The results are summarized in Table 4, which clearly shows that the performance is signiï¬cantly affected by what is shared between v1 and v2. Speciï¬cally, if the downstream task is relevant to one factor, I(v1; v2) should include that factor rather than others.
8
s ae g x x > S35 e 2 85 Bre a om * * a © * © 5 wo? a* 5 rN Â¥ 8074 2% 4 & 80 < < Ss te Raw RGB & 9(Xyovor),. NVP 3 de Raw RGB % 9(Xyopor). NVP 75 ie Raw YDbDr ® â 9(Xrce), VP â1 754 -llk Raw YDbDr % 9(Xrcs), VP 5 @ 9(Xpos).NVP x glXyob07), VP i 9(Xrcs).NVP x glXvobor), VP 4.0 45 5.0 55 6.0 Ince 4.0 45 6.0 Ince 5.0 55 (a) unsupervised (b) semi-supervised
Figure 7: View generator learned by (a) unsupervised or (b) semi-supervised objectives.
For example, when v2 only shares background image with v1, contrastive learning can hardly learn representations that capture digit class and location.
Multiple Factors Shared. We further explore how representation quality is changed if v1 and v2 share multiple factors. We follow a similar procedure as above to control factors shared by v1 and v2, and present the results in Table 4. We found that one factor can overwhelm another; for instance, whenever background is shared, the latent representation leaves out information for discriminating or localizing digits. This might because the information bits of background predominates, and the encoder chooses the background as a âshortcutâ to solve the contrastive pre-training task. When v1 and v2 share digit and position, the former is preferred over the latter.
# 4.2 Synthesizing Views with Invertible Generators
In this section, we design unsupervised and semi-supervised methods that synthesize novel views following the InfoMin principle. Concretely, we extend the color space experiments in Sec. 3.3 by learning ï¬ow-based models [18, 17, 39] that transfer natural color spaces into novel color spaces, from which we split the channels to get views. We still call the output of ï¬ow-based models as color spaces because the ï¬ows are designed to be pixel-wise and bijective (by its nature), which follows the property of color space conversion. After the views have been learned, we perform standard contrastive learning followed by linear classiï¬er evaluation.
Practically, the ï¬ow-based model g is restricted to pixel-wise 1x1 convolutions and ReLU activations, operating independently on each pixel. We try both volume preserving (VP) and non-volume preserving (NVP) ï¬ows. For an input image X, the splitting over channels is represented as {X1, X2:3}. ËX signiï¬es the transformed image, i.e., ËX = g(X). Experiments are conducted on STL-10, which includes 100k unlabeled and 5k labeled images. More details are in Appendix.
# 4.2.1 Unsupervised View Learning: Minimize I(v1; v2)
The idea is to leverage an adversarial training strategy [25]. Given ËX = g(X), we train two encoders f1, f2 to maximize INCE( ËX1; ËX2:3) as in Eqn. 1, similar to the discriminator of GAN [25]. Meanwhile, g is adversarially trained to minimize INCE( ËX1; ËX2:3). Formally, the objective is:
min max Taf? (9(X)15.9(X)2:3) @) 9 fisfe
Alternatively, one may use other MI bounds [7, 57], but we ï¬nd INCE works well and keep using it. We note that the invertibility of g(·) prevent it from learning degenerate/trivial solutions.
Results. We experiment with RGB and YDbDr. As shown in Fig. 7(a), a reverse U-shape of INCE and downstream accuracy is present. Interestingly, YDbDr is already near the sweet spot. This happens to be in line with our human prior that the âluminance-chrominanceâ decomposition is a good way to decorrelate colors but still retains recognizability of objects. We also note that another luminance-chrominance decomposition Lab, which performs similarly well to YDbDr (Fig. 4), was designed to mimic the way humans perceive color [35]. Our analysis therefore suggests yet another rational explanation for why humans perceive color the way we do â human perception of color may be near optimal for self-supervised representation learning.
9
Table 5: Comparison of different view generators by measuring STL-10 classiï¬cation accuracy: supervised, un- supervised, and semi-supervised. â# of Imagesâ indicates how many images are used to learn view generators. In representation learning stage, all 105k images are used.
Method (# of Images) RGB YDbDr unsupervised (100k) supervised (5k) semi-supervised (105k) raw views 82.4 ± 3.2 79.9 ± 1.5 86.0 ± 0.6 81.5 ± 0.2 84.3 ± 0.5 78.5 ± 2.3 87.0 ± 0.3 86.6 ± 0.2
96 ~Câ RGB ZZ a(RGB) (I YDbbr Za o(YDbDr) gS > 94 YY £& 92 3 90 < 88 z 86 in) ResNet50 ResNet50x2
Figure 8: Switching to larger backbones with views learned by the semi-supervised method.
With this unsupervised objective, in most cases INCE between views is overly reduced. In addition, we found this GAN-style training is unstable, as different runs with the same hyper-parameter vary signiï¬cantly. We conjecture it is because the view generator has no knowledge about the downstream task, and thus the constraint I(v1, y) = I(v2, y) = I(x, y) in Proposition 3.1 is heavily broken. To overcome this, we further develop an semi-supervised view learning method.
# 4.2.2 Semi-supervised View Learning: Find Views that Share the Label Information
We assume a handful of labels for the downstream task are available. Thus we can guide the generator g to retain I(g(X)1, y) and I(g(X)2:3, y). Practically, we introduce two classiï¬ers on each of the learned views to perform classiï¬cation during the view learning process. Formally, we optimize:
nin, max Tf? (g(X)a3 g(X)asa) + Lee(er(g(X)1), y) + Lee(e2(g(X)2:3); 9) (3) ees fais yr ) unsupervised: reduce (v1; v2) supervised: keep I(v1; y) and I(v2; y)
where c1, c2 are the classiï¬ers. The INCE term applies to all data while the latter two are only for labeled data. In each iteration, we sample an unlabeled batch and a labeled batch. After this process is done, we use frozen g to generate views for unsupervised contrastive representation learning.
Results. The plots are shown in Figure 7(b). Now the learned views are centered around the sweet spot, no matter what the input color space is and whether the generator is VP or NVP, which highlights the importance of keeping information about y. Meanwhile, to see the importance of the unsupervised term, which reduces IN CE, we train another view generator with only supervised loss. We further compare âsupervisedâ, âunsupervisedâ and âsemi-supervisedâ (the supervised + unsupervised losses) generators in Table 5, where we also includes contrastive learning over the original color space (âraw views") as a baseline. The semi-supervised view generator signiï¬cantly outperforms the supervised one, validating the importance of reducing I(v1; v2). We compare further compare g(X) with X (X is RGB or YDbDr) on larger backbone networks, as shown in Fig. 8, We see that the learned views consistently outperform its raw input, e.g., g(RGB) surpasses RGB by a large margin and reaches 94% classiï¬cation accuracy.
# 5 Conclusion
We have characterized that good views for a given task in contrastive representation learning frame- work should retain task-relevant information while minimizing irrelevant nuisances, which we call InfoMin principle. Based on it, we demonstrate that optimal views are task-dependent in both theory and practice. We further propose a semi-supversied method to learn effective views for a given task. In addition, we analyze the data augmentation used in recent methods from the InfoMin perspective, and further propose a new set of data augmentation that achieved a new state-of-the-art top-1 accuracy on ImageNet linear readout benchmark with a ResNet-50.
10
# Broader Impact
This paper is on the basic science of representation learning, and we believe it will be beneï¬cial to both the theory and practice of this ï¬eld. An immediate application of self-supervised representation learning is to reduce the reliance on labeled data for downstream applications. This may have the beneï¬cial effects of being more cost effective and reducing biases introduced by human annotations. At the same time, these methods open up the ability to use uncurated data more effectively, and such data may hide errors and biases that would have been uncovered via the human curation process. We also note that the view constructions we propose are not bias free, even when they do not use labels: using one color space or another may hide or reveal different properties of the data. The choice of views therefore plays a similar role to the choice of training data and training annotations in traditional supervised learning.
# Acknowledgments and Disclosure of Funding
Acknowledgements. This work was done when Yonglong Tian was a student researcher at Google. We thank Kevin Murphy for fruitful and insightful discussion; Lucas Beyer for feedback on related work; and Google Cloud team for supporting computation resources. Yonglong is grateful to Zhoutong Zhang for encouragement and feedback on experimental design.
Funding. Funding for this project was provided Google, as part of Yonglong Tianâs role as a student researcher at Google.
Competing interests. In the past 36 months, Phillip Isola has had employment at MIT, Google, and OpenAI; honorarium for lecturing at the ACDL summer school in Italy; honorarium for speaking at GIST AI Day in South Korea. P.I.âs lab at MIT has been supported by grants from Facebook, IBM, and the US Air Force; start up funding from iFlyTech via MIT; gifts from Adobe and Google; compute credit donations from Google Cloud. Yonglong Tian is a Ph.D. student supported by MIT EECS department. Chen Sun, Ben Poole, Dilip Krishan, and Cordelia Schmid are employees at Google.
# References
[1] Eirikur Agustsson and Radu Timofte. Ntire 2017 challenge on single image super-resolution: Dataset and study. In CVPR Workshops, 2017. 5, 16
[2] Alexander A Alemi, Ian Fischer, Joshua V Dillon, and Kevin Murphy. Deep variational information bottleneck. arXiv:1612.00410, 2016. 2, 4
[3] Relja Arandjelovic and Andrew Zisserman. Objects that sound. In ECCV, 2018. 2 [4] Yuki M Asano, Christian Rupprecht, and Andrea Vedaldi. A critical analysis of self-supervision,
or what we can learn from a single image. In ICLR, 2020. 3
[5] Yuki Markus Asano, Christian Rupprecht, and Andrea Vedaldi. Self-labelling via simultaneous clustering and representation learning. In ICLR, 2020. 21
[6] Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. arXiv:1906.00910, 2019. 2, 3
[7] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeswar, Sherjil Ozair, Yoshua Ben- gio, Aaron Courville, and R Devon Hjelm. Mine: mutual information neural estimation. arXiv:1801.04062, 2018. 9
[8] Jorge Luis Borges. Funes, the memorious. na, 1962. 1 [9] Zhaowei Cai and Nuno Vasconcelos. Cascade r-cnn: Delving into high quality object detection.
In CVPR, 2018. 22
[10] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv:2002.05709, 2020. 2, 3, 6, 7, 19, 20, 21 [11] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum
contrastive learning. arXiv:2003.04297, 2020. 3, 7, 20, 21, 22
[12] Soo-Whan Chung, Joon Son Chung, and Hong-Goo Kang. Perfect match: Improved cross-modal embeddings for audio-visual synchronisation. In ICASSP, 2019. 2
11
[13] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsuper- vised feature learning. In AISTATS, 2011. 8
[14] Thomas M Cover and Joy A Thomas. Entropy, relative entropy and mutual information. Elements of information theory, 1991. 15
[15] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical data augmentation with no separate search. arXiv:1909.13719, 2019. 6, 19
[16] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In CVPR, 2009. 6
[17] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv:1410.8516, 2014. 9
[18] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv:1605.08803, 2016. 9
[19] Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015. 2, 21
[20] Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. In NeurIPS, 2019. 2, 21
[21] Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discrim- inative unsupervised feature learning with convolutional neural networks. In NIPS, 2014. 2, 21
[22] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The pascal visual object classes (voc) challenge. IJCV, 2010. 7
[23] Ian Fischer. The conditional entropy bottleneck. arXiv:2002.05379, 2020. 4 [24] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by
predicting image rotations. arXiv:1803.07728, 2018. 2, 21
[25] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014. 9, 18 [26] Daniel Gordon, Kiana Ehsani, Dieter Fox, and Ali Farhadi. Watching the world go by: Repre-
sentation learning from unlabeled videos. arXiv:2003.07990, 2020. 2
[27] Tengda Han, Weidi Xie, and Andrew Zisserman. Video representation learning by dense predictive coding. In ICCV Workshop, 2019. 2
[28] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. arXiv:1911.05722, 2019. 2, 3, 6, 7, 20, 21, 22 [29] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In ICCV, 2017. 7 [30] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image
recognition. In CVPR, 2016. 6
[31] Olivier J Hénaff, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-efï¬cient image recognition with contrastive predictive coding. arXiv:1905.09272, 2019. 2, 6, 21 [32] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In ICLR, 2019. 2, 16
[33] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 1997. 18
[34] Phillip Isola, Daniel Zoran, Dilip Krishnan, and Edward H. Adelson. Learning visual groups from co-occurrences in space and time. ICLR Workshop, 2016. 2
[35] Anil K Jain. Fundamentals of digital image processing. Englewood Cliffs, NJ: Prentice Hallâ 1989. 9
[36] Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv:2004.11362, 2020. 2
[37] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014. 18
12
[38] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv:1312.6114, 2013. 2
[39] Durk P Kingma and Prafulla Dhariwal. Glow: Generative ï¬ow with invertible 1x1 convolutions. In NIPS, 2018. 9
[40] Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual representation learning. In CVPR, 2019. 21
[41] Lingpeng Kong, Cyprien de Masson dâAutume, Lei Yu, Wang Ling, Zihang Dai, and Dani Yogatama. A mutual information maximization perspective of language representation learning. In ICLR, 2020. 2
[42] Tianhao Li and Limin Wang. Learning spatiotemporal features via video and text pair discrimi- nation. arXiv:2001.05691, 2020. 3
[43] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In CVPR, 2017. 7
[44] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014. 7
[45] Ralph Linsker. Self-organization in a perceptual network. Computer, 1988. 2
[46] Lajanugen Logeswaran and Honglak Lee. An efï¬cient framework for learning sentence repre- sentations. In ICLR, 2018. 2
[47] Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. End-to-end learning of visual representations from uncurated instructional videos. arXiv:1912.06430, 2019. 3
[48] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed repre- sentations of words and phrases and their compositionality. In NeurIPS, 2013. 2
[49] Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant repre- sentations. arXiv:1912.01991, 2019. 6, 7, 20, 21
[50] Pedro Morgado, Nuno Vasconcelos, and Ishan Misra. Audio-visual instance discrimination with cross-modal agreement. arXiv:2004.12943, 2020. 2
[51] Pushmeet Kohli Nathan Silberman, Derek Hoiem and Rob Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, 2012. 5
[52] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016. 2, 20, 21
[53] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv:1807.03748, 2018. 2, 3, 6
[54] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016. 2
[55] Mandela Patrick, Yuki M Asano, Ruth Fong, João F Henriques, Geoffrey Zweig, and Andrea Vedaldi. Multi-modal self-supervision from generalized data transformations. arXiv:2003.04298, 2020. 2
[56] Chao Peng, Tete Xiao, Zeming Li, Yuning Jiang, Xiangyu Zhang, Kai Jia, Gang Yu, and Jian Sun. Megdet: A large mini-batch object detector. In CVPR, 2018. 7
[57] Ben Poole, Sherjil Ozair, Aaron van den Oord, Alexander A Alemi, and George Tucker. On variational bounds of mutual information. arXiv:1905.06922, 2019. 2, 9
[58] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In NIPS, 2015. 7
[59] Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, Sergey Levine, and Google Brain. Time-contrastive networks: Self-supervised learning from video. In ICRA, 2018. 2
[60] Ohad Shamir, Sivan Sabato, and Naftali Tishby. Learning and generalization with the informa- tion bottleneck. Theoretical Computer Science, 2010. 4
13
[61] Stefano Soatto and Alessandro Chiuso. Visual representations: Deï¬ning properties and deep approximations. In ICLR, 2016. 2, 3
[62] Kihyuk Sohn. Improved deep metric learning with multi-class n-pair loss objective. In NIPS, 2016. 2
[63] Aravind Srinivas, Michael Laskin, and Pieter Abbeel. Curl: Contrastive unsupervised represen- tations for reinforcement learning. arXiv:2004.04136, 2020. 2
[64] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using lstms. In ICML, 2015. 8, 17
[65] Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. Contrastive bidirectional transformer for temporal representation learning. arXiv:1906.05743, 2019. 3
[66] Yonglong Tian, Dilip Krishnan, and Phillip Isola. arXiv:1906.05849, 2019. 2, 3, 4, 5, 6, 16, 17, 20, 21 Contrastive multiview coding.
[67] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive representation distillation. In ICLR, 2020. 2
[68] Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000. 2, 4
[69] Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Neil Houlsby, Syl- vain Gelly, and Mario Lucic. Self-supervised learning of video-induced visual invariances. arXiv:1912.02783, 2019. 2
[70] Michael Tschannen, Josip Djolonga, Paul K Rubenstein, Sylvain Gelly, and Mario Lucic. On mutual information maximization for representation learning. arXiv:1907.13625, 2019. 2 [71] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2.
https://github.com/facebookresearch/detectron2, 2019. 7
[72] Zhirong Wu, Alexei A Efros, and Stella X Yu. Improving generalization via scalable neighbor- hood component analysis. In ECCV, 2018. 2
[73] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, 2018. 2, 3, 6, 7, 19, 20, 21
[74] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In CVPR, 2017. 7
[75] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS, 2019. 2
[76] Mang Ye, Xu Zhang, Pong C Yuen, and Shih-Fu Chang. Unsupervised embedding learning via invariant and spreading instance feature. In CVPR, 2019. 2
[77] Xiaohua Zhai, Avital Oliver, Alexander Kolesnikov, and Lucas Beyer. S4l: Self-supervised semi-supervised learning. In ICCV, 2019. 2
[78] Liheng Zhang, Guo-Jun Qi, Liqiang Wang, and Jiebo Luo. Aet vs. aed: Unsupervised represen- tation learning by auto-encoding transformations rather than data. In CVPR, 2019. 2
[79] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In ECCV, 2016. 6, 21
[80] Richard Zhang, Phillip Isola, and Alexei A Efros. Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In CVPR, 2017. 2, 5
[81] Nanxuan Zhao, Zhirong Wu, Rynson WH Lau, and Stephen Lin. Distilling localization for self-supervised representation learning. arXiv:2004.06638, 2020. 2
[82] Chengxu Zhuang, Alex Andonian, and Daniel Yamins. Unsupervised learning from video with deep neural embeddings. arXiv:1905.11954, 2019. 2
[83] Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. Local aggregation for unsupervised learning of visual embeddings. arXiv:1903.12355, 2019. 2, 6, 21
14
# Appendix: What Makes for Good Views for Contrastive Learning?
# A Proof of Proposition 3.1
In this section, we provide proof for the statement regarding optimal views in proposition 3.1 of the main text. As a warmup, we ï¬rstly recap some properties of mutual information.
# A.1 Properties of MI [14]:
(1) Nonnegativity:
I(x; y) ⥠0; I(x; y|z) ⥠0
(2) Chain Rule:
I(x; y, z) = I(x; y) + I(x; z|y)
(2) Multivariate Mutual Information:
I(x1; x2; ...; xn+1) = I(x1; ...; xn) â I(x1; ...; xn|xn+1)
# A.2 Proof
Proposition A.1. According to Proposition 1, the optimal views vâ views such that I(vâ 1; vâ 2) = I(vâ 1; y) = I(vâ 2; y) = I(x; y) 1, vâ 2 for task T with label y, are
Proof. Since I(v1; y) = I(v2; y) = I(x; y), and v1, v2 are functions of x.
I(y; x) = I(y; v1, v2) = I(y; v1) + I(y; v2|v1) = I(y; x) + I(y; v2|v1)
Therefore I(y; v2|v1) = 0, due to the nonnegativity. Then we have:
I(v1; v2) = I(v1; v2) + I(y; v2|v1) = I(v2; v1, y) = I(v2; y) + I(v2; v1|y) ⥠I(v2; y) = I(x; y)
Therefore the optimal views vâ 1; vâ I(vâ y, as now I(vâ 1, vâ 2 that minimizes I(v1; v2) subject to the constraint yields 2 are conditionally independent given 2) = I(x; y). Also note that optimal views vâ 1, vâ 2; vâ 1|y) = 0.
Proposition A.2. Given optimal views vâ 2 and minimal sufï¬cient encoders f1, f2, then the learned representations z1 (or z2) are sufï¬cient statistic of v1 (or v2) for y, i.e., I(z1; y) = I(v1; y) or I(z2; y) = I(v2; y).
Proof. Letâs prove for z1. Since z1 is a function of v1, we have:
I(y; v1) = I(y; v1, z1) = I(y; z1) + I(y; v1|z1)
15
To prove I(y; v1) = I(y; z1), we need to prove I(y; v1|z1) = 0.
y:vi) â Ilys vis 21) YÂ¥3 V1; V2) + I(y;vilv2) â L(y; v3 21) Â¥3 Vi; V2) + I(y;vilva) â (L(y; vis 213 v2) + T(y; v1; 21/2) y; vilv2) + [T(y; vi; v2) â I(y3 vi; 21; v2)] â L(y; vi; 21[v2) y; vilv2) + I(y; v1; v2lz1) â I(y; vi; 21|v2) y;vilv2) + I(y;vi; v2|z1) + L(y; za|vi, v2) âI(y; z1{v2) 0 (y; vilv2) + L(y; v1; valz1) = I(y;vilv2) + I(v1; v2|z1) â I(vi; valy, 21) 0 = I(y;vilv2) + I(v1; v2|z1)
In the above derivation I(y; z1|v1, v2) = 0 because z1 is a function of v1; I(v1; v2|y, z1) = 0 because optimal views v1, v2 are conditional independent given y, see Proposition A.1. Now, we can easily prove I(y; v1|v2) = 0 following a similar procedure in Proposition A.1. If we can further prove I(v1; v2|z1) = 0, then we get I(y; v1|z1) ⤠0. By nonnegativity, we will have I(y; v1|z1) = 0.
To see I(v1; v2|z1) = 0, recall that our encoders are sufï¬cient. According to Deï¬nition 1, we have I(v1; v2) = I(v2; z1):
T(v1; V2|21) = I(v1; v2) â T(v1; v25 21) = I(v1; v2) â I(v2;21) + I(va; z1|v1) See 0 0
Proposition A.3. The representations z1 and z2 are also minimal for y.
Proof. For all sufï¬cient encoders, we have proved z1 are sufï¬cient statistic of v1 for predicting y. Namely I(v1; y|z1) = 0. Now:
T(a13 v1) = T(z; vily) + T(zaivisy) = T(za;vily) + T(vasy) â Lvisylzs) YS 0 = I (21; vily) +T(visy) >I(visy)
The minimal sufï¬cient encoder will minimize I(z1; v1) to I(v1; y). This is achievable and leads to I(z1; v1|y) = 0. Therefore, z1 is a minimal sufï¬cient statistic for predicting y, thus optimal. Similarly, z2 is also optimal.
# B Implementation Details
# B.1 Spatial Patches with Different Distance
Why using DIV2K [1]? Recall that we randomly sample patches with a distance of d. During such sampling process, there is a possible bias that with an image of relatively small size (e.g., 512x512), a large d (e.g., 384) will always push these two patches around the boundary. To minimize this bias, we choose to use high resolution images (e.g. 2k) from DIV2K dataset.
Setup and Training. We use the training framework of CMC [66]. The backbone network is a tiny AlexNet, following [32, 66]. We train for 3000 epochs, with the learning rate initialized as 0.03 and decayed with cosine annealing.
16
Evaluation. We evaluate the learned representation on both STL-10 and CIFAR-10 datasets. For CIFAR-10, we resize the image to 64Ã64 to extract features. The linear classiï¬er is trained for 100 epochs.
# B.2 Channel Splitting with Various Color Spaces
Setup and Training. The backbone network is also a tiny AlexNet, with the modiï¬cation of adapting the ï¬rst layer to input of 1 or 2 channels. We follow the training recipe in [66].
Evaluation. For the evaluation on STL-10 dataset, we train a linear classiï¬er for 100 epochs and report the single-crop classiï¬cation accuracy. For NYU-Depth-v2 segmentation task, we freeze the backbone network and train a 4-layer decoder on top of the learned representations. We report the mean IoU for labeled classes.
B.3 Reducing I(v1; v2) with Frequency Separation
3 e Bluro oo 1.25 #0 © Bluro 28 300 125 p68 159 os 2.09, os %5 ~ ° 2 38.5 38.0 2.09 2 ° 37.5 %4 x ° 37.0 76.0 %.35 * 755 03 3. 35 7.10 7.15 7.20 7.25 7.30 7.35 7.40 7.45 7.50 7.10 7.15 7.20 7.25 7.30 7.35 7.40 7.45 7.50 Ince Ince (a) STL-10 classification (b) Tiny ImageNet classification STL-10 Accuracy (%) 3 Tiny ImageNet Accuracy (%) sf a
Figure 9: We create views by splitting images into low- and high-frequency pairs with a blur function parameterized by Ï. IN CE is maximized at Ï = 0.7. Starting from this point, either increasing or decreasing Ï will reduce IN CE but interestingly they form two different trajectories. When increasing Ï from 0.7, the accuracy ï¬rstly improves and then drops, forming a reverse-U shape corresponding to (a) in Figure 2 of the main paper. While decreasing Ï from 0.7, the accuracy keeps diminishing, corresponding to (b) in Figure 2 of the main paper.
Another example we consider is to separate images into low- and high-frequency images. To simplify, we extract v1 and v2 by Gaussian blur, i.e.,
# v1 = Blur(x, Ï) v2 = x â v1
where Blur is the Gaussian blur function and Ï is the parameter controlling the kernel. Extremely small or large Ï can make the high- or low-frequency image contain little information. In theory, the maximal I(v1; v2) is obtained with some intermediate Ï. As shown in Figure 9, we found Ï = 0.7 leads to the maximal IN CE on the STL-10 dataset. Either blurring more or less will reduce IN CE, but interestingly blurring more leads to different trajectory in the plot than blurring less. When increasing Ï from 0.7, the accuracy ï¬rstly improves and then drops, forming a reverse-U shape with a sweet spot at Ï = 1.0. This situation corresponds to (a) in Figure 2 of the main paper. While decreasing Ï from 0.7, the accuracy keeps diminishing, corresponding to (b) in Figure 2 of the main paper. This reminds us of the two aspects in Proposition 3.1: mutual information is not the whole story; what information is shared between the two views also matters.
Setup and Training. The setup is almost the same as that in color channel splitting experiments, except that each view consists of three input channels. We follow the training recipe in [66].
Evaluation. We train a linear classiï¬er for 100 epochs on STL-10 dataset and 40 epochs on TinyImageNet dataset.
# B.4 Colorful Moving MNIST
Dataset. Following the original Moving MNIST dataset [64], we use a canvas of size 64Ã64, which contains a digit of size 28Ã28. The back ground image is a random crop from original STL-10
17
vy v2 position Z Xe a 2 digit - ie kad . 2 Ps â¢~âs g c-moving MNIST
Figure 10: Illustration of the Colorful-Moving-MNIST dataset. In this example, the ï¬rst view v1 is a sequence of frames containing the moving digit, e.g., v1 = x1:k. The matched second view v+ 2 share some factor with xt that v1 can predict, while the unmatched view vâ
images (96Ã96). The starting position of the digit is uniformly sampled inside the canvas. The direction of the moving velocity is uniformly sampled in [0, 2Ï], while the magnitude is kept as 0.1 of the canvas size. When the digit touches the boundary, the velocity is reï¬ected.
Setup. We use the ï¬rst 10 frames as v1 (namely k = 10), and we construct v2 by referring to the 20-th frame (namely t = 20). During the contrastive learning phase, we employ a 4-layer ConvNet to encode images and use a single layer LSTM [33] on top of the ConvNet to aggregate features of continuous frames. The CNN backbone consists of 4 layers with 8, 16, 32, 64 ï¬lters from low to high. Average pooling is applied after the last convolutional layer, resulting in a 64 dimensional representation. The dimensions of the hidden layer and output in LSTM are both 64.
Examples. The examples of v1 and v2 are shown in Figure 10, where the three rows on the RHS shows cases that only a single factor (digit, position, or background) is shared.
Training. We perform intra-batch contrast. Namely, inside each batch of size 128, we contrast each sample with the other 127 samples. We train for 200 epochs, with the learning rate initialized as 0.03 and decayed with cosine annealing.
# B.5 Un-/Semi-supervised View Learning
Xi ~_ X (+) Y; ® Q © x ~O Y> Xo G) Y2 (a) Volume-Preserving (b) Volume-Preserving
Figure 11: Volume-preserving (a), and none volume-preserving (b) invertible model.
Invertible Generator. Figure 11 shows the basic building block for the Volume-Preserving (VP) and None-Volume-Preserving (NVP) invertible view generator. The F and G are pixel-wise convolutional function, i.e., convolutional layers with 1Ã1 kernel. X1 and Y1 represent a single channel of the input and output respectively, while X2 and Y2 represent the other two channels. While stacking basic building blocks, we alternatively select the ï¬rst, second, and the third channel as X1, to enhance the expressivity of view generator.
Setup and Training. For unsupervised view learning that only uses the adversarial IN CE loss, we found the training is relatively unstable, as also observed in GAN [25]. We found the learning rate of view generator should be larger than that of IN CE approximator. Concretely, we use Adam optimizer [37], and we set the learning rates of view generator and IN CE approximator as 2e-4 and 6e-4, respectively. For the semi-supervised view learning, we found the training is stable across different learning rate combinations, which we considered as an advantage. To be fair, we still use the same learning rates for both view generator and IN CE approximator.
Contrastive Learning and Evaluation. After the view learning stage, we perform contrastive learning and evaluation by following the recipe in Section B.2.
18
# C Data Augmentation as InfoMin
# InfoMin Augmentation
BACB Bac PyTorch-style data augmentation a & RandomResizedCrop(scale=(0.2, 1.0)) #)Blur RandomHorizontalFlip () BA ee lh stââ se â£)(2.0) cj = Colorditter([0.8,0.8,0.8,0.4]«x) RandomApply([cj], p=0.8) # Blur: random blurring &)(0.5) blur = Blur(sigma=(0.1,2.0)) RandomApply([blur], p=0.5) (£3(0|25) # RA: RandAugment rnd_augment () RandomGrayscale (p=0.2), a 8 a ImageNet Accuracy (%) wo 6s w ty a By) 4 Augmentation CQ.125) 96 98 100 102 104 106 Ince (a) Ince v. s Accuracy (b) Data Augmentation
Figure 12: (a) data augmentation as InfoMin on ImageNet with linear projection head; (b) illustration of step-by-step data augmentation used in InfoMin.
InfoMin Aug. We gradually strengthen the family of data augmentation functions T, and plot the trend between accuracy in downstream linear evaluation benchmarks and IN CE. The overall results are shown in Figure 12(a), where the plot is generated by only varying data augmentation while keeping all other settings ï¬xed. We consider Color Jittering with various strengths, Gaussian Blur, RandAugment [15], and their combinations, as illustrated in Figure 12(b). The results suggest that as we reduce IN CE(v1; v1), via stronger T (in theory, I(v1; v1) also decreases), the downstream accuracy keeps improving.
# C.2 Analysis of Data Augmentation as it relates to MI and Transfer Performance
We also investigate how sliding the strength parameter of individual augmentation functions leads to a practical reverse-U curves, as shown in Figures 13 and 14.
(a) Linear projection head (b) MLP projection head
Figure 13: Different low-area cropping bounds in RandomResizedCrop.
In PyTorch, the RandomResizedCrop(scale=(c, 1.0)) data augmentation Cropping. function sets a low-area cropping bound c. Smaller c means more aggressive data augmentation. We vary c for both a linear critic head [73] (with temperature 0.07) and nonlinear critic head [10] (with temperature 0.15), as shown in Figure 13. In both cases, decreasing c forms a reverse-U shape between IN CE and linear classiï¬cation accuracy, with a sweet spot at c = 0.2. This is different from the widely used 0.08 in the supervised learning setting. Using 0.08 can lead to more than 1% drop in accuracy compared to the optimal 0.2 when a nonlinear projection head is applied.
Color Jittering. As shown in Figure 12(b), we adopt a parameter x to control the strengths of color jittering function. As shown in Figure 14, increasing x from 0.125 to 2.5 also traces a reverse-U
19
(a) Linear projection head (b) MLP projection head
=
< 2
=
Figure 14: Different magnitudes of Color Jittering.
shape, no matter whether a linear or nonlinear projection head is used. The sweet spot lies around x = 1.0, which is the same value as used in SimCLR [10]. Practically, we see the accuracy is more sensitive around the sweet spot for the nonlinear projection head, which also happens for cropping. This implies that it is important to ï¬nd the sweet spot for future design of augmentation functions.
Details. These plots are based on the MoCo [28] framework. We use 65536 negatives and pre-train for 100 epochs on 8 GPUs with a batch size of 256. The learning rate starts as 0.03 and decays following a cosine annealing schedule. For the downstream task of linear evaluation, we train the linear classiï¬er for 60 epochs with an initial learning rate of 30, following [66].
# C.3 Results on ImageNet Benchmark
On top of the âRA-CJ-Blurâ augmentations shown in Figure 12, we further reduce the mutual information (or enhance the invariance) of views by using PIRL [49], i.e., adding JigSaw [52]. This improves the accuracy of the linear classiï¬er from 63.6% to 65.9%. Replacing the widely-used linear projection head [73, 66, 28] with a 2-layer MLP [10] increases the accuracy to 67.3%. When using this nonlinear projection head, we found a larger temperature is beneï¬cial for downstream linear readout (as also reported in [11]). All these numbers are obtained with 100 epochs of pre-training. For simplicity, we call such unsupervised pre-training as InfoMin pre-training (i.e., pre-training with our InfoMin inspired augmentation). As shown in Table 6, our InfoMin model trained with 200 epochs achieves 70.1%, outperforming SimCLR with 1000 epochs. Finally, a new state-of-the-art, 73.0% is obtained by training for 800 epochs. Compared to SimCLR requiring 128 TPUs for large batch training, our model can be trained with as less as 4 GPUs on a single machine.
For future improvement, there is still room for manually designing better data augmentation. As shown in Figure 12(a), using âRA-CJ-Blurâ has not touched the sweet spot yet. Another way to is to learn to synthesize better views (augmentations) by following (and expanding) the idea of semi-supervised view learning method presented in Section 4.2.2 of the main paper.
Different Architectures. We further include the performance of InfoMin as well as other SoTA methods with different architectures in Table 6. Increasing the network capacity leads to signiï¬cant improvement of linear readout performance on ImageNet for InfoMin, which is consistent with previous literature [66, 28, 10, 49].
# C.4 Comparing with SoTA in Transfer Learning
# D Transfer Learning with Various Backbones and Detectors on COCO
We evaluated the transferability of various models pre-trained with InfoMin, under different detection frameworks and ï¬ne-tuning schedules. In all cases we tested, models pre-trained with InfoMin outperform those pre-trained with supervised cross-entropy loss. Interestingly, ResNeXt-152 trained with InfoMin on ImageNet-1K beats its supervised counterpart trained on ImageNet 5K, which is 6x times larger. Bounding box AP and mask Ap are reported on val2017
20
Table 6: Single-crop ImageNet accuracies (%) of linear classiï¬ers [79] trained on representations learned with different methods using various architectures.
Method Architecture Param. Head Epochs Top-1 Methods using contrastive learning: InstDis [73] Local Agg. [83] CPC v2 [31] CMC [66] CMC [66] CMC [66] MoCo [28] MoCo [28] MoCo [28] PIRL [49] PIRL [49] SimCLR [10] SimCLR [10] SimCLR [10] MoCo V2 [11] InfoMin Aug. InfoMin Aug. InfoMin Aug. InfoMin Aug. InfoMin Aug. InfoMin Aug. InfoMin Aug. ResNet-50 ResNet-50 ResNet-50 2x ResNet-50(0.5x) 2x ResNet-50(1x) 2x ResNet-50(2x) ResNet-50 ResNet-50 (2x) ResNet-50 (4x) ResNet-50 ResNet-50 (2x) ResNet-50 ResNet-50 (2x) ResNet-50 (4x) ResNet-50 ResNet-50 ResNet-50 ResNet-50 ResNet-101 ResNet-152 ResNeXt-101 ResNeXt-152 24 24 24 12 47 188 24 94 375 24 94 24 94 375 24 24 24 24 43 58 87 120 Linear Linear - Linear Linear Linear Linear Linear Linear Linear Linear MLP MLP MLP MLP MLP MLP MLP MLP MLP MLP MLP 200 200 - 240 240 240 200 200 200 800 800 1000 1000 1000 800 100 200 800 300 200 200 200 56.5 58.8 63.8 60.0 66.2 70.6 60.6 65.4 68.6 63.6 67.4 69.3 74.2 76.5 71.1 67.4 70.1 73.0 73.4 73.4 74.5 75.2 Top-5 - - 85.3 82.3 87.0 89.7 - - - - - - - - - 87.9 89.4 91.1 - - - -
# D.1 ResNet-50 with Mask R-CNN, C4 architecture
The results of Mask R-CNN with R-50 C4 backbone are shown in Table 7. We experimented with 1x and 2x schedule.
# D.2 ResNet-50 with Mask R-CNN, FPN architecture
The results of Mask R-CNN with R-50 FPN backbone are shown in Table 8. We compared with MoCo [28] and MoCo v2 [11] under 2x schedule, and also experimented with 6x schedule.
# D.3 ResNet-101 with Mask R-CNN, C4 architecture
The results of Mask R-CNN with R-101 C4 backbone are shown in Table 9. We experimented with 1x and 1x schedule.
# D.4 ResNet-101 with Mask R-CNN, FPN architecture
The results of Mask R-CNN with R-101 FPN backbone are shown in Table 10. We experimented with 1x, 2x, and 6x schedule.
21
Table 7: COCO object detection and instance segmentation. R50-C4. In the brackets are the gaps to the ImageNet supervised pre-training counterpart. In green are gaps of ⥠0.5 point. * numbers are from [28] since we use exactly the same ï¬ne-tuning setting.
(a) Mask R-CNN, R50-C4, 1x schedule
pre-train random init* supervised* MoCo* InfoMin Aug. APbb 26.4 38.2 38.5(â0.3) 39.0(â0.8) APbb 50 44.0 58.2 58.3(â0.1) 58.5(â0.3) APbb 75 27.8 41.2 41.6(â0.4) 42.0(â0.8) APmk 29.3 33.3 33.6(â0.1) 34.1(â0.8) APmk 50 46.9 54.7 54.8(â0.1) 55.2(â0.5) APmk 75 30.8 35.2 35.6(â0.1) 36.3(â1.1)
(b) Mask R-CNN, R50-C4, 2x schedule
pre-train random init* supervised* MoCo* InfoMin Aug. APbb 35.6 40.0 40.7(â0.7) 41.3(â1.3) APbb 50 54.6 59.9 60.5(â0.6) 61.2(â1.3) APbb 75 38.2 43.1 44.1(â1.0) 45.0(â1.9) APmk 31.4 34.7 35.6(â0.7) 36.0(â1.3) APmk 50 51.5 56.5 57.4(â0.8) 57.9(â1.4) APmk 75 33.5 36.9 38.1(â0.7) 38.3(â1.4)
Table 8: COCO object detection and instance segmentation. R50-FPN. In the brackets are the gaps to the ImageNet supervised pre-training counterpart. In green are gaps of ⥠0.5 point. (a) Mask R-CNN, R50-FPN, 2x schedule
pre-train random init supervised MoCo [28] MoCo v2 [11] InfoMin Aug. APbb 38.4 41.6 41.7(â0.1) 41.7(â0.1) 42.5(â0.9) APbb 50 57.5 61.7 61.4(â0.3) 61.6(â0.1) 62.7(â1.0) APbb 75 42.0 45.3 45.7(â0.4) 45.6(â0.3) 46.8(â1.5) APmk 34.7 37.6 37.5(â0.1) 37.6(â0.0) 38.4(â0.8) APmk 50 54.8 58.7 58.6(â0.1) 58.7(â0.0) 59.7(â1.0) APmk 75 37.2 40.4 40.5(â0.1) 40.5(â0.1) 41.4(â1.0)
(b) Mask R-CNN, R50-FPN, 6x schedule
pre-train random init supervised InfoMin Aug. APbb 42.7 42.6 43.6(â1.0) APbb 50 62.6 62.4 63.6(â1.2) APbb 75 46.7 46.5 47.3(â0.8) APmk 38.6 38.5 39.2(â0.7) APmk 50 59.9 59.9 60.6(â0.7) APmk 75 41.6 41.5 42.3(â0.8)
# D.5 ResNet-101 with Cascade Mask R-CNN, FPN architecture
The results of Cascade [9] Mask R-CNN with R-101 FPN backbone are shown in Table 11. We experimented with 1x, 2x, and 6x schedule.
# D.6 ResNeXt-101 with Mask R-CNN, FPN architecture
The results of Mask R-CNN with X-101 FPN backbone are shown in Table 12. We experimented with 1x and 2x schedule.
# D.7 ResNeXt-152 with Mask R-CNN, FPN architecture
The results of Mask R-CNN with X-152 FPN backbone are shown in Table 13. We experimented with 1x schedule.. Note in this case, while InfoMin model is pre-trained on the standard ImageNet-1K dataset, supervised model is pre-trained on ImageNet-5K, which is 6x times larger than ImageNet-1K. That said, we found InfoMin still outperforms the supervised pre-training.
22
Table 9: COCO object detection and instance segmentation. R101-C4. In the brackets are the gaps to the ImageNet supervised pre-training counterpart.
# (a) Mask R-CNN, R101-C4, 1x schedule
pre-train supervised InfoMin Aug. APbb 40.9 42.5(â1.6) APbb 50 60.6 62.1(â1.5) APbb 75 44.2 46.1(â1.9) APmk 35.1 36.7(â1.6) APmk 50 56.8 58.7(â1.9) APmk 75 37.3 39.2(â1.9)
(b) Mask R-CNN, R101-C4, 2x schedule
pre-train supervised InfoMin Aug. APbb 42.5 43.9(â1.4) APbb 50 62.3 63.5(â1.2) APbb 75 46.1 47.5(â1.4) APmk 36.4 37.8(â1.4) APmk 50 58.7 60.4(â1.7) APmk 75 38.7 40.2(â1.5)
Table 10: COCO object detection and instance segmentation. R101-FPN. In the brackets are the gaps to the ImageNet supervised pre-training counterpart.
# (a) Mask R-CNN, R101-FPN, 1x schedule
pre-train supervised InfoMin Aug. APbb 42.0 42.9(â0.9) APbb 50 62.3 62.6(â0.3) APbb 75 46.0 47.2(â1.2) APmk 37.6 38.6(â1.0) APmk 50 59.1 59.7(â0.6) APmk 75 40.1 41.6(â1.5)
(b) Mask R-CNN, R101-FPN, 2x schedule
pre-train supervised InfoMin Aug. APbb 43.3 44.5(â1.2) APbb 50 63.3 64.4(â1.1) APbb 75 47.1 48.8(â1.7) APmk 38.8 39.9(â1.1) APmk 50 60.1 61.5(â1.4) APmk 75 42.1 42.9(â0.8)
(c) Mask R-CNN, R101-FPN, 6x schedule
pre-train supervised InfoMin Aug. APbb 44.1 45.3(â1.2) APbb 50 63.7 65.0(â1.3) APbb 75 48.0 49.3(â1.3) APmk 39.5 40.5(â1.0) APmk 50 61.0 62.5(â1.5) APmk 75 42.4 43.7(â1.3)
# E Change Log
arXiv v1 Initial release.
arXiv v2 Paper accepted to NeurIPS 2020. Updated to the camera ready version
arXiv v3 Included more details in disclosure of funding.
23
Table 11: COCO object detection and instance segmentation. Cascade R101-FPN. In the brackets are the gaps to the ImageNet supervised pre-training counterpart.
# (a) Cascade Mask R-CNN, R101-FPN, 1x schedule
pre-train supervised InfoMin Aug. APbb 44.9 45.8(â0.9) APbb 50 62.3 63.1(â0.8) APbb 75 48.8 49.5(â0.7) APmk 38.8 39.6(â0.8) APmk 50 59.9 60.4(â0.5) APmk 75 42.0 42.9(â0.9)
(b) Cascade Mask R-CNN, R101-FPN, 2x schedule
pre-train supervised InfoMin Aug. APbb 45.9 47.3(â1.4) APbb 50 63.4 64.6(â1.2) APbb 75 49.7 51.5(â1.8) APmk 39.8 40.9(â1.1) APmk 50 60.9 62.1(â1.2) APmk 75 43.0 44.6(â1.6)
(c) Cascade Mask R-CNN, R101-FPN, 6x schedule
pre-train supervised InfoMin Aug. APbb 46.6 48.2(â1.6) APbb 50 64.0 65.8(â1.8) APbb 75 50.6 52.7(â2.1) APmk 40.5 41.8(â1.3) APmk 50 61.9 63.5(â1.6) APmk 75 44.1 45.6(â1.5)
Table 12: COCO object detection and instance segmentation. X101-FPN. In the brackets are the gaps to the ImageNet supervised pre-training counterpart.
# (a) Mask R-CNN, X101-FPN, 1x schedule
pre-train supervised InfoMin Aug. APbb 44.1 45.0(â0.9) APbb 50 64.8 65.3(â0.5) APbb 75 48.3 49.5(â1.2) APmk 39.3 40.1(â0.8) APmk 50 61.5 62.3(â0.8) APmk 75 42.3 43.1(â0.8)
(b) Mask R-CNN, X101-FPN, 2x schedule
pre-train supervised InfoMin Aug. APbb 44.6 45.4(â0.8) APbb 50 64.4 65.3(â0.9) APbb 75 49.0 49.6(â0.6) APmk 39.8 40.5(â0.7) APmk 50 61.6 62.5(â0.9) APmk 75 43.0 43.8(â0.8)
Table 13: COCO object detection and instance segmentation. X152-FPN. In the brackets are the gaps to the ImageNet supervised pre-training counterpart. Supervised model is pre-trained on ImageNet-5K, while InfoMin model is only pre-trained on ImageNet-1K.
(a) Mask R-CNN, X152-FPN, 1x schedule
pre-train supervised InfoMin Aug. APbb 45.6 46.4(â0.8) APbb 50 65.7 66.5(â0.8) APbb 75 50.1 50.8(â0.7) APmk 40.6 41.3(â0.7) APmk 50 63.0 63.6(â0.6) APmk 75 43.5 44.4(â0.9)
24 | {
"id": "2002.05709"
} |
2005.08926 | Neural Controlled Differential Equations for Irregular Time Series | Neural ordinary differential equations are an attractive option for modelling
temporal dynamics. However, a fundamental issue is that the solution to an
ordinary differential equation is determined by its initial condition, and
there is no mechanism for adjusting the trajectory based on subsequent
observations. Here, we demonstrate how this may be resolved through the
well-understood mathematics of \emph{controlled differential equations}. The
resulting \emph{neural controlled differential equation} model is directly
applicable to the general setting of partially-observed irregularly-sampled
multivariate time series, and (unlike previous work on this problem) it may
utilise memory-efficient adjoint-based backpropagation even across
observations. We demonstrate that our model achieves state-of-the-art
performance against similar (ODE or RNN based) models in empirical studies on a
range of datasets. Finally we provide theoretical results demonstrating
universal approximation, and that our model subsumes alternative ODE models. | http://arxiv.org/pdf/2005.08926 | Patrick Kidger, James Morrill, James Foster, Terry Lyons | cs.LG, stat.ML | Accepted at NeurIPS 2020 (Spotlight) | null | cs.LG | 20200518 | 20201105 | # ee eee
0 2 0 2
# v o N 5
] G L . s c [ 2 v 6 2 9 8 0 . 5 0 0 2 : v i X r a
# Neural Controlled Differential Equations for Irregular Time Series
# Patrick Kidger James Morrill James Foster Terry Lyons
Mathematical Institute, University of Oxford The Alan Turing Institute, British Library {kidger, morrill, foster, tlyons}@maths.ox.ac.uk
# Abstract
Neural ordinary differential equations are an attractive option for modelling temporal dynamics. However, a fundamental issue is that the solution to an ordinary differential equation is determined by its initial condition, and there is no mechanism for adjusting the trajectory based on subsequent observations. Here, we demonstrate how this may be resolved through the well-understood mathematics of controlled differential equations. The resulting neural controlled differential equation model is directly applicable to the general setting of partially- observed irregularly-sampled multivariate time series, and (unlike previous work on this problem) it may utilise memory-efï¬cient adjoint-based backpropagation even across observations. We demonstrate that our model achieves state-of-the-art performance against similar (ODE or RNN based) models in empirical studies on a range of datasets. Finally we provide theoretical results demonstrating universal approximation, and that our model subsumes alternative ODE models.
# 1 Introduction
Recurrent neural networks (RNN) are a popular choice of model for sequential data, such as a time series. The data itself is often assumed to be a sequence of observations from an underlying process, and the RNN may be interpreted as a discrete approximation to some function of this process. Indeed the connection between RNNs and dynamical systems is well-known [1, 2, 3, 4].
However this discretisation typically breaks down if the data is irregularly sampled or partially observed, and the issue is often papered over by binning or imputing data [5].
A more elegant approach is to appreciate that because the underlying process develops in continuous time, so should our models. For example [6, 7, 8, 9] incorporate exponential decay between observations, [10, 11] hybridise a Gaussian process with traditional neural network models, [12] approximate the underlying continuous-time process, and [13, 14] adapt recurrent neural networks by allowing some hidden state to evolve as an ODE. It is this last one that is of most interest to us here.
# 1.1 Neural ordinary differential equations
Neural ordinary differential equations (Neural ODEs) [3, 15], seek to approximate a map x 7â y by learning a function fθ and linear maps â1 θ such that t
y â â1 θ(zT ), where zt = z0 + fθ(zs)ds and z0 = â2 θ(x). (1)
0 Z Note that fθ does not depend explicitly on s; if desired this can be included as an extra dimension in zs [15, Appendix B.2].
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
Neural ODEs are an elegant concept. They provide an interface between machine learning and the other dominant modelling paradigm that is differential equations. Doing so allows for the well- understood tools of that ï¬eld to be applied. Neural ODEs also interact beautifully with the manifold hypothesis, as they describe a ï¬ow along which to evolve the data manifold.
This description has not yet involved sequential data such as time series. The t dimension in equation (1) was introduced and then integrated over, and is just an internal detail of the model.
However the presence of this extra (artiï¬cial) dimension motivates the question of whether this model can be extended to sequential data such as time series. Given some ordered data (x0, . . . , xn), the goal is to extend the z0 = â2 θ(x) condition of equation (1) to a condition resembling âz0 = â(x0), . . . , zn = â(xn)â, to align the introduced t dimension with the natural ordering of the data. The key difï¬culty is that equation (1) deï¬nes an ordinary differential equation; once θ has been learnt, then the solution of equation (1) is determined by the initial condition at z0, and there is no direct mechanism for incorporating data that arrives later [4].
However, it turns out that the resolution of this issue â how to incorporate incoming information â is already a well-studied problem in mathematics, in the ï¬eld of rough analysis, which is concerned with the study of controlled differential equations.1 See for example [16, 17, 18, 19]. An excellent introduction is [20]. A comprehensive textbook is [21].
We will not assume familiarity with either controlled differential equations or rough analysis. The only concept we will rely on that may be unfamiliar is that of a RiemannâStieltjes integral.
# 1.2 Contributions
We demonstrate how controlled differential equations may extend the Neural ODE model, which we refer to as the neural controlled differential equation (Neural CDE) model. Just as Neural ODEs are the continuous analogue of a ResNet, the Neural CDE is the continuous analogue of an RNN.
The Neural CDE model has three key features. One, it is capable of processing incoming data, which may be both irregularly sampled and partially observed. Two (and unlike previous work on this problem) the model may be trained with memory-efï¬cient adjoint-based backpropagation even across observations. Three, it demonstrates state-of-the-art performance against similar (ODE or RNN based) models, which we show in empirical studies on the CharacterTrajectories, PhysioNet sepsis prediction, and Speech Commands datasets.
We provide additional theoretical results showing that our model is a universal approximator, and that it subsumes apparently-similar ODE models in which the vector ï¬eld depends directly upon continuous data.
Our code is available at https://github.com/patrick-kidger/NeuralCDE. We have also released a library torchcde, at https://github.com/patrick-kidger/torchcde
# 2 Background
Let Ï, T â R with Ï < T , and let v, w â N. Let X : [Ï, T ] â Rv be a continuous function of bounded variation; for example this is implied by X being Lipschitz. Let ζ â Rw. Let f : Rw â RwÃv be continuous. Then we may deï¬ne a continuous path z : [Ï, T ] â Rw by zÏ = ζ and
t zt = zÏ + f (zs)dXs for t â (Ï, T ], Ï (2)
# Z
where the integral is a RiemannâStieltjes integral. As f (zs) â RwÃv and Xs â Rv, the notation âf (zs)dXsâ refers to matrix-vector multiplication. The subscript notation refers to function evaluation, for example as is common in stochastic calculus.
Equation (2) exhibits global existence and uniqueness subject to global Lipschitz conditions on f ; see [20, Theorem 1.3]. We say that equation (2) is a controlled differential equation (CDE) which is controlled or driven by X.
1Not to be confused with the similarly-named but separate ï¬eld of control theory.
2
Hidden state z Hidden state z Path X x1 x2 t1 t2 x3 t3 · · · xn tn Data x Time x1 x2 t1 t2 x3 t3 · · · xn tn Data x Time
Figure 1: Some data process is observed at times t1, . . . , tn to give observations x1, . . . , xn. It is otherwise unobserved. Left: Previous work has typically modiï¬ed hidden state at each observation, and perhaps continuously evolved the hidden state between observations. Right: In contrast, the hidden state of the Neural CDE model has continuous dependence on the observed data.
# 3 Method
Suppose for simplicity that we have a fully-observed but potentially irregularly sampled time series x = ((t0, x0), (t1, x1), . . . , (tn, xn)), with each ti â R the timestamp of the observation xi â Rv, and t0 < · · · < tn. (We will consider partially-observed data later.) Let X : [t0, tn] â Rv+1 be the natural cubic spline with knots at t0, . . . , tn such that Xti = (xi, ti). As x is often assumed to be a discretisation of an underlying process, observed only through x, then X is an approximation to this underlying process. Natural cubic splines have essentially the minimum regularity for handling certain edge cases; see Appendix A for the technical details. Let fθ : Rw â RwÃ(v+1) be any neural network model depending on parameters θ. The value w is a hyperparameter describing the size of the hidden state. Let ζθ : Rv+1 â Rw be any neural network model depending on parameters θ.
Then we deï¬ne the neural controlled differential equation model as the solution of the CDE
# t
zt = zt0 + fθ(zs)dXs for t â (t0, tn], (3) t0
# Z
where zt0 = ζθ(x0, t0). This initial condition is used to avoid translational invariance. Analogous to RNNs, the output of the model may either be taken to be the evolving process z, or the terminal value ztn, and the ï¬nal prediction should typically be given by a linear map applied to this output.
The resemblance between equations (1) and (3) is clear. The essential difference is that equation (3) is driven by the data process X, whilst equation (1) is driven only by the identity function ι : R â R. In this way, the Neural CDE is naturally adapting to incoming data, as changes in X change the local dynamics of the system. See Figure 1.
# 3.1 Universal Approximation
It is a famous theorem in CDEs that in some sense they represent general functions on streams [22, Theorem 4.2], [23, Proposition A.6]. This may be applied to show that Neural CDEs are universal approximators, which we summarise in the following informal statement. Theorem (Informal). The action of a linear map on the terminal value of a Neural CDE is a universal approximator from {sequences in Rv} to R.
Theorem B.14 in Appendix B gives a formal statement and a proof, which is somewhat technical. The essential idea is that CDEs may be used to approximate bases of functions on path space.
# 3.2 Evaluating the Neural CDE model
Evaluating the Neural CDE model is straightforward. In our formulation above, X is in fact not just of bounded variation but is differentiable. In this case, we may deï¬ne
gθ,X(z, s) = fθ(z) dX ds (s), (4)
3
so that for t â (t0, tn],
# t
# t
# t
zt = zt0 + t0 fθ(zs)dXs = zt0 + t0 fθ(zs) dX ds (s)ds = zt0 + t0 gθ,X(zs, s)ds. (5)
# Z
# Z
# Z
Thus it is possible to solve the Neural CDE using the same techniques as for Neural ODEs. In our experiments, we were able to straightforwardly use the already-existing torchdiffeq package [24] without modiï¬cation.
# 3.3 Comparison to alternative ODE models
For the reader not familiar with CDEs, it might instead seem more natural to replace gθ,X with some hθ(z, Xs) that is directly applied to and potentially nonlinear in Xs. Indeed, such approaches have been suggested before, in particular to derive a âGRU-ODEâ analogous to a GRU [14, 25].
However, it turns out that something is lost by doing so, which we summarise in the following statement. Theorem (Informal). Any equation of the form zt = z0 + exactly by a Neural CDE of the form zt = z0 + not true.
R Theorem C.1 in Appendix C provides the formal statement and proof. The essential idea is that a Neural CDE can easily represent the identity function between paths, whilst the alternative cannot.
In our experiments, we ï¬nd that the Neural CDE substantially outperforms the GRU-ODE, which we speculate is a consequence of this result.
# 3.4 Training via the adjoint method
An attractive part of Neural ODEs is the ability to train via adjoint backpropagation, see [15, 26, 27, 28], which uses only O(H) memory in the time horizon L = tn â t0 and the memory footprint H of the vector ï¬eld. This is contrast to directly backpropagating through the operations of an ODE solver, which requires O(LH) memory.
Previous work on Neural ODEs for time series, for example [13], has interrupted the ODE to make updates at each observation. Adjoint-based backpropagation cannot be performed across the jump, so this once again requires O(LH) memory.
In contrast, the gθ,X deï¬ned by equation (4) continuously incorporates incoming data, without interrupting the differential equation, and so adjoint backpropagation may be performed. This requires only O(H) memory. The underlying data unavoidably uses an additional O(L) memory. Thus training the Neural CDE has an overall memory footprint of just O(L + H).
We do remark that the adjoint method should be used with care, as some systems are not stable to evaluate in both the forward and backward directions [29, 30]. The problem of ï¬nite-time blow-up is at least not a concern, given global Lipschitz conditions on the vector ï¬eld [20, Theorem 1.3]. Such a condition will be satisï¬ed if fθ uses ReLU or tanh nonlinearities, for example.
# 3.5 Intensity as a channel
It has been observed that the frequency of observations may carry information [6]. For example, doctors may take more frequent measurements of patients they believe to be at greater risk. Some previous work has for example incorporated this information by learning an intensity function [12, 13, 15].
We instead present a simple non-learnt procedure, that is compatible with Neural CDEs. Simply concatenate the index i of xi together with xi, and then construct a path X from the pair (i, xi), as before. The channel of X corresponding to these indices then corresponds to the cumulative intensity of observations.
As the derivative of X is what is then used when evaluating the Neural CDE model, as in equation (5), then it is the intensity itself that then determines the vector ï¬eld.
4
# 3.6 Partially observed data
One advantage of our formulation is that it naturally adapts to the case of partially observed data. Each channel may independently be interpolated between observations to deï¬ne X in exactly the same manner as before.
In this case, the procedure for measuring observational intensity in Section 3.5 may be adjusted by instead having a separate observational intensity channel ci for each original channel oi, such that ci increments every time an observation is made in oi.
# 3.7 Batching
Given a batch of training samples with observation times drawn from the same interval [t0, tn], we may interpolate each x to produce a continuous X, as already described. Each path X is what may then be batched together, regardless of whether the underlying data is irregularly sampled or partially observed. Batching is thus efï¬cient for the Neural CDE model.
# 4 Experiments
We benchmark the Neural CDE against a variety of existing models.
These are: GRU-ât, which is a GRU with the time difference between observations additionally used as an input; GRU-D [6], which modiï¬es the GRU-ât with learnt exponential decays between observations; GRU-ODE [14, 25], which is an ODE analogous to the operation of a GRU and uses X as its input; ODE-RNN [13], which is a GRU-ât model which additionally applies a learnt Neural ODE to the hidden state between observations. Every model then used a learnt linear map from the ï¬nal hidden state to the output, and was trained with cross entropy or binary cross entropy loss.
The GRU-ât represents a straightforward baseline, the GRU-ODE is an alternative ODE model that is thematically similar to a Neural CDE, and the GRU-D and ODE-RNNs are state-of-the-art models for these types of problems. To avoid unreasonably extensive comparisons we have chosen to focus on demonstrating superiority within the class of ODE and RNN based models to which the Neural CDE belongs. These models were selected to collectively be representative of this class.
Each model is run ï¬ve times, and we report the mean and standard deviation of the test metrics.
For every problem, the hyperparameters were chosen by performing a grid search to optimise the performance of the baseline ODE-RNN model. Equivalent hyperparameters were then used for every other model, adjusted slightly so that every model has a comparable number of parameters.
Precise experimental details may be found in Appendix D, regarding normalisation, architectures, activation functions, optimisation, hyperparameters, regularisation, and so on.
# 4.1 Varying amounts of missing data on CharacterTrajectories
We begin by demonstrating the efï¬cacy of Neural CDEs on irregularly sampled time series.
To do this, we consider the CharacterTrajectories dataset from the UEA time series classiï¬cation archive [31]. This is a dataset of 2858 time series, each of length 182, consisting of the x, y position and pen tip force whilst writing a Latin alphabet character in a single stroke. The goal is to classify which of 20 different characters are written.
We run three experiments, in which we drop either 30%, 50% or 70% of the data. The observations to drop are selected uniformly at random and independently for each time series. Observations are removed across channels, so that the resulting dataset is irregularly sampled but completely observed. The randomly removed data is the same for every model and every repeat.
The results are shown in Table 1. The Neural CDE outperforms every other model considered, and furthermore it does so whilst using an order of magnitude less memory. The GRU-ODE does consistently poorly despite being the most theoretically similar model to a Neural CDE. Furthermore we see that even as the fraction of dropped data increases, the performance of the Neural CDE remains roughly constant, whilst the other models all start to decrease.
Further experimental details may be found in Appendix D.2.
5
Table 1: Test accuracy (mean ± std, computed across ï¬ve runs) and memory usage on CharacterTrajectories. Memory usage is independent of repeats and of amount of data dropped.
Model Test Accuracy 30% dropped 50% dropped 70% dropped Memory usage (MB) GRU-ODE GRU-ât GRU-D ODE-RNN Neural CDE (ours) 92.6% ± 1.6% 86.7% ± 3.9% 89.9% ± 3.7% 93.6% ± 2.0% 91.3% ± 2.1% 90.4% ± 0.8% 94.2% ± 2.1% 90.2% ± 4.8% 91.9% ± 1.7% 95.4% ± 0.6% 96.0% ± 0.3% 95.3% ± 0.6% 98.7% ± 0.8% 98.8% ± 0.2% 98.6% ± 0.4% 1.5 15.8 17.0 14.8 1.3
Table 2: Test AUC (mean ± std, computed across ï¬ve runs) and memory usage on PhysioNet sepsis prediction. âOIâ refers to the inclusion of observational intensity, âNo OIâ means without it. Memory usage is independent of repeats.
Model Test AUC Memory usage (MB) OI No OI OI No OI GRU-ODE GRU-ât GRU-D ODE-RNN Neural CDE (ours) 0.852 ± 0.010 0.878 ± 0.006 0.871 ± 0.022 0.874 ± 0.016 0.880 ± 0.006 0.771 ± 0.024 0.840 ± 0.007 0.850 ± 0.013 0.833 ± 0.020 0.776 ± 0.009 454 837 889 696 244 273 826 878 686 122
# 4.2 Observational intensity with PhysioNet sepsis prediction
Next we consider a dataset that is both irregularly sampled and partially observed, and investigate the beneï¬ts of observational intensity as discussed in Sections 3.5 and 3.6.
We use data from the PhysioNet 2019 challenge on sepsis prediction [32, 33]. This is a dataset of 40335 time series of variable length, describing the stay of patients within an ICU. Measurements are made of 5 static features such as age, and 34 time-dependent features such as respiration rate or creatinine concentration in the blood, down to an hourly resolution. Most values are missing; only 10.3% of values are observed. We consider the ï¬rst 72 hours of a patientâs stay, and consider the binary classiï¬cation problem of predicting whether they develop sepsis over the course of their entire stay (which is as long as a month for some patients).
We run two experiments, one with observational intensity, and one without. For the Neural CDE and GRU-ODE models, observational intensity is continuous and on a per-channel basis as described in Section 3.6. For the ODE-RNN, GRU-D, and GRU-ât models, observational intensity is given by appending an observed/not-observed mask to the input at each observation.23 The initial hidden state of every model is taken to be a function (a small single hidden layer neural network) of the static features.
The results are shown in Table 2. As the dataset is highly imbalanced (5% positive rate), we report AUC rather than accuracy. When observational intensity is used, then the Neural CDE produces the best AUC overall, although the ODE-RNN and GRU-ât models both perform well. The GRU-ODE continues to perform poorly.
Without observational intensity then every model performs substantially worse, and in particular we see that the beneï¬t of including observational intensity is particularly dramatic for the Neural CDE.
As before, the Neural CDE remains the most memory-efï¬cient model considered.
Further experimental details can be found in Appendix D.3.
2As our proposed observational intensity goes via a derivative, these each contain the same information. 3Note that the ODE-RNN, GRU-D and GRU-ât models always receive the time difference between observations, ât, as an input. Thus even in the no observational intensity case, they remain aware of the irregular sampling of the data, and so this case not completely fair to the Neural CDE and GRU-ODE models.
6
# 4.3 Regular time series with Speech Commands
Finally we demonstrate the efï¬cacy of Neural CDE models on regularly spaced, fully observed time series, where we might hypothesise that the baseline models will do better.
We used the Speech Commands dataset [34]. This consists of one-second audio recordings of both background noise and spoken words such as âleftâ, ârightâ, and so on. We used 34975 time series corresponding to ten spoken words so as to produce a balanced classiï¬cation problem. We preprocess the dataset by computing mel-frequency cepstrum coefï¬cients so that each time series is then regularly spaced with length 161 and 20 channels.
Table 3: Test Accuracy (mean ± std, computed across ï¬ve runs) and memory usage on Speech Commands. Memory usage is independent of repeats.
Memory usage (GB) 0.164 1.54 1.64 1.40 Test Accuracy Model 47.9% ± 2.9% 43.3% ± 33.9% 32.4% ± 34.8% 65.9% ± 35.6% 89.8% ± 2.5% GRU-ODE GRU-ât GRU-D ODE-RNN 0.167 Neural CDE (ours)
The results are shown in Table 3. We observed that the Neural CDE had the highest performance, whilst using very little memory. The GRU-ODE consistently failed to perform. The other benchmark models surprised us by exhibiting a large variance on this problem, due to sometimes failing to train, and we were unable to resolve this by tweaking the optimiser. The best GRU-ât, GRU-D and ODE-RNN models did match the performance of the Neural CDE, suggesting that on a regularly spaced problem all approaches can be made to work equally well.
In contrast, the Neural CDE model produced consistently good results every time. Anecdotally this aligns with what we observed over the course of all of our experiments, which is that the Neural CDE model usually trained quickly, and was robust to choice of optimisation hyperparameters. We stress that we did not perform a formal investigation of this phenomenen.
Further experimental details can be found in Appendix D.4.
# 5 Related work
In [13, 14] the authors consider interrupting a Neural ODE with updates from a recurrent cell at each observation, and were in fact the inspiration for this paper. Earlier work [6, 7, 8, 9] use intra-observation exponential decays, which are a special case. [35] consider something similar by interrupting a Neural ODE with stochastic events.
SDEs and CDEs are closely related, and several authors have introduced Neural SDEs. [36, 37, 38] treat them as generative models for time series and seek to model the data distribution. [39, 40] investigate using stochasticity as a regularizer, and demonstrate better performance by doing so. [41] use random vector ï¬elds so as to promote simpler trajectories, but do not use the âSDEâ terminology.
Adjoint backpropagation needs some work to apply to SDEs, and so [42, 43, 44] all propose methods for training Neural SDEs. We would particularly like to highlight the elegant approach of [44], who use the pathwise treatment given by rough analysis to approximate Brownian noise, and thus produce a random Neural ODE which may be trained in the usual way; such approaches may also avoid the poor convergence rates of SDE solvers as compared to ODE solvers.
Other elements of the theory of rough analysis and CDEs have also found machine learning applications. Amongst others, [23, 45, 46, 47, 48, 49, ?, ?] discuss applications of the signature transform to time series problems, and [50] investigate the related logsignature transform. [51] develop a kernel for time series using this methodology, and [52] apply this kernel to Gaussian processes. [53] develop software for these approaches tailored for machine learning.
There has been a range of work seeking to improve Neural ODEs. [54, 55] investigate speed- ups to the training proecedure, [56] develop an energy-based Neural ODE framework, and [29] [30, 57] consider ways to vary the demonstrate potential pitfalls with adjoint backpropagation. network parameters over time. [55, 58] consider how a Neural ODE model may be regularised (see also the stochastic regularisation discussed above). This provides a wide variety of techniques, and we are hopeful that some of them may additionally carry over to the Neural CDE case.
7
# 6 Discussion
# 6.1 Considerations
There are two key elements of the Neural CDE construction which are subtle, but important.
Time as a channel CDEs exhibit a tree-like invariance property [18]. What this means, roughly, is that a CDE is blind to speed at which X is traversed. Thus merely setting Xti = xi would not be enough, as time information is only incorporated via the parameterisation. This is why time is explicitly included as a channel via Xti = (xi, ti).
Initial value networks The initial hidden state zt0 should depend on Xt0 = (x0, t0). Otherwise, the Neural CDE will depend upon X only through its derivative dX/dt, and so will be translationally invariant. An alternative would be to append another channel whose ï¬rst derivative includes translation-sensitive information, for example by setting Xti = (xi, ti, tix0).
# 6.2 Performance tricks
We make certain (somewhat anecdotal) observations of tricks that seemed to help performance.
Final tanh nonlinearity We found it beneï¬cial to use a tanh as a ï¬nal nonlinearity for the vector ï¬eld fθ of a Neural CDE model. Doing so helps prevent extremely large initial losses, as the tanh constrains the rate of change of the hidden state. This is analogous to RNNs, where the key feature of GRUs and LSTMs are procedures to constrain the rate of change of the hidden state.
Layer-wise learning rates We found it beneï¬cial to use a larger (Ã10â100) learning rate for the linear layer on top of the output of the Neural CDE, than for the vector ï¬eld fθ of the Neural CDE itself. This was inspired by the observation that the ï¬nal linear layer has (in isolation) only a convex optimisation problem to solve.4
# 6.3 Limitations
Speed of computation We found that Neural CDEs were typically slightly faster to compute than the ODE-RNN model of [13]. (This is likely to be because in an Neural CDE, steps of the numerical solver can be made across observations, whilst the ODE-RNN must interrupt its solve at each observation.)
However, Neural CDEs were still roughly ï¬ves times slower than RNN models. We believe this is largely an implementation issue, as the implementation via torchdiffeq is in Python, and by default uses double-precision arithmetic with variable step size solvers, which we suspect is unnecessary for most practical tasks.
If the vector ï¬eld fθ : Rw â RwÃ(v+1) is a feedforward neural network, Number of parameters with ï¬nal hidden layer of size Ï, then the number of scalars for the ï¬nal afï¬ne transformation is of size O(Ïvw), which can easily be very large. In our experiments we have to choose small values of w and Ï for the Neural CDE to ensure that the number of parameters is the same across models.
We did experiment with representing the ï¬nal linear layer as an outer product of transformations Rw â Rw and Rw â Rv+1. This implies that the resulting matrix is rank-one, and reduces the number of parameters to just O(Ï(v + w)), but unfortunately we found that this hindered the classiï¬cation performance of the model.
# 6.4 Future work
Vector ï¬eld design The vector ï¬elds fθ that we consider are feedforward networks. More sophisticated choices may allow for improved performance, in particular to overcome the trilinearity issue just discussed.
4In our experiments we applied this learning rate to the linear layer on top of every model, not the just the Neural CDE, to ensure a fair comparison.
8
Modelling uncertainty As presented here, Neural CDEs do not give any measure of uncertainty about their predictions. Such extensions are likely to be possible, given the close links between CDEs and SDEs, and existing work on Neural SDEs [36, 37, 38, 39, 40, 42, 43, 44, ?].
Numerical schemes In this paper we integrated the Neural CDE by reducing it to an ODE. The ï¬eld of numerical CDEs is relatively small â to the best of our knowledge [17, 59, 60, 61, 62, 63, 64, 65, 66] constitute essentially the entire ï¬eld, and are largely restricted to rough controls. Other numerical methods may be able to exploit the CDE structure to improve performance. Choice of X Natural cubic splines were used to construct the path X from the time series x. However, these are not causal. That is, Xt depends upon the value of xi for t < ti. This makes it infeasible to apply Neural CDEs in real-time settings, for which Xt is needed before xi has been observed. Resolving this particular issue is a topic on which we have follow-up work planned.
Other problem types Our experiments here involved only classiï¬cation problems. There was no real reason for this choice, and we expect Neural CDEs to be applicable more broadly.
# 6.5 Related theories
Rough path theory The ï¬eld of rough path theory, which deals with the study of CDEs, is much larger than the small slice that we have used here. It is likely that further applications may serve to improve Neural CDEs. A particular focus of rough path theory is how to treat functions that must be sensitive to the order of events in a particular (continuous) way.
Control theory Despite their similar names, and consideration of similar-looking problems, control theory and controlled differential equations are essentially separate ï¬elds. Control theory has clear links and applications that may prove beneï¬cial to models of this type.
RNN theory Neural CDEs may be interpreted as continuous-time versions of RNNs. CDEs thus offer a theoretical construction through which RNNs may perhaps be better understood. Conversely, what is known about RNNs may have applications to improve Neural CDEs.
# 7 Conclusion
We have introduced a new class of continuous-time time series models, Neural CDEs. Just as Neural ODEs are the continuous analogue of ResNets, the Neural CDE is the continuous time analogue of an RNN. The model has three key advantages: it operates directly on irregularly sampled and partially observed multivariate time series, it demonstrates state-of-the-art performance, and it beneï¬ts from memory-efï¬cient adjoint-based backpropagation even across observations. To the best of our knowledge, no other model combines these three features together. We also provide additional theoretical results demonstrating universal approximation, and that Neural CDEs subsume alternative ODE models.
# Broader Impact
We have introduced a new tool for studying irregular time series. As with any tool, it may be used in both positive and negative ways. The authors have a particular interest in electronic health records (an important example of irregularly sampled time-stamped data) and so here at least we hope and expect to see a positive impact from this work. We do not expect any speciï¬c negative impacts from this work.
# Acknowledgments and Disclosure of Funding
Thanks to Cristopher Salvi for many vigorous discussions on this topic. PK was supported by the EPSRC grant EP/L015811/1. JM was supported by the EPSRC grant EP/L015803/1 in collaboration with Iterex Therapuetics. JF was supported by the EPSRC grant EP/N509711/1. PK, JM, JF, TL were supported by the Alan Turing Institute under the EPSRC grant EP/N510129/1.
9
# References
[1] K.-i. Funahashi and Y. Nakamura, âApproximation of dynamical systems by continuous time recurrent neural networks,â Neural Networks, vol. 6, no. 6, pp. 801 â 806, 1993.
[2] C. Bailer-Jones, D. MacKay, and P. Withers, âA recurrent neural network for modelling dynamical systems,â Network: Computation in Neural Systems, vol. 9, pp. 531â47, 1998.
[3] W. E, âA Proposal on Machine Learning via Dynamical Systems,â Commun. Math. Stat., vol. 5, no. 1, pp. 1â11, 2017.
[4] M. Ciccone, M. Gallieri, J. Masci, C. Osendorfer, and F. Gomez, âNAIS-Net: Stable Deep Networks from Non-Autonomous Differential Equations,â in Advances in Neural Information Processing Systems 31, pp. 3025â3035, Curran Associates, Inc., 2018.
[5] A. Gelman and J. Hill, Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2007.
[6] Z. Che, S. Purushotham, K. Cho, D. Sontag, and Y. Liu, âRecurrent Neural Networks for Multivariate Time Series with Missing Values,â Scientiï¬c Reports, vol. 8, no. 1, p. 6085, 2018.
[7] W. Cao, D. Wang, J. Li, H. Zhou, L. Li, and Y. Li, âBRITS: Bidirectional Recurrent Imputation for Time Series,â in Advances in Neural Information Processing Systems 31 (S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, eds.), pp. 6775â6785, Curran Associates, Inc., 2018.
[8] H. Mei and J. M. Eisner, âThe Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process,â in Advances in Neural Information Processing Systems 30 (I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, eds.), pp. 6754â6764, Curran Associates, Inc., 2017.
[9] M. Mozer, D. Kazakov, and R. Lindsey, âDiscrete Event, Continuous Time RNNs,â arXiv:1710.04110, 2017.
[10] S. C.-X. Li and B. M. Marlin, âA scalable end-to-end Gaussian process adapter for irregularly sampled time series classiï¬cation,â in Advances in Neural Information Processing Systems, pp. 1804â1812, 2016.
[11] J. Futoma, S. Hariharan, and K. Heller, âLearning to Detect Sepsis with a Multitask Gaussian Process RNN Classiï¬er,â in Proceedings of the 34th International Conference on Machine Learning, pp. 1174â 1182, 2017.
[12] S. N. Shukla and B. Marlin, âInterpolation-Prediction Networks for Irregularly Sampled Time Series,â in International Conference on Learning Representations, 2019.
[13] Y. Rubanova, T. Q. Chen, and D. K. Duvenaud, âLatent Ordinary Differential Equations for Irregularly- Sampled Time Series,â in Advances in Neural Information Processing Systems 32, pp. 5320â5330, Curran Associates, Inc., 2019.
[14] E. De Brouwer, J. Simm, A. Arany, and Y. Moreau, âGRU-ODE-Bayes: Continuous Modeling of Sporadically-Observed Time Series,â in Advances in Neural Information Processing Systems 32, pp. 7379â7390, Curran Associates, Inc., 2019.
[15] R. T. Q. Chen, Y. Rubanova, J. Bettencourt, and D. K. Duvenaud, âNeural Ordinary Differential Equations,â in Advances in Neural Information Processing Systems 31, pp. 6571â6583, Curran Associates, Inc., 2018.
[16] T. J. Lyons, âDifferential equations driven by rough signals,â Revista Matem´atica Iberoamericana, vol. 14, no. 2, pp. 215â310, 1998.
[17] T. Lyons, âRough paths, signatures and the modelling of functions on streams,â arXiv:1405.4537, 2014.
[18] B. M. Hambly and T. J. Lyons, âUniqueness for the signature of a path of bounded variation and the reduced path group,â Annals of Mathematics, vol. 171, no. 1, pp. 109â167, 2010.
[19] I. Chevyrev and H. Oberhauser, âSignature moments to characterize laws of stochastic processes,â arXiv:1810.10971, 2018.
[20] T. Lyons, M. Caruana, and T. Levy, Differential equations driven by rough paths. Springer, 2004. ´Ecole
dâ ´Et´e de Probabilit´es de Saint-Flour XXXIV - 2004.
[21] P. K. Friz and N. B. Victoir, âMultidimensional stochastic processes as rough paths: theory and applications,â Cambridge University Press, 2010.
[22] I. Perez Arribas, âDerivatives pricing using signature payoffs,â arXiv:1809.09466, 2018.
[23] P. Bonnier, P. Kidger, I. Perez Arribas, C. Salvi, and T. Lyons, âDeep Signature Transforms,â in Advances in Neural Information Processing Systems, pp. 3099â3109, 2019.
[24] R. T. Q. Chen, âtorchdiffeq,â 2018. https://github.com/rtqichen/torchdiffeq.
10
[25] I. Jordan, P. A. Sokol, and I. M. Park, âGated recurrent units viewed through the lens of continuous time dynamical systems,â arXiv:1906.01005, 2019.
[26] L. S. Pontryagin, E. F. Mishchenko, V. G. Boltyanskii, and R. V. Gamkrelidze, âThe mathematical theory of optimal processes,â 1962.
[27] M. B. Giles and N. A. Pierce, âAn Introduction to the Adjoint Approach to Design,â Flow, Turbulence and Combustion, vol. 65, pp. 393â415, Dec 2000.
[28] W. W. Hager, âRunge-Kutta methods in optimal control and the transformed adjoint system,â Numerische Mathematik, vol. 87, pp. 247â282, Dec 2000.
[29] A. Gholami, K. Keutzer, and G. Biros, âANODE: Unconditionally Accurate Memory-Efï¬cient Gradients for Neural ODEs,â arXiv:1902.10298, 2019.
[30] T. Zhang, Z. Yao, A. Gholami, J. E. Gonzalez, K. Keutzer, M. W. Mahoney, and G. Biros, âANODEV2: A Coupled Neural ODE Framework,â in Advances in Neural Information Processing Systems 32, pp. 5151â 5161, Curran Associates, Inc., 2019.
[31] A. Bagnall, H. A. Dau, J. Lines, M. Flynn, J. Large, A. Bostrom, P. Southam, and E. Keogh, âThe uea multivariate time series classiï¬cation archive,â arXiv:1811.00075, 2018.
[32] M. Reyna, C. Josef, R. Jeter, S. Shashikumar, B. Moody, M. B. Westover, A. Sharma, S. Nemati, and G. Clifford, âEarly Prediction of Sepsis from Clinical Data: The PhysioNet/Computing in Cardiology Challenge,â Critical Care Medicine, vol. 48, no. 2, pp. 210â217, 2019.
[33] Goldberger, A. L. and Amaral L. A. N. and Glass, L. and Hausdorff, J. M. and Ivanov P. Ch. and Mark, R. G. and Mietus, J. E. and Moody, G. B. and Peng, C.-K. and Stanley, H. E., âPhysioBank, PhysioToolkit and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals,â Circulation, vol. 23, pp. 215â220, 2003.
[34] P. Warden, âSpeech commands: A dataset for limited-vocabulary speech recognition,â arXiv:1804.03209, 2020.
[35] J. Jia and A. R. Benson, âNeural Jump Stochastic Differential Equations,â in Advances in Neural Information Processing Systems 32, pp. 9847â9858, Curran Associates, Inc., 2019.
[36] C. Cuchiero, W. Khosrawi, and J. Tiechmann, âA generative adversarial network approach to calibration of local stochastic volatility models,â arXiv:2005.02505, 2020.
[37] B. Tzen and M. Raginsky, âTheoretical guarantees for sampling and inference in generative models with latent diffusions,â COLT, 2019.
[38] R. Deng, B. Chang, M. Brubaker, G. Mori, and A. Lehrmann, âModeling Continuous Stochastic Processes with Dynamic Normalizing Flows,â arXiv:2002.10516, 2020.
[39] X. Liu, T. Xiao, S. Si, Q. Cao, S. Kumar, and C.-J. Hsieh, âNeural SDE: Stabilizing Neural ODE Networks with Stochastic Noise,â arXiv:1906.02355, 2019.
[40] V. Oganesyan, A. Volokhova, and D. Vetrov, âStochasticity in Neural ODEs: An Empirical Study,â arXiv:2002.09779, 2020.
[41] N. Twomey, M. KozÅowski, and R. Santos-Rodr´ıguez, âNeural ODEs with stochastic vector ï¬eld mixtures,â arXiv:1905.09905, 2019.
[42] X. Li, T.-K. L. Wong, R. T. Q. Chen, and D. K. Duvenaud, âScalable Gradients and Variational Inference for Stochastic Differential Equations,â AISTATS, 2020.
[43] B. Tzen and M. Raginsky, âNeural Stochastic Differential Equations: Deep Latent Gaussian Models in the Diffusion Limit,â arXiv:1905.09883, 2019.
[44] L. Hodgkinson, C. van der Heide, F. Roosta, and M. Mahoney, âStochastic Normalizing Flows,â arXiv:2002.09547, 2020.
[45] I. Chevyrev and A. Kormilitzin, âA primer on the signature method in machine learning,â arXiv:1603.03788, 2016.
[46] I. Perez Arribas, G. M. Goodwin, J. R. Geddes, T. Lyons, and K. E. A. Saunders, âA signature- based machine learning model for distinguishing bipolar disorder and borderline personality disorder,â Translational Psychiatry, vol. 8, no. 1, p. 274, 2018.
[47] A. Fermanian, âEmbedding and learning with signatures,â arXiv:1911.13211, 2019.
[48] J. Morrill, A. Kormilitzin, A. Nevado-Holgado, S. Swaminathan, S. Howison, and T. Lyons, âThe Signature-based Model for Early Detection of Sepsis from Electronic Health Records in the Intensive Care Unit,â International Conference in Computing in Cardiology, 2019.
[49] J. Reizenstein, Iterated-integral signatures in machine learning. PhD thesis, University of Warwick, 2019. http://wrap.warwick.ac.uk/131162/.
11
[50] S. Liao, T. Lyons, W. Yang, and H. Ni, âLearning stochastic differential equations using RNN with log signature features,â arXiv:1908.08286, 2019.
[51] F. J. Kir´aly and H. Oberhauser, âKernels for sequentially ordered data,â Journal of Machine Learning Research, 2019.
[52] C. Toth and H. Oberhauser, âVariational Gaussian Processes with Signature Covariances,â arXiv:1906.08215, 2019.
[53] P. Kidger and T. Lyons, âSignatory: differentiable computations of the signature and logsignature transforms, on both CPU and GPU,â arXiv:2001.00706, 2020.
[54] A. Quaglino, M. Gallieri, J. Masci, and J. Koutn´ık, âSnode: Spectral discretization of neural odes for system identiï¬cation,â in International Conference on Learning Representations, 2020.
[55] C. Finlay, J.-H. Jacobsen, L. Nurbekyan, and A. Oberman, âHow to train your neural ODE,â arXiv:2002.02798, 2020.
[56] S. Massaroli, M. Poli, M. Bin, J. Park, A. Yamashita, and H. Asama, âStable Neural ï¬ows,â arXiv:2003.08063, 2020.
[57] S. Massaroli, M. Poli, arXiv:2002.08071, 2020. J. Park, A. Yamashita, and H. Asama, âDissecting Neural ODEs,â
[58] H. Yan, J. Du, V. Y. F. Tan, and J. Feng, âOn Robustness of Neural Ordinary Differential Equations,â arXiv:1910.05513, 2019.
[59] F. Castell and J. Gaines, âThe ordinary differential equation approach to asymptotically efï¬cient schemes for solution of stochastic differential equations,â Annales de lâInstitut Henri Poincar´e. Probabilit´es et Statistiques, vol. 32, 1996.
[60] S. Malham and A. Wiese, âStochastic Lie Group Integrators,â SIAM J. Sci. Comput., vol. 30, no. 2, pp. 597â617, 2007.
[61] L. G. Gyurk´o, Numerical approximations for stochastic differential equations. PhD thesis, University of Oxford, 2008.
[62] A. Janssen, Order book models, signatures and numerical approximations of rough differential equations. PhD thesis, University of Oxford, 2011.
[63] Y. Boutaib, L. G. Gyurk´o, T. Lyons, and D. Yang, âDimension-free Euler estimates of rough differential equations,â Rev. Roumaine Math. Pures Appl., 2014.
[64] H. Boedihardjo, T. Lyons, and D. Yang, âUniform factorial decay estimates for controlled differential equations,â Electronic Communications in Probability, vol. 20, no. 94, 2015.
[65] J. Foster, Numerical approximations for stochastic differential equations. PhD thesis, University of Oxford, 2020.
[66] J. Foster, T. Lyons, and H. Oberhauser, âAn optimal polynomial approximation of Brownian motion,â SIAM J. Numer. Anal., vol. 58, no. 3, pp. 1393â1421, 2020.
[67] J. M. Varah, âA Lower Bound for the Smallest Singular Value of a Matrix,â Linear Algebra and its Applications, vol. 11, no. 1, pp. 3â5, 1975.
[68] A. Pinkus, âApproximation theory of the MLP model in neural networks,â Acta Numer., vol. 8, pp. 143â 195, 1999.
[69] P. Kidger and T. Lyons, âUniversal Approximation with Deep Narrow Networks,â arXiv:1905.08539, 2019.
[70] D. Kingma and J. Ba, âAdam: A method for stochastic optimization,â ICLR, 2015.
[71] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, âPyTorch: An Imperative Style, High-Performance Deep Learning Library,â in Advances in Neural Information Processing Systems 32, pp. 8024â8035, Curran Associates, Inc., 2019.
12
# Supplementary material
Appendix A discusses the technical considerations of schemes for constructing a path X from data. Appendix B proves universal approximation of the Neural CDE model, and is substantially more technical than the rest of this paper. Appendix C proves that the Neural CDE model subsumes alternative ODE models which depend directly and nonlinearly on the data. Appendix D gives the full details of every experiment, such as choice of optimiser, hyperparameter searches, and so on.
A Other schemes for constructing the path X
To evaluate the model as discussed in Section 3.2, X must be at least continuous and piecewise differentiable.
# A.1 Differentiating with respect to the time points
However, there is a technical caveat in the speciï¬c case that derivatives with respect to the initial time t0 are required, and that training is done with the adjoint method. In this case the derivative with respect to t0 is computed using, and thus requires, derivatives of the vector ï¬eld with respect to t.
To be precise, suppose we have a Neural CDE as before:
t zt = zt0 + fθ(zs)dXs for t â (t0, tn]. t0
# Z
Let L be some (for simplicity scalar-valued) function of ztn, for example a loss. Consider
gθ,X(z, s) = fθ(z) dX ds (s)
as before, and let
as = dL dzs ,
which is vector-valued, with size equal to the size of zs, the number of hidden channels.
Then, applying [15, Equation 52] to our case:
dL dt0 = = dL dtn dL dtn â â Z t0 tn t0 tn as · âgθ,X âs as · fθ(zs) (zs, s)ds d2X ds2 (s)ds, (6)
# Z
where · represents the dot product.
2X/ds2 is merely measure valued, but in In principle we may make sense of equation (6) when d practice most code is only set up to handle classical derivatives. If derivatives with respect to t0 are desired, then practically speaking X must be at least twice differentiable.
# A.2 Adaptive step size solvers
There is one further caveat that must be considered. Suppose X is twice differentiable, but that the second derivative is discontinuous. For example this would be accomplished by taking X to be a quadratic spline interpolation.
If seeking to solve equation (6) with an adaptive step-size solver, we found that the solver would take a long time to compute the backward pass, as it would have to slow down to resolve each jump in d
13
# A.3 Natural cubic splines
This is then the reason for our selection of natural cubic splines: by ensuring that X is twice continuously differentiable, the above issue is ameliorated, and adaptive step size solvers operate acceptably. Cubic splines give essentially the minimum regularity for the techniques discussed in this paper to work âout of the boxâ in all cases.
Other than this smoothness, however, there is little that is special about natural cubic splines. Other possible options are for example Gaussian processes [10, 11] or kernel methods [12]. Furthermore, especially in the case of noisy data it need not be an interpolation scheme â approximation and curve-ï¬tting schemes are valid too.
# B Universal Approximation
The behaviour of controlled differential equations are typically described through the signature transform (also known as path signature or simply signature) [20] of the driving process X. We demonstrate here how a (Neural) CDE may be reduced to consideration of just the signature transform, in order to prove universal approximation.
The proof is essentially split into two parts. The ï¬rst part is to prove universal approximation with respect to continuous paths, as is typically done for CDEs. The second (lengthier) part is to interpret what this means for the natural cubic splines that we use in this paper, so that we can get universal approximation with respect to the original data as well. Deï¬nition B.1. Let Ï, T â R with Ï < T and let v â N. Let V 1([Ï, T ]; Rv) represent the space of continuous functions of bounded variation. Equip this space with the norm
X 7â X V = X â + X BV .
This is a somewhat unusual norm to use, as bounded variation seminorms are more closely aligned with ZL? norms than L° norms. Definition B.2. For any X ⬠V}({7,T];Râ) let X, = (Xz,t) ⬠V4({r,T];R°t!). We choose to use the notation of âremoving the hatâ to denote this time augmentation, for consistency with the main text which uses X for the time-augmented path. Definition B.3. For any N,v â¬N, let «(N,v) = DN o(v + 1).
use,
# as
Deï¬nition B.3. For any N, v â N, let κ(N, v) = Deï¬nition B.4 (Signature transform). For any k â N and any y â Rk, let M (y) â Rk(v+1)Ã(v+1) be the matrix
M (y) = y1 y2 ... yk 0 0 0 0 ... 0 0 0 ... 0 y1 ... yk 0 ... 0 0 0 ... 0 0 0 0 0 ... 0 · · · · · · · · · · · · · · · . . . · · · · · · · · · 0 0 ... 0 0 ... 0 y1 ... yk
Fix N â N and X â V 1([Ï, T ]; Rv+1). Let y0,X,N : [Ï, T ] â R be constant with y0,X,N t = 1. For all i â {1, . . . , N }, iteratively let yi,X,N : [Ï, T ] â R(v+1) i be the value of the integral
yi,X,N t = yi,X,N Ï + t M (yiâ1,X,N s )dXs for t â (Ï, T ],
Ï Z
# i
with yi,X,N Ï = 0 â R(v+1) .
14
Then we may stack these together:
yX,N = (y0,X,N , . . . , yN,X,N ) : [Ï, T ] â Rκ(N,v), M (yX,N ) = (0, M ⦠y0,X,N , . . . , M ⦠yN â1,X,N ) : [Ï, T ] â Rκ(N,v)Ãv.
Then yX,N is the unique solution to the CDE
# f
yX,N t = yX,N Ï + t M (yX,N )sdXs for t â (Ï, T ],
Ï Z
with yX,N Ï = (1, 0, . . . , 0).
# f
Then the signature transform truncated to depth N is deï¬ned as the map
SigN : V 1([Ï, T ]; Rv+1) â Rκ(N,v), SigN : X 7â yX,N
If this seems like a strange deï¬nition, then note that for any a â Rk and any b â Rv that M (a)b is equal to a ï¬attened vector corresponding to the outer product a â b. As such, the signature CDE is instead typically written much more concisely as the exponential differential equation
yX,N t = yX,N Ï + t yX,N s â dXs for t â (Ï, T ],
Ï Z
however we provide the above presentation for consistency with the rest of the text, which does not introduce â.
Deï¬nition B.5. Let V 1 0 ([Ï, T ]; Rv) = X â V 1([Ï, T ]; Rv) X0 = 0 .
With these deï¬nitions out of the way, we are ready to state the famous universal nonlinearity property of the signature transform. We think [22, Theorem 4.2] gives the most straightforward proof of this result. This essentially states that the signature gives a basis for the space of functions on compact path space. Theorem B.6 (Universal nonlinearity). Let Ï, T â R with Ï < T and let v, u â N. Let K â V 1 Let SigN : V 1([Ï, T ]; Rv+1) â Rκ(N,v) denote the signature transform truncated to depth N . Let
0 ([Ï, T ]; Rv) be compact. (Note the subscript zero.)
J N,u = â : Rκ(N,v) â Ru â is linear .
Then
is dense in C(K; Ru).
# | ⬠peut
# n
X 7â â(SigN (X))
N âN n [
# b
# o
# o
With the universal nonlinearity property, we can now prove universal approximation of CDEs with respect to controlling paths X. Theorem B.7 (Universal approximation with CDEs). Let Ï, T â R with Ï < T and let v, u â N. For any w â N let
Fv= {f: RY = RYX@+D \F is continuous LY" = {£: R® > R" | Lis linear} , v= {¢: Rt! _, RY | Cis continuous} .
,
For any w â N, any f â F w and any ζ â ξw and any be the unique solution to the CDE X â V 1([Ï, T ]; Rv), let zf,ζ,X : [Ï, T ] â Rw
zf,ζ,X t = zf,ζ,X Ï + t f (zf,ζ,X s b )dXs for t â (Ï, T ],
Ï Z
15
with 2£6* = ¢(X;). Let K C V1(([r,T];Râ) be compact. Then . U {85 ech) [Fert cere. ceet} wen is dense in C(K;R").
Proof. We begin by prepending a straight line segment to every element of K. For every deï¬ne
# b
(t â Ï + 1) Xt XÏ t â [Ï â 1, Ï ), t â [Ï, T ]. X â t = (
# b
b Similarly deï¬ne X â, so that a hat means that time is not a channel, whilst a star means that an extra straight-line segment has been prepended to the path.
Now let K â = Theorem B.6, n X â X â K o . Then K â â V 1 0 ([Ï â 1, T ]; Rv) and is also compact. Therefore by
X â 7â â(SigN (X â)) â â J N,u
# b
N âN n [
# o
is dense in C(K â; Ru).
# b
X â is a homeomorphism, so we may ï¬nd X â K. Next, there exists some N â N and X 7â X â) = α( X) for all b b
So let α â C(K; Ru) and ε > 0. The map β â C(K â; Ru) such that β( â â J N,u such that γ deï¬ned by γ : By Deï¬nition B.4 there exists f â F κ(N,v) so that SigN (X â) = yX â T is the unique solution of the CDE
X â 7â â(SigN (X â)) is ε-close to β. b
# b
# b
for all X â â K â, where yX â b
t t = yX â yX â f (yX â )dX â s for t â (Ï â 1, T ], Ï â1 + s Ï â1
# Z
with yX â Now let ζ â ξ be deï¬ned by ζ(XÏ ) = yX â only depends on XÏ for t â [Ï â 1, Ï ]. Now for any X â K let zX : [Ï, T ] â Rw be the unique solution to the CDE
# because the value of y;X"
with zX Ï = ζ(XÏ ).
t t = zX zX Ï + f (zX s )dXs for t â (Ï, T ],
Ï Z
t = yX â for t â [Ï, T ], and so in particular SigN (X â) = yX â t T =
Then by uniqueness of solution, zX zX T . Finally it remains to note that â â J N,u = Lκ(N,v),u.
So let δ be deï¬ned by δ : of (with w = κ(N, v), f â F w, â â Lw,u and ζ â ξ as chosen above), whilst for all
# b
T ) = â(SigN (X â)) = γ( X). Thus density has been established.
# δ(
is ε-close to β( X â) = α(
# b
(X")
# b
# b
Lemma B.8. Let K C C?([r,T],R") be uniformly bounded with uniformly bounded first and second derivatives. That is, there exists some C > 0 such that + leÂ¥e) 8/2. <C for all X ⬠K. Then K C V'([r,T];Râ) and is relatively compact (that is, its closure is compact) with respect to || - ||).
# b
16
# Oo
Proof. K is bounded in C2([Ï, T ], Rv) so it is relatively compact in C1([Ï, T ], Rv). Furthermore for any X â K,
ll, =I FI v BV ax âdt 1 aX dt < |5| + oo â oo Râ) + V1([r, 1]; Râ) is continuous.
# b
oo and so the embedding C1([r, T], Râ) + V1([r, 1]; Râ) is continuous. Therefore K is also relatively compact in V!({r, T]; R®). oO
Next we need to understand how a natural cubic spline is controlled by the size of its data. We establish the following crude bounds. Lemma B.9. Let v â N. Let x0, . . . , xn â Rv. Let t0, . . . , tn â R be such that t0 < t1 < · · · < tn. Let X(ti) = xi. Let Ïi = ti+1 â ti for all i. Then there exists an absolute constant C > 0 such that
# b
# b
X â + d bX/dt â+ d 2 bX/dt2 â < C kÏ kâ kxkâ (min i Ïi)â2(kÏ kâ + (min i Ïi)â1).
Proof. Surprisingly, we could not ï¬nd a reference for a fact of this type, but it follows essentially straightforwardly from the derivation of natural cubic splines.
Without loss of generality assume v = 1, as we are using the inï¬nity norm over the dimensions v, and each cubic interpolation is performed separately for each dimension.
Let the i-th piece of X, which is a cubic on the interval [ti, ti+1], be denoted Yi. Without loss of generality, translate each Yi to the origin so as to simplify the algebra, so that Yi : [0, Ïi] â R. Let Yi(t) = ai + bit + cit2 + dit3 for some coefï¬cients ai, bi, ci, di and i â {0, . . . , n â 1}. Letting Di = Y â² conditions imposed at each knot are Yi(0) = xi, Yi(Ïi) = xi+1, Y â² This then implies that ai = xi, bi = Di,
# b
ci = 3Ï â2 di = 2Ï â3
(xi+1 â xi) â Ï â1 (xi â xi + 1) + Ï â2
i i (Di+1 + 2Di), (7)
i i (Di+1 + Di). (8)
Letting . denote âless than or equal up to some absolute constantâ, then these equations imply that
X â = max i kYikâ . max i (|xi| + Ïi |Di|) ⤠kxkâ + kÏ kâ kDkâ , (9)
aX 4 OX] =maxllÂ¥ill., Smax(re? Jail + [Dil) < " co x _ . < me 2 1 EA) = max [Vilas Smax(ay? los +75 [Dil) oo Next, the second derivative condition at each knot is Y;â and the natural condition is Yjâ(0) = 0 and Y,"_; (t,-1)
Ïi)â1 + kDkâ , (Ï â1 i (10) = max |xi| + |Di|) ⤠kxkâ (min kYikâ . max i i i â
(Ï â2 i |xi| + Ï â1 Ïi)â1. Ïi)â2 + kDkâ (min kYikâ . max = max |Di|) ⤠kxkâ (min i i i i i â (11)
iâ1(Ïiâ1) = Y â²â² i (0) for i â {1, . . . , n â 1}, 0 (0) = 0 and Y â²â² nâ1(Ïnâ1) = 0. With equations (7), (8) this gives
T D = k,
17
where
T = D = 2Ï â1 0 Ï â1 0 D0 ... Dn , Ï â1 0 0 + Ï â1 1 ) Ï â1 1 2(Ï â1 Ï â1 1 1 + Ï â1 2 ) . . . 2(Ï â1 Ï â1 2 . . . Ï â1 nâ2 . . . nâ2 + Ï â1 Ï â1 nâ1 2(Ï â1 nâ1) Ï â1 nâ1 2Ï â1 nâ1 k = 3Ï â2 0 (x1 â x0) 1 (x2 â x1) + 3Ï â2 3Ï â2 0 (x1 â x0) ... nâ1(xn â xnâ1) + 3Ï â2 3Ï â2 nâ2(xnâ1 â xnâ2) nâ1(xn â xnâ1). ,
3Ï â2
T â1 Let diagonally dominant, so the Varah bound [67] and HM-AM inequality gives â denote the operator norm and kDkâ, kkkâ denote the elementwise norm. Now T is
T â1 (Ï â1 i + Ï â1 i+1))â1 . kÏ kâ . â ⤠(min i
Thus
kDkâ . kÏ kâ kkkâ . kÏ kâ kxkâ (min
# i
Together with equations (9)â(11) this gives the result.
Deï¬nition B.10 (Space of time series). Let v â N. and Ï, T â R such that Ï < T . We deï¬ne the space of time series in [Ï, T ] over Rv as
TS [Ï,T ] (Rv) = {((t0, x0), . . . , (tn, xn)) | n â N, ti â [Ï, T ], xn â Rv, t0 = Ï, tn = T, n ⥠2} .
To our knowledge, there is no standard topology on time series. One option is to treat them as sequences, however it is not clear how best to treat sequences of different lengths, or how to incorporate timestamp information. Given that a time series is typically some collection of observations from some underlying process, we believe the natural approach is to treat them as subspaces of functions.
Deï¬nition B.11 (General topologies on time series). Let v â N. and Ï, T â R such that Ï < T . Let F denote some topological space of functions. Let ι : TS [Ï,T ] (Rv) â F be some map. Then we may deï¬ne a topology on TS [Ï,T ] (Rv) as the weakest topology with respect to which ι is continuous.
Recall that we use subscripts to denote function evaulation.
Deï¬nition B.12 (Natural cubic spline topology). Let v â N. and Ï, T â R such that Ï < T . Let F = C([Ï, T ]; Rv) equipped with the uniform norm. For all x = ((t0, x0), . . . , (tn, xn)) â ι(x)ti = xi with ι : TS [Ï,T ] (Rv) â F produce the natural cubic spline such that TS [Ï,T ] (Rv), let knots at t0, . . . , tn. Then this deï¬nes a topology on TS [Ï,T ] (Rv) as in the previous deï¬nition. Remark B.13. In fact this deï¬nes a seminorm on TS [Ï,T ] (Rv), by kxk = k ι(x)kâ. This is only a seminorm as for example ((0, 0), (2, 2)) and ((0, 0), (1, 1), (2, 2)) have the same natural cubic spline interpolation. This can be worked around so as to instead produce a full norm, but it is a deliberate choice not to: we would often prefer that these time series be thought of as equal. (And if it they are not equal, then ï¬rst augmenting with observational intensity as in the main paper should distinguish them.)
# b
# b
18
Theorem B.14 (Universal approximation with Neural CDEs via natural cubic splines). Let Ï, T â R with Ï < T and let v, u â N. For any w â N let
Fit = {f: RY â RYx@+) | f is a feedforward neural network} ; LY" = {£: RY > R* | Lis linear} , EN = {¢: Rât! _ RY | ¢ is a feedforward neural network} .
,
Let ι denote the natural cubic spline interpolation as in the previous deï¬nition, and recall that âremoving the hatâ is our notation for augmenting with time. For any w â N, any f â F w and any NN and any x â TS [Ï,T ] (Rv), let zf,ζ,x : [Ï, T ] â Rw be the unique solution to the CDE ζ â ξw b
zf,ζ,x t = zf,ζ,x Ï + t f (zf,ζ,x s )dι(x)s for t â (Ï, T ],
Ï Z
with zf,ζ,x Ï Let K â TS [Ï,T ] (Rv) be such that there exists C > 0 such that
kxkâ (min i (ti+1 â ti))â3 < C (12)
for every x = ((t0, x0), . . . , (tn, xn)) â K. (With C independent of x.) Then
x 7â â(zf,ζ,x T ) f â F w NN , â â Lw,u, ζ â ξw NN
wâN n [
wen is dense in C(K; R") with respect to the natural cubic spline topology on TS{,,1) (R°).
# o
Proof. Fix x = ((t0, x0), . . . , (tn, xn)) â K. Let ι(x). Now kÏ kâ ⤠T â Ï is bounded so by Lemma B.9 and the assumption of equation (12), there exists a constant C1 > 0 independent of x such that
# b
aX âde V({7, 7];
aX âde ax | wt ae <1 oo oo 7(K) is relatively compact in V({7, 7]; RY).
Thus by Lemma B.8,
ι(K), where the overline denotes a closure. Now by Theorem B.7, and deï¬ning F w and Let K1 = ξw as in the statement of that theorem, b
# b
is dense in C(K1, Ru).
ι(x) 7â â(zf,ζ,x T ) f â F w, â â Lw,u, ζ â ξw
# wâN n [ b
# o
NN , the terminal values zf,ζ,x
For any f â F w, any ζ â ξw, any fNN â F w NN and any ζNN â ξw and zfNN ,ζNN ,x may be compared by standard estimates, for example as commonly used in the proof of T Picardâs theorem. Classical universal approximation results for neural networks [68, 69] then yield that
ι(x) 7â â(zf,ζ,x T ) f â F w NN , â â Lw,u, ζ â ξw NN
wâN n [ is dense in C(K1, Ru). b By the deï¬nition of the natural cubic spline topology on TS [Ï,T ] (Rv), then
# o
is dense in C(K, Ru).
x 7â â(zf,ζ,x T ) f â F w NN , â â Lw,u, ζ â ξw NN
wâN n [
# o
19
# oO
# C Comparison to alternative ODE models
Suppose if instead of equation (4), we replace gθ,X(z, s) by hθ(z, Xs) for some other vector ï¬eld hθ. This might seem more natural. Instead of having gθ,X be linear in dX/ds, we take a hθ that is potentially nonlinear in the control Xs.
Have we gained anything by doing so? It turns out no, and in fact we have lost something. The Neural CDE setup directly subsumes anything depending directly on X.
Theorem C.1. Let Ï, T â R with Ï < T , let v, w â N with v + 1 < w. Let
F= {f: RY = RYXx(+) |F is continuous ; H= {h: RY-e-1 x Rett 4 Rumen | his continuous} ; â¬={¢: Rt, RY | ¢ is continuous} , X= {X: [7,T] 3 RY X continuous and of bounded variation}
# X
.
# n
# o
# b
Xt, t). Let Ï : Rw â Rwâvâ1 be the orthogonal projection onto the ï¬rst X â X, let Xt = ( For any w â v â 1 coordinates.
b b For any f â F , any ζ â ξ, and any X â X, let zf,ζ,X : [Ï, T ] â Rw be the unique solution to
zf,ζ,X t b = zf,ζ,X Ï + t f (zf,ζ,X s )dXs for t â (Ï, T ],
Ï Z
with zf,ζ,X Ï = ζ(XÏ ).
Similarly for any h â H, any ζ â ξ, and any solution to X â X, let yf,X : [Ï, T ] â Rwâvâ1 be the unique
yh,ζ,X t = yh,ζ,X Ï + t b h(yh,ζ,X s , Xs)ds for t â (Ï, T ],
Ï Z with yh,ζ,X Ï = Ï(ζ(XÏ )). Let Y = X 7â yh,ζ,X h â H, ζ â ξ and Z = X 7â Ï â¦ zf,ζ,X f â F, ζ â ξ . n o n o
# Then Y ( Z. b
# b
In the above statement, then a practical choice of f â F or h â H will typically correspond to some trained neural network.
Note the inclusion of time via the augmentation X 7â X. Without it, then the reparameterisation invariance property of CDEs [18], [23, Proposition A.7] will restrict the possible functions that CDEs can represent. This hypothesis is not necessary for the Y 6= Z part of the conclusion. b Note also how the CDE uses a larger state space of w, compared to wâvâ1 for the alternative ODE. The reason for this is that whilst f has no explicit nonlinear dependence on X, we may construct it to have such a dependence implicitly, by recording X into v+1 of its w hidden channels, whereupon X is hidden state and may be treated nonlinearly. This hypothesis is also not necessary to demonstrate the Y 6= Z part of the conclusion.
This theorem is essentially an algebraic statement, and is thus not making any analytic claims, for example on universal approximation.
Proof.
That Y 6= Z: Let zf,ζ,· â Z for ζ â ξ arbitrary and f â F constant and such that
20
f (z) = 1 0 0 · · · 0 0 0 0 · · · 0 ... ... ... 0 0 0 · · · 0 ... w v + 1
X â X, the corresponding CDE solution in Z is
# b
zf,ζ,X t = zf,ζ,X Ï + t f (zf,ζ,X s )dXs,
Ï Z
and so the ï¬rst component of its solution is
zf,ζ,X,1 t = X 1 t â X 1 Ï + ζ1(XÏ ),
whilst the other components are constant
zf,ζ,X,i t = ζi(XÏ )
for i â {2, . . . , w}, where superscripts refer to components throughout. Now suppose for contradiction that there exists yh,ζ,· â Y for some Î â ξ and h â H such that yh,Î,X = Ï â¦ zf,ζ,X for all
yh,Î,X b t = yh,Î,X Ï + t h(yh,Î,X s , Xs)ds,
Ï Z
and so
t (X 1 t âX 1 Ï +ζ1(XÏ ), 0, . . . , 0) = Ï(Î(XÏ ))+ Ï +ζ1(XÏ ), ζ2(XÏ ), . . . , ζw(XÏ )), Xs)ds. h((X 1 s âX 1
Ï Z
Consider those X which are differentiable. Differentiating with respect to t now gives
dX 1 dt (t) = h1((X 1 Ï + ζ1(XÏ ), ζ2(XÏ ), . . . , ζw(XÏ )), Xt). s â X 1 (13)
That is, h1 satisï¬es equation (13) for all differentiable X. This is clearly impossible: the right hand side is a function of t, Xt and XÏ only, which is insufï¬cient to determine dX 1/dt(t).
Let yh,Î,X â Y for some Î â ξ and h â H. Let Ï : Rw â Rv+1 be the orthogonal That Y â Z: projection onto the last v + 1 coordinates. Let ζ â ξ be such that Ï â¦ Î¶ = Ï â¦ Î and Ï(ζ(XÏ )) = XÏ . Then let f â F be deï¬ned by
f (z) = h1(Ï(z), Ï(z)) ... 0 0 · · · 0 ... ... 0 0 · · · 0 hwâvâ1(Ï(z), Ï(z)) 1 0 · · · 0 0 1 · · · 0 ... ... . . . 0 0 · · · 1 0 0 · · · 0 ... 0 0 ... 0 1 ... w â v â 1 v + 1 v 1
21
Then for t â (Ï, T ],
# t
= ζ(XÏ ) + f (zf,ζ,X s )dXs = ζ(XÏ ) + = ζ(XÏ ) + = ζ(XÏ ) + Z Z Z Ï Ï Ï t t t h1(Ï(zf,ζ,X ), Ï(zf,ζ,X 0 s ... ... 0 hwâvâ1(Ï(zf,ζ,X · · · 0 · · · 0 · · · ... . . . 1 · · · · · · 0 h1(Ï(zf,ζ,X · · · 0 0 ... ... 0 0 1 0 0 1 ... ... 0 0 0 0 )) s ), Ï(zf,ζ,X s s 0 0 ... 0 1 ), Ï(zf,ζ,X s ... hwâvâ1(Ï(zf,ζ,X X 1 d s ... b X v d s ds b ), Ï(zf,ζ,X s dXs ))ds s ), Ï(zf,ζ,X s ))ds s h(Ï(zf,ζ,X s ))ds . ))
# zf,ζ,X t
# X 1 d s ... b X v d s ds b
# Z
Thus in particular
Ï(zf,ζ,X t ) = Ï(ζ(XÏ )) + t dXs = Ï(ζ(XÏ )) â XÏ + Xt = Xt.
Ï Z
Thus
Ï(zf,ζ,X t ) = Ï(ζ(XÏ )) + t h(Ï(zf,ζ,X s ), Ï(zf,ζ,X s ))ds = Ï(Î(XÏ )) + t h(Ï(zf,ζ,X s ), Xs)ds. Ï
Ï Z
# Z
Thus we see that Ï(zf,ζ,X ) satisï¬es the same differential equation as yh,Î,X. So by uniqueness of solution [20, Theorem 1.3], yh,Î,X = Ï(zf,ζ,X) â Z.
# D Experimental details
# D.1 General notes
Code https://github.com/patrick-kidger/NeuralCDE. Code to reproduce every experiment can be found at
Normalisation one. Every dataset was normalised so that each channel has mean zero and variance
Loss Every binary classiï¬cation problem used binary cross-entropy loss applied to the sigmoid of the output of the model. Every multiclass classiï¬cation problem used cross-entropy loss applied to the softmax of the output of the model.
Architectures For both the Neural CDE and ODE-RNN, the integrand fθ was taken to be a feedforward neural network. A ï¬nal linear layer was always used to map from the terminal hidden state to the output.
Activation functions For the Neural CDE model we used ReLU activation functions. Following the recommendations of [13], we used tanh activation functions for the ODE-RNN model, who remark that for the ODE-RNN model, tanh activations seem to make the model easier for the ODE
22
solver to resolve. Interestingly we did not observe this behaviour when trying tanh activations and method=âdopri5â with the Neural CDE model, hence our choice of ReLU.
Optimiser Every problem used the Adam [70] optimiser as implemented by PyTorch 1.3.1 [71]. Learning rate and batch size varied between experiments, see below. The learning rate was reduced if metrics failed to improve for a certain number of epochs, and training was terminated if metrics failed to improve for a certain (larger) number of epochs. The details of this varied by experiment, see the individual sections. Once training was ï¬nished, then the parameters were rolled back to the parameters which produced the best validation accuracy throughout the whole training procedure. The learning rate for the ï¬nal linear layer (mapping from the hidden state of a model to the output) was typically taken to be much larger than the learning rate used elsewhere in the model; this is a standard trick that we found improved performance for all models.
Hyperparameter selection baseline, and equivalent hyperparameters used for the other models. In brief, hyperparameters were selected to optimise the ODE-RNN
# In more detail:
We began by selecting the learning rate. This was selected by starting at 0.001 and reducing it until good performance was achieved for a small ODE-RNN model with batch size 32.
After this, we increased the batch size until the selected model trained at what was in our judgement a reasonable speed. As is standard practice, we increased the learning rate proportionate to the increase in batch size.
Subsequently we selected model hyperparameters (number of hidden channels, width and depth of the vector ï¬eld network) via a grid search to optimise the ODE-RNN baseline. A single run of each hyperparameter choice was performed. The equivalent hyperparameters were then used on the GRU-ât, GRU-D, GRU-ODE baselines, and also our Neural CDE models, after being adjusted to produce roughly the same number of parameters for each model.
The grids searched over and the resulting hyperparameters are stated in the individual sections below. Weight regularisation L2 weight regularisation was applied to every parameter of the ODE- RNN, GRU-ât and GRU-D models, and to every parameter of the vector ï¬elds for the Neural CDE and GRU-ODE models.
ODE Solvers The ODE components of the ODE-RNN, GRU-ODE, and Neural CDE models were all computed using the fourth-order Runge-Kutta with 3/8 rule solver, as implemented by passing method=ârk4â to the odeint adjoint function of the torchdiffeq [24] package. The step size was taken to equal the minimum time difference between any two adjacent observations.
Adjoint backpropagation The GRU-ODE, Neural CDE and the ODE component of the ODE-RNN are all trained via the adjoint backpropagation method [15], as implemented by odeint adjoint function of the torchdiffeq package.
Computing infrastructure All experiments were run on one of two computers; both used Ubuntu 18.04.4 LTS, were running PyTorch 1.3.1, and used version 0.0.1 of the torchdiffeq [24] package. One computer was equipped with a Xeon E5-2960 v4, two GeForce RTX 2080 Ti, and two Quadro GP100, whilst the other was equipped with a Xeon Silver 4104 and three GeForce RTX 2080 Ti.
# D.2 CharacterTrajectories
The learning rate used was 0.001 and the batch size used was 32. If the validation loss stagnated for 10 epochs then the learning rate was divided by 10 and training resumed. If the training loss or training accuracy stagnated for 50 epochs then training was terminated. The maximum number of epochs allowed was 1000.
We combined the train/test split of the original dataset (which are of unusual proportion, being 50%/50%), and then took a 70%/15%/15% train/validation/test split.
The initial condition ζθ of the Neural CDE model was taken to be a learnt linear map from the ï¬rst observation to the hidden state vector. (Recall that is an important part of the model, to avoid translation invariance.)
23
The hyperparameters were optimised (for just the ODE-RNN baseline as previously described) by performing most of a grid search over 16 or 32 hidden channels, 32, 48, 64, 128 hidden layer size, and 1, 2, 3 hidden layers. (The latter two hyperparameters corresponding to the vector ï¬elds of the ODE-RNN and Neural CDE models.) A few option combinations were not tested due to the poor performance of similar combinations. (For example every combination with hidden layer size of 128 demonstrated relatively poor performance.) The search was done on just the 30% missing data case, and the same hyperparameters were used for the 50% and 70% missing data cases.
The hyperparameters selected were 32 hidden channels for the Neural CDE and ODE-RNN models, and 47 hidden channels for the GRU-ât, GRU-D and GRU-ODE models. The Neural CDE and ODE-RNN models both used a feedforward network for their vector ï¬elds, with 3 hidden layers each of width 32. The resulting parameter counts for each model were 8212 for the Neural CDE, 8436 for the ODE-RNN, 8386 for the GRU-D, 8292 for the GRU-ât, and 8372 for the GRU-ODE.
# D.3 PhysioNet sepsis prediction
The batch size used was 1024 and learning rate used was 0.0032, arrived at as previously described. If the training loss stagnated for 10 epochs then the learning rate was divided by 10 and training resumed. If the training loss or validation accuracy stagnated for 100 epochs then training was terminated. The maximum number of epochs allowed was 200. The learning rate for the ï¬nal linear layer (a component of every model, mapping from the ï¬nal hidden state to the output) used a learning rate that 100 times larger, so 0.32.
The original dataset does not come with an existing split, so we took our own 70%/15%/15% train/validation/test split.
As this problem featured static (not time-varying) features, we incorporated this information by allowing the initial condition of every model to depend on these. This was taken to be a single hidden layer feedforward network with ReLU activation functions and of width 256, which we did not attempt a hyperparameter search over.
As this dataset is partially observed, then for the ODE-RNN, GRU-ât, GRU-D models, which require something to be passed at each time step, even if a value is missing, then we ï¬ll in missing values with natural cubic splines, for ease of comparison with the Neural CDE and ODE- RNN models. (We do not describe this as imputation as for the observational intensity case the observational mask is additionally passed to these models.) In particular this differs slightly from the usual implementation of GRU-D, which usually use a weighted average of the last observation and the mean. Splines accomplishes much the same thing, and help keep things consistent between the various models.
The hyperparameters were optimised (for just the ODE-RNN baseline as previously described) by performing most of a grid search over 64, 128, 256 hidden channels, 64, 128, 256 hidden layer size, and 1, 2, 3, 4 hidden layers. (The latter two hyperparameters corresponding to the vector ï¬elds of the ODE-RNN and Neural CDE models.)
The hyperparameters selected for the ODE-RNN model were 128 hidden channels, and a vector ï¬eld given by a feedforward neural network with hidden layer size 128 and 4 hidden layers. In order to keep the number of parameters the same between each model, this was reduced to 49 hidden channels and hidden layer size 49 for the Neural CDE model, and increased to 187 hidden channels for the GRU-ât, GRU-D and GRU-ODE models. When using observational intensity, the resulting parameter counts were 193541 for the Neural CDE, 194049 for the ODE-RNN, 195407 for the GRU- D, 195033 for the GRU-ât, and 194541 for the GRU-ODE. When not using observational intensity, the resulting parameter counts were 109729 for the Neural CDE, 180097 for the ODE-RNN, 175260 for the GRU-D, 174886 for the GRU-ât, and 174921 for the GRU-ODE. Note the dramatically reduced parameter count for the Neural CDE; this is because removing observational intensity reduces the number of channels, which affects the parameter count dramatically as discussed in Section 6.3.
# D.4 Speech Commands
The batch size used was 1024 and the learning rate used was 0.0016, arrived at as previously described. If the training loss stagnated for 10 epochs then the learning rate was divided by 10 and
24
training resumed. If the training loss or validation accuracy stagnated for 100 epochs then training was terminated. The maximum number of epochs allowed was 200. The learning rate for the ï¬nal linear layer (a component of every model, mapping from the ï¬nal hidden state to the output) used a learning rate that 100 times larger, so 0.16.
Each time series from the dataset is univariate and of length 16000. We computed 20 Mel- frequency cepstral coefï¬cients of the input as implemented by torchaudio.transforms.MFCC, with logarithmic scaling applied to the coefï¬cients. The window for the short-time Fourier transform component was a Hann window of length 200, with hop length of 100, with 200 frequency bins. This was passed through 128 mel ï¬lterbanks and 20 mel coefï¬cients extracted. This produced a time series of length 161 with 20 channels. We took a 70%/15%/15% train/validation/test split.
The hyperparameters were optimised (for just the ODE-RNN baseline as previously described) by performing most of a grid search over 32, 64, 128 hidden channels, 32, 64, 128 hidden layer size, and 1, 2, 3, 4 hidden layers. (The latter two hyperparameters corresponding to the vector ï¬elds of the ODE-RNN and Neural CDE models.)
The hyperparameters selected for the ODE-RNN model were 128 hidden channels, and a vector ï¬eld given by a feedforward neural network with hidden layer size 64 and 4 hidden layers. In order to keep the number of parameters the same between each model, this was reduced to 90 hidden channels and hidden layer size 40 for the Neural CDE model, and increased to 160 hidden channels for the GRU-ât, GRU-D and GRU-ODE models. The resulting parameter counts were 88940 for the Neural CDE model, 87946 for the ODE-RNN model, 89290 for the GRU-D model, 88970 for the GRU-dt model, and 89180 for the GRU-ODE model.
25 | {
"id": "1902.10298"
} |
2005.12766 | CERT: Contrastive Self-supervised Learning for Language Understanding | Pretrained language models such as BERT, GPT have shown great effectiveness
in language understanding. The auxiliary predictive tasks in existing
pretraining approaches are mostly defined on tokens, thus may not be able to
capture sentence-level semantics very well. To address this issue, we propose
CERT: Contrastive self-supervised Encoder Representations from Transformers,
which pretrains language representation models using contrastive
self-supervised learning at the sentence level. CERT creates augmentations of
original sentences using back-translation. Then it finetunes a pretrained
language encoder (e.g., BERT) by predicting whether two augmented sentences
originate from the same sentence. CERT is simple to use and can be flexibly
plugged into any pretraining-finetuning NLP pipeline. We evaluate CERT on 11
natural language understanding tasks in the GLUE benchmark where CERT
outperforms BERT on 7 tasks, achieves the same performance as BERT on 2 tasks,
and performs worse than BERT on 2 tasks. On the averaged score of the 11 tasks,
CERT outperforms BERT. The data and code are available at
https://github.com/UCSD-AI4H/CERT | http://arxiv.org/pdf/2005.12766 | Hongchao Fang, Sicheng Wang, Meng Zhou, Jiayuan Ding, Pengtao Xie | cs.CL, cs.LG, stat.ML | null | null | cs.CL | 20200516 | 20200618 | 0 2 0 2
n u J 8 1 ] L C . s c [
2 v 6 6 7 2 1 . 5 0 0 2 : v i X r a
1â16
# CERT: Contrastive Self-supervised Learning for Language Understanding
Hongchao Fangâ UC San Diego Sicheng Wangâ * UC San Diego Meng Zhouâ * UC San Diego Jiayuan Ding* VMware [email protected] [email protected] [email protected] [email protected] [email protected]
Abstract Pretrained language models such as BERT, GPT have shown great eï¬ectiveness in language understanding. The auxiliary predictive tasks in existing pretraining approaches are mostly deï¬ned on tokens, thus may not be able to capture sentence-level semantics very well. To address this issue, we propose CERT: Contrastive self-supervised Encoder Representations from Transformers, which pretrains language representation models using contrastive self- supervised learning at the sentence level. CERT creates augmentations of original sentences using back-translation. Then it ï¬netunes a pretrained language encoder (e.g., BERT) by predicting whether two augmented sentences originate from the same sentence. CERT is simple to use and can be ï¬exibly plugged into any pretraining-ï¬netuning NLP pipeline. We evaluate CERT on 11 natural language understanding tasks in the GLUE benchmark where CERT outperforms BERT on 7 tasks, achieves the same performance as BERT on 2 tasks, and performs worse than BERT on 2 tasks. On the averaged score of the 11 tasks, CERT outperforms BERT. The data and code are available at https://github.com/UCSD-AI4H/ CERT
# 1. Introduction
Large-scale pretrained language representation models such as BERT (Devlin et al., 2018), GPT (Radford et al., a), BART (Lewis et al., 2019), etc. have achieved dominating per- formance in various natural language processing tasks, such as text generation, reading comprehension, text classiï¬cation, etc. The architectures of these models are mostly based on Transformer (Vaswani et al., 2017), which uses self-attention to capture long-range de- pendency between tokens. The Transformer encoder or decoder is pretrained on large-scale text corpus by solving self-supervised tasks, such as predicting masked tokens (Devlin et al.,
. â The work was done during internship at UCSD. . âEqual contribution.
© H. Fangt, S. Wangâ, M. Zhoutâ, J. Dingâ & P. Xie.
# CERT: Contrastive Self-supervised Learning for Language Understanding
2018), generating future tokens (Radford et al., a), denoising corrupted tokens (Lewis et al., 2019), etc. In these works, the targets to be predicted are mostly at the word level. As a result, the global semantics at the sentence level may not be suï¬ciently captured.
To address this issue, we propose CERT: Contrastive self-supervised Encoder Represen- tations from Transformers, which uses contrastive self-supervised learning (CSSL) (He et al., 2019; Chen et al., 2020a) to learn sentence-level representations. Recently, CSSL has shown promising results in learning visual representations in an unsupervised way (He et al., 2019; Chen et al., 2020a). The key idea of CSSL is: create augments of original examples, then learn representations by predicting whether two augments are from the same original data example or not. CERT creates augments of sentences using back-translation (Edunov et al., 2018), then ï¬netunes a pretrained language representation model (e.g., BERT, BART) by predicting whether two augments are from the same original sentence or not. Diï¬erent from existing pretraining methods where the prediction tasks are deï¬ned on tokens, CERT deï¬nes the prediction task on sentences, which can presumably better capture global semantics at the sentence level.
CERT uses back-translation (Edunov et al., 2018) to perform sentence augmentation. Given a sentence x in source language 5S, we use an S-to-T translation model to translate x into a sentence y in target language Tâ. Then we use a T-to-S translation model to translate y into vâ in the source language. wvâ is regarded as an augment of x different target languages are used to create different augments of a source sentence. Given Translation models for these augmented sentences, a Momentum Con to perform CSSL. MoCo maintains a queue of encoded using a pretrained text-encoder (e.g., augmented sentence (called query), a similari rast (MoCo) (He et al., 2019) method is used augmented sentences (called keys) which are BERT) with momentum updates. Given an y score is calculated between the BERT (or any other pretrained text-encoder) encoding of the query and each key in the queue. The query and a key are labeled as a positive pair if they are augments of the same original sentence and as a negative pair if otherwise. These binary labels and similarity scores are used to define contrastive losses (Hadsell et al., 2006). The weights of the pretrained text encoder are further pretrained by minimizing the contrastive losses. To apply the pretrained CERT model on a downstream task, we finetune the CERT weights using input data and labels from the downstream task. CERT is a flexible module that can be integrated with any pretrained language representation models, such as BERT, BART, ERNIE 2.0 (Sun et al., 2019), T5 (Raffel et al., 2019), etc. We evaluate CERT on 11 natural language understanding tasks in the GLUE (Wang et al., 2018) benchmark where CERT outperforms BERT on 7 tasks, achieves the same performance as BERT on 2 tasks, and performs worse than BERT on 2 tasks. On the averaged score of the 11 tasks, CERT outperforms BERT. These results demonstrate the effectiveness of contrastive self-supervised learning for language representation by capturing sentence-level semantics.
The major contributions of this paper are as follows:
⢠We propose CERT, a new language representation pretraining method based on con- trastive self-supervised learning. The predictive tasks of CERT are deï¬ned at the sentence level, thus presumably can better capture sentence-level semantics.
⢠We perform evaluation of CERT on 11 natural language understanding tasks in the GLUE benchmark, where CERT outperforms BERT on the averaged score of 11 tasks.
2
CERT: Contrastive Self-supervised Learning for Language Understanding
⢠We perform ablation studies to investigate how the performance of CERT is aï¬ected by sentence augmentation methods and the source of pretraining corpora.
The rest of the papers are organized as follows. Section 2 and 3 present the methods and experiments. Section 4 reviews related works and Section 5 concludes the paper.
# 2. Pretraining of Transformers for Language Understanding
Among the recent works for pretraining language representation models, most of them are based on the Transformer (Vaswani et al., 2017) architecture. For example, BERT pretrains Transformer encoder. GPT pretrains Transformer decoder. BART pretrains Transformer encoder and decoder jointly.
# 2.1. Transformer
Transformer (Vaswani et al., 2017) is an encode-decoder architecture for sequence-to-sequence (seq2seq) modeling (Sutskever et al., 2014). Diï¬erent from seq2seq models (Sutskever et al., 2014) that are based on recurrent neural networks (e.g., LSTM (Hochreiter and Schmidhu- ber, 1997), GRU (Chung et al., 2014)) which model a sequence of tokens via a recurrent manner and hence is computationally ineï¬cient. Transformer eschews recurrent computa- tion and instead uses self-attention which not only can capture the dependency between tokens but also is amenable for parallel computation with high eï¬ciency. Self-attention calculates the correlation among every pair of tokens and uses these correlation scores to create âattentiveâ representations by taking weighted summation of tokensâ embeddings. Transformer is composed of building blocks, each consisting of a self-attention layer and a position-wise feed-forward layer. Residual connection (He et al., 2016) is applied around each of the two sub-layers, followed by layer normalization (Ba et al., 2016). Given the input sequence, an encoder, which is a stack of such building blocks, is applied to obtain a representation for each token. Then the decoder takes these representations as inputs and decodes the sequence of output tokens. To decode the i-th token, the decoder ï¬rst uses self- attention to encode the already decoded sequence y1, · · · , yiâ1, then performs input-output attention between the encodings of y1, · · · , yiâ1 and those of the input sequence. The âat- tentiveâ representations are then fed into a feed-forward layer. The three steps are repeated for multiple times. Finally, the representation is fed into a linear layer to predict the next token. The weight parameters in Transformer are learned by maximizing the conditional likelihood of output sequences conditioned on the corresponding input sequences.
# 2.2. BERT
BERT (Devlin et al., 2018) aims to learn a Transformer encoder for representing texts. BERTs model architecture is a multi-layer bidirectional Transformer encoder. In BERT, the Transformer uses bidirectional self-attention. To train the encoder, BERT masks some percentage of the input tokens at random, and then predicts those masked tokens by feeding the ï¬nal hidden vectors (produced by the encoder) corresponding to the mask tokens into an output softmax over the vocabulary. To apply the pretrained BERT to a downstream task such as sentence classiï¬cation, one can add an additional layer on top of the BERT architecture and train this newly-added layer using the labeled data in the target task.
3
CERT: Contrastive Self-supervised Learning for Language Understanding
# 3. Contrastive Self-supervised Learning
Self-supervised learning (SSL) (Wu et al., 2018; He et al., 2019; Chen et al., 2020b,a) is a learning paradigm which aims to capture the intrinsic patterns and properties of input data without using human-provided labels. The basic idea of SSL is to construct some auxiliary tasks solely based on the input data itself without using human-annotated labels and force the network to learn meaningful representations by performing the auxiliary tasks well, such rotation prediction (Gidaris et al., 2018), image inpainting (Pathak et al., 2016), automatic colorization (Zhang et al., 2016), context prediction (Nathan Mundhenk et al., 2018), etc. The auxiliary tasks in SSL can be constructed using many diï¬erent mechanisms. Recently, a contrastive mechanism (Hadsell et al., 2006) has gained increasing attention and demonstrated promising results in several studies (He et al., 2019; Chen et al., 2020b). The basic idea of contrastive SSL is: generate augmented examples of original data examples, create a predictive task where the goal is to predict whether two augmented examples are from the same original data example or not, and learn the representation network by solving this task.
# Contrastive loss
= Similarit q x query an ky Ky ks see Momentum Encoder A \ key key key _ Xo Xp Xp
Figure 1: Keys in the queue are encoded using a momentum encoder. Given an augmented data example in the current minibatch (called query) and a key in the queue, they are considered as a positive pair if they originate from the same data example, and a negative pair if otherwise. A similarity score is calculated between the encoding of the query and the encoding of each key. Contrastive losses are deï¬ned on the similarity scores and binary labels.
Diï¬erent methods have been proposed to implement contrastive SSL. In SimCLR (Chen et al., 2020a) designed for image data, given the input images, random data augmentation is applied to these images. If two augmented images are created from the same original image, they are labeled as being similar; otherwise, they are labeled as dissimilar. Then
4
# CERT: Contrastive Self-supervised Learning for Language Understanding
SimCLR learns a network to ï¬t these similar/dissimilar binary labels. The network consists of two modules: a feature extraction module f (·) which extracts the latent representation h = f (x) of an image x and a multi-layer perceptron g(·) which takes h as input and generates another latent representation z = g(h) used for predicting whether two images are similar. Given a similar pair (xi, xj) and a set of images {xk} that are dissimilar from xi, a contrastive loss can be deï¬ned as follows:
exp(sim(z;,z;)/T) exp(sim(z;,z;)/T) + 0, exp(sim(zi, z~)/T) (1) log
where sim(·, ·) denotes cosine similarity between two vectors and Ï is a temperature param- eter. SimCLR learns the network weights by minimizing losses of this kind. After training, the feature extraction sub-network is used for downstream tasks and g(·) is discarded.
While SimCLR is easy to implement, it requires a large minibatch size to yield high performance, which is computationally prohibitive. MoCo (Chen et al., 2020a) addresses this problem by using a queue that is independent of minibatch size. This queue contains a dynamic set of augmented data examples (called keys). In each iteration, the latest minibatch of examples are added into the queue; meanwhile, the oldest minibatch is removed from the queue. In this way, the queue is decoupled from minibatch size. Figure 1 shows the architecture of MoCo. The keys are encoded using a momentum encoder. Given an augmented data example (called query) in the current minibatch and a key in the queue, they are considered as a positive pair if they originate from the same image, and a negative pair if otherwise. A similarity score is calculated between the encoding of the query and the encoding of each key. Contrastive losses are deï¬ned on the similarity scores and binary labels.
# 4. CERT
CERT takes a pretrained language representation model (e.g., BERT) and ï¬netunes it using contrastive self-supervised learning on the input data of the target task. Figure 2 shows the workï¬ow of CERT. For the ease of presentation, we use BERT as a running example of the pretrained language representation model. But note that CERT can be used on top of other pretrained language representation models such as XLNet, T5, etc. as well and is not speciï¬c to BERT. Given the large-scale input texts (without labels) from source tasks, a BERT model is ï¬rst pretrained on these texts. Then we continue to train this pretrained BERT model using CSSL on the input texts (without labels) from the target task. We refer to this model as pretrained CERT model. Then we ï¬netune the CERT model using the input texts and their associated labels in the target task and get the ï¬nal model that performs the target task. In CSSL training, CERT augments the original sentences in the target task using back-translation. Two augmented sentences are considered as a positive pair if they are created from the same original sentence and a negative pair if otherwise. The augmented sentences are encoded with BERT and a similarity score is calculated on the BERT encodings of a pair of sentences. Contrastive losses are deï¬ned on the binary labels and similarity scores. Then the pretrained BERT encoder is further ï¬netuned by minimizing the contrastive losses. We use MoCo to implement CSSL to avoid using large minibatches which are computationally heavy.
5
# CERT: Contrastive Self-supervised Learning for Language Understanding
Input texts in source tasks
| BERT Pretraining t Pretrained BERT model Input texts in target task CSSL Pretraining / | / Labels in target task Pretrained CERT model a _ 7 Fine-tuning Final model
Figure 2: The workï¬ow of CERT. Given the large-scale input texts (without labels) from source tasks, a BERT model is ï¬rst pretrained on these texts. Then we continue to train this pretrained BERT model using CSSL on the input texts (without labels) from the target task. We refer to this model as pretrained CERT model. Then we ï¬netune the CERT model using the input texts and their associated labels in the target task and get the ï¬nal model that performs the target task.
Data Augmentation Figure 3 shows the workflow of data augmentation based on back translation. For each input sentence x in the target task, we augment it using back- translation (Edunov et al., 2018). Without loss of generality, we assume the language in the target task is English. We use an English-to-German machine translation (MT) model to translate x to y. Then we use a German-to-English translation model to translate y to xâ. Then 2â is regarded as an augmented sentence of x. Similarly, we use an English- to-Chinese MT model and a Chinese-to-English MT model to obtain another augmented sentence 2â. We use the machine translation models developed in (Britz et al., 2017) for back-translation.
7 Translated 7 - English-to-German German German-to-English . Translated English / translation model translation model sentence xâ sentence y English-to-Chinese Translated Chinese-to-English Translated English : > Chinese : 7 translation model translation model sentence xâ sentence z Original English sentence x
Figure 3: The workï¬ow of data augmentation based on back translation.
6
CERT: Contrastive Self-supervised Learning for Language Understanding
CSSL Pretraining We use MoCo (He et al., 2019) to implement CSSL. Given two aug- mented sentences, if they originate from the same original sentence, they are labeled as a positive pair; if they are from diï¬erent sentences, they are labeled as a negative pair. We use a queue to maintain a set of augmented sentences {ki}K i=1 (called keys). Given an aug- mented sentence q (called query) in the currently sampled minibatch, it is compared against each key in the queue. The query is encoded with a pretrained BERT model fq(·; θq) where θq denotes the weights. Each key is encoded with a pretrained BERT fk(·; θk), where the weights θk are updated with momentum: θk â mθk + (1 â m)θq (m is the momentum coeï¬cient). Without loss of generality, we assume there is a single key k+ in the queue that forms a positive pair with q. A contrastive loss can be deï¬ned as:
exp(sima(fa(q; 94); Fe(+59x))/7) SE, exp(sim( fa (q5 9), Frei; x))/7) log (2)
where Ï is a temperature parameter. The weights of the BERT encoder are ï¬netuned by minimizing losses of such kind. Note that in this step, only the input sentences of the target task are used. The labels of these sentences are not touched. To apply the pretrained CERT model to a downstream task, we further ï¬netune its weights on both the input sentences and their labels in the target task.
# 5. Experiments
In this section, we evaluate CERT on eleven natural language understanding tasks in the GLUE (Wang et al., 2018) benchmark and compare with BERT.
# 5.1. Tasks and Datasets
The General Language Understanding Evaluation (GLUE) benchmark has 11 tasks, includ- ing 2 single-sentence tasks, 3 similarity and paraphrase tasks, and 5 inference tasks. The evaluation metric for a task is accuracy, unless otherwise noted. For each task, labels of the validation set are publicly available while those of the test set are not released. We obtain performance on the test sets by submitting inference results to the GLUE evaluation server.
⢠CoLA On the Corpus of Linguistic Acceptability (CoLA) (Warstadt et al., 2019), the task is to judge whether a sequence of words is a grammatical English sentence. Matthews correlation coeï¬cient (Matthews, 1975) is used as the evaluation metric. The higher, the better.
⢠SST-2 On the Stanford Sentiment Treebank (SST) (Socher et al., 2013), the task is to judge whether the sentiment of a sentence is positive or negative.
⢠MRPC On the Microsoft Research Paraphrase Corpus (MRPC) (Dolan and Brockett, 2005), the task is to predict whether a pair of sentences are semantically equivalent. Accuracy and F1 are used as evaluation metrics.
. https://gluebenchmark.com/leaderboard
7
CERT: Contrastive Self-supervised Learning for Language Understanding
⢠QQP On the Quora Question Pairs (QQP), the task is to determine whether a pair of questions are semantically equivalent. Accuracy and F1 are used as evaluation metrics.
⢠STS-B On the Semantic Textual Similarity Benchmark (STS-B) (Cer et al., 2017), the task is to predict the similarity score (from 1 to 5) of a pair of sentences. Pearson and Spearman correlation coeï¬cients are used for evaluation.
⢠MNLI On the Multi-Genre Natural Language Inference (MNLI) corpus (Williams et al., 2017), given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis or not. The task contains two sub-tasks: MNLI-m and MNLI-mm which perform evaluation on matched (in-domain) and mismatched (cross- domain) sections.
⢠QNLI On the Stanford Question Answering Dataset (Rajpurkar et al., 2016), given a question and a paragraph, the task is to predict whether the paragraph contains an answer to the question.
⢠RTE The Recognizing Textual Entailment (RTE) (Dagan et al., 2005) task is deï¬ned as follows: given two sentences, judge whether one sentence can entail another. Entailment means the truth of one sentence follows from the other sentence.
⢠WNLI In the Winograd Schema Challenge (Levesque et al., 2012), the task is to read a sentence with a pronoun and select the reference of that pronoun from a list of choices.
Table 1 shows the statistics of data split in each task.
Train Dev Test COLA RTE QNLI 104742 2491 8551 5462 278 1043 5462 2985 1064 STS-B MRPC WNLI 3668 5749 408 1500 1725 1379 636 72 147 SST-2 67350 873 1821 MNLI (m/mm) 392702 9815/9832 9796/9847 QQP 363871 40432 390965
Table 1: Dataset statistics.
# 5.2. Experimental Settings
In MoCo, the size of the queue was set to 96606. The coeï¬cient of MoCo momentum of updating the key encoder was set to 0.999. The temperature parameter in contrastive loss was set to 0.07. Multi-layer perceptron head was used. For MoCo training, a stochastic gradient descent solver with momentum was used. The minibatch size was set to 16. The initial learning rate was set to 4e-5. The learning rate was adjusted using cosine scheduling. The number of epochs was set to 100. Weight decay was used with a coeï¬cient of 1e-5. For ï¬netuning on GLUE tasks, the maximum sequence length was set to 128. Minibatch size was set to 16. The learning rate was set to 3e-5 for CoLA, MNLI, STS-B; 2e-5 for RTE, QNLI, MRPC, SST-2, WNLI; and 1e-5 for QQP. The number of training epochs was set to 20 for CoLA; 5 for RTE, WNLI, QQP; 3 for QNLI, MNLI; 15 for MRPC; 4 for SST-2; and 10 for STS-B.
. https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs
8
# CERT: Contrastive Self-supervised Learning for Language Understanding
# 5.3. Results
Table 2 shows the performance on the development sets in GLUE tasks. Following (Lan et al., 2019), we perform 5 random restarts of the ï¬netuning and report the median and best performance among the 5 runs. Due to randomness of the 5 restarts, our median performance is not the same as that reported in (Lan et al., 2019). But generally they are close. From this table, we make the following observations. First, in terms of me- dian performance, our proposed CERT outperforms BERT (our run) on 5 tasks, including CoLA, RTE, QNLI, SST-2, and QQP. On the other 5 tasks, CERT is on par with BERT. This demonstrates the eï¬ectiveness of CERT in learning better language representations via contrastive self-supervised learning. Second, in terms of best performance, CERT out- performs BERT on 7 tasks, including CoLA, RTE, QNLI, STS-B, MNLI-m, MNLI-mm, and QQP. CERT is on par with BERT on MRPC and WNLI. CERT performs less well than BERT on SST-2. These results further demonstrate the eï¬ectiveness of CERT. Third, the improvement of CERT over BERT is more signiï¬cant on tasks where the training data is small, such as CoLA and RTE. One possible reason is that small-sized training data is more prone to overï¬tting, which makes the necessity of CSSL pretraining more prominent. Fourth, on small-sized datasets such as CoLA and RTE, CERT achieves great improvement over BERT. This shows that CERT is promising in solving low-resource NLP tasks where the amount of training data is limited.
MNLI CoLA RTE QNLI STS-B MRPC WNLI SST-2 (m/mm) QQP BERT (reported in Lan et al.2019 median) BERT (our run, median)59.8 CERT (ours, median) BERT (our run, best) CERT (ours, best) XLNet ERNIE2 RoBERTa ALBERT 60.6 62.1 60.9 62.9 63.6 65.4 68.0 71.4 70.4 71.4 72.2 72.1 74.0 83.8 85.2 86.6 89.2 92.3 91.9 92.3 92.1 92.5 93.9 94.3 94.7 95.3 90.0 89.8 89.8 90.1 90.6 91.8 92.3 92.4 93.0 - 91.1 91.0 92.5 92.5 - - - - - 56.3 56.3 56.3 56.3 - - - - 93.2 93.4 93.6 94.4 93.9 95.6 96.0 96.4 96.9 86.6/- 86.3/86.2 86.3/86.2 86.4/86.3 86.6/86.5 89.8/- 89.1/- 90.2/- 90.8/- 91.3 91.2 91.4 91.4 91.7 91.8 92.5 92.2 92.2
Table 2: Performance on the validation dataset. The metric for MRPC is F1. The metric for QQP is accuracy. The metric for STS-B is Pearson correlation.
For the convenience of the readers, we also show the results of state-of-the-art pretraining methods including XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), ERNIE 2.0 (Sun et al., 2019), and ALBERT (Lan et al., 2019). CERT achieves a performance close to XLNet on CoLA, with a much lower consumption of computing resources and on a much smaller text corpus. XLNet, RoBERTa, ERNIE 2.0, and ALBERT are trained on hundreds of to
9
# CERT: Contrastive Self-supervised Learning for Language Understanding
BERT 80.5 60.5 70.1 92.7 87.6/86.5 89.3/85.4 65.1 94.9 86.7 85.9 72.1/89.3 39.6 CERT 80.7 58.9 71.2 93.0 87.9/86.8 89.8/85.9 65.1 94.6 87.2 86.4 72.5/90.3 39.6 XLNet - 70.2 88.5 - 93.0/92.6 92.9/90.5 92.5 97.1 90.9 90.9 74.7/90.4 48.4 ERNIE2 90.4 74.4 90.9 96.6 93.0/92.6 93.5/91.4 94.5 97.5 91.4 91.0 75.2/90.9 51.7 T5 90.3 71.6 92.8 96.9 93.1/92.8 92.8/90.4 94.5 97.5 92.2 91.9 75.1/90.6 53.1 RoBERTa ALBERT 88.1 67.8 88.2 95.4 92.2/91.9 92.3/89.8 89.0 96.7 90.8 90.2 74.3/90.2 48.7 - 69.1 89.2 - 92.5/92.0 93.4/91.2 91.8 97.1 91.3 91.0 74.2/90.5 50.2
Table 3: Performance on the test datasets of GLUE. The metrics for MRPC and QQP are F1/accuracy. The metrics for STS-B are Pearson correlation and Spearman correlation.
thousands of GPU machines for several days, while CERT is trained using a single GPU for a dozen of hours. In addition, these models are trained on tens of or hundreds of gigabytes of texts while CERT is only trained on about 400 KB of texts. CERT can be built on top of these pretrained models as well, which will be left for future study.
Table 3 shows the performance on the test datasets of GLUE. Among the 11 tasks, CERT outperforms BERT on 7 tasks, including RTE, QNLI, STS-B, MRPC, MNLI-m, MNLI-mm, QQP. CERT achieves the same performance as BERT on WNLI and AX. CERT performs worse than BERT on CoLA and SST-2. CERT achieves an average score of 80.7, which is better than BERT. Overall, CERT performs better than BERT, which further demonstrates that contrastive self-supervised learning is an eï¬ective approach for learning better representations of language. While CERT achieves much better performance than BERT on the validation set of CoLA, it performs worse than BERT on the CoLA test set. This is probably because the test set and validation set in CoLA have a large domain diï¬erence. Table 3 also lists the performance of other state-of-the-art methods. For the next step, we plan to replace the BERT-based sentence encoder in CERT with XLNet, ERNIE2, T5, RoBERTa, and ALBERT, to see whether CSSL pretraining can improve the performance of these encoders.
Ablation on Data Augmentation One key ingredient in CERT is to create augmented sentences. By default, we use back-translation for augmentation. It is interesting to inves- tigate other augmentation methods as well. We compare back-translation with a recently proposed text augmentation method â Easy Data Augmentation (EDA) (Wei and Zou, 2019). Given a sentence in the training set, EDA randomly chooses and performs one of the following operations: synonym replacement, random insertion, random swap, and random deletion.
Table 4 shows the results achieved by CERT using back-translation and EDA for aug- mentation respectively, on the CoLA and RTE tasks. As can be seen, in general, back- translation achieves better results than EDA, except that the median performance of back-
10
# CERT: Contrastive Self-supervised Learning for Language Understanding
Back-translation (median) EDA (median) Back-translation (best) EDA (best) CoLA STS-B 62.1 60.5 62.9 62.3 89.8 89.9 90.6 90.2
Table 4: Performance on CoLA and STS-B, under diï¬erent data-augmentation methods.
translation on STS-B is 0.1% lower than that of EDA. The reason that back-translation works better is probably because back-translation performs augmentation at the sentence level globally: the entire sentence is translated back and forth, while EDA performs augmen- tation at the word/phrase level locally. Therefore, back-translation can better capture the global semantics of sentences while EDA captures local semantics. Another reason might be that the sentences augmented by back-translation are more diï¬erent from the original ones. In contrast, the sentences augmented by EDA are close to the original ones since EDA per- forms local edits of the original sentences and keeps the majority of a sentence untouched. It is more diï¬cult to predict whether two augmented sentences by back-translation are from the same original sentence than to predict those augmented by EDA. Solving a more challenging CSSL task usually leads to better representations.
Target Training Data (median) BERT Training Data (median) Target Training Data (best) BERT Training Data (best) CoLA STS-B 62.1 60.0 62.9 61.3 89.8 90.3 90.6 90.8
Table 5: Performance on CoLA and STS-B, under diï¬erent pretraining corpus of CSSL.
Ablation on CSSL Pretraining Corpora Another interesting point to study is what corpora should be used for training CSSL. In CERT, by default, we use the training data (excluding labels) of the target task for CSSL. We compare with the following setting. We randomly sample x sentences from the training corpora of BERT, where x is the number of training sentences in the target task. Table 5 shows the performance on CoLA and STS-B, with CSSL pretrained on diï¬erent corpora. On the CoLA task, better performance is achieved when using the training data of CoLA for CSSL. On the STS-B task, better performance is achieved when using the BERT pretraining corpus for CSSL. From this study, we do not reach a clear conclusion regarding which one is better. In practice, it might be useful to try both and ï¬nd out which one is more eï¬ective.
# 6. Related Works
# 6.1. Pretraining for learning language representation
Recently, pretraining on large-scale text corpus for language representation learning has achieved substantial success. The GPT model (Radford et al., a) is a language model (LM) based on Transformer. Diï¬erent from Transformer which deï¬nes a conditional probability
11
# CERT: Contrastive Self-supervised Learning for Language Understanding
on an output sequence given an input sequence, GPT deï¬nes a marginal probability on a single sequence. In GPT, the conditional probability of the next token given the historical sequence is deï¬ned using the Transformer decoder. The weight parameters are learned by maximizing the likelihood on the sequence of tokens. GPT-2 (Radford et al., b) is an ex- tension of GPT, which modiï¬es GPT by moving layer normalization to the input of each sub-block and adding an additional layer normalization after the ï¬nal self-attention block. Byte pair encoding (BPE) (Sennrich et al., 2015) is used to represent the input sequence of tokens. BERT-GPT (Wu et al., 2019) is a model used for sequence-to-sequence modeling where pretrained BERT is used to encode the input text and GPT is used to generate the output text. In BERT-GPT, the pretraining of the BERT encoder and the GPT decoder is conducted separately, which may lead to inferior performance. Auto-Regressive Trans- formers (BART) (Lewis et al., 2019) has a similar architecture as BERT-GPT, but trains the BERT encoder and GPT decoder jointly. To pretrain the BART weights, the input text is corrupted randomly, such as token masking, token deletion, text inï¬lling, etc., then the network is learned to reconstruct the original text. ALBERT (Lan et al., 2019) uses parameter-reduction methods to reduce the memory consumption and increase the train- ing speed of BERT. It also introduces a self-supervised loss which models inter-sentence It coherence. RoBERTa (Liu et al., 2019) is a replication study of BERT pretraining. shows that the performance of BERT can be signiï¬cantly improved by carefully tuning the training process, such as (1) training the model longer, with bigger batches, over more data; (2) removing the next sentence prediction objective; (3) training on longer sequences, etc. XLNet (Yang et al., 2019) learns bidirectional contexts by maximizing the expected likeli- hood over all permutations of the factorization order and uses a generalized autoregressive pretraining mechanism to overcome the pretrain-ï¬netune discrepancy of BERT. T5 (Raï¬el et al., 2019) compared pretraining objectives, architectures, unlabeled datasets, transfer approaches on a wide range of language understanding tasks and proposed a uniï¬ed frame- work that casts these tasks as a text-to-text task. The uniï¬ed model was trained on a large Colossal Clean Crawled Corpus, which was then transferred to diverse downstream tasks. ERNIE 2.0 (Sun et al., 2019) proposed a continual pretraining framework which builds and learns incrementally pretraining tasks through constant multi-task learning, to capture the lexical, syntactic and semantic information from training corpora.
# 6.2. Contrastive Self-supervised learning
Contrastive self-supervised learning has arisen much research interest recently. Hnaï¬ et al. (H´enaï¬ et al., 2019) studied data-eï¬cient image recognition based on contrastive pre- dictive coding (Oord et al., 2018), which predicts the future in latent space by using powerful autoregressive models. Srinivas et al. (Srinivas et al., 2020) proposed to learn contrastive unsupervised representations for reinforcement learning. Khosla et al. (Khosla et al., 2020) investigated supervised contrastive learning, where clusters of points belonging to the same class are pulled together in embedding space, while clusters of samples from diï¬erent classes are pushed apart. Klein and Nabi (Klein and Nabi, 2020) proposed a contrastive self- supervised learning approach for commonsense reasoning. He et al. (He et al., 2020) pro- posed an Self-Trans approach which applies contrastive self-supervised learning on top of networks pretrained by transfer learning.
12
CERT: Contrastive Self-supervised Learning for Language Understanding
# 7. Conclusions and Future Works
In this work, we propose CERT, a pretraining approach for language representation learning. CERT takes a pretrained language representation model such as BERT and continues to train it using contrastive self-supervised learning on the input texts of the target task. It uses back-translation to generate augmented sentences for each original sentence in the target data, and trains the network by predicting whether two augments are from the same original sentence or not. Then the pretrained CERT model is ï¬netuned on the input texts and their labels in the target task. We evaluate CERT on 11 natural language understanding tasks in the GLUE benchmark. On both test set and validation set, CERT outperforms BERT on majority of tasks, which demonstrates the eï¬ectiveness of contrastive self-supervised learning in learning language representations. For future works, we plan to study more challenging loss functions for self-supervised learning. We are interested in investigating a ranking-based loss, where each sentence is augmented with a ranked list of sentences which have decreasing discrepancy with the original sentence. The auxiliary task is to predict the order given the augmented sentences. Predicting an order is presumably more challenging than binary classiï¬cation as in contrastive SSL and may facilitate the learning of better representations.
# References
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoï¬rey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Denny Britz, Anna Goldie, Minh-Thang Luong, and Quoc Le. Massive exploration of neural machine translation architectures. arXiv preprint arXiv:1703.03906, 2017.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. Semeval- 2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055, 2017.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoï¬rey Hinton. A simple frame- work for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020a.
Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momen- tum contrastive learning. arXiv preprint arXiv:2003.04297, 2020b.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entail- In Machine Learning Challenges Workshop, pages 177â190. Springer, ment challenge. 2005.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre- training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
13
CERT: Contrastive Self-supervised Learning for Language Understanding
William B Dolan and Chris Brockett. Automatically constructing a corpus of senten- tial paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.
Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back- translation at scale. arXiv preprint arXiv:1808.09381, 2018.
Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.
Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPRâ06), volume 2, pages 1735â1742. IEEE, 2006.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â778, 2016.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722, 2019.
Xuehai He, Xingyi Yang, Shanghang Zhang, Jinyu Zhao, Yichen Zhang, Eric Xing, and Pengtao Xie. Sample-eï¬cient deep learning for covid-19 diagnosis based on ct scans. medRxiv, 2020.
Olivier J H´enaï¬, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. arXiv preprint Data-eï¬cient image recognition with contrastive predictive coding. arXiv:1905.09272, 2019.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â1780, 1997.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. arXiv preprint arXiv:2004.11362, 2020.
Tassilo Klein and Moin Nabi. Contrastive self-supervised learning for commonsense reason- ing. arXiv preprint arXiv:2005.00669, 2020.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.
Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
14
CERT: Contrastive Self-supervised Learning for Language Understanding
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Brian W Matthews. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta (BBA)-Protein Structure, 405(2):442â 451, 1975.
T Nathan Mundhenk, Daniel Ho, and Barry Y Chen. Improvements to context based self- In Proceedings of the IEEE Conference on Computer Vision and supervised learning. Pattern Recognition, pages 9339â9348, 2018.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
Deepak Pathak, Philipp Krahenbuhl, Jeï¬ Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2536â2544, 2016.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. a.
Alec Radford, Jeï¬rey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. b.
Colin Raï¬el, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a uniï¬ed text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, An- drew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1631â1642, 2013.
Aravind Srinivas, Michael Laskin, and Pieter Abbeel. Curl: Contrastive unsupervised representations for reinforcement learning. arXiv preprint arXiv:2004.04136, 2020.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. Ernie 2.0: A continual pre-training framework for language understanding. arXiv preprint arXiv:1907.12412, 2019.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pages 3104â3112, 2014.
15
CERT: Contrastive Self-supervised Learning for Language Understanding
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, 2017.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bow- man. Glue: A multi-task benchmark and analysis platform for natural language under- standing. arXiv preprint arXiv:1804.07461, 2018.
Alex Warstadt, Amanpreet Singh, and Samuel R Bowman. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625â641, 2019.
Jason W Wei and Kai Zou. Eda: Easy data augmentation techniques for boosting perfor- mance on text classiï¬cation tasks. arXiv preprint arXiv:1901.11196, 2019.
Adina Williams, Nikita Nangia, and Samuel R Bowman. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426, 2017.
Qingyang Wu, Lei Li, Hao Zhou, Ying Zeng, and Zhou Yu. Importance-aware learning for neural headline editing. arXiv preprint arXiv:1912.01114, 2019.
Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733â3742, 2018.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5754â5764, 2019.
Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In European conference on computer vision, pages 649â666. Springer, 2016.
16 | {
"id": "2002.05709"
} |
2005.08100 | Conformer: Convolution-augmented Transformer for Speech Recognition | Recently Transformer and Convolution neural network (CNN) based models have
shown promising results in Automatic Speech Recognition (ASR), outperforming
Recurrent neural networks (RNNs). Transformer models are good at capturing
content-based global interactions, while CNNs exploit local features
effectively. In this work, we achieve the best of both worlds by studying how
to combine convolution neural networks and transformers to model both local and
global dependencies of an audio sequence in a parameter-efficient way. To this
regard, we propose the convolution-augmented transformer for speech
recognition, named Conformer. Conformer significantly outperforms the previous
Transformer and CNN based models achieving state-of-the-art accuracies. On the
widely used LibriSpeech benchmark, our model achieves WER of 2.1%/4.3% without
using a language model and 1.9%/3.9% with an external language model on
test/testother. We also observe competitive performance of 2.7%/6.3% with a
small model of only 10M parameters. | http://arxiv.org/pdf/2005.08100 | Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, Ruoming Pang | eess.AS, cs.LG, cs.SD | Submitted to Interspeech 2020 | null | eess.AS | 20200516 | 20200516 | 0 2 0 2
y a M 6 1 ] S A . s s e e [ 1 v 0 0 1 8 0 . 5 0 0 2 : v i X r a
# Conformer: Convolution-augmented Transformer for Speech Recognition
Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, Ruoming Pang
Google Inc. {anmolgulati, jamesqin, chungchengc, nikip, ngyuzh, jiahuiyu, weihan, shibow, zhangzd, yonghui, rpang}@google.com
# Abstract
Recently Transformer and Convolution neural network (CNN) based models have shown promising results in Automatic Speech Recognition (ASR), outperforming Recurrent neural networks (RNNs). Transformer models are good at captur- ing content-based global interactions, while CNNs exploit lo- In this work, we achieve the best of cal features effectively. both worlds by studying how to combine convolution neural networks and transformers to model both local and global de- pendencies of an audio sequence in a parameter-efï¬cient way. To this regard, we propose the convolution-augmented trans- former for speech recognition, named Conformer. Conformer signiï¬cantly outperforms the previous Transformer and CNN based models achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/testother. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters. Index Terms: speech recognition, attention, convolutional neu- ral networks, transformer, end-to-end
Layernorm Oâ * w2x ) âNv Feed Forward Module Dropout es) 40 ms rato x Qs Linear Convolution Module Convolution ©-â Subsampling ~ Multi-Head Self Attention 10 ms rato t Module SpecAug [-â___ lan <â_â Ww 10 ms rate 12x Feed Forward Module 4 ' 40 ms rate a, ae
# 1. Introduction
End-to-end automatic speech recognition (ASR) systems based on neural networks have seen large improvements in recent years. Recurrent neural networks (RNNs) have been the de- facto choice for ASR [1, 2, 3, 4] as they can model the temporal dependencies in the audio sequences effectively [5]. Recently, the Transformer architecture based on self-attention [6, 7] has enjoyed widespread adoption for modeling sequences due to its ability to capture long distance interactions and the high train- ing efï¬ciency. Alternatively, convolutions have also been suc- cessful for ASR [8, 9, 10, 11, 12], which capture local context progressively via a local receptive ï¬eld layer by layer.
However, models with self-attention or convolutions each has its limitations. While Transformers are good at modeling long-range global context, they are less capable to extract ï¬ne- grained local feature patterns. Convolution neural networks (CNNs), on the other hand, exploit local information and are used as the de-facto computational block in vision. They learn shared position-based kernels over a local window which main- tain translation equivariance and are able to capture features like edges and shapes. One limitation of using local connectivity is that you need many more layers or parameters to capture global information. To combat this issue, contemporary work Con- textNet [10] adopts the squeeze-and-excitation module [13] in each residual block to capture longer context. However, it is still limited in capturing dynamic global context as it only applies a global averaging over the entire sequence.
Recent works have shown that combining convolution and
Figure 1: Conformer encoder model architecture. Conformer comprises of two macaron-like feed-forward layers with half- step residual connections sandwiching the multi-headed self- attention and convolution modules. This is followed by a post layernorm.
self-attention improves over using them individually [14]. To- gether, they are able to learn both position-wise local features, and use content-based global interactions. Concurrently, papers like [15, 16] have augmented self-attention with relative posi- tion based information that maintains equivariance. Wu et al. [17] proposed a multi-branch architecture with splitting the in- put into two branches: self-attention and convolution; and con- catenating their outputs. Their work targeted mobile applica- tions and showed improvements in machine translation tasks.
In this work, we study how to organically combine con- volutions with self-attention in ASR models. We hypothesize that both global and local interactions are important for being parameter efï¬cient. To achieve this, we propose a novel combi- nation of self-attention and convolution will achieve the best of both worlds â self-attention learns the global interaction whilst the convolutions efï¬ciently capture the relative-offset-based lo- cal correlations. Inspired by Wu et al. [17, 18], we introduce a novel combination of self-attention and convolution, sand- wiched between a pair feed forward modules, as illustrated in Fig 1.
Our proposed model, named Conformer, achieves state-of- the-art results on LibriSpeech, outperforming the previous best published Transformer Transducer [7] by 15% relative improve-
Pointwise tchNorm: eu Iaegintsy Activation 1 Conv Swish Activation L ) Dropout
Figure 2: Convolution module. The convolution module contains a pointwise convolution with an expansion factor of 2 projecting the number of channels with a GLU activation layer, followed by a 1-D Depthwise convolution. The 1-D depthwise conv is followed by a Batchnorm and then a swish activation layer.
ment on the testother dataset with an external language model. We present three models based on model parameter limit con- straints of 10M , 30M and 118M. Our 10M model shows an im- provement when compared to similar sized contemporary work [10] with 2.7%/6.3% on test/testother datasets. Our medium 30M parameters-sized model already outperforms transformer transducer published in [7] which uses 139M model parameters. With the big 118M parameter model, we are able to achieve 2.1%/4.3% without using language models and 1.9%/3.9% with an external language model.
# 2.2. Convolution Module
Inspired by [17], the convolution module starts with a gating mechanism [23]âa pointwise convolution and a gated linear unit (GLU). This is followed by a single 1-D depthwise convo- lution layer. Batchnorm is deployed just after the convolution to aid training deep models. Figure 2 illustrates the convolution block.
# 2.3. Feed Forward Module
We further carefully study the effects of the number of at- tention heads, convolution kernel sizes, activation functions, placement of feed-forward layers, and different strategies of adding convolution modules to a Transformer-based network, and shed light on how each contributes to the accuracy improve- ments.
The Transformer architecture as proposed in [6] deploys a feed forward module after the MHSA layer and is composed of two linear transformations and a nonlinear activation in between. A residual connection is added over the feed-forward layers, fol- lowed by layer normalization. This structure is also adopted by Transformer ASR models [7, 24].
# 2. Conformer Encoder
Our audio encoder ï¬rst processes the input with a convolution subsampling layer and then with a number of conformer blocks, as illustrated in Figure 1. The distinctive feature of our model is the use of Conformer blocks in the place of Transformer blocks as in [7, 19].
A conformer block is composed of four modules stacked together, i.e, a feed-forward module, a self-attention module, a convolution module, and a second feed-forward module in the end. Sections 2.1, 1, and 2.3 introduce the self-attention, convolution, and feed-forward modules, respectively. Finally, 2.4 describes how these sub blocks are combined.
# 2.1. Multi-Headed Self-Attention Module
We follow pre-norm residual units [21, 22] and apply layer normalization within the residual unit and on the input before the ï¬rst linear layer. We also apply Swish activation [25] and dropout, which helps regularizing the network. Figure 4 illus- trates the Feed Forward (FFN) module.
# 2.4. Conformer Block
Our proposed Conformer block contains two Feed Forward modules sandwiching the Multi-Headed Self-Attention module and the Convolution module, as shown in Figure 1.
This sandwich structure is inspired by Macaron-Net [18], which proposes replacing the original feed-forward layer in the Transformer block into two half-step feed-forward layers, one before the attention layer and one after. As in Macron-Net, we employ half-step residual weights in our feed-forward (FFN) modules. The second feed-forward module is followed by a ï¬nal layernorm layer. Mathematically, this means, for input xi to a Conformer block i, the output yi of the block is:
We employ multi-headed self-attention (MHSA) while integrat- ing an important technique from Transformer-XL [20], the rel- ative sinusoidal positional encoding scheme. The relative po- sitional encoding allows the self-attention module to general- ize better on different input length and the resulting encoder is more robust to the variance of the utterance length. We use pre- norm residual units [21, 22] with dropout which helps training and regularizing deeper models. Figure 3 below illustrates the multi-headed self-attention block.
t SFEN(c:) + MHSA (aii) i + Conv(2/) (1) yi = Layernorm(2// + SFFN(2!))
where FFN refers to the Feed forward module, MHSA refers to the Multi-Head Self-Attention module, and Conv refers to the Convolution module as described in the preceding sections.
âMulti-Head Attention with Layernorm Relative Positional Dropout +) Embedding -
Figure 3: Multi-Headed self-attention module. We use multi- headed self-attention with relative positional embedding in a pre-norm residual unit.
Our ablation study discussed in Sec 3.4.3 compares the Macaron-style half-step FFNs with the vanilla FFN as used in previous works. We ï¬nd that having two Macaron-net style feed-forward layers with half-step residual connections sand- wiching the attention and convolution modules in between pro- vides a signiï¬cant improvement over having a single feed- forward module in our Conformer architecture.
The combination of convolution and self-attention has been studied before and one can imagine many ways to achieve
Layernorm: S Linear Swish Linear x Layer [Activation |») Dropout toe oe Le +)
Figure 4: Feed forward module. The ï¬rst linear layer uses an expansion factor of 4 and the second linear layer projects it back to the model dimension. We use swish activation and a pre-norm residual units in feed forward module.
that. Different options of augmenting convolutions with self- attention are studied in Sec 3.4.2. We found that convolution module stacked after the self-attention module works best for speech recognition.
Table 1: Model hyper-parameters for Conformer S, M, and L models, found via sweeping different combinations and choos- ing the best performing models within the parameter limits.
# 3.1. Data
# 3. Experiments
We evaluate the proposed model on the LibriSpeech [26] dataset, which consists of 970 hours of labeled speech and an additional 800M word token text-only corpus for building language model. We extracted 80-channel ï¬lterbanks features computed from a 25ms window with a stride of 10ms. We use SpecAugment [27, 28] with mask parameter (F = 27), and ten time masks with maximum time-mask ratio (pS = 0.05), where the maximum-size of the time mask is set to pS times the length of the utterance.
# 3.2. Conformer Transducer
We identify three models, small, medium and large, with 10M, 30M, and 118M params, respectively, by sweeping different combinations of network depth, model dimensions, number of attention heads and choosing the best performing one within model parameter size constraints. We use a single-LSTM-layer decoder in all our models. Table 1 describes their architecture hyper-parameters.
For regularization, we apply dropout in each residual unit of the conformer, i.e, to the output of each module, before it is added to the module input. We use a rate of Parop = 0.1. Variational noise [5] [30) is introduced to the model as a regu- larization. A £2 regularization with le â 6 weight is also added to all the trainable weights in the network. We train the models with the Adam optimizer with 3; = 0.9, B2 = 0.98 and ⬠= 10° and a transformer learning rate schedule {6}, wit 10k warm-up steps and peak learning rate 0.05/Vd where d is the model dimension in conformer encoder.
We use a 3-layer LSTM language model (LM) with width 4096 trained on the LibriSpeech langauge model corpus with the LibriSpeech960h transcripts added, tokenized with the 1k WPM built from LibriSpeech 960h. The LM has word-level perplexity 63.9 on the dev-set transcripts. The LM weight λ for shallow fusion is tuned on the dev-set via grid search. All models are implemented with Lingvo toolkit [32].
Model Num Params (M) Encoder Layers Encoder Dim Attention Heads Conv Kernel Size Decoder Layers Decoder Dim Conformer (S) 10.3 16 144 4 32 1 320 Conformer (M) 30.7 16 256 4 32 1 640 Conformer (L) 118.8 17 512 8 32 1 640
Table 2: Comparison of Conformer with recent published mod- els. Our model shows improvements consistently over various model parameter size constraints. At 10.3M parameters, our model is 0.7% better on testother when compared to contempo- rary work, ContextNet(S) [10]. At 30.7M model parameters our model already signiï¬cantly outperforms the previous published state of the art results of Transformer Transducer [7] with 139M parameters.
Method #Params (M) WER Without LM WER With LM testclean testother testclean testother Hybrid Transformer [33] - - - 2.26 CTC QuartzNet [9] 19 3.90 11.28 2.69 LAS Transformer [34] Transformer [19] LSTM Transducer 270 - 360 2.89 2.2 2.6 6.98 5.6 6.0 2.33 2.6 2.2 Transformer [7] ContextNet(S) [10] ContextNet(M) [10] ContextNet(L) [10] 139 10.8 31.4 112.7 2.4 2.9 2.4 2.1 5.6 7.0 5.4 4.6 2.0 2.3 2.0 1.9 Conformer (Ours) Conformer(S) Conformer(M) Conformer(L) 10.3 30.7 118.8 2.7 2.3 2.1 6.3 5.0 4.3 2.1 2.0 1.9 4.85 7.25 5.17 5.7 5.2 4.6 5.5 4.5 4.1 5.0 4.3 3.9
error rate among all the existing models. This clearly demon- strates the effectiveness of combining Transformer and convo- lution in a single neural network.
# 3.3. Results on LibriSpeech
# 3.4. Ablation Studies
Table 2 compares the (WER) result of our model on Lib- riSpeech test-clean/test-other with a few state-of-the-art mod- els include: ContextNet [10], Transformer transducer [7], and QuartzNet [9]. All our evaluation results round up to 1 digit after decimal point.
the performance of our medium model already achieve competitive results of 2.3/5.0 on test/testother outperforming the best known Transformer, LSTM based model, or a similar sized convolution model. With the language model added, our model achieves the lowest word
3.4.1. Conformer Block vs. Transformer Block
A Conformer block differs from a Transformer block in a number of ways, in particular, the inclusion of a convolution block and having a pair of FFNs surrounding the block in the Macaron-style. Below we study these effects of these differ- ences by mutating a Conformer block towards a Transformer block, while keeping the total number of parameters unchanged. Table 3 shows the impact of each change to the Conformer block. Among all differences, convolution sub-block is the most
important feature, while having a Macaron-style FFN pair is also more effective than a single FFN of the same number of parameters. Using swish activations led to faster convergence in the Conformer models.
Table 3: Disentangling Conformer. Starting from a Conformer block, we remove its features and move towards a vanilla Trans- former block: (1) replacing SWISH with ReLU; (2) remov- ing the convolution sub-block; (3) replacing the Macaron-style FFN pairs with a single FFN; (4) replacing self-attention with relative positional embedding [20] with a vanilla self-attention layer [6]. All ablation study results are evaluated without the external LM.
Model Architecture Conformer Model â SWISH + ReLU â Convolution Block â Macaron FFN â Relative Pos. Emb. dev clean 1.9 1.9 2.1 2.1 2.3 dev other 4.4 4.4 4.8 5.1 5.8 test clean 2.1 2.0 2.1 2.1 2.4 test other 4.3 4.5 4.9 5.0 5.6
3.4.2. Combinations of Convolution and Transformer Modules
We study the effects of various different ways of combining the multi-headed self-attention (MHSA) module with the convolu- tion module. First, we try replacing the depthwise convolution in the convolution module with a lightweight convolution [35], see a signiï¬cant drop in the performance especially on the dev- other dataset. Second, we study placing the convolution mod- ule before the MHSA module in our Conformer model and ï¬nd that it degrades the results by 0.1 on dev-other. Another pos- sible way of the architecture is to split the input into parallel branches of multi-headed self attention module and a convolu- tion module with their output concatenated as suggested in [17]. We found that this worsens the performance when compared to our proposed architecture.
These results in Table 4 suggest the advantage of placing the convolution module after the self-attention module in the Conformer block.
Table 4: Ablation study of Conformer Attention Convolution Blocks. Varying the combination of the convolution block with the multi-headed self attention: (1) Conformer architecture; (2) Using Lightweight convolutions instead of depthwise convolu- tion in the convolution block in Conformer; (3) Convolution be- fore multi-headed self attention; (4) Convolution and MHSA in parallel with their output concatenated [17].
Model Architecture Conformer â Depthwise conv + Lightweight convolution Convolution block before MHSA Parallel MHSA and Convolution dev clean 1.9 2.0 1.9 2.0 dev other 4.4 4.8 4.5 4.9
# 3.4.3. Macaron Feed Forward Modules
Instead of a single feed-forward module (FFN) post the atten- tion blocks as in the Transformer models, the Conformer block has a pair of macaron-like Feed forward modules sandwiching the self-attention and convolution modules. Further, the Con- former feed forward modules are used with half-step residuals. Table 5 shows the impact of changing the Conformer block to use a single FFN or full-step residuals.
Table 5: Ablation study of Macaron-net Feed Forward mod- ules. Ablating the differences between the Conformer feed for- ward module with that of a single FFN used in Transformer models: (1) Conformer; (2) Conformer with full-step residuals in Feed forward modules; (3) replacing the Macaron-style FFN pair with a single FFN.
Model Architecture Conformer Single FFN Full step residuals dev clean 1.9 1.9 1.9 dev other 4.4 4.5 4.5 test clean 2.1 2.1 2.1 test other 4.3 4.5 4.5
# 3.4.4. Number of Attention Heads
In self-attention, each attention head learns to focus on different parts of the input, making it possible to improve predictions beyond the simple weighted average. We perform experiments to study the effect of varying the number of attention heads from 4 to 32 in our large model, using the same number of heads in all layers. We ï¬nd that increasing attention heads up to 16 improves the accuracy, especially over the devother datasets, as shown in Table 6.
Table 6: Ablation study on the attention heads in multi-headed self attention.
Attention Heads 4 8 16 32 Dim per Head 128 64 32 16 dev clean 1.9 1.9 2.0 1.9 dev other 4.6 4.4 4.3 4.4 test clean 2.0 2.1 2.2 2.1 test other 4.5 4.3 4.4 4.5
3.4.5. Convolution Kernel Sizes
To study the effect of kernel sizes in the depthwise convolu- tion, we sweep the kernel size in {3, 7, 17, 32, 65} of the large model, using the same kernel size for all layers. We ï¬nd that the performance improves with larger kernel sizes till kernel sizes 17 and 32 but worsens in the case of kernel size 65, as show in Table 7. On comparing the second decimal in dev WER, we ï¬nd kernel size 32 to perform better than rest.
Table 7: Ablation study on depthwise convolution kernel sizes.
Kernel size 3 7 17 32 65 dev clean 1.88 1.88 1.87 1.83 1.89 dev other 4.41 4.30 4.31 4.30 4.47 test clean 1.99 2.02 2.04 2.03 1.98 test other 4.39 4.44 4.38 4.29 4.46
# 4. Conclusion
In this work, we introduced Conformer, an architecture that integrates components from CNNs and Transformers for end- to-end speech recognition. We studied the importance of each component, and demonstrated that the inclusion of convolution modules is critical to the performance of the Conformer model. The model exhibits better accuracy with fewer parameters than previous work on the LibriSpeech dataset, and achieves a new state-of-the-art performance at 1.9%/3.9% for test/testother.
5. References [1] C.-C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina et al., âState- of-the-art speech recognition with sequence-to-sequence models,â in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[2] K. Rao, H. Sak, and R. Prabhavalkar, âExploring architectures, data and units for streaming end-to-end speech recognition with rnn-transducer,â in 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).
[3] Y. He, T. N. Sainath, R. Prabhavalkar, I. McGraw, R. Alvarez, D. Zhao, D. Rybach, A. Kannan, Y. Wu, R. Pang, Q. Liang, D. Bhatia, Y. Shangguan, B. Li, G. Pundak, K. C. Sim, T. Bagby, S.-Y. Chang, K. Rao, and A. Gruenstein, âStreaming End-to-end Speech Recognition For Mobile Devices,â in Proc. ICASSP, 2019.
[4] T. N. Sainath, Y. He, B. Li, A. Narayanan, R. Pang, A. Bruguier, S.-y. Chang, W. Li, R. Alvarez, Z. Chen, and et al., âA streaming on-device end-to-end model surpassing server-side conventional model quality and latency,â in ICASSP, 2020.
[5] A. Graves, âSequence transduction with recurrent neural net- works,â arXiv preprint arXiv:1211.3711, 2012.
[6] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, âAttention is all you need,â 2017.
[7] Q. Zhang, H. Lu, H. Sak, A. Tripathi, E. McDermott, S. Koo, and S. Kumar, âTransformer transducer: A streamable speech recog- nition model with transformer encoders and rnn-t loss,â in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 7829â7833.
[8] J. Li, V. Lavrukhin, B. Ginsburg, R. Leary, O. Kuchaiev, J. M. Co- hen, H. Nguyen, and R. T. Gadde, âJasper: An end-to-end convo- lutional neural acoustic model,â arXiv preprint arXiv:1904.03288, 2019.
[9] S. Kriman, S. Beliaev, B. Ginsburg, J. Huang, O. Kuchaiev, V. Lavrukhin, R. Leary, J. Li, and Y. Zhang, âQuartznet: Deep automatic speech recognition with 1d time-channel separable con- volutions,â arXiv preprint arXiv:1910.10261, 2019.
[10] W. Han, Z. Zhang, Y. Zhang, J. Yu, C.-C. Chiu, J. Qin, A. Gulati, R. Pang, and Y. Wu, âContextnet: Improving convolutional neural networks for automatic speech recognition with global context,â arXiv preprint arXiv:2005.03191, 2020.
[11] T. N. Sainath, A.-r. Mohamed, B. Kingsbury, and B. Ramabhad- ran, âDeep convolutional neural networks for lvcsr,â in 2013 IEEE international conference on acoustics, speech and signal process- ing.
[12] O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, âConvolutional neural networks for speech recogni- tion,â IEEE/ACM Transactions on audio, speech, and language processing, vol. 22, no. 10, pp. 1533â1545, 2014.
[13] J. Hu, L. Shen, and G. Sun, âSqueeze-and-excitation networks,â in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp. 7132â7141.
[14] I. Bello, B. Zoph, A. Vaswani, J. Shlens, and Q. V. Le, âAttention augmented convolutional networks,â in Proceedings of the IEEE International Conference on Computer Vision, 2019, pp. 3286â 3295.
[15] B. Yang, L. Wang, D. Wong, L. S. Chao, and Z. Tu, âConvolu- tional self-attention networks,â arXiv preprint arXiv:1904.03107, 2019.
[16] A. W. Yu, D. Dohan, M.-T. Luong, R. Zhao, K. Chen, M. Norouzi, and Q. V. Le, âQanet: Combining local convolution with global self-attention for reading comprehension,â arXiv preprint arXiv:1804.09541, 2018.
[17] Z. Wu, Z. Liu, J. Lin, Y. Lin, and S. Han, âLite transformer with long-short range attention,â arXiv preprint arXiv:2004.11886, 2020.
[18] Y. Lu, Z. Li, D. He, Z. Sun, B. Dong, T. Qin, L. Wang, and T.-Y. Liu, âUnderstanding and improving transformer from a multi-particle dynamic system point of view,â arXiv preprint arXiv:1906.02762, 2019.
[19] S. Karita, N. Chen, T. Hayashi, T. Hori, H. Inaguma, Z. Jiang, M. Someki, N. E. Y. Soplin, R. Yamamoto, X. Wang et al., âA comparative study on transformer vs rnn in speech applications,â arXiv preprint arXiv:1909.06317, 2019.
[20] Z. Dai, Z. Yang, Y. Yang, J. Carbonell, Q. V. Le, and R. Salakhut- dinov, âTransformer-xl: Attentive language models beyond a ï¬xed-length context,â 2019.
[21] Q. Wang, B. Li, T. Xiao, J. Zhu, C. Li, D. F. Wong, and L. S. Chao, âLearning deep transformer models for machine translation,â in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Lin- guistics, Jul. 2019, pp. 1810â1822.
tears: Improving the normalization of self-attention,â arXiv preprint arXiv:1910.05895, 2019.
[23] Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier, âLanguage modeling with gated convolutional networks,â in Proceedings of the 34th International Conference on Machine Learning-Volume 70.
[24] L. Dong, S. Xu, and B. Xu, âSpeech-transformer: a no-recurrence sequence-to-sequence model for speech recognition,â in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[25] P. Ramachandran, B. Zoph, and Q. V. Le, âSearching for activa- tion functions,â arXiv preprint arXiv:1710.05941, 2017.
[26] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, âLib- rispeech: an asr corpus based on public domain audio books,â in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
[27] D. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le, âSpecaugment: A simple data augmen- tation method for automatic speech recognition,â arXiv preprint arXiv:1904.08779, 2019.
[28] D. S. Park, Y. Zhang, C.-C. Chiu, Y. Chen, B. Li, W. Chan, Q. V. Le, and Y. Wu, âSpecaugment on large scale datasets,â arXiv preprint arXiv:1912.05533, 2019.
I. Sutskever, and R. Salakhutdinov, âDropout: A simple way to prevent neural net- works from overï¬tting,â Journal of Machine Learning Research, vol. 15, no. 56, pp. 1929â1958, 2014.
[30] K.-C. Jim, C. L. Giles, and B. G. Horne, âAn analysis of noise in recurrent neural networks: convergence and generalization,â IEEE Transactions on neural networks, vol. 7, no. 6, pp. 1424â 1438, 1996.
[31] D. P. Kingma and J. Ba, âAdam: A method for stochastic opti- mization,â arXiv preprint arXiv:1412.6980, 2014.
[32] J. Shen, P. Nguyen, Y. Wu, Z. Chen, and et al., âLingvo: a modu- lar and scalable framework for sequence-to-sequence modeling,â 2019.
[33] Y. Wang, A. Mohamed, D. Le, C. Liu, A. Xiao, J. Mahadeokar, H. Huang, A. Tjandra, X. Zhang, F. Zhang et al., âTransformer- based acoustic modeling for hybrid speech recognition,â arXiv preprint arXiv:1910.09799, 2019.
[34] G. Synnaeve, Q. Xu, J. Kahn, T. Likhomanenko, E. Grave, V. Pratap, A. Sriram, V. Liptchinsky, and R. Collobert, âEnd-to- end asr: from supervised to semi-supervised learning with modern architectures,â 2019.
[35] F. Wu, A. Fan, A. Baevski, Y. N. Dauphin, and M. Auli, âPay less attention with lightweight and dynamic convolutions,â arXiv preprint arXiv:1901.10430, 2019. | {
"id": "2004.11886"
} |
2005.07648 | Language Conditioned Imitation Learning over Unstructured Data | Natural language is perhaps the most flexible and intuitive way for humans to
communicate tasks to a robot. Prior work in imitation learning typically
requires each task be specified with a task id or goal image -- something that
is often impractical in open-world environments. On the other hand, previous
approaches in instruction following allow agent behavior to be guided by
language, but typically assume structure in the observations, actuators, or
language that limit their applicability to complex settings like robotics. In
this work, we present a method for incorporating free-form natural language
conditioning into imitation learning. Our approach learns perception from
pixels, natural language understanding, and multitask continuous control
end-to-end as a single neural network. Unlike prior work in imitation learning,
our method is able to incorporate unlabeled and unstructured demonstration data
(i.e. no task or language labels). We show this dramatically improves language
conditioned performance, while reducing the cost of language annotation to less
than 1% of total data. At test time, a single language conditioned visuomotor
policy trained with our method can perform a wide variety of robotic
manipulation skills in a 3D environment, specified only with natural language
descriptions of each task (e.g. "open the drawer...now pick up the block...now
press the green button..."). To scale up the number of instructions an agent
can follow, we propose combining text conditioned policies with large
pretrained neural language models. We find this allows a policy to be robust to
many out-of-distribution synonym instructions, without requiring new
demonstrations. See videos of a human typing live text commands to our agent at
language-play.github.io | http://arxiv.org/pdf/2005.07648 | Corey Lynch, Pierre Sermanet | cs.RO, cs.AI, cs.CL, cs.CV | Published at RSS 2021 | null | cs.RO | 20200515 | 20210707 | 1 2 0 2
l u J 7 ] O R . s c [
2 v 8 4 6 7 0 . 5 0 0 2 : v i X r a
# Language Conditioned Imitation Learning over Unstructured Data
Corey Lynch * Robotics at Google [email protected] Pierre Sermanet * Robi at Google [email protected]
AbstractâNatural language is perhaps the most ï¬exible and intuitive way for humans to communicate tasks to a robot. Prior work in imitation learning typically requires each task be speciï¬ed with a task id or goal imageâsomething that is often impractical in open-world environments. On the other hand, previous approaches in instruction following allow agent behavior to be guided by language, but typically assume structure in the observations, actuators, or language that limit their applicability to complex settings like robotics. In this work, we present a method for incorporating free-form natural language condition- ing into imitation learning. Our approach learns perception from pixels, natural language understanding, and multitask continuous control end-to-end as a single neural network. Unlike prior work in imitation learning, our method is able to incorporate unlabeled and unstructured demonstration data (i.e. no task or language labels). We show this dramatically improves language conditioned performance, while reducing the cost of language annotation to less than 1% of total data. At test time, a single language conditioned visuomotor policy trained with our method can perform a wide variety of robotic manipulation skills in a 3D environment, speciï¬ed only with natural language descriptions of each task (e.g. âopen the drawer...now pick up the block...now press the green button...â) (see video). To scale up the number instructions an agent can follow, we propose combining of text conditioned policies with large pretrained neural language models. We ï¬nd this allows a policy to be robust to many out-of-distribution synonym instructions, without requiring new demonstrations. See videos of a human typing live text commands to our agent at https://language-play.github.io
# I. INTRODUCTION
Imitation learning [4, 3] is a popular framework for acquiring complex robotic skills from raw onboard sensors. Traditionally, imitation learning has been applied to learning individual skills from structured and isolated human demonstrations [39, 1, 53]. These collection requirements are difï¬cult to scale to real world scenarios, where robots are expected to be generalistsâ capable of autonomously performing a wide variety of skills. As we consider deploying imitation learning in this setting, a critical challenge is scale: how can we lessen the data requirements of learning each new skill? Is it possible instead to learn many skills simultaneously from large amounts of unstructured, unlabeled demonstration data [16, 26, 14]?
Recent works have focused on scaling up multitask imita- tion learning over unstructured data [8, 26]. However, these approaches typically assume that tasks are speciï¬ed to the agent at test time via mechanisms like one-hot task selectors
[37, 43], goal images [29, 11, 26], or target conï¬gurations of the state space [14]. While these types of conditioning are straightforward to provide in simulation, they are often impractical to provide in open-world settings. Thus, another important consideration when deploying imitation learning in everyday settings is scalable task speciï¬cation: how can untrained users instruct robot behavior? This motivates robots that can follow free-form natural language instructions.
Training agents to follow instructions is an active area of research in the broader machine learning community [25], yet it remains a difï¬cult open challenge. Prior approaches have trained agents that map observations and language inputs directly to actions using neural networks, but often make assumptions that limit their applicability to robotics. Typical studies involve 2D observation spaces (e.g. games [24, 32, 27, 12] and gridworlds [52]), simpliï¬ed actuators, (e.g. binary pick and place primitives [18, 48]), or synthetic predeï¬ned language [17, 6, 20, 54].
In this work, we study the setting of human language conditioned robotic manipulation (Fig. 1, step 3). In this setting, a single agent must execute a series of visual manipulation tasks in a row, each expressed in free-form natural language, e.g. âopen the door all the way to the right...now grab the block...now push the red button...now open the drawerâ. Furthermore, agents in this scenario are expected to be able to perform any combination of subtasks in any order. This is the ï¬rst version of instruction following, to our knowledge, that combines: natural language conditioning, high-dimensional pixel inputs, 8-DOF continuous control, and complex tasks like long-horizon robotic object manipulation. Text conditioning is also a new and difï¬cult setting for goal-directed imitation learning [26, 8], which introduces important new research questions. For example, how can we learn the mapping between language and actions with the fewest language labels? How can we leverage large unstructured demonstration datasets that have no language labels? How can we follow the maximum number of instructions at test time, given a ï¬nite training set? To address this setting, we propose a simple approach for combining imitation learning with free-form text conditioning (Fig. 1). Our method consists of only standard supervised learning subroutines, and learns perception, language under- standing, and control end-to-end as a single neural network. Critically, unlike prior work in instruction following, our method can leverage large amounts of unstructured and
# â Equal contribution
1) Pair play with crowdsourced language = 2) Train onim eu all 1-2s windows hindsight goal image max likelihood loss goal conditioned policy age and language goals 3) Follow human language next action action goal conditioned policy current image natural language state latent goal instructions âopen the doorâ cane âopen the door hindsight goal image âopen the doorâ . Now pick up instruction the block fam .. NOW push the red buttonâ (state-actions, (state-actions, (state-actions, language) (state-actions, language) goal image) goal image) <1% <1% test time >99% >99%
Fig. 1: Scaling up free-form natural language instruction following with unstructured data. 1) First, relabel unstructured teleoperated âplayâ data (no task labels) into hindsight goal image examples. Next, pair a small amount of play with hindsight instructions. 2) Train a single policy to follow either image or language goals. 3) Use only language conditioning at test time.
unlabeled demonstration data (i.e. with no language or task labels). We show this reduces the burden of language annotation to less than 1% of total data. To scale up the maximum amount of instructions our agent can follow at test time, we introduce a simple technique for combining any language conditioned policy with large pretrained language models [7, 36]. We ï¬nd that this simple modiï¬cation allows our agent to follow thousands of new synonym instructions at test time, without requiring that new robot demonstrations be collected for each synonym. We believe that the capabilities proposed here of learning from unlabeled demonstrations, end-to-end learning of text conditioned visuomotor policies, and following new synonym instructions without new robot data constitute important steps towards scalable robot learning systems.
We evaluate our method in a dynamically accurate simulated 3D tabletop environment with a ï¬xed set of objects. Our experiments show that a language conditioned visuomotor policy trained with our method can perform many complex robotic manipulation skills in a row speciï¬ed entirely with language (see video), outperforming conventional natural imitation baselines trained on structured data.
Contributions To summarize the contributions of this work, we:
⢠introduced a setting of human language conditioned robotic visual manipulation.
introduced a simple learning method for combining free- form text conditioning with multitask imitation learning. ⢠introduced multicontext imitation learning, applicable to any contextual imitation learning setup, allowing us to train a language conditioned policy over mostly unstructured and unlabeled demonstrations (i.e. with no task or language labels).
the resulting language conditioned visuomotor policy can follow many free-form human text instructions over a long horizon in a simulated 3D tabletop
setting, i.e. âopen the door. . . now pick up the block. . . now press the red buttonâ (see video).
⢠introduced a simple way to combine any language con- ditioned policy with large pretrained language models. We show this improves manipulation performance and allows an agent to be robust to thousands of new synonym instructions at test time, in 16 languages, without requiring new demonstrations for each synonym.
# II. RELATED WORK
Robotic learning from general sensors. In general, learn- ing complex robotic skills from low-level sensors is possible, but requires substantial human supervision. Two common approaches are imitation learning (IL) [3] and reinforcement learning (RL) [23]. When combined with deep function ap- proximators, IL typically requires many human demonstrations [38, 37, 53] to drive supervised learning of a policy. In RL, supervision takes the form of hand-designed task rewards. Reward design is non-trivial in complex environments, often requiring either task-speciï¬c instrumentation [13] or learned perceptual rewards [41, 42]. Additionally, RL agents often require hand-designed strategies [22, 9] or human demonstra- tions [38] to overcome hard exploration problems. Finally, even under multitask formulations of RL [46] and IL [37], each new task considered requires a corresponding and sizable human effort. This makes it difï¬cult to scale either approach naively to a broad task setting. In this work, we focus on scaling imitation learning to be multitask and language conditioned. While there are several ways to perform imitation learning such as inverse reinforcement learning [30, 50, 10] and occupancy matching [19], we restrict our attention in this work to behavior cloning [35] given its stability and ease of use.
Imitation learning from large unstructured datasets. Recent works [16, 26, 14] have sought to mitigate the costs of conventional multitask imitation learning by instead learning many skills at once over large unstructured demonstration
single [| molar | se, 2) \ 2, policy St Zz A latent goal eo . , Natural image ime id language encoder encoder encoder (T, S¢) (t, id) (x, 1) (state-actions, natural language) (state-actions, goal image) multiple imitation datasets, each with a different task description
Fig. 2: Multicontext Imitation Learning (MCIL). We introduce a simple generalization of contextual imitation learning to multiple heterogeneous contexts (e.g. goal image, task id, natural language). MCIL trains a single latent goal conditioned policy over all datasets simultaneously, as well as one encoder per dataset, each mapping to the shared latent goal space. This allows training a language conditioned policy over both labeled and unlabeled demonstration datasets.
datasets. Like these works, we incorporate unstructured demon- stration data into our method. Unlike these works, the resulting goal conditioned policies can be conditioned with natural language.
Task agnostic control. This paper builds on the setting of task agnostic control, where a single agent is trained to reach any reachable goal state in its environment upon command [21, 40]. One way of acquiring this kind of control is to ï¬rst learn a model of the environment through interaction [31, 9] then use it for planning. However, these approaches rely on accurate forward models of visual dynamics, a challenging open problem. A powerful model-free strategy for task agnostic control is goal relabeling [21, 2]. This technique trains goal conditioned policies to reach any previously visited state upon demand, with many recent examples in RL [29, 14, 28] and IL [26, 8]. A limitation to models combining relabeling with image observation spaces is that tasks must be speciï¬ed with goal images at test time. The present work builds on relabeled imitation, but additionally equips policies with natural language conditioning.
Multicontext learning. A number of previous methods have focused on generalizing across tasks [5], or generalizing across goals [40]. We introduce multicontext learning, a framework for generalizing across heterogeneous task and goal descriptions. When one of the training sources is plentiful and the other scarce, multicontext learning can be seen as transfer learning [45] through a shared goal space. Multicontext imitation is a central component of our method, as it reduces the cost of human language supervision to the point where it can be practically applied.
Instruction following. There is a long history of research into agents that not only learn a grounded language under-
âââ action goal-conditioned policy goal âA a) â âBewegen MERIT, «slide the Sie die Tar panel all the way rightâ - gene noch âAre la (training | puertaâ OO âouvre la porteâ
Fig. 3: Pretrained language models make language conditioned policies robust to out-of-distribution synonyms. Simply by training on top of pretrained language embeddings, we can give a language conditioned policy the ability to follow out- of-distribution synonym instructions at test time. The pretrained embedding space is responsible for relating new synonym instructions (green) to ones from the agentâs training set (black).
standing [49], but demonstrate that understanding by following instructions (survey [25]). Recently, authors have had success using deep learning to directly map raw input and text instructions to actions. However, prior work has often studied 2D environments [24, 32, 27, 52] and simpliï¬ed actuators [48, 18, 12]. Additionally, learning to follow natural language is still not the standard in instruction following research [25], with typical implementations instead assuming access to simulator- provided instructions drawn from a restricted vocabulary and grammar [17]. This work, in contrast, studies 1) natural language instructions, 2) high-dimensional continuous sensory inputs and actuators, and 3) complex tasks like long-horizon 3D robotic object manipulation. Furthermore, unlike existing RL approaches to instruction following [18], our IL method is sample efï¬cient, requires no reward deï¬nition, and scales easily to the multitask setting. Unlike concurrent IL approaches to robotic instruction following [44], which assume access to labeled task demonstrations and pretrained object detection, our method learns from unstructured and unlabeled demonstration data and learns perception, language understanding, and control fully end-to-end via a single imitation loss.
# III. PROBLEM FORMULATION
We consider the problem of learning a natural language conditioned control policy Ïθ(a|s, l), which outputs next action a â A, conditioned on current state s â S and free-form language l â L describing a short-horizon task. natural Note that S is not the true environment state, but rather the high dimensional onboard observations of the robot, e.g. S = {image, proprioception}. l is provided by humans and has no limits on vocabulary or grammar.
At test time, a human gives a robot a series of instructions
in a row, one at a time: {l0, ..., lN }. The language conditioned visuomotor policy Ïθ(a|s, l) issues high frequency continuous control in closed loop to obtain the desired behavior described in l. The human may provide new instructions l to the agent at any time, either commanding a new subtask or providing guidance (i.e. âmove your hand back slightlyâ).
Standard imitation learning setups learn single task policies Ïθ(a|s) using supervised learning over a dataset D = {Ïi}N i , of expert state-action trajectories Ï = {(s0, a0), ...}. Closest to our problem setting is goal conditioned imitation learning [21], which instead aims to learn a policy Ïθ(a|s, g), conditioned on s â S and a task descriptor g â G (often a one-hot task encoding [37]). When tasks can be described as a goal state g = sg â S to reach, this allows any state visited during collection to be relabeled [21, 2, 8] as a âreached goal stateâ, with the preceding states and actions treated as optimal behavior for reaching that goal. At test time, learned behaviors are conditioned using a goal image sg.
Relabeled imitation learning can be applied to unstructured and unlabeled demonstration data [26] to learn general purpose goal-reaching policies. Here, demonstrators are not constrained to a predeï¬ned set of tasks, but rather engage in every available object manipulation in a scene (see example). This yields one long unsegmented demonstration of semantically meaningful behaviors, which can be relabeled using Algorithm 2 into a training set Dplay = {(Ï, sg)i}Dplay i=0 . These short horizon goal image conditioned demonstrations are fed to a maximum likelihood goal conditioned imitation objective: LGCIL = E(Ï, sg)â¼Dplay
However, when learning language conditioned policies Ïθ(a|s, l), we cannot easily relabel any visited state s to a natural language goal as the goal space G is no longer equivalent to the observation space L. Consequently, this limits language conditioned imitation learning from being able to incorporate large unstructured demonstration datasets Dplay.
Next, we describe our approach that relaxes this limitation.
# IV. LEARNING TO FOLLOW HUMAN INSTRUCTIONS FROM UNSTRUCTURED DATA
We present a framework that aims for a scalable combination of self-supervised relabeled imitation learning and labeled instruction following. Our approach (Fig. 1, Sec. IV-C) can be summarized as follows:
1) Collection: Collect a large unstructured âplayâ demon- stration dataset.
2) Relabeling: Relabel unstructured data into goal image demonstrations. Pair a small number of random windows with language after-the-fact. (Sec. IV-A)
3) Multicontext imitation: Train a single imitation policy to solve for either goal image or language goals. (Sec. IV-B)
4) Use only language conditioning at test time.
A. Pairing unstructured demonstrations with natural language Learning to follow language instructions involves addressing a difï¬cult language grounding problem [15]: how do agents relate unstructured language l to their onboard perceptions s
and actions a? We take a statistical machine learning approach, creating a paired corpora of (demonstration, language) by pairing random windows from unstructured data Dplay with hindsight instructions after-the-fact (Algorithm 3, part 1 of Fig. 1). Here, we show untrained human annotators on- board videos, then ask them âwhat instruction would you give the agent to get from ï¬rst frame to last frame?â See training examples in Table V, video examples here, and a full description of the collection process in Appendix M. Annotators were encouraged to use free-form natural language and not constrain themselves to predeï¬ned vocabulary or grammar. This yields a new dataset D(play,lang) = {(Ï, l)i} , a dataset of text conditioned demonstrations. Crucially, we do not need pair every window from play with language to learn to follow instructions. This is made possible with Multicontext Imitation Learning, described next.
Algorithm 1 Multicontext imitation learning
i=0, One dataset per context type (e.g. goal image, language instruction, task id), each holding pairs of (demonstration, context). θ , ..., f K
1:
2: Input: F = {f 0 θ }, One encoder per context type, θ (ck).
mapping context to shared latent goal space, i.e. z = f k 3: Input: Ïθ(at|st, z), Single latent goal conditioned policy. 4: Input: Randomly initialize parameters θ = {θÏ, θf 0 , ..., θf K }
5: while True do LMCIL ââ 0 6: # Loop over datasets. 7: for k = 0...K do 8: 9:
# Sample a (demonstration, context) batch from this dataset.
(r*, ck) ~ D* # Encode context in shared latent goal space. z= fk(c*) # Accumulate imitation loss. Luci. += vie log 79 (at|se, 2)
10: (r*, ck) ~ D*
10: 11: 12: 13: 14: 15: 16:
# θ (ck)
12: z= fk(c*)
13: # Accumulate imitation loss.
# end for # Average gradients over context types. LMCIL â=
15: end for
1 |D|
17:
# Train policy and all encoders end-to-end. Update θ by taking a gradient step w.r.t. LMCIL
18: 19: 20: end while
B. Multicontext Imitation Learning
So far, we have described a way to create two contextual imitation datasets: Dplay, holding goal image demonstrations and D(play,lang), holding language demonstrations. Ideally, we could train a single imitation policy that could be conditioned with either task description. Critically, this would enable language conditioned imitation to make use of self-supervised imitation over unstructured data.
With this motivation, we introduce multicontext imitation learning (MCIL), a simple and universally applicable gen- eralization of contextual imitation to multiple heterogeneous contexts. The main idea is to represent a large set of policies by a single, uniï¬ed function approximator that generalizes over states, tasks, and task descriptions. Concretely, MCIL
assumes access to multiple contextual imitation datasets D = {D0, ..., DK}, each with a different way of describing tasks. Each Dk = {(Ï k i=0 holds demonstrations Ï paired with some context c â C k. For example, D0 might contain one-hot task demonstrations (a conventional multitask imitation dataset), D1 might contain goal image demonstrations (obtained by hindsight relabeling), and D2 might contain language goal demonstrations. MCIL trains a single latent goal conditioned policy Ïθ(at|st, z) over all datasets simultaneously, as well as one parameterized encoder per dataset F = {f 0 θ }, learning to map raw task descriptions to a common latent goal space z â Rd. See Fig. 2. For instance, these could be a one-hot task embedding lookup, an image encoder, and a language encoder respectively.
MCIL has a simple training procedure, shown in Algorithm 1: at each training step, sample a batch of contextual imitation examples from each dataset, encode contexts in the shared latent space using the respective encoders, then compute a latent goal-conditioned imitation loss, averaged over all datasets. The policy and goal encoders are trained end-to-end to maximize this objective.
MCIL allows for a highly efï¬cient training scheme, broadly useful beyond this paper: learn the majority of control from the data source that is cheapest to collect, while simultaneously learning scalable task conditioning from a small number of labeled examples. As we will see empirically, MCIL allows us to train an instruction following agent with less than 1% of data requiring language annotation, with the majority of perception and control learned instead from relabeled goal image imitation.
C. LangLfP: following image or language goals.
We now have all the components to introduce LangLfP (language conditioned learning from play). LangLfP is a special case of MCIL (Sec. IV-B) applied to our problem setting, and learns perception from pixels, natural language understanding, and control end-to-end with no auxiliary losses.
Training LangLfP. LangLfP trains a single multicontext policy Ïθ(at|st, z) over datasets D = {Dplay, D(play,lang)}. We deï¬ne F = {genc, senc}, neural network encoders map- ping from goal images (Appendix D) and text instructions (Appendix E) respectively to z. Fig. 4 compares LangLfP training to prior LfP training. At each training step, LangLfP: 1) samples a batch of image goal tasks from Dplay and a batch of language goal tasks from D(play,lang), 2) encodes image and language goals into z, and 3) computes the MCIL objective. We then take a combined gradient step with respect to all modulesâperception (Appendix C), language (Appendix E), and control (Appendix F)âoptimizing the whole architecture end-to-end as a single neural network. See full training details in Appendix G.
language instructions at test time. time, LangLfP conditions only on onboard pixel At observations and a free-form natural language task description to solve user-deï¬ned tasks in closed loop. See picture in part 3 of Fig. 1 and details in Appendix H.
Leveraging pretrained language models While LangLfP offers a straightforward way for learning natural language end- to-end from imitation, this may not always be desirable. In open-world scenarios, instruction following robots are likely to be given instructions that are synonyms to ones they have been trained to follow, but do not overlap exactly with a ï¬nite training set. For example, âshut the doorâ is a valid, but potentially out-of-distribution way of describing training task âclose the doorâ. Many recent works have successfully transferred knowledge from unlabeled text to downstream NLP tasks via pretrained embeddings [7, 36]. Can we achieve similar knowledge transfer to robotic manipulation? There are two motivations for this kind of transfer: 1) improve language conditioned manipulation and 2) allow an agent to follow out of distribution synonym instructions (Fig. 3). To test these hypotheses, we augment LangLfP, encoding language inputs l to the policy at training and test time in the pretrained embedding space of a multilingual neural language model [51]. We refer to this augmented model as TransferLangLfP. See Appendix E for details.
# V. EXPERIMENTS
Our experiments aim to answer the following questions: Q0) Can a policy trained with our method learn perception, natural language understanding, and control end-to-end to solve many free-form text conditioned tasks in a 3D tabletop environment? Q1) Can our language conditioned multitask imitation learning (LangLfP) match the performance of goal image conditioned imitation learning (LfP)? Q2) Does our methodâs ability to leverage large unlabeled imitation datasets improve language conditioned performance? Q3) Does incorporating pretrained language models grant any beneï¬t to manipulation perfor- mance? Q4) Does incorporating pretrained language models allow our agent to be robust to out of distribution synonym instructions?
Dataset and tasks. We conduct our experiments in the simulated 3D Playroom environment introduced in [26], shown in Fig. 10. It consists of an 8-DOF robotic arm controlled with continuous position control from visual observations at 30Hz to accomplish manipulation tasks from 18 distinct task families. See [26] for a complete discussion of tasks. We modify the environment to include a text channel which the agent observes at each timestep, allowing humans to type unconstrained language commands (details in Appendix H). We deï¬ne two sets of experiments for each baseline: pixel experiments, where models receive pixel observations and must learn perception end-to-end, and state experiments, where models instead receive simulator state consisting of positions and orientations for all objects in the scene. The latter provides an upper bound on how all well the various methods can learn language conditioned control, independent of a difï¬cult perception problem (which might be improved upon independently with self-supervised representation learning methods (e.g. [41, 33, 34])).
Methods. We compare the following methods (details on each in Appendix O): LangBC (âlanguage, but no playâ):
max likelihood loss LfP action goal-conditioned policy state goal i wage "hindsight goal image all 1-2s windows 9 unstructured teleoperated play LangLfP aes hindsight >99% goal image all 1-2s windows 2 : max likelihood loss. action multicontext imitation goal-conditioned policy state latent goal ee ~ âopen the doorâ ay instruction AnD hindsight instruction pairing <1% unstructured teleoperated play
Fig. 4: Comparing prior LfP (left) to our LangLfP (right). Both are trained on unstructured and unsegmented demonstrations, relabeled into goal image demonstrations. LangLfP is additionally trained on random windows paired with hindsight natural language instructions.
a baseline natural language conditioned multitask imitation policy [37], trained on D(demo,lang), 100 expert demonstrations for each of the 18 evaluation tasks paired with hindsight instructions. LfP (âplay, but no languageâ): a baseline LfP model trained on Dplay, conditioned on goal images at test time. LangLfP (ours) (âplay and languageâ): Multicontext imitation trained on unstructured data Dplay and unstructured data paired with language D(play,lang). Tasks are speciï¬ed at test time using only natural language. Restricted LangLfP: LangLfP trained on ârestricted Dplayâ, a play dataset restricted to be the same size as D(demo,lang). This restriction is somewhat artiï¬cial, as more unsegmented play can be collected for the same budget of time. This provides a controlled comparison to LangBC. TransferLangLfP (ours): LangLfP trained on top of pretrained language embeddings. To perform a controlled comparison, we use the same network architecture (details in Sec. B) across all baselines. See Appendix K for a detailed description of all data sources.
A. Human Language Conditioned Visual Manipulation
We construct a large number of multi-stage human language conditioned manipulation tasks by treating the original 18 evaluation tasks in [26] as subtasks, then considering all valid N-stage transitions between them. This results in 2-stage, 3- stage, and 4-stage benchmarks, referred to here as Chain-2, Chain-3, and Chain-4. See Appendix O for benchmark details. We obtain language instructions for each subtask in the same way as training (Sec. IV-A), by presenting human annotators with videos of the completed tasks and asking for hindsight instructions. We present results in Table I and Fig. 5 and discuss below.
Language conditioned manipulation results. In Table I, we see that LangLfP can achieve 68.6% success on individual free-form instructions (covering 18 different tasks), 52.1% success on 925 4-stage instructions. This helps answer the ï¬rst main question of this work (Q0), whether our approach can
learn perception, control, and language understanding end-to- end to follow many free-form human instructions. We also see that LangLfP matches the performance of prior goal image conditioned LfP on all benchmarks within margin of error (Q1). This is important, as it shows a more scalable mode of task conditioning can be achieved with only â¼0.1% of demonstration data requiring language annotation.
Large unlabeled imitation datasets improve language conditioned performance. We see in Table I and Fig. 5 that LangLfP outperforms LangBC on every benchmark. This is important because it shows that by allowing our policy to train over unlabeled demonstration data (via MCIL), it achieves signiï¬cantly better performance than baseline policies trained only on typical predeï¬ned task demonstrations (Q2). We note this holds even when unlabeled training sources are restricted to the same size as labeled ones (Restricted LangLfP vs. LangBC). Qualitatively (videos), we see clear differences between models that incorporate unstructured data and those that do not. We ï¬nd that on long-horizon evaluations, MCIL- trained policies tend to transition well between tasks and recover from initial failures, whereas baseline policies trained on conventional demonstrations tend to quickly encounter compounding imitation error.
High capacity models make the most use of unlabeled data. In Fig. 6 and Appendix Q, we additionally ï¬nd the phenomenon that performance scales linearly with model size for policies that leverage unstructured data, whereas performance peaks and then declines for models trained on conventional predeï¬ned demonstrations. We see this holds even when datasets are restricted to the same size. This suggests that the simple recipe of collecting large unstructured imitation datasets, pairing them with a small amount of language data, then training large capacity imitation learning models may be a valid way to scale up language conditioned control.
Language unlocks human assistance. Natural language conditioning allows for new modes of interactive test time
Method LangBC Restricted LangLfP LfP LangLfP (ours) TransferLangLfP (ours) LangBC Restricted LangLfP LangLfP (ours) TransferLangLfP (ours) Input pixels pixels pixels pixels pixels states states states states Training source predeï¬ned demos unstructured demos unstructured demos unstructured demos unstructured demos predeï¬ned demos unstructured demos unstructured demos unstructured demos Task condi- tioning text text image text text text text text text Multi-18 Success (18 tasks) 20.0% ±3.0 47.1% ±2.0 66.4% ±2.2 68.6% ±1.7 74.1% ±1.5 38.5% ±6.3 88.0% ±1.4 88.5% ±2.9 90.5% ±0.8 Chain-4 Success (925 long-horizon tasks) 7.1% ±1.5 25.0% ±2.0 53.0% ±5.0 52.1% ±2.0 61.8% ±1.1 13.9% ±1.4 64.2% ±1.5 63.2% ±0.9 71.8% ±1.6
TABLE I: Human language conditioned visual manipulation experiments
= 70 ââ 8 â 2 60 Se E50 , 8 g 40 8 Fa 30 H 20 8 Te 10 1 2 3 4 (18 tasks) (925 tasks) Number of instructions in a row @ = TransferLangLfP (ours) L#P (goal image) @ LangBc @ _LangLfP (ours) Restricted LangLfP
90 x in 80 8 79 3g ® 60 7 x multitask â : 8 50 demonstrations % t 40 Ss = 30 20 10M 20M 40M 120M Model parameters @ Restricted LangLfP @ LangBC
Fig. 5: Long horizon language conditioned visual manipulation results.
Fig. 6: Performance scales linearly with model capacity, but only for unstructured demonstrations. We see that for the same amount of data, more diverse unstructured imitation data is better utilized by large model learning.
agent gets stuck human assists with language agent solves task next: next: âpress the red buttonâ âmove backâ âmove the doorall âpress the red buttonâ the way rightâ
Fig. 7: Getting out of trouble with human language assistance: Unlike other forms of task conditioning, natural language conditioning allows a human operator to offer quick interactive assistance when an agent gets stuck.
behavior, allowing humans to give guidance to agents that would be impractical to give via goal image or one-hot task conditioned control. See Fig. 7, this video and Sec. R for a concrete example. Sec. O additionally shows how humans can quickly compose tasks with language that are outside the 18-task benchmark (but covered by the training set), like âput the block in the trash binâ.
B. Instruction Following with Large Pretrained Language Models
Positive transfer to robotic manipulation. In Table I and Fig. 8, we see that TransferLangLfP systematically outperforms LangLfP and LfP. This is the ï¬rst evidence, to our knowledge, that sentence embeddings obtained from large pretrained language models can signiï¬cantly improve the convergence of language-guided robotic control policies (Q3).
Robustness to out-of-distribution synonym instructions. In Table II, we study how robust our language conditioned
90 Ke) £ L 2 80 E fo) o) Â¥70 Ww sg 2 3 60 x 50 1 2 3 4 (18 tasks) (925 tasks) Number of instructions in a row ââ States TransferLangLfp ââ States LangLfP == Pixels TransferLangLfp == Pixels LangLfP
Fig. 8: Knowledge transfer from generic text corpora beneï¬ts robotic manipulation. We see models that take pretrained embeddings as language input (purple) converge to higher performance than those that must learn language from scratch (blue).
Method Random Policy LangLfP TransferLangLfP OOD-syn (â¼15k instructions) 0.0% ± 0.0 37.6% ± 2.3 60.2% ± 3.2 OOD-16-lang (â¼240k instructions) 0.0% ± 0.0 27.94% ± 3.5 56.0% ± 1.4
TABLE II: Out of distribution synonym robustness.
policies are to synonyms not found in the training set. We see that only agents equipped with large pretrained language models (TransferLangLfP) are robust to these out of distribution synonyms (OOD-syn), giving afï¬rmative support to Q4. Addi- tionally, just by choosing a multilingual pretrained model [51], we see this kind of robustness extends to 16 different languages, which have no vocabulary overlap with training (OOD-16-lang). See videos (link) of TransferLangLfP following multilingual synonym instructions. We stress that these experiments do not test the ability of a policy to generalize to new kinds of manipulation tasks than the ones seen in training. Rather, this shows that a simple training modiï¬cation increases the number of ways a language-guided agent can be conditioned to execute the same ï¬xed set of behaviors from its training distribution, effectively expanding the training instruction set. Find more details on these experiments in Appendix S.
# C. Limitations and Future Work
Although the coverage of unstructured play demonstration data mitigates failure modes in conventional imitation setups, we observe several limitations in our policies at test time. In this video, we see the policy make multiple attempts to solve the task, but times out before it is able to do so. We see in this video that the agent encounters a particular kind of compounding error, where the arm ï¬ips into an awkward conï¬guration, likely avoided by humans during teleoperated play. This is potentially mitigated by a more stable choice of rotation representation, or more varied play collection. We note that the human is free
to help the agent out of these awkward conï¬gurations using language assistance, as demonstrated in Sec. R. More examples of failures can be seen here. While LangLfP relaxes important constraints around task speciï¬cation, it is fundamentally a goal-directed imitation method and lacks a mechanism for autonomous policy improvement. An exciting area for future work may be one that combines the coverage of teleoperated play, the scalability and ï¬exibility of multicontext imitation pretraining, and the autonomous improvement of reinforcement learning, similar to prior successful combinations of LfP and RL [14]. Additionally, the scope of this work is task agnostic control in a single simulated environment with a ï¬xed set of objects. We note this is consistent with the standard imitation assumptions that training and test tasks are drawn i.i.d. from the same distribution. An interesting question for future work is whether training on a large play corpora covering many rooms and objects allows for generalization to unseen rooms or objects.
# VI. CONCLUSION
We proposed a scalable framework for combining multitask imitation with free-form text conditioning. Our method can learn language conditioned visuomotor policies, capable of following multiple human instructions over a long horizon in a dynamically accurate 3D tabletop setting. Key to our method is the ability to learn over unstructured and unlabeled imitation data, a property we made possible by introducing Multicontext Imitation Learning. Critically, the ability to learn from unstructured data reduced the cost of language annotation to less than 1% of total data, while also resulting in much higher language conditioned task success. Finally, we showed a simple, but effective way to combine any language conditioned policy with large pretrained language models. We found that this small modiï¬cation allowed our policy to be robust to many out-of-distribution synonym instructions, without requiring the collection of additional demonstration data.
Acknowledgments
We thank Karol Hausman, Eric Jang, Mohi Khansari, Kanishka Rao, Jonathan Thompson, Luke Metz, Anelia An- gelova, Sergey Levine, and Vincent Vanhoucke for providing helpful feedback on this manuscript. We additionally thank the annotators for providing paired language instructions.
REFERENCES [1] Baris Akgun, Maya Cakmak, Karl Jiang, and Andrea L Thomaz. International Journal
Keyframe-based learning from demonstration. of Social Robotics, 4(4):343â355, 2012.
[2] Marcin Andrychowicz, Filip Wolski, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Josh Tobin, OpenAI Pieter Abbeel, and Wojciech Zaremba. Hindsight experience replay. In Advances in neural information processing systems, pages 5048â5058, 2017.
[3] Brenna D Argall, Sonia Chernova, Manuela Veloso, and Brett Browning. A survey of robot learning from demonstration. Robotics and autonomous systems, 57(5):469â483, 2009.
[4] Aude Billard, Sylvain Calinon, Ruediger Dillmann, and Stefan Schaal. Technical report, Survey: Robot programming by demonstration. Springrer, 2008.
[5] Rich Caruana. Multitask learning. Machine learning, 28(1):41â75, 1997.
[6] Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. Babyai: A platform to study the sample efï¬ciency of grounded language learning. In International Conference on Learning Representations, 2018. [7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. transformers for language
Bert: Pre-training of deep bidirectional understanding. arXiv preprint arXiv:1810.04805, 2018.
[8] Yiming Ding, Carlos Florensa, Pieter Abbeel, and Mariano Phielipp. Goal-conditioned imitation learning. In Advances in Neural Information Processing Systems, pages 15298â15309, 2019.
[9] Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine. Visual foresight: Model-based deep reinforcement learn- ing for vision-based robotic control. arXiv preprint arXiv:1812.00568, 2018.
[10] Chelsea Finn, Sergey Levine, and Pieter Abbeel. Guided cost learning: Deep inverse optimal control via policy optimization. In International conference on machine learning, pages 49â58. PMLR, 2016.
[11] Carlos Florensa, Jonas Degrave, Nicolas Heess, Jost Tobias Springenberg, and Martin Riedmiller. Self-supervised learning of image embedding for continuous control. arXiv preprint arXiv:1901.00943, 2019.
[12] Prasoon Goyal, Scott Niekum, and Raymond J. Mooney. Using natural language for reward shaping in reinforcement learning, 2019.
[13] Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off- policy updates. In 2017 IEEE international conference on robotics and automation (ICRA), pages 3389â3396. IEEE, 2017.
[14] Abhishek Gupta, Vikash Kumar, Corey Lynch, Sergey Levine, and Karol Hausman. Relay policy learning: Solving long horizon tasks via imitation and reinforcement learning. Conference on Robot Learning (CoRL), 2019. [15] Stevan Harnad. The symbol grounding problem. Physica D: Nonlinear
Phenomena, 42(1-3):335â346, 1990.
[16] Karol Hausman, Yevgen Chebotar, Stefan Schaal, Gaurav Sukhatme, and Joseph Lim. Multi-modal imitation learning from unstructured demonstra- tions using generative adversarial nets. arXiv preprint arXiv:1705.10479, 2017.
[17] Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, et al. Grounded language learning in a simulated 3d world. arXiv preprint arXiv:1706.06551, 2017.
[18] Felix Hill, Andrew Lampinen, Rosalia Schneider, Stephen Clark, Matthew Botvinick, James L McClelland, and Adam Santoro. Emergent systematic generalization in a situated agent. arXiv preprint arXiv:1910.00571, 2019. [19] Jonathan Ho and Stefano Ermon. Generative adversarial imitation
learning. arXiv preprint arXiv:1606.03476, 2016.
[20] Yiding Jiang, Shixiang Gu, Kevin Murphy, and Chelsea Finn. Language as an abstraction for hierarchical deep reinforcement learning, 2019.
[21] Leslie Pack Kaelbling. Learning to achieve goals. In IJCAI, pages 1094â1099. Citeseer, 1993.
[22] Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, et al. Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293, 2018.
[23] Jens Kober, J Andrew Bagnell, and Jan Peters. Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238â1274, 2013.
[24] Thomas Kollar, Stefanie Tellex, Deb Roy, and Nicholas Roy. Toward In 2010 5th ACM/IEEE understanding natural language directions. International Conference on Human-Robot Interaction (HRI), pages 259â266. IEEE, 2010.
[25] Jelena Luketina, Nantas Nardelli, Gregory Farquhar, Jakob Foerster, Jacob Andreas, Edward Grefenstette, Shimon Whiteson, and Tim Rockt¨aschel. A survey of reinforcement learning informed by natural language. arXiv preprint arXiv:1906.03926, 2019.
[26] Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, and Pierre Sermanet. Learning latent plans from play. Conference on Robot Learning (CoRL), 2019. URL https://arxiv.org/abs/1903.01973.
[27] Dipendra Misra, John Langford, and Yoav Artzi. Mapping instructions and visual observations to actions with reinforcement learning. arXiv preprint arXiv:1704.08795, 2017.
[28] Ashvin Nair, Shikhar Bahl, Alexander Khazatsky, Vitchyr Pong, Glen Berseth, and Sergey Levine. Contextual imagined goals for self- supervised robotic learning. In Conference on Robot Learning, pages
530â539. PMLR, 2020.
[29] Ashvin V Nair, Vitchyr Pong, Murtaza Dalal, Shikhar Bahl, Steven Lin, and Sergey Levine. Visual reinforcement learning with imagined goals. In Advances in Neural Information Processing Systems, pages 9191â9200, 2018.
[30] Andrew Y Ng, Stuart J Russell, et al. Algorithms for inverse reinforce- ment learning. In Icml, volume 1, page 2, 2000.
[31] Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard L Lewis, and Satinder Singh. Action-conditional video prediction using deep networks in atari games. In Advances in neural information processing systems, pages 2863â2871, 2015.
[32] Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. Zero- shot task generalization with multi-task deep reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning- Volume 70, pages 2661â2670. JMLR. org, 2017.
[33] Sherjil Ozair, Corey Lynch, Yoshua Bengio, Aaron Van den Oord, Sergey Levine, and Pierre Sermanet. Wasserstein dependency measure for representation learning. In Advances in Neural Information Processing Systems, pages 15578â15588, 2019.
[34] S¨oren Pirk, Mohi Khansari, Yunfei Bai, Corey Lynch, and Pierre Sermanet. Online object representations with contrastive learning. arXiv preprint arXiv:1906.04312, 2019.
[35] Dean A Pomerleau. Efï¬cient training of artiï¬cial neural networks for autonomous navigation. Neural computation, 3(1):88â97, 1991. [36] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.
[37] Rouhollah Rahmatizadeh, Pooya Abolghasemi, Ladislau B¨ol¨oni, and Sergey Levine. Vision-based multi-task manipulation for inexpensive robots using end-to-end learning from demonstration. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages 3758â3765. IEEE, 2018.
[38] Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, and Sergey Levine. Learning complex dexterous manipulation with deep reinforcement learning and demonstrations. arXiv preprint arXiv:1709.10087, 2017.
[39] Stefan Schaal. Is imitation learning the route to humanoid robots? Trends in cognitive sciences, 3(6):233â242, 1999.
[40] Tom Schaul, Daniel Horgan, Karol Gregor, and David Silver. Universal value function approximators. In International conference on machine learning, pages 1312â1320, 2015.
[41] Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, and Sergey Levine. Time-contrastive networks: Self- supervised learning from video. Proceedings of International Conference in Robotics and Automation (ICRA), 2018. URL http://arxiv.org/abs/ 1704.06888.
[42] Avi Singh, Larry Yang, Kristian Hartikainen, Chelsea Finn, and Sergey Levine. End-to-end robotic reinforcement learning without reward engineering. arXiv preprint arXiv:1904.07854, 2019.
[43] Avi Singh, Eric Jang, Alexander Irpan, Daniel Kappler, Murtaza Dalal, Sergey Levine, Mohi Khansari, and Chelsea Finn. Scalable multi- task imitation learning with autonomous improvement. arXiv preprint arXiv:2003.02636, 2020.
[44] Simon Stepputtis, Joseph Campbell, Mariano Phielipp, Stefan Lee, Chitta Baral, and Heni Ben Amor. Language-conditioned imitation learning for robot manipulation tasks. arXiv preprint arXiv:2010.12083, 2020. [45] Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, and Chunfang Liu. A survey on deep transfer learning. In International conference on artiï¬cial neural networks, pages 270â279. Springer, 2018. [46] Yee Teh, Victor Bapst, Wojciech M Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: In Advances in Neural Robust multitask reinforcement Information Processing Systems, pages 4496â4506, 2017.
[47] Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In IROS, pages 5026â5033. IEEE, 2012. ISBN 978-1-4673-1737-5. URL http://dblp.uni-trier.de/db/conf/iros/iros2012. html#TodorovET12.
[48] Xin Wang, Qiuyuan Huang, Asli Celikyilmaz, Jianfeng Gao, Dinghan Shen, Yuan-Fang Wang, William Yang Wang, and Lei Zhang. Reinforced cross-modal matching and self-supervised imitation learning for vision- language navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6629â6638, 2019.
[49] Terry Winograd. Understanding natural language. Cognitive psychology, 3(1):1â191, 1972.
[50] Markus Wulfmeier, Peter Ondruska, and Ingmar Posner. Maxi- arXiv preprint mum entropy deep inverse reinforcement learning. arXiv:1507.04888, 2015.
[51] Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hernandez Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, et al. Multilingual universal sentence encoder for semantic retrieval. arXiv preprint arXiv:1907.04307, 2019. [52] Haonan Yu, Haichao Zhang, and Wei Xu.
Interactive grounded language acquisition and generalization in a 2d world. arXiv preprint arXiv:1802.01433, 2018.
[53] Tianhao Zhang, Zoe McCarthy, Owen Jow, Dennis Lee, Xi Chen, Ken Goldberg, and Pieter Abbeel. Deep imitation learning for complex In 2018 IEEE manipulation tasks from virtual reality teleoperation. International Conference on Robotics and Automation (ICRA), pages 1â8. IEEE, 2018.
[54] Victor Zhong, Tim Rockt¨aschel, and Edward Grefenstette. Rtfm: Generalising to novel environment dynamics via reading. arXiv preprint arXiv:1910.08210, 2019.
# APPENDIX
A. Relabeling Play
Algorithm 2 Creating many goal image conditioned imitation examples from teleoperated play. 1: Input: S = {(s0:t, a0:t)n}â
n , the unsegmented stream of observations and actions recorded during play.
2: Input: Dplay ââ {}. 3: Input: wlow, whigh, bounds on hindsight window size. 4: while True do 5: 6:
# Get next play episode from stream. (s0:t, a0:t) â¼ S for w = wlow...whigh do for i = 0...(t â w) do
7: 8: 9: 10: 11:
# Select each w-sized window. Ï = (si:i+w, ai:i+w) # Treat last observation in window as goal. sg = sw Add (Ï, sg) to Dplay
10: T = (Siitws Qiitw)
12: Sg = Sw
12: 13: 14: end for 15: 16: end while
13: Add (1, 8g) to Dplay
# for
B. Pairing Play with Language
# Algorithm 3 Pairing robot sensor data with natural language instructions.
1: Input: Dplay, a relabeled play dataset holding (Ï, sg) pairs.
2: Input: D(play,lang) ââ {}. 3: Input: get hindsight instruction(): human overseer, pro- viding after-the-fact natural language instructions for a given Ï .
4: Input: K, number of pairs to generate, K << |Dplay|. 5: for 0...K do 6: 7: 8: 9: 10: 11: end for
# Sample random trajectory from play. (Ï, ) â¼ Dplay # Ask human for instruction making Ï optimal. l = get hindsight instruction(Ï ) Add (Ï, l) to D(play,lang)
Below we describe the networks we use for perception, natural language understanding, and controlâall trained end- to-end as a single neural network to maximize our multicontext imitation objective. We stress that this is only one implementa- tion of possibly many for LangLfP, which is a general high-level framework 1) combining relabeled play, 2) play paired with language, and 3) multicontext imitation.
C. Perception Module
We map raw observations (image and proprioception) to low dimensional perceptual embeddings that are fed to the rest of the network. To do this we use the perception module described
Layer Input RGB Image Conv2D Conv2D Conv2D Spatial softmax Flatten Fully connected Fully connected Details 200 x 200 x 3 32 ï¬lters, 8 x 8, stride 4 64 ï¬lters, 4 x 4, stride 2 64 ï¬lters, 3 x 3, stride 1 512 hidden units, relu 64 output units, linear
TABLE III: Hyperparameters for vision network.
in Fig. 9. We pass the image through the network described in Table III, obtaining a 64-dimensional visual embedding. We then concatenate this with the proprioception observation (normalized to zero mean, unit variance using training statistics). This results in a combined perceptual embedding of size 72. This perception module is trained end to end to maximize the ï¬nal imitation loss. We do no photometric augmentation on the image inputs, but anticipate this may lead to better performance.
D. Image goal encoder
genc takes as input image goal observation Og and outputs latent goal z. To implement this, we reuse the perception module Pθ, described in Appendix C, which maps Og to sg. We then pass sg through a 2 layer 2048 unit ReLU MLP to obtain z.
E. Language understanding module
language goal encoder, senc, is described in Fig. 9. It maps raw text to a 32 dimensional embedding in goal space as follows: 1) apply subword tokenization, 2) retrieve learned 8-dim subword embeddings from a lookup table, 3) summarize the sequence of subword embeddings (we considered average pooling and RNN mechanisms for this, choosing average pooling based on validation), and 4) pass the summary through a 2 layer 2048 unit ReLU MLP. Out-of-distribution subwords are initialized as random embeddings at test time.
Transfer learning. For our experiments, we chose Multilin- gual Universal Sentence Encoder described in [51]. MUSE is a multitask language architecture trained on generic multilingual corpora (e.g. Wikipedia, mined question-answer pair datasets), and has a vocabulary of 200,000 subwords. MUSE maps sentences of any length to a 512-dim vector. We simply treat these embeddings as language observations and do not ï¬netune the weights of the model. This vector is fed to a 2 layer 2048 unit ReLU MLP, projecting to latent goal space.
We note there are many choices available for encoding raw text in a semantic pretrained vector space. MUSE showed results immediately, allowing us to focus on other parts of the problem setting. We look forward to experimenting with different choices of pretrained embedder in the future.
# F. Control Module
We could in principle use any goal-directed imitation network architecture to implement the policy Ïθ(at|st, z). For direct comparison to prior work, we use Latent Motor Plans (LMP)
Perception 72 perceptual j embedding ârl concat 512 mip ss (32, 8, 4) |_ conv (32, 8, 4) |_ conv (32, 8, 4) |_ conv J 200x200 8-DOF rgb image proprioception No, 7 Language goal encoder 32 jatent goal â 2048 embedding lookup MLP 2048 f avg pool t 8 â- embedding oO sequence subword tokenizer i l
Fig. 9: Perception and language embedder modules
architecture Lynch et al. [26] to implement this network. LMP uses latent variables to model the large amount of multimodal- ity inherent in freeform imitation datasets. Concretely it is a sequence-to-sequence conditional variational autoencoder (seq2seq CVAE), autoencoding contextual demonstrations through a latent âplanâ space. The decoder is a goal and latent plan conditioned policy. As a CVAE, LMP lower bounds maximum likelihood contextual imitation (LLfP), and is easily adapted to our multicontext setting.
goal embedding. senc is a subword embedding summarizer described in Sec. E. Unless speciï¬ed otherwise, the LMP implementation in this paper uses the same hyperparameters and network architecture as in [26]. See appendix there for details on the posterior, conditional prior, and decoder architecture, consisting of a RNN, MLP, and RNN respectively.
G. Training Details
Multicontext LMP. Here we describe âmulticontext LMPâ: LMP adapted to be able to take either image or language goals. This imitation architecture learns both an abstract visuo- lingual goal space zg, as well as a plan space zp capturing the many ways to reach a particular goal. We describe this implementation now in detail.
As a conditional seq2seq VAE, original LMP trains 3 networks. 1) A posterior q(zp|Ï ), mapping from full state- action demonstration Ï to a distribution over recognized plans. 2) A learned prior p(zp|s0, sg), mapping from initial state in the demonstration and goal to a distribution over possible plans for reaching the goal. 3) A goal and plan conditioned policy p(at|st, sg, zp), decoding the recognized plan with teacher forcing to reconstruct actions that reach the goal. See [26] for details.
To train multicontext LMP, we simply replace the goal conditioning on sg everywhere with conditioning on zg, the latent goal output by multicontext encoders F = {genc, senc}. In our experiments, genc is a 2 layer 2048 unit ReLU MLP, mapping from encoded goal image sg to a 32 dimensional latent
At each training step, we compute two contextual imitation losses: image goal and language goal. The image goal forward pass is described in Fig. 16. The language goal forward pass is described in Fig. 17. We share the perception network and LMP networks (posterior, prior, and policy) across both steps. We average minibatch gradients from image goal and language goal passes and we train everythingâperception networks, genc, senc, posterior, prior, and policyâend-to-end as a single neural network to maximize the combined training objective. We describe all training hyperparameters in Table IV.
# H. Test time details
At the beginning of a test episode the agent receives as input its onboard observation Ot and a human-speciï¬ed natural language goal l. The agent encodes l in latent goal space z using the trained sentence encoder senc. The agent then solves for the goal in closed loop, repeatedly feeding the current observation and goal to the learned policy Ïθ(a|st, z), sampling actions, and executing them in the environment. The human operator can type a new language goal l at any time.
Hyperparameter hardware conï¬guration action distribution optimizer learning rate hindsight window low hindsight window high LMP β batch size (per GPU) training time Value 8 NVIDIA V100 GPUs discretized logistic, 256 bins per action dimension Adam 2e-4 16 32 0.01 64 sequences * padded max length 32 = 2048 frames 3 days
TABLE IV: Training Hyperparameters
Fig. 10: The Playground environment.
experiments, observations consist of the 8D proprioceptive state, the position and euler angle orientation of a movable block, and a continuous 1-d sensor describing: door open amount, drawer open amount, red button pushed amount, blue button pushed amount, green button pushed amount. In both experiments, the agent additionally observes the raw string value of a natural language text channel at each timestep.
K. Action space
We use the same action space as [26]: 8-DOF cartesian position, euler, and gripper angle of the end effector. Similarly, during training we quantize each action element into 256 bins. All stochastic policy outputs are represented as discretized logistic distributions over quantization bins.
We use the same simulated 3D playground environment as in [26], keeping the same observation and action spaces. We deï¬ne these below for completeness.
I. 3D simulated environment
L. Play dataset
We use the same play logs collected in [26] as the basis for all relabeled goal image conditioned learning. This consists of â¼7h of play relabeled in to Dplay: â¼10M short-shorizon windows, each spanning 1-2 seconds.
The environment contains a desk with a sliding door and drawer that can be opened and closed. On the desk is a movable rectangular block and 3 buttons that can be pushed to control different colored lights. Next to the desk is a trash bin. Physics are simulated using the MuJoCo physics engine [47]. Videos here of play data collected in this environment give a good idea of the available complexity. In front of the desk is a realistic simulated 8-DOF robot (arm and parallel gripper). The agent perceives its surroundings from egocentric high- dimensional RGB video sensors. It additionally has access to continuous internal proprioceptive sensors, relaying the cartesian position and rotation of its end effector and gripper angles. We modify the environment to include a text channel which the agent observes at each timestep, allowing humans to type unconstrained language commands. The agent must perform high-frequency, closed-loop continuous control to solve for user-described manipulation tasks, sending continuous position, rotation, and gripper commands to its arm at 30hz.
# J. Observation space
We consider two types of experiments: pixel and state experiments. In the pixel experiments, observations consist of (image, proprioception) pairs of 200x200x3 RGB images and 8-DOF internal proprioceptive state. Proprioceptive state consists of 3D cartesian position and 3D euler orientation of the end effector, as well as 2 gripper angles. In the state
M. (Play, language) dataset
We pair 10K windows from Dplay with natural language hindsight instructions using the interface shown in Fig. 11 to obtain D(play,lang). See real examples below in Table V. Note the language collected has signiï¬cant diversity in length and content.
Task: type the instruction that answers: âHow do | go from start to finish?â <type instruction> finish frame Start frame
Fig. 11: A schematic rendering of our hindsight language collection tool.
Fig. 11 demonstrates how hindsight instructions are collected. Overseers are presented with a webpage that contains a looping video clip of 1 to 2 seconds of play, along with the ï¬rst and the last frames of that video, to help them identify the beginning and the end of the clip if necessary. Overseers are asked to type in a text box the sentence that best answers the question âHow do I go from start to ï¬nish?â. Several considerations can go into the design of this interface. First, we can ask overseers to type in multiple instructions that are as different from each other as possible, to increase generalization and diversity of the language dataset. After experimenting with one or multiple text boxes, we opted for using only one, while asking users to write as diverse sentences as possible throughout the collection process. A disadvantage of having multiple boxes is that it can sometimes be challenging to come up with diverse sentences when the observed action is simple. It also leads to less video- language pairs for the same budget. Thus we decided one sentence per video was most beneï¬cial in our case. Another collection consideration is the level of details in a sentence. For example, for a generalist robot application, it seems âopen the drawerâ is a more likely use case than âmove your hand 10 cm to the right above the drawer handle, then grasp the drawer handle, then pull on the drawer handleâ. Similarly, an instruction geared toward a function such as âopen the drawerâ is more likely useful than one detached from it function, e.g. âput your ï¬ngers around the drawer handle, pull your hand backâ. Finally, given the temporal horizon of our video clips, multiple things can be happening within one clip. How many events should be described? For example it might be important to describe multiple motions such as âclose the drawer then reach for the objectâ. With all these observations in mind, we instructed the overseers to only describe the main actions, while asking themselves: âwhich instructions would I give to another human being so that they could produce a similar video if they were in that scene?â.
N. (Demo, language) dataset
To deï¬ne D(demo,lang), we pair each of the 100 demon- strations of the 18 evaluation tasks from [26] with hindsight instructions using the same process as in IV-A. We similarly pair the 20 test time demonstrations of each of the 18 tasks. See examples for each of the tasks in Table VI. 74.3% of the natural language instructions in the test dataset appear at least once in Dplay. 85.08% of the instructions in D(play,lang) never never appear in the test set.
# O. Restricted play dataset
For a controlled comparison between LangLfP and LangBC, we train on a play dataset restricted to the same size as the aggregate multitask demonstration dataset (â¼1.5h). This was obtained by randomly subsampling the original play logs â¼7h to â¼1.5h before relabeling.
Below we describe our various baselines and their training sources. Note that for a controlled comparison, all methods are trained using the exact same architecture described in Sec. B, differing only on the source of data.
âgrasp the object from the drawer and drop it on the tableâ âpick the object and then lift it up.â âpull the drawer.â âdrag the object into the drawerâ âdrop the object,and again pickup the object highâ âclose the drawerâ âdo nothing.â âmove your hand to the rightâ âpush the door to the right.â âreach for the green buttonâ âslightly close your hand.â âgo grasp the door handleâ âclose the drawerâ âdrop the object and push it inside the shelfâ âgrasp the block and drop it on top of the tableâ âplace the block on top of the tableâ ârotate the block 90 degrees to the rightâ âmove the cabinet door to the right and then release your ï¬ngers.â âopen the drawerâ âslightly close the drawer, then drag it a little bit , close it and then let goâ âreach for the object.â âpress the red button and then release itâ âmove your hand towards the door handleâ âdrop the objectâ âslightly pull the drawer and then reach for its handle.â âdrop the object in the drawerâ ârelease the block and move your hand upwards.â âdrop the object on the table and reach for the door handleâ âpush the object into to the drawerâ âslightly move the door to the right, then drop the object and again turn the object to the leftâ âslowly move the door to the leftâ
TABLE V: Randomly sampled examples from the training set of hindsight instructions paired with play. As hindsight instruction pairing sits on top of play, the language we collect is similarly not constrained by predeï¬ned tasks, and instead covers both speciï¬c functional behaviors and general, non task-speciï¬c behaviors.
Training Data Method D(demo,lang) LangBC Dplay LfP (goal image) restricted Dplay, D(play,lang) LangLfP Restricted Dplay, D(play,lang) LangLfP (ours) TransferLangLfP (ours) Dplay, D(play,lang)
TABLE VII: Methods and their training data sources. All baselines are trained to maximize the same generalized contextual imitation objective MCIL.
Task construction. We construct long-horizon evaluations by considering transitions between the 18 subtasks deï¬ned in [26]. These span multiple diverse families, e.g. opening and closing doors and drawers, pressing buttons, picking and placing a movable block, etc. For example, one of the 925 Chain-4 tasks may be âopen sliding, push blue, open drawer, sweepâ. We exclude as invalid any transitions that would result in instructions that would have necessarily already been satisï¬ed, e.g. âopen drawer, press red, open drawerâ.
Note this multi-stage scenario subsumes the original Multi- 18 in Lynch et al. [26], which can be seen as just testing the ï¬rst stage. This allows for direct comparison to prior work.
To allow natural language conditioning we pair 20 test demonstrations of each subtask with human instructions using the same process as in Sec. IV-A (example sentences in Table VI).
Success is reported with conï¬dence intervals over 3 seeded
# Task
Natural language instructions âmove the door all the way to the rightâ âslide the door to the rightâ âmove the sliding door all the way to the right and let goâ âGrasp the door handle, then slide the door to the leftâ âmove the door all the way to the leftâ ââopen the cabinet drawerâ âopen the drawer and let goâ ââclose the drawer and let goâ ââclose the drawerâ âPick up the blockâ âgrasp the object and lift it upâ âgrasp the object and move your hand upâ âPick up the object from the drawer and drop it on the table.â âhold the block and place it on top of the tableâ âPick the object and lift it upâ âgrasp the object and liftâ âpush the block forwardâ âpush the object towards the doorâ âDrag the block from the shelf towards the drawerâ âpick up the object from the shelf and drop it on the tableâ âgrasp the object and place it inside the cabinet shelfâ âPick the object, move it into the shelf and then drop it.â âgo press the red buttonâ âPress the red buttonâ âpress the green buttonâ âpush the green buttonâ âpush the blue buttonâ âpress down on the blue button.â âPick up the object, rotate it 90 degrees to the left and drop it on the tableâ ârotate the object 90 degrees to the leftâ âturn the object to the rightâ ârotate the block90 degrees to the rightâ âroll the object into the drawerâ âdrag the block into the drawerâ âRoll the block to the leftâ âclose your ï¬ngers and roll the object to the leftâ âroll the object to the rightâ âPush the block to the right.â
open sliding door close sliding door open drawer close drawer grasp ï¬at grasp lift grasp upright knock pull out shelf put in shelf push red push green push blue rotate left rotate right sweep sweep left sweep right
TABLE VI: Example natural language instructions used to specify the 18 test-time visual manipulation tasks.
# training runs.
Evaluation walkthrough. The long horizon evaluation happens as follows. For each of the N-stage tasks in a given benchmark, we start by resetting the scene to a neutral state. This is discussed more in the next section. Next, we sample a natural language instruction for each task in the N-stage sequence from a test set. Next, for each subtask in a row, we condition the agent on the current subtask instruction and rollout the policy. If at any timestep the agent successfully completes the current subtask (according to the environment- deï¬ned reward), we transition to the next subtask in the
sequence (after a half second delay). This attempts to mimic the qualitative scenario where a human provides one instruction after another, queuing up the next instruction, then entering it only once the human has deemed the current subtask solved. If the agent does not complete a given subtask in 8 seconds, the entire episode ends in a timeout. We score each multi-stage rollout by the percent of subtasks completed, averaging over all multi-stage tasks in the benchmark to arrive at the ï¬nal N-stage number.
The importance of neutral starting conditions in mul- titask evaluation. When evaluating any context conditioned policy (goal-image, task id, natural language, etc.), a natural question that arises is: how much is the policy relying on the context to infer and solve the task? In [26], evaluation began by resetting the simulator to the ï¬rst state of a test demonstration, ensuring that the commanded task was valid. We ï¬nd that under this reset scheme, the initial pose of the agent can in some cases become correlated with the task, e.g. the arm starts nearby the drawer for a drawer opening task. This is problematic, as it potentially reveals task information to the policy through the initial state, rather than the context. In this work, we instead reset the arm to a ï¬xed neutral position at the beginning of every test episode. This âneutralâ initialization scheme is used to run all evaluations in this paper. We note this is a fairer, but signiï¬cantly more difï¬cult benchmark. Neutral initialization breaks correlation between initial state and task, forcing the agent to rely entirely on language to infer and solve the task. For completeness, we present evaluation curves for both LangLfP and LangBC under this and the prior possibly âcorrelatedâ initialization scheme in Fig. 12. We see that when LangBC is allowed to start from the exact ï¬rst state of a demonstration (a rigid real world assumption), it does well on the ï¬rst task, but fails to transition to the other tasks, with performance degrading quickly after the ï¬rst instruction (Chain-2, Chain-3, Chain-4). Play-trained LangLfP on the other hand, is far more robust to change in starting position, experiencing only a minor degradation in performance when switching from the correlated initialization to neutral.
886 â LangLiP, correlated == LangLiP (ours), neutral â Langac, correlated == LangBe, neutral % Tasks Completed NweauUo 8s 8 bea no, S 1 2 4 (18 tasks) (925 tasks) Number of instructions in a row
Fig. 12: Training on relabeled play leads to robustness. Models trained on relabeled play (LangLfP) are robust to a ï¬xed neutral starting position during multitask evaluation, models trained on conventional predeï¬ned demonstrations (LangBC) are not.
P. Composing new tasks with language
In this section, we show that LangLfP can achieve difï¬cult new tasks, not included in the standard 18-task benchmark, in
zero shot. In Fig. 13, the operator can compose a new task âput object in trashâ at test time, simply by breaking the task into two text instructions: 1) âpick up the objectâ and 2) âput the object in the trashâ. We see similarly that the operator can ask the agent to put the object on top of a shelf in Fig. 14. Note that neither âput on shelfâ nor âput in trashâ are included in the 18-task benchmark.
GET put the object in the trash] next:
Fig. 13: Putting the object in the trash: An operator is able to put the object in the trash by breaking down the task into 2 smaller subtasks with the following sentences; 1) âpick up the objectâ 2) âput the object in the trashâ
Gam put the next: ton top of the shelf]
Fig. 14: Putting the object on top of the shelf: An operator is able to put the object in the trash by breaking down the task into 2 smaller subtasks with the following sentences; 1) âpick up the objectâ 2) âput the object on top of the shelfâ.
Q. Play scales with model capacity
In Fig. 6, we consider task performance as a function of model size for models trained on play or conventional predeï¬ned demonstrations. For fair comparison, we compare LangBC to Restricted LangLfP, trained on the same amount of data. We study both models from state inputs to understand upper bound performance. As we see, performance steadily improves as model size is increased for models trained on play, but peaks and then declines for models trained on conventional demonstrations. Our interpretation is that larger capacity is effectively utilized to absorb the diversity in play, whereas additional capacity is wasted on equivalently sized datasets constrained to predeï¬ned behaviors. This suggests that the simple recipe of collecting large play datasets and scaling up model capacity to achieve higher performance is a valid one.
R. The Operator Can Help the Robot
In this section, we show an example of the human operator adapting its instructions if the robot gets stuck, and adding
extra sub-steps to get out of trouble and ï¬x the situation in order to achieve the initial instruction. In Fig. 7, the robot gets its end-effector stuck against the door on the way to pressing the red button. The operator then changing the command to move back, then opening the door more to the right so that it has better access to the red button, then pressing the red button.
S. Pretrained language models give robustness to synonym instructions
Robustness to out of distribution synonym instructions. To study whether our proposed transfer augmentation allows an agent to follow out-of-distribution synonyms, we replace one or more words in each Multi-18 evaluation instruction with a synonym outside training, e.g. âdrag the block from the shelfâ ââ âretrieve the brick from the cupboardâ. Note these substitutions only happen at test time, not training time. Enumerating all substitutions results in a set of 14,609 out-of- distribution instructions covering all 18 tasks. We evaluate a random policy, LangLfP, and TransferLangLfP on this benchmark, OOD-syn, reporting the results in Table II. Success is reported with conï¬dence intervals over 3 seeded training runs. We see while LangLfP is able to solve some novel tasks by inferring meaning from context (e.g. âpick up the blockâ and âpick up the brickâ might reasonably map to the same task for one freely moving object), its performance degrades signiï¬cantly. TransferLangLfP on the other hand, degrades less. This shows that the simple transfer learning technique we demonstrate greatly magniï¬es the test time scope of an instruction following agent. We stress that this technique only increases the number of ways a language-guided agent can be conditioned at test time to execute the same ï¬xed set of training behaviors.
Following out of distribution instructions in 16 different languages. Here we show that simply choosing a multilingual pretrained language model leads to further robustness across languages not seen during training (e.g. French, Mandarin, German, etc.). To study this, we combine the original set of evaluation instructions from Multi-18 with the expanded synonym instruction set OOD-syn, then translate both into 16 languages using the Google translate API. This results in â¼240k out-of-distribution instructions covering all 18 tasks. We evaluate the previous methods on this cross-lingual manipulation benchmark, OOD-16-lang, reporting success in Table II. We see that when LangLfP receives instructions with no training overlap, it resorts to producing maximum likelihood play actions. This results in some spurious task success, but performance degrades materially. TransferLangLfP, on the other hand, is far more robust to these perturbations, degrading only slightly from the english-only benchmark. See videos of TransferLangLfP following instructions in 16 novel languages. While in this paper we do not ï¬netune the pretrained embeddings with our imitation loss, an exciting direction for future work would be to see if grounding language embeddings in embodied imitation improves representation learning over text-only inputs.
We study how the performance of LangLfP scales with the number of collected language pairs. Fig. 15 compares models trained from pixel inputs to models trained from state inputs as we sweep the amount of collected language pairs by factors of 2. Success is reported with conï¬dence intervals over 3 seeded training runs. Interestingly, for models trained from states, doubling the size of the language dataset from 5k to 10k has marginal beneï¬t on performance. Models trained from pixels, on the other hand, have yet to converge and may beneï¬t from even larger datasets. This suggests that the role of larger language pair datasets is primarily to address the complicated perceptual grounding problem.
90 80 $70 5 60 a Â¥ 50 8 40 2 30 20 â States LangLfp â LangLfp 25k 5k 1k 2.5k 5k 10k Number of (play, language) pairs
Fig. 15: 18-task success vs amount of human language pairs collected.
Multicontext LMP: goal image encode observations encode plan decode plan 1 . 4 prior perceptual embeddings initial state fim latent j TE / p \/ p \/ p \/ P \ visual f t t goal 5 O; O; O; O; RNN RNN RNN 3 _s latent ââT | action decoder image state-actions latent 6 goal plan posterior
Fig. 16: Latent image goal LMP: This image goal conditioned forward pass of multicontext LMP. 1) Map sequence of raw observations Ot to perceptual embeddings (see Fig. 9). 2) Map image goal to latent goal space. 3) Map full state-action sequence to recognized plan through posterior. 4) Map initial state and latent goal through prior network to distribution over plans for achieving goal. 5) Minimize KL divergence between posterior and prior. 6) Compute maximum likelihood action reconstruction loss by decoding plan into actions with teacher forcing. Each action prediction is conditioned on current perceptual embedding, latent image goal, and latent plan. Note perception net, posterior, prior, and policy are shared with language goal forward pass.
Multicontext LMP: language encode observations encode plan decode plan 1 . 4 prior Liang perceptual embeddings initial state es ee fee\ie\ie\) | ows aerger fanguage 5 [em \] RNN \] RNN \ O OF OO OO; $ 3 7 action decoder language latent ââ 6 2 l goal language state-actions latent encoder goal plan posterior
Fig. 17: Latent language goal LMP: This describes the language goal conditioned forward pass of multicontext LMP. 1) Map sequence of raw observations Ot to perceptual embeddings (see Fig. 9). 2) Map language observation to latent goal space. 3) Map full state-action sequence to recognized plan through posterior. 4) Map initial state and latent goal through prior network to distribution over plans for achieving goal. 5) Minimize KL divergence between posterior and prior. 6) Compute maximum likelihood action reconstruction loss by decoding plan into actions with teacher forcing. Each action prediction is conditioned on current perceptual embedding, latent language goal, and latent plan. Note perception net, posterior, prior, and policy are shared with image goal LMP. | {
"id": "1802.01433"
} |
2005.07572 | Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics | Recent research on algorithmic fairness has highlighted that the problem
formulation phase of ML system development can be a key source of bias that has
significant downstream impacts on ML system fairness outcomes. However, very
little attention has been paid to methods for improving the fairness efficacy
of this critical phase of ML system development. Current practice neither
accounts for the dynamic complexity of high-stakes domains nor incorporates the
perspectives of vulnerable stakeholders. In this paper we introduce community
based system dynamics (CBSD) as an approach to enable the participation of
typically excluded stakeholders in the problem formulation phase of the ML
system development process and facilitate the deep problem understanding
required to mitigate bias during this crucial stage. | http://arxiv.org/pdf/2005.07572 | Donald Martin Jr., Vinodkumar Prabhakaran, Jill Kuhlberg, Andrew Smart, William S. Isaac | cs.CY, cs.LG, stat.ML | Eighth Annual Conference on Learning Representations (ICLR 2020),
Virtual Workshop: Machine Learning in Real Life, April 26, 2020, 6 pages, 1
figure, fix comment typo, fix author name | null | cs.CY | 20200515 | 20200522 | 0 2 0 2
y a M 2 2 ] Y C . s c [
3 v 2 7 5 7 0 . 5 0 0 2 : v i X r a
Published as a conference paper at ICLR 2020
PARTICIPATORY PROBLEM FORMULATION FOR FAIRER MACHINE LEARNING THROUGH COMMUNITY BASED SYSTEM DYNAMICS
Donald Martin, Jr. Google [email protected] Vinodkumar Prabhakaran Google [email protected]
Jill Kuhlberg System Stars [email protected]
Andrew Smart Google [email protected]
William S. Isaac DeepMind [email protected]
# ABSTRACT
Recent research on algorithmic fairness has highlighted that the problem formula- tion phase of ML system development can be a key source of bias that has signif- icant downstream impacts on ML system fairness outcomes. However, very little attention has been paid to methods for improving the fairness efï¬cacy of this crit- ical phase of ML system development. Current practice neither accounts for the dynamic complexity of high-stakes domains nor incorporates the perspectives of vulnerable stakeholders. In this paper we introduce community based system dy- namics (CBSD) as an approach to enable the participation of typically excluded stakeholders in the problem formulation phase of the ML system development process and facilitate the deep problem understanding required to mitigate bias during this crucial stage.
# INTRODUCTION
Problem formulation is a crucial ï¬rst step in any machine learning (ML) based interventions that have the potential of impacting the real lives of people; a step that involves determining the strategic goals driving the interventions and translating those strategic goals into tractable machine learning problems (Barocas et al., 2017; Passi & Barocas, 2019). The decisions made during this step can have profound impact in shaping the core aspects of those interventions, including how they impact different communities in society. Recent studies have demonstrated many instances where ML-aided interventions in high-stakes domains such as health-care risk-assessment (Obermeyer et al., 2019), criminal justice (Chouldechova, 2017) and online content moderation (Sap et al., 2019) resulted in unintended unfair outcomes that further disadvantaged already marginalized communities.
Researchers have pointed out two major pitfalls in the problem formulation step that contributes to such undesirable outcomes. First, the problem formulation step necessitates developing a model of the problem domain that accommodates the constraints of leveraging existing ML tools (often the pre-chosen intervention method), most of which operate on (i.e., classiï¬es or regresses over) data that are static snap-shots of the problem domain, and consequently often ignores the non-linear and dynamically complex nature of society that involves feedback loops and time-delays between ac- tions and impacts (Ensign et al., 2017; Liu et al., 2018). Second, the stakeholders that are involved in problem formulation â e.g., product managers, business analysts, computer scientists, ML prac- titioners â often lack the lived experiences required to comprehensively approximate and account for the various peripheral stakeholders their interventions will impact, especially the communities that are the most vulnerable to unfair outcomes (Eubanks, 2018; Campolo et al., 2017).
For instance, let us consider the recent study (Obermeyer et al., 2019) which discovered that a pre- diction algorithm broadly used by health-care risk-assessment tools exhibited racial bias against African-Americans. The strategic goal of those tools was to improve the care of patients with com- plex health needs while reducing overall costs, by targeting high-risk patients (i.e., those with com-
1
Published as a conference paper at ICLR 2020
plex health needs) with special programs and resources. The goal itself implies an interventionist (Ben-Menahem, 2018) approach that relies on the causal inference that special programs for high- risk patients will lead to or cause lower overall, system-wide healthcare costs. During the problem formulation phase, this strategic goal was reduced to identifying the patients who had the highest health care costs; essentially using costs/spending as a proxy for needs. This reduction relied on the additional human causal inference that patients â regardless of population or background â with more complex health needs would have spent more on health care in the past and that no other fac- tors were causally relevant to their health care spending. However, this inference proved incorrect as it failed to consider a multitude of confounding factors and the dynamically complex ways they impact health care spending. Speciï¬cally, the historic disparities in health care access (among other things) that African American individuals face in the US healthcare system means that they end up spending less on health care. Consequently, the health risk-assessment algorithm tended to mistak- enly infer African American individuals to not be high-risk patients, regardless of the complexity of their illnesses, further denying them access to special programs and resources.
One of the core mistakes made in the above scenario is in the problem formulation step itself â using health-care costs (spending) as a proxy variable for health-care need. The causal theories that guided this process emerge from an opaque and iterative process among key stakeholders (e.g. product managers, executives, business analysts) that we collectively refer to as âcausal reasonersâ (Pearl & Mackenzie, 2018). Psychological research has shown that human causal inference is based on a priori intuitive theories about the causal structure of the problem to be intervened on (Tenenbaum & Grifï¬ths, 2003; Pearl & Mackenzie, 2018). These causal theories are the result of the cumulative lived experiences of the individual causal reasoner and reï¬ect views of reality ï¬ltered through their world views and biases. If the problem formulation step had facilitated the equitable participation of African American community members who have lived experience within the US healthcare system, it is likely this undesired outcome could have been averted.
While researchers have recognized the importance of problem formulation in ensuring fair and eth- ical machine learning interventions in society, the process that guides this step still remains ad-hoc, informal, and fueled by intuition (Barocas & Selbst, 2016; Barocas et al., 2017). Such reliance on the implicit causal theories of causal reasoners, who may lack the lived experiences required to com- prehensively approximate the causal model of the problem domain upon which to base inferences, will continue to result in undesirable outcomes. Hence, the problem formulation step, especially in high-stakes situations, should have at its core a formal approach to developing causal models of the socio-technical problem domains being intervened upon. Such an approach should incor- porate two key attributes: (1) ability to contend with the delayed impacts and feedback loops that characterize the dynamically complex nature of high-stakes contexts, and (2) optimized for making causal inferences explicit and for iterative learning of causal structures in partnership with peripheral stakeholders including policy makers and social groups most vulnerable to unfair outcomes.
# 2 COMMUNITY BASED SYSTEM DYNAMICS
In this paper we introduce Community Based System Dynamics (CBSD) as an approach to engage multiple and diverse stakeholder groups in problem formulation to design fairer ML-based inter- ventions. CBSD is a participatory method for involving communities in the process of developing a shared understanding of complex systems from the feedback perspective. It relies on visual tools and simulation to support groups in the co-development of explicit and transparent causal theories (Hov- mand, 2014a). CBSDâs explicit goal to build capacity among stakeholders to derive deeper system insights together through their participation sets it apart from other approaches where stakeholders are viewed as informants. It has been used to engage and center the perspectives of marginalized and vulnerable communities in the development of more effective interventions in ecology, public health, and social work (Stave & Kopainsky, 2015; Trani et al., 2016; Escobedo et al., 2019).
Unlike other causal modeling techniques such as Structural Causal Models (SCMs) and Causal Bayesian Networks (CBNs) that have been recently proposed to model causality in ML problem formulation (Madras et al., 2019; Chiappa & Isaac, 2018), CBSD is founded upon a system dy- namics (SD) (Hovmand, 2014a) approach, which takes a characteristically feedback approach to modeling dynamically complex problems (Sterman, 2010; Richardson, 2011). In addition, while
2
Published as a conference paper at ICLR 2020
: Max cyodt Sexe _â Average Credit Seore * vi \ Loans | Recwes GN tcan GY â Payments Se Made Borrowers 4 pplication total vale populaton b, Stock and low Diagram
Figure 1: Examples of system dynamics causal loop diagrams and stock and ï¬ow diagrams.
CBNs and SCMs are independent causal modeling tools, combining causal modeling tools with formal practices for collaborative and iterative causal modeling is inherent to both SD and CBSD.
2.1 CAUSAL MODELING USING SYSTEM DYNAMICS
System Dynamics (SD) is deï¬ned as the process of using both informal maps/diagrams and formal models with computer simulation to uncover and understand the dynamics of complex problems from a feedback perspective (Richardson, 2011). It is this emphasis on feedback â reinforcing and balancing processes that unfold over time â that distinguishes SD from other causal modeling approaches, and makes it apt for the dynamically complex nature of high-stakes problems at the center of risk-prediction systems. To uncover and understand feedback processes, SD has developed a series of tools that vary in degree of formalism and are designed to provide insight into different aspects of the complex problems they model. Many of these tools are graphical in nature, requiring modelers to make their causal theories explicit, thereby fostering transparency (Lane, 2008).
One of the most commonly used visual tools in SD is the causal loop diagram (CLD). The main purpose of the CLD is to show the feedback processes in a system (understood as the set of posited causal structures related to the phenomenon of interest) using a directed graph. CLDs are often used to quickly elicit hypothesized causal relations between variables in a problem space and/or communicate the main feedback loops in a more detailed computer simulation model.
An example of a CLD is shown in Figure 1a, which offers a simpliï¬ed representation of a credit score based lending system. Such systems resemble the health-risk system described earlier in that they utilize risk prediction algorithms to intervene on high-stakes problem domains with vulnerable stakeholders. The arrows in CLDs represent hypothesized causal links between variables, with the arrowheads and polarity indicating the direction and the nature of inï¬uence. Positive polarity represents relationships where an increase (decrease) in one variable triggers an increase (decrease) in the other, all else equal. Negative polarity is used to depict relationships where an increase (decrease) in one variable triggers a decrease (increase) in the other, all else equal. In the example in Figure 1a, the relationship between Payments Made and Average Credit Score is assumed to be of positive polarity since making payments towards debt generally builds credit, ceteris paribus, whereas the link between Loan Defaults and Average Credit Score is negative, since defaulting generally results in score reductions. Any increase (decrease) in the average credit score of a group leads to a corresponding increase (decrease) in the number of loans received by that group, which in-turn increases (decreases) its borrower pool.
A more formal treatment of the causal structures, including the concept of delays and their impact on the system is offered by stock and ï¬ow diagrams, perhaps the most commonly used tool in system dynamics. In addition to representing relationships between variables and feedback loops, stock and ï¬ow diagrams require explicit deï¬nitions of variables that vary, and the precise ways that they accumulate or are depleted over time. In these diagrams, variables that accumulate are called stocks and are drawn as rectangles, and the processes that add to or drain them are called ï¬ows (inï¬ows and outï¬ows) and are depicted as double-lined/thick arrows or âpipesâ with valves. The âcloudsâ are the sources and sinks of the ï¬ows, and are assumed to have inï¬nite capacity over the time horizon of the
3
Published as a conference paper at ICLR 2020
model. These clouds show the modelâs assumed boundary â once information or material passes through the ï¬ows into a cloud, it ceases to impact the system.
Figure 1b shows a stock and ï¬ow representation of the lending system represented in the CLD (Figure 1a), in which Borrowers and Average Credit Score of the population are now represented as stocks, and are thus assumed to accumulate value over time. The number of borrowers (units = people) accumulates the inï¬ow of people receiving loans per year and is depleted by the outï¬ows of people paying off the loan completely and defaulting on loans per year. In this context the cloud before receiving loans indicates that there is an endless source of individuals who could apply for loans. In turn, those leaving the system, by defaulting or paying off, are assumed to not affect the system in any meaningful way, and are thus represented as clouds at the ends of the outï¬ows.
Stock and ï¬ow diagrams serve a qualitative purpose, but they are also the critical bridge to simulation and the quantitative aspect of SD. Visualizing behavior over time is extremely difï¬cult with static diagrams. Simulation enables the visualization and validation of dynamic hypotheses of causal structures upon which interventions are ultimately based. Additionally, simulation is a critical step for gaining a deep understanding of dynamic causal structures and enables âen silicoâ intervention experimentation.
2.2 PARTICIPATORY MODELING
SD has a rich history of participatory modeling that involves stakeholders in the model building process to foster collaboration and learning (Kir´aly & Miskolczi, 2019). Community based system dynamics (Hovmand, 2014a) is a particular SD practice approach that engages stakeholders who are embedded in the system of interest to conceptualize the problem, identify the related issues and prioritize interventions based on model supported insights. More than just involving participants in the modeling process to elicit information, CBSD has the explicit goal of building capabilities within communities to use SD on their own, distinguishing it from other participatory approaches in SD. Building capabilities enables stakeholders to more accurately represent their causal theories in the models, which is especially critical when the stakeholders are from marginalized communities that are not represented in the modeling process. In this view, individual and community perspectives on the structures that underlie everyday experiences are valued as valid and necessary sources of data, and community perspectives on the analysis and interpretation of models is essential for realizing the value of the approach.
Best practices for engaging stakeholders in the process of reï¬ecting problem causal structure us- ing CLDs and stock and ï¬ow diagrams and reï¬ning simulation models are documented (Hovmand, 2014a;b). These activities can be adapted for diverse contexts and support the development of capa- bilities for collaborative causal inference development. Overall, CBSD has been shown to be useful in a broad range of problem domains such as maternal and child health (Munar et al., 2015), iden- tifying food system vulnerabilities (Stave & Kopainsky, 2015), mental health interventions (Trani et al., 2016) and alcohol abuse (Apostolopoulos et al., 2018), to name a few. In the domain of ML (un)fairness, the use of CBSD can help center the voices and lived experiences of those marginal- ized communities that are potentially impacted by ML-based products. If the goal is to design fairer ML-based tools and products that do not harm peripheral stakeholders, it is imperative to not only model the long-term dynamics created by those products, but to also incorporate the perspectives of those stakeholders in deï¬ning what fairness means in a particular domain or context.
# 3 DISCUSSION AND CONCLUSION
In this paper we highlight the causal inferences of key stakeholders as the central point of risk in the currently adhoc and informal problem formulation process. When the process that generates those causal inferences is opaque and insular real harms can result. We introduced CBSD as a mature can- didate to foster the formalization of the problem formulation process in a manner that considers the dynamically complex nature of high-stakes contexts and enables the diversiï¬cation of the sources of causal theories upon which human causal inferences are ultimately based. A key advantage of an SD-based approach is that it draws heavily on visual diagramming conventions which emphasize transparency and facilitate the engagement of diverse stakeholders to add, revise and critique causal theories. A long lineage of participatory approaches within SD, including CBSD and group model
4
Published as a conference paper at ICLR 2020
building, provide evidence of success in developing and using system dynamics models in diverse contexts, and serve as resources for groups interested in developing SD capabilities in their commu- nities/contexts. Moreover, a strength SD shares with other causal modeling approaches, including Bayesian networks, is the correspondence between its visualizations and their underlying mathemat- ical representations, which allows stakeholders to do more than visualize, but continue to develop deep insights about important data to collect and consider, as well as evaluate the impact of products and decisions through simulation.
ACKNOWLEDGMENTS
We would like to thank Emily Denton, Ben Hutchinson, Sean Legassick, Silvia Chiappa, Matt Botvinick, Reena Jana, Dierdre Mulligan, and Deborah Raji for their valuable feedback on this paper.
# REFERENCES
Yorghos Apostolopoulos, Michael K Lemke, Adam E Barry, and Kristen Hassmiller Lich. Moving alcohol prevention research forward-part ii: new directions grounded in community-based system dynamics modeling. Addiction, 113(2):363â371, 2018.
Solon Barocas and Andrew D Selbst. Big dataâs disparate impact. Calif. L. Rev., 104:671, 2016.
Solon Barocas, Elizabeth Bradley, Vasant Honavar, and Foster Provost. Big data, data science, and civil rights. arXiv preprint arXiv:1706.03102, 2017.
Yemima Ben-Menahem. Causation in Science. Princeton University Press, 2018.
Alex Campolo, Madelyn Sanï¬lippo, Meredith Whittaker, and Kate Crawford. Ai now 2017 report. AI Now Institute at New York University, 2017.
Silvia Chiappa and William S Isaac. A causal bayesian networks viewpoint on fairness. In IFIP International Summer School on Privacy and Identity Management, pp. 3â20. Springer, 2018.
Alexandra Chouldechova. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big data, 5(2):153â163, 2017.
Danielle Ensign, Sorelle A Friedler, Scott Neville, Carlos Scheidegger, and Suresh Venkatasubra- manian. Runaway feedback loops in predictive policing. arXiv preprint arXiv:1706.09847, 2017.
Patricia Escobedo, Karina Dominguez Gonzalez, Jill Kuhlberg, Maria Lou Calanche, Lourdes Baezconde-Garbanati, Robert Contreras, and Ricky Bluthenthal. Community needs assessment among latino families in an urban public housing development. Hispanic Journal of Behavioral Sciences, 41(3):344â362, 2019.
Virginia Eubanks. Automating inequality: How high-tech tools proï¬le, police, and punish the poor. St. Martinâs Press, 2018.
Peter S Hovmand. Community Based System Dynamics. Springer, 2014a.
Peter S Hovmand. Group model building and community-based system dynamics process. Community Based System Dynamics, pp. 17â30. Springer, 2014b. In
G´abor Kir´aly and P´eter Miskolczi. Dynamics of participation: System dynamics and participationâ an empirical review. Systems Research and Behavioral Science, 36(2):199â210, 2019.
David C Lane. The emergence and use of diagramming in system dynamics: a critical account. Systems Research and Behavioral Science: The Ofï¬cial Journal of the International Federation for Systems Research, 25(1):3â23, 2008.
Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. Delayed impact of fair machine learning. CoRR, abs/1803.04383, 2018. URL http://arxiv.org/abs/1803. 04383.
5
Published as a conference paper at ICLR 2020
David Madras, Elliot Creager, Toniann Pitassi, and Richard Zemel. Fairness through causal aware- ness: Learning causal latent-variable models for biased data. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 349â358. ACM, 2019.
Wolfgang Munar, Peter S Hovmand, Carrie Fleming, and Gary L Darmstadt. Scaling-up impact in perinatology through systems science: Bridging the collaboration and translational divides in cross-disciplinary research and public policy. In Seminars in perinatology, volume 39, pp. 416â 423. Elsevier, 2015.
Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464):447â453, 2019.
Samir Passi and Solon Barocas. Problem formulation and fairness. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 39â48. ACM, 2019.
Judea Pearl and Dana Mackenzie. The book of why: the new science of cause and effect. Basic Books, 2018.
George P Richardson. Reï¬ections on the foundations of system dynamics. System Dynamics Review, 27(3):219â243, 2011.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A. Smith. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pp. 1668â1678, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1163. URL https://www.aclweb.org/anthology/ P19-1163.
Krystyna A Stave and Birgit Kopainsky. A system dynamics approach for examining mechanisms and pathways of food supply vulnerability. Journal of Environmental Studies and Sciences, 5(3): 321â336, 2015.
John Sterman. Business dynamics. Irwin/McGraw-Hill c2000.., 2010.
Joshua B Tenenbaum and Thomas L Grifï¬ths. Theory-based causal inference. In Advances in neural information processing systems, pp. 43â50, 2003.
Jean-Francois Trani, Ellis Ballard, Parul Bakhshi, and Peter Hovmand. Community based system dynamic as an approach for understanding and acting on messy problems: a case study for global mental health intervention in afghanistan. Conï¬ict and health, 10(1):25, 2016.
6 | {
"id": "1706.03102"
} |
2005.05921 | Intersectional Bias in Hate Speech and Abusive Language Datasets | Algorithms are widely applied to detect hate speech and abusive language in
social media. We investigated whether the human-annotated data used to train
these algorithms are biased. We utilized a publicly available annotated Twitter
dataset (Founta et al. 2018) and classified the racial, gender, and party
identification dimensions of 99,996 tweets. The results showed that African
American tweets were up to 3.7 times more likely to be labeled as abusive, and
African American male tweets were up to 77% more likely to be labeled as
hateful compared to the others. These patterns were statistically significant
and robust even when party identification was added as a control variable. This
study provides the first systematic evidence on intersectional bias in datasets
of hate speech and abusive language. | http://arxiv.org/pdf/2005.05921 | Jae Yeon Kim, Carlos Ortiz, Sarah Nam, Sarah Santiago, Vivek Datta | cs.CL, cs.SI | null | null | cs.CL | 20200512 | 20200528 | # Intersectional Bias in Hate Speech and Abusive Language Datasets
Jae Yeon Kimâ, Carlos Ortiz, Sarah Nam, Sarah Santiago, Vivek Datta University of California, Berkeley Berkeley, California, USA
Accepted at the ICWSM 2020 Data Challenge Workshop
# Abstract
Algorithms are widely applied to detect hate speech and abusive language in social media. We investigated whether the human-annotated data used to train these algorithms are biased. We utilized a publicly available annotated Twitter dataset (Founta et al. 2018) and clas- siï¬ed the racial, gender, and party identiï¬cation dimen- sions of 99,996 tweets. The results showed that African American tweets were up to 3.7 times more likely to be labeled as abusive, and African American male tweets were up to 77% more likely to be labeled as hateful compared to the others. These patterns were statisti- cally signiï¬cant and robust even when party identiï¬ca- tion was added as a control variable. This study pro- vides the ï¬rst systematic evidence on intersectional bias in datasets of hate speech and abusive language.
Introduction Algorithms are widely applied to detect hate speech and abusive language in popular social media platforms such as YouTube, Facebook, Instagram, and Twitter. Using al- gorithms helps identify, at scale, which posts contain so- cially undesirable content. This computational method is ef- ï¬cient but not perfect. Most algorithms are trained with la- beled data. What if the training data, used to detect bias in social media, were itself biased? Then, we would be in a situation where the algorithms that detect hate speech and abusive language, developed to prevent harm to pro- tected groups such as racial minorities and women, exac- erbate existing social disparities (Citron and Pasquale 2014; Oâneil 2016).
shortcomings. First and foremost, the evidence is sugges- tive because it is associational. Second, the magnitude of the intersectional bias is small and could be sensitive to mea- surement bias. Notwithstanding these caveats, this study is valuable because it provides the ï¬rst systematic evidence on intersectional bias in datasets of hate speech and abusive lan- guage. Practitioners need to pay greater attention to the var- ious forms of bias embedded in these datasets to avoid re- inforcing a socially constructed stereotype, such as the pre- sumed criminality of black men.
Related Work The unique contribution of this study lies in its intersectional angle. A robust body of work exists on the bias in datasets of hate speech and abusive language. Nevertheless, these studies examine this problem either from an exclusively racial (Waseem 2016; Waseem and Hovy 2016; Davidson, Bhattacharya, and Weber 2019; Sap et al. 2019) or gender- bias perspective (Tatman 2017; Park, Shin, and Fung 2018; Dixon et al. 2018). We build upon these works but also go one step further by looking at how the intersection of racial and gender bias matters. Highlighting the intersec- tional bias could be relevant because social science literature broadly emphasizes how the media portrays black men as threatening, hateful, and presumed criminals (Oliver 2003; Mastro, Blecha, and Seate 2015; Kappeler and Potter 2017) and how such media frames inï¬uence police interactions (Najdowski, Bottoms, and Goff 2015; Hall, Hall, and Perry 2016) and policy preference formation (Skinner and Hass 2016).
We utilized a publicly available annotated Twitter dataset (Founta et al. 2018) and classiï¬ed the racial, gender, and party identiï¬cation dimensions of 99,996 tweets. The results showed that African American tweets were up to 3.7 times more likely to be labeled as abusive, and African Ameri- can male tweets were up to 77% more likely to be labeled as hateful compared to the others. These patterns were sta- tistically signiï¬cant and robust even when party identiï¬ca- tion was added as a control variable. This study has many
Research Design For transparency and reproducibility, we only used pub- licly available datasets. The annotated Twitter dataset (N = 99,996)1 on hate speech and abusive language was created by a crowd-sourcing platform and its quality has been en- sured by several rounds of validations. Founta et al., who generated the aforementioned dataset, deï¬ned abusive lan- guage as âany strongly impolite, rude, or hurtful language
âCorresponding author. PhD candidate in political science and graduate fellow at D-Lab and Data Science Education Program, University of California, Berkeley. [email protected]
1In the data wrangling process, we discovered that 8,045 tweets were duplicates and removed them. Consequently, the size of the ï¬nal dataset was reduced to 91,951 tweets.
using profanityâ and hate speech as âlanguage used to ex- press hatred towards a targeted individual or groupâ (5). We followed their deï¬nitions.
The Twitter dataset does not contain any information on the racial, gender, or partisan identities of the authors of these tweets. We utilized additional public datasets to clas- sify and ï¬ll in the missing information. Our primary interest was the interaction between race (deï¬ned as black or white) and gender (deï¬ned as male or female) and whether that generated a biased distribution of hateful and abusive labels. Such an underlying data distribution would generate uneven false positive and negative rates for different groups. How- ever, human annotators could be biased not only in terms of race and gender but also in terms of political afï¬liation. This is likely true if annotators were recruited in the United States, where political polarization is extreme (Sides and Hopkins 2015; Broockman 2016). For this reason, we also classiï¬ed party identiï¬cation, the degree to which a person identiï¬es with a particular political party.2 To be clear, what we classiï¬ed were not the actual racial, gender, or parti- san identities of the authors of these tweets. The objective was to classify whether the language features expressed in a tweet were closer to the ones commonly expressed by one racial/gender/partisan group than those of other groups. To classify race, we leveraged African American and White English dialectal variations based on the model developed by Blodgett, Green, and OâConnor. This model matched 59.2 million geolocated Tweets from 2.8 million users with U.S. Census data. Gender and party identiï¬cation classiï¬ca- tion were both based on the labeled data available at Kag- gleâs website. The gender data was originally provided by the Data for Everyone Library on CrowdFlower.3 The party identiï¬cation data was based on the tweets related to the 2018 US Congressional Election.4
For consistency, we applied identical preprocessing and feature extraction techniques to the tweets. We removed spe- cial characters from the tokenized tweets, turned them into lowercase, and reduced inï¬ected words to their base forms using the Lancaster stemming algorithm (Paice 1990). We transformed these tokens into a document-term matrix using the bag-of-words model and constructing an n-gram with a maximum length of two. Then, we trained and tested the least absolute shrinkage and selection operator (Lasso) (Tib- shirani 1996), naive Bayes (Maron 1961), and extreme gra- dient boosting (XGBoost) algorithms (Chen and Guestrin 2016). In the process, we divided 80% of the data into the training set and the rest of the data into the test set. After- ward, we measured the performance of each classiï¬er using accuracy, precision, and recall scores. Table 1 summarizes the performances of these classiï¬ers and shows Lasso out- performed the other classiï¬ers.
2Party identiï¬cation is different from political ideology, which demands more sophisticated political knowledge (Converse 1964). see
3For more information,
# https://www.kaggle.com/crowdï¬ower/twitter-user-gender- classiï¬cation 4For
more information, https://www.kaggle.com/kapastor/democratvsrepublicantweets
see
Models Lasso Bayes XGBoost Lasso Bayes XGBoost Lasso Bayes XGBoost Precision Recall Label 0.73 Male 0.86 Male 0.65 Male 0.77 0.84 0.71 0.71 0.66 0.67 0.68 0.58 0.65 0.73 0.65 0.72 0.72 0.72 0.70 Female Female Female Party ID Party ID Party ID
The data analysis was correlational and thus, descriptive. We ï¬rst described the bivariate relationship between race and gender and then added uncertainty about the measures using bootstrapping (Efron and Tibshirani 1986). We further investigated how the interaction between race and gender in- ï¬uences the distribution of hateful and abusive labels using logistic regression models. By taking a statistical modeling approach, we estimated the partial effect of the interaction bias on the outcomes while controlling for partisan bias.
Hypotheses ⢠Racial bias: The ï¬rst hypothesis is about between-group differences. Consistent with the prior research, we expect that tweets more closely associated with African Ameri- can than White English language features would be more likely to be labeled as abusive and hateful.
⢠Intersectional bias: The second hypothesis is about within-group differences. Inï¬uenced by broad social sci- ence literature, we argue that tweets more closely asso- ciated with African American males than other groupsâ language features are more likely to be labeled as hateful.
# Results
How race interacts with gender Label [J Abusive (I) Hatetur [J Norma White Male White Female African American Male African American Female 0.00 0.25 0.50 0.75 1.00 Proportion Founta et al. 2018 and the authors
Figure 1: Descriptive analysis
To begin with, Figure 1 displays the bivariate relationship between tweets classiï¬ed by race and by gender. The X-
axis indicates the proportion of tweets labeled as either abu- sive, hateful, or normal within four intersectional group cat- egories (e.g., African American male). The ï¬gure shows two patterns. First, African American tweets are more likely to be labeled as abusive than White tweets are. Second, African American male tweets are more likely to be labeled as hate- ful as compared to the other groups.
One limitation of Figure 1 is that it does not show the uncertainty of the measures. Figure 2 addresses this prob- lem by randomly resampling the data 1,000 times with re- placement and stratifying on race, gender, and label type. There are several differences between Figure 1 and Figure 2. The bootstrapping method produces 95% conï¬dence in- tervals, indicated by the error bars in the ï¬gure. Another difference is that the Y-axis shows label types rather than intersectional group categories. This ï¬gure reafï¬rms what we found earlier: African American tweets are overwhelm- ingly more likely to be labeled as abusive than their White counterparts. An opposite pattern is found in the normal la- bel; White tweets are far more likely to be labeled as nor- mal than their African American counterparts. These pat- terns are statistically signiï¬cant because they are far outside conï¬dence intervals. Gender difference matters little in these cases. By contrast, the intersection between race and gender matters in hate speech annotation. African American male tweets are far more likely to be labeled as hateful than the rest of the groups are. African American female tweets are only slightly more likely to be labeled as hateful than their White counterparts are.
How race interacts with gender (with bootstrapping) [BB Atican american Female [fil) African American Male Subgrou cue? BB wnite Female Ey white Mate i Se Hateful - âmene lh 0.0 0.2 0.4 0.6 Proportion Founta et al. 2018 and the authors
Figure 2: Bootstrapping results
Figure 3 extends the previous investigation by adding party identiï¬cation as a control variable. We constructed two logistic regression models. In both models, the depen- dent variable was an abusive or hateful category deï¬ned as dummy variables (yes = 1, no = 0). The ï¬rst model did not involve party controls and its predictor variables were race, gender, and their interaction. The second model involved party controls and its predictor variables were race, gender, party identiï¬cation, the intersection between race and gen- der, and the intersection between and race and party identi- ï¬cation. In the ï¬gure, the results of the ï¬rst model are in-
Logistic regression analysis Models 4 With party controls @ Without party controls Abusive Hateful African American o â Republican e - Male ° ° African American:Male} oo Republican:White ° - 100% 200% 300% 100% 200% 300% Likelihood Founta et al (2018) and the authors.
Abusive Hateful African American o â Republican e - Male ° ° African American:Male} oo Republican:White ° - 100% 200% 300% 100% 200% 300% Likelihood Founta et al (2018) and the authors.
Figure 3: Logistic regression analysis
dicated by light blue, and the second model by red dots. The error bars indicate two standard errors (approximately 95% conï¬dence intervals). The Y-axis indicates key predic- tor variables and the X-axis shows the likelihoods, which we calculated from the logistic regression estimates to help interpretation. The ï¬gure demonstrates that African Ameri- can language features are most likely to be labeled as abu- sive, and African American male language features are most likely to be labeled as hateful. This pattern is robust across the two models. To be precise, if tweets were associated with African American language features, the likelihood of these tweets to be labeled as abusive increased by up to 3.7 times. If tweets were associated with African American male lan- guage features, the likelihood of these tweets to be labeled as hateful increased by up to 77%.
Discussions and Conclusion This study provides the ï¬rst systematic evidence on inter- sectional bias in datasets of hate speech and abusive lan- guage. More importantly, the ï¬nding that African Ameri- can men are closely associated with hate speech is consistent with broad social science research on the criminalization of African American men. The evidence emphasizes the im- portance of taking an interdisciplinary approach. The pro- liferation of machine learning applications is new. However, human biases have a much longer history and are broadly studied outside computer science. Social science knowledge could provide insights into the explicit and implicit bias em- bedded in datasets used in machine learning applications, in- cluding systems to detect hate speech and abusive language. However, this study has many shortcomings. The ï¬rst caveat is that the statistical model could be incomplete. The multivariate regression model is naive and likely to be un- derspeciï¬ed. Missing variables in the model may cause se- lection bias. A better approach would be to design an exper- iment in which researchers could manipulate the source of biasesâdifferent types of language featuresâand directly examine their causal effects. Sap et al. showed how a be- havioral experiment can be conducted for identifying racial bias in datasets of hate speech and abusive language. Exper- imental evidence for intersectional bias is still lacking and remains critical for future research.
The second caveat is that the data could be inaccurate. This problem is particularly concerning because the magni- tude of the intersectional bias is small (see Figure 3). All of the key predictor variables were not directly observed but were based on machine-enabled text classiï¬cation. Table 1 displays that these predictions show modest performance (between 69% and 73% accuracy scores). Uncertainty in the data may not destabilize inference if the effect size is large enough; an increase of up to 3.7 times due to racial bias is difï¬cult to remove. By contrast, an increase of up to 77% due to intersectional bias could be sensitive to measurement error. The ï¬ndings reported here should not be taken at their face value and should be followed up with further investiga- tion.
References Blodgett, S. L.; Green, L.; and OâConnor, B. 2016. Demo- graphic dialectal variation in social media: A case study of african-american english. ArXiv:1608.08868. Broockman, D. E. 2016. Approaches to studying policy rep- resentation. Legislative Studies Quarterly 41(1):181â215. Chen, T., and Guestrin, C. 2016. Xgboost: A scalable tree boosting system. In KDD â16: The 22nd ACM SIGKDD In- ternational Conference on Knowledge Discovery and Data Mining, 785â794. Citron, D. K., and Pasquale, F. 2014. The scored society: Due process for automated predictions. Washington Law Review 89:1â33. Converse, P. E. 1964. The nature of belief systems in mass publics. Critical Review 18(1-3):1â74. Davidson, T.; Bhattacharya, D.; and Weber, I. 2019. Racial bias in hate speech and abusive language detection datasets. ArXiv 1905.12516. Dixon, L.; Li, J.; Sorensen, J.; Thain, N.; and Vasserman, L. 2018. Measuring and mitigating unintended bias in text classiï¬cation. In Proceedings of the 2018 AAAI/ACM Con- ference on AI, Ethics, and Society, 67â73. Efron, B., and Tibshirani, R. 1986. Bootstrap methods for standard errors, conï¬dence intervals, and other measures of statistical accuracy. Statistical Science 54â75. Founta, A. M.; Djouvas, C.; Chatzakou, D.; Leontiadis, I.; Blackburn, J.; Stringhini, G.; Vakali, A.; Sirivianos, M.; and Kourtellis, N. 2018. Large scale crowdsourcing and char- acterization of twitter abusive behavior. In Twelfth Interna- tional AAAI Conference on Web and Social Media. Hall, A. V.; Hall, E. V. H.; and Perry, J. L. 2016. Black and blue: Exploring racial bias and law enforcement in the killings of unarmed black male civilians. American Psychol- ogist 71(3):175â186. Kappeler, V. E., and Potter, G. W. 2017. The mythology of crime and criminal justice. Waveland Press. Maron, M. E. 1961. Automatic indexing: an experimental inquiry. Journal of the ACM 8(3):404â417. Mastro, D. E.; Blecha, E.; and Seate, A. A. 2015. Charac- terizations of criminal athletes: A systematic examination of
sports news depictions of race and crime. Journal of Broad- casting & Electronic Media 55(4):526â542. Najdowski, C. J.; Bottoms, B. L.; and Goff, P. A. 2015. Stereotype threat and racial differences in citizensâ expe- riences of police encounters. Law and Human Behavior 39(5):463â477. Oliver, M. B. 2003. African american men asâ criminal and dangerousâ. Journal of African American Studies 3â18. Oâneil, C. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books. Paice, C. D. 1990. Another stemmer. In ACM Sigir Forum, volume 24, 56â61. ACM New York, NY, USA. Park, J. H.; Shin, J.; and Fung, P. 2018. Reducing gender bias in abusive language detection. ArXiv 1808.07231. Sap, M.; Card, D.; Gabriel, S.; Choi, Y.; and Smith, N. A. 2019. The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 1668â1678. Sides, J., and Hopkins, D. J. 2015. Political polarization in American politics. Bloomsbury Publishing. Skinner, A. L., and Hass, Ingrid, J. 2016. Perceived threat associated with police ofï¬cers and black men predicts sup- port for policing policy reform. Frontiers in Psychology 7:1â 17. Tatman, R. 2017. Gender and dialect bias in youtubeâs au- tomatic captions. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, 53â59. Tibshirani, R. 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society 58(1):267â 288. Waseem, Z., and Hovy, D. 2016. Hateful symbols or hateful people? predictive features for hate speech detection on twit- ter. In Proceedings of the NAACL Student Research Work- shop, 88â93. Waseem, Z. 2016. Are you a racist or am i seeing things? an- notator inï¬uence on hate speech detection on twitter. In Pro- ceedings of the ï¬rst workshop on NLP and computational social science, 138â142.
Author Contributions Jae Yeon Kim is a project lead. Kim designed the research, collected and analyzed data, and wrote the paper. The rest of the authors are undergraduate research assistants. They helped with data collection and analysis. Their names are listed alphabetically.
# Additional Resources
All https://github.com/jaeyk/intersectional-bias-in-ml
Acknowledgments We are grateful to two anonymous reviewers for their con- structive comments. | {
"id": "1608.08868"
} |
2005.05257 | A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering | Legislation can be viewed as a body of prescriptive rules expressed in
natural language. The application of legislation to facts of a case we refer to
as statutory reasoning, where those facts are also expressed in natural
language. Computational statutory reasoning is distinct from most existing work
in machine reading, in that much of the information needed for deciding a case
is declared exactly once (a law), while the information needed in much of
machine reading tends to be learned through distributional language statistics.
To investigate the performance of natural language understanding approaches on
statutory reasoning, we introduce a dataset, together with a legal-domain text
corpus. Straightforward application of machine reading models exhibits low
out-of-the-box performance on our questions, whether or not they have been
fine-tuned to the legal domain. We contrast this with a hand-constructed
Prolog-based system, designed to fully solve the task. These experiments
support a discussion of the challenges facing statutory reasoning moving
forward, which we argue is an interesting real-world task that can motivate the
development of models able to utilize prescriptive rules specified in natural
language. | http://arxiv.org/pdf/2005.05257 | Nils Holzenberger, Andrew Blair-Stanek, Benjamin Van Durme | cs.CL | null | null | cs.CL | 20200511 | 20200812 | 0 2 0 2
g u A 2 1 ] L C . s c [
3 v 7 5 2 5 0 . 5 0 0 2 : v i X r a
# A Dataset for Statutory Reasoning in Tax Law Entailment and Question Answering
Nils Holzenberger Johns Hopkins University Baltimore, Maryland, USA [email protected]
Andrew Blair-Stanek U. of Maryland Carey School of Law Baltimore, Maryland, USA Johns Hopkins University Baltimore, Maryland, USA [email protected]
Benjamin Van Durme Johns Hopkins University Baltimore, Maryland, USA [email protected]
ABSTRACT Legislation can be viewed as a body of prescriptive rules expressed in natural language. The application of legislation to facts of a case we refer to as statutory reasoning, where those facts are also ex- pressed in natural language. Computational statutory reasoning is distinct from most existing work in machine reading, in that much of the information needed for deciding a case is declared exactly once (a law), while the information needed in much of machine read- ing tends to be learned through distributional language statistics. To investigate the performance of natural language understanding approaches on statutory reasoning, we introduce a dataset, together with a legal-domain text corpus. Straightforward application of ma- chine reading models exhibits low out-of-the-box performance on our questions, whether or not they have been fine-tuned to the legal domain. We contrast this with a hand-constructed Prolog-based system, designed to fully solve the task. These experiments support a discussion of the challenges facing statutory reasoning moving forward, which we argue is an interesting real-world task that can motivate the development of models able to utilize prescriptive rules specified in natural language.
CCS CONCEPTS ⢠Applied computing â Law; ⢠Computing methodologies â Natural language processing; Knowledge representation and reason- ing.
# KEYWORDS Law, NLP, Reasoning, Prolog
sources, e.g. textbooks, and connecting high-performance symbolic solvers with large-scale language models.
In parallel, models have begun to consider task definitions like Machine Reading (MR) [46] and Recognizing Textual Entailment (RTE) [15, 16] as not requiring the use of explicit structure. Instead, the problem is cast as one of mapping inputs to high-dimensional, dense representations that implicitly encode meaning [18, 45], and are employed in building classifiers or text decoders, bypassing classic approaches to symbolic inference.
This work is concerned with the problem of statutory reasoning [62, 66]: how to reason about an example situation, a case, based on complex rules provided in natural language. In addition to the reasoning aspect, we are motivated by the lack of contemporary systems to suggest legal opinions: while there exist tools to aid lawyers in retrieving relevant documents for a given case, we are unaware of any strong capabilities in automatic statutory reasoning. Our contributions, summarized in Figure 2, include a novel dataset based on US tax law, together with test cases (Section 2). Decades-old work in expert systems could solve problems of the sort we construct here, based on manually derived rules: we repli- cate that approach in a Prolog-based system that achieves 100% accuracy on our examples (Section 3). Our results demonstrate that straightforward application of contemporary Machine Read- ing models is not sufficient for our challenge examples (Section 5), whether or not they were adapted to the legal domain (Section 4). This is meant to provoke the question of whether we should be concerned with: (a) improving methods in semantic parsing in order to replace manual transduction into symbolic form; or (b) improving machine reading methods in order to avoid explicit symbolic solvers. We view this work as part of the conversation including recent work in multi-hop inference [61], where our task is more domain-specific but potentially more challenging.
1 INTRODUCTION Early artificial intelligence research focused on highly-performant, narrow-domain reasoning models, for instance in health [37, 40, 54] and law [30, 38]. Such expert systems relied on hand-crafted inference rules and domain knowledge, expressed and stored with the formalisms provided by databases [21]. The main bottleneck of this approach is that experts are slow in building such knowledge bases and exhibit imperfect recall, which motivated research into models for automatic information extraction (e.g. Lafferty et al. [36]). Systems for large-scale automatic knowledge base construction have improved (e.g. Etzioni et al. [20], Mitchell et al. [41]), as well as systems for sentence level semantic parsing [64]. Among others, this effort has led to question-answering systems for games [22] and, more recently, for science exams [14, 23, 27]. The challenges include extracting ungrounded knowledge from semi-structured
2 DATASET Here, we describe our main contribution, the StAtutory Reason- ing Assessment dataset (SARA): a set of rules extracted from the statutes of the US Internal Revenue Code (IRC), together with a set of natural language questions which may only be answered correctly by referring to the rules1.
The IRC2 contains rules and definitions for the imposition and calculation of taxes. It is subdvided into sections, which in general,
1The dataset can be found under https://nlp.jhu.edu/law/ 2https://uscode.house.gov/browse/prelim@title26&edition=prelim
§63. Taxable income Binary defined §2. Definitions and [special rules §1. Tax imposed Numerical (a) Married individuals filing joint returns land surviving spouses ns There is hereby imposed fion the jon the taxable income [ng of (1) every married individual...
Figure 1: Sample cases from our dataset. The questions can be answered by applying the rules contained in the statutes to the context.
us 26 case. law Prolog rules Tax Corpus Tax Vectors Legal BERT ae § dodnaindn aa dodn
Figure 2: Resources. Corpora on the left hand side were used to build the datasets and models on the right hand side.
define one or more terms: section 3306 defines the terms employ- ment, employer and wages, for purposes of the federal unemploy- ment tax. Sections are typically structured around a general rule, followed by a number of exceptions. Each section and its subsec- tions may be cast as a predicate whose truth value can be checked against a state of the world. For instance, subsection 7703(a)(2):
an individual legally separated from his spouse under a decree of divorce or of separate maintenance shall not be considered as married
can be checked given an individual.
Slots are another major feature of the law. Each subsection refers to a certain number of slots, which may be filled by existing entities (in the above, individual, spouse, and decree of divorce or of separate maintenance). Certain slots are implicitly filled: §7703(a)(1) and (b)(3) mention a âspouse", which must exist since the âindividual" is married. Similarly, slots which have been filled earlier in the section may be referred to later on. For instance, âhousehold" is mentioned for the first time in §7703(b)(1), then again in §7703(b)(2) and in §7703(b)(3). Correctly resolving slots is a key point in successfully applying the law.
Overall, the IRC can be framed as a set of predicates formulated in human language. The language used to express the law has an open texture [29], which makes it particularly challenging for a computer- based system to determine whether a subsection applies, and to identify and fill the slots mentioned. This makes the IRC an excellent corpus to build systems that reason with rules specified in natural language, and have good language understanding capabilities.
2.1 Statutes and test cases As the basis of our set of rules, we selected sections of the IRC well-supported by Treasury Regulations, covering tax on individu- als (§1), marriage and other legal statuses (§2, 7703), dependents (§152), tax exemptions and deductions (§63, 68, 151) and employ- ment (§3301, 3306). We simplified the sections to (1) remove highly specific sections (e.g. those concerning the employment of sailors) in order to keep the statutes to a manageable size, and (2) ensure that the sections only refer to sections from the selected subset. For ease of comparison with the original statutes, we kept the original numbering and lettering, with no adjustment for removed sections. For example, there is a section 63(d) and a section 63(f), but no section 63(e). We assumed that any taxable year starts and ends at the same time as the corresponding calendar year.
For each subsection extracted from the statutes, we manually created two paragraphs in natural language describing a case, one where the statute applies, and one where it does not. These snippets, formulated as a logical entailment task, are meant to test a systemâs understanding of the statutes, as illustrated in Figure 1. The cases were vetted by a law professor for coherence and plausibility. For the purposes of machine learning, the cases were split into 176 train and 100 test samples, such that (1) each pair of positive and negative cases belongs to the same split, and (2) each section is split between train and test in the same proportions as the overall split. Since tax legislation makes it possible to predict how much tax a person owes, we created an additional set of 100 cases where the task is to predict how much tax someone owes. Those cases were created by randomly mixing and matching pairs of cases from the first set of cases, and resolving inconsistencies manually. Those cases are no longer a binary prediction task, but a task of predicting an integer. The prediction results from taking into account the entirety of the statutes, and involves basic arithmetic. The 100 cases were randomly split into 80 training and 20 test samples.
Because the statutes were simplified, the answers to the cases are not those that would be obtained with the current version of the IRC. Some of the IRC counterparts of the statutes in our dataset have been repealed, amended, or adjusted to reflect inflation.
2.2 Key features of the corpus While the corpus is based on a simplification of the Internal Rev- enue Code, care was taken to retain prominent features of US law. We note that the present task is only one aspect of legal reason- ing, which in general involves many more modes of reasoning, in particular interpreting regulations and prior judicial decisions. The following features are quantified in Tables 1 to 4.
Reasoning with time. The timing of events (marriage, retirement, income...) is highly relevant to determining whether certain sections apply, as tax is paid yearly. In total, 62 sections refer to time. Some sections require counting days, as in §7703(b)(1):
Cross-references Within the section To another section Explicit 30 34 Implicit 25 44
# Table 1: Number of subsections containing cross-references
min max 6 6 Table 2: Statistics about the tree structure of the statutes
a household which constitutes for more than one-half of the taxable year the principal place of abode of a child or taking into account the absolute point in time as in §63(c)(7): In the case of a taxable year beginning after December 31, 2017, and before January 1, 2026-
Exceptions and substitutions. Typically, each section of the IRC starts by defining a general case and then enumerates a number of exceptions to the rule. Additionally, some rules involve applying a rule after substituting terms. A total of 50 sections formulate an exception or a substitution. As an example, §63(f)(3):
In the case of an individual who is not married and is not a surviving spouse, paragraphs (1) and (2) shall be applied by substituting â$750" for â$600".
Numerical reasoning. Computing tax owed requires knowledge of the basic arithmetic operations of adding, subtracting, multiplying, dividing, rounding and comparing numbers. 55 sections involve numerical reasoning. The operation to be used needs to be parsed out of natural text, as in §1(c)(2):
$3,315, plus 28% of the excess over $22,100 if the taxable income is over $22,100 but not over $53,500
Cross-references. Each section of the IRC will typically reference other sections. Table 1 shows how this feature was preserved in our dataset. There are explicit references within the same section, as in §7703(b)(1):
an individual who is married (within the meaning of subsection (a)) and who files a separate return
explicit references to another section, as in §3301:
There is hereby imposed on every employer (as defined in section 3306(a)) for each calendar year an excise tax and implicit references, as in §151(a), where âtaxable income" is defined in §63:
the exemptions provided by this section shall be allowed as deductions in computing taxable income.
Common sense knowledge. Four concepts, other than time, are left undefined in our statutes: (1) kinship, (2) the fact that a marriage ends if either spouse dies, (3) if an event has not ended, then it is ongoing; if an event has no start, it has been true at any time before it ends; and some events are instantaneous (e.g. payments ), (4) a personâs gross income is the sum of all income and payments received by that person.
Hierarchical structure. Law statutes are divided into sections, themselves divided into subsections, with highly variable depth and structure. This can be represented by a tree, with a special ROOT node of depth 0 connecting all the sections. This tree contains 132 leaves and 193 nodes (node includes leaves). Statistics about depth are in Table 2.
Vocabulary size Sentence length (in words) Case length (in sentences) Case length (in words) Section length train test statutes combined train test combined train test combined sentences words train test min max 138 34 88 138 9 7 9 179 81 179 16 1151 4 4 1 1 1 2 1 17 17 17 2 62 867 535 avg 12.3 11.6 16.5 12.7 4.2 3.8 4.1 48.5 41.6 46.3 8.3 488.9 statutes combined 768 1596 stddev median 11 10 12.5 11 4 4 4 43 38 41 9 549 9.1 4.5 14.9 9.5 1.7 1.3 1.6 22.2 14.7 20.3 4.7 310.4
Table 3: Language statistics. The word âcombinedâ means merging the corpora mentioned above it.
train test combined min 0 0 0 max 2,242,833 243,097 2,242,833 average 85,804.86 65,246.50 81,693.19 stddev 258,179.30 78,123.13 233,695.33 median 15,506.50 26,874.00 17,400.50 Table 4: Answers to numerical questions (in $).
3 PROLOG SOLVER It has been shown that subsets of statutes can be expressed in first- order logic, as described in Section 6. As a reaffirmation of this, and as a topline for our task, we have manually translated the statutes into Prolog rules and the cases into Prolog facts, such that each case can be answered correctly by a single Prolog query3. The Prolog rules were developed based on the statutes, meaning that the Prolog code clearly reflects the semantics of the textual form, as in Gunning et al. [27]. This is primarily meant as a proof that a carefully crafted reasoning engine, with perfect natural language understanding, can solve this dataset. There certainly are other ways of representing this given set of statutes and cases. The point of this dataset is not to design a better Prolog system, but to help the development of language understanding models capable of reasoning.
3.1 Statutes Each subsection of the statutes was translated with a single rule, true if the section applies, false otherwise. In addition, subsections define slots that may be filled and reused in other subsections, as described in Section 2. To solve this coreference problem, any term appearing in a subsection and relevant across subsections is turned into an argument of the Prolog rule. The corresponding variable may then be bound during the execution of a rule, and reused in a rule executed later. Unfilled slots correspond to unbound variables. To check whether a given subsection applies, the Prolog sys- tem needs to rely on certain predicates, which directly reflect the facts contained in the natural language descriptions of the cases. For instance, how do we translate Alice and Bob got married on January 24th, 1993 into code usable by Prolog? We rely on a set of 61 predicates, following neo-davidsonian semantics [9, 17, 42]. The level of detail of these predicates is based on the granularity of the statutes themselves. Anything the statutes do not define, and which is typically expressed with a single word, is potentially
3The Prolog program can be found under https://nlp.jhu.edu/law/
such a predicate: marriage, residing somewhere, someone paying someone else, etc. The example above is translated in Figure 3.
3.2 Cases The natural lan- guage description of each case was manually translated into the facts men- tioned above. The question or log- ical entailment prompt was translated into a Prolog query. For instance, Section 7703(b)(3) applies to Alice maintaining her home for the year 2018. translates to s7703_b_3(alice,home,2018). and How much tax does Alice have to pay in 2017? translates to tax(alice,2017,Amount).
In the broader context of computational statutory reasoning, the Prolog solver has three limitations. First, producing it requires domain experts, while automatic generation is an open question. Second, translating natural language into facts requires semantic parsing capabilities. Third, small mistakes can lead to catastrophic failure. An orthogonal approach is to replace logical operators and explicit structure with high-dimensional, dense representations and real-valued functions, both learned using distributional statistics. Such a machine learning-based approach can be adapted to new legislation and new domains automatically.
4 LEGAL NLP As is commonly done in MR, we pretrained our models using two unsupervised learning paradigms on a large corpus of legal text.
4.1 Text corpus We curated a corpus consisting solely of freely-available tax law documents with 147M tokens. The first half is drawn from cas [1], a project of Harvardâs Law Library that scanned and OCRâed many of the libraryâs case-law reporters, making the text available upon request to researchers. The main challenge in using this resource is that it contains 1.7M U.S. federal cases, only a small percentage of which are on tax law (as opposed to criminal law, breach of contract, bankruptcy, etc.). Classifying cases by area is a non-trivial problem [55], and tax-law cases are litigated in many different courts. We used the heuristic of classifying a case as being tax-law if it met one of the following criteria: the Commissioner of Internal Revenue was a party; the case was decided by the U.S. Tax Court; or, the case was decided by any other federal court, other than a trade tribunal, with the United States as a party, and with the word tax appearing in the first 400 words of the caseâs written opinion.
The second half of this corpus consists of IRS private letter rul- ings and unpublished U.S. Tax Court cases. IRS private letter rulings are similar to cases, in that they apply tax law to one taxpayerâs facts; they differ from cases in that they are written by IRS attorneys (not judges), have less precedential authority than cases, and redact names to protect taxpayer privacy. Unpublished U.S. Tax Court cases are viewed by the judges writing them as less important than those worthy of publication. These were downloaded as PDFs from the IRS and Tax Court websites, OCRâed with tesseract if needed, and otherwise cleaned.
4.2 Tax vectors Before training a word2vec model [39] on this corpus, we did two tax-specific preprocessing steps to ensure that semantic units re- mained together. First, we put underscores between multi-token collocations that are tax terms of art, defined in either the tax code, Treasury regulations, or a leading tax-law dictionary. Thus, âsurviv- ing spouse" became the single token âsurviving_spouse". Second, we turned all tax code sections and Treasury regulations into a single token, stripped of references to subsections, subparagraphs, and subclauses. Thus, âTreas. Reg. §1.162-21(b)(1)(iv)" became the single token âsec_1_162_21". The vectors were trained at 500 dimen- sions using skip-gram with negative sampling. A window size of 15 was found to maximize performance on twelve human-constructed analogy tasks.
4.3 Legal BERT We performed further training of BERT [18], on a portion of the full case.law corpus, including both state and federal cases. We did not limit the training to tax cases. Rather, the only cases excluded were those under 400 characters (which tend to be summary orders with little semantic content) and those before 1970 (when judicial writing styles had become recognizably modern). We randomly selected a subset of the remaining cases, and broke all selected cases into chunks of exactly 510 tokens, which is the most BERTâs architecture can handle. Any remaining tokens in a selected case were discarded. Using solely the masked language model task (i.e. not next sentence prediction), starting from Bert-Base-Cased, we trained on 900M tokens.
The resulting Legal BERT has the exact same architecture as Bert-Base-Cased but parameters better attuned to legal tasks. We applied both models to the natural language questions and answers in the corpus we introduce in this paper. While Bert-Base-Cased had a perplexity of 14.4, Legal BERT had a perplexity of just 2.7, suggesting that the further training on 900M tokens made the model much better adapted to legal queries.
We also probed how this further training impacted ability to handle fine-tuning on downstream tasks. The downstream task we chose was identifying legal terms in case texts. For this task, we defined legal terms as any tokens or multi-token collocations that are defined in Blackâs Law Dictionary [25], the premier legal dictio- nary. We split the legal terms into training/dev/test splits. We put a 4-layer fully-connected MLP on top of both Bert-Base-Cased and Legal BERT, where the training objective was B-I-O tagging of tokens in 510-token sequences. We trained both on a set of 200M tokens randomly selected from case.law cases not previ- ously seen by the model and not containing any of the legal terms in dev or test, with the training legal terms tagged using string comparisons. We then tested both fine-tuned modelsâ ability to identify legal terms from the test split in case law. The model based on Bert-Base-Cased achieved F1 = 0.35, whereas Legal BERT achieved F1 = 0.44. As a baseline, two trained lawyers given the same task on three 510-token sequences each achieved F1 = 0.26. These results indicate that Legal BERT is much better adapted to the legal domain than Bert-Base-Cased. Blackâs Law Dictionary has well- developed standards for what terms are or are not included. BERT models learn those standards via the train set, whereas lawyers are not necessarily familiar with them. In addition, pre-processing
dropped some legal terms that were subsets of too many others, which the lawyers tended to identify. This explains how BERT- based models could outperform trained humans.
5 EXPERIMENTS 5.1 BERT-based models In the following, we frame our task as textual entailment and numer- ical regression. A given entailment prompt q mentions the relevant subsection (as in Figure 1)4. We extract s, the text of the relevant subsection, from the statutes. In q, we replace Section XYZ applies with This applies. We feed the string â[CLS] + s + [SEP] + q + c + [SEP]", where â+" is string concatenation, to BERT [18]. Let r be the vector representation of the token [CLS] in the final layer. The answer (entailment or contradiction) is predicted as д(θ1 · r ) where θ1 is a learnable parameter and д is the sigmoid function. For numerical questions, all statutes have to be taken into account, which would exceed BERTâs length limit. We encode â[CLS] all [SEP] + q + c + [SEP]" into r and predict the answer as µ + Ïθ2 · r where θ2 is a learned parameter, and µ and Ï are the mean and standard deviation of the numerical answers on the training set.
For entailment, we use a cross-entropy loss, and evaluate the models using accuracy. We frame the numerical questions as a taxpayer having to compute tax owed. By analogy with the concept of âsubstantial understatement of income taxâ from §6662(d), we define â(y, Ëy) = max(0.1y,5000) where y is the true amount of tax owed, and Ëy is the taxpayerâs prediction. The case â(y, Ëy) ⥠1 corresponds to a substantial over- or understatement of tax. We compute the fraction of predictions Ëy such that â(y, Ëy) < 1 and report that as numerical accuracy.5 The loss function used is:
L= Y yilog gi + (1 - yi) log(1 - gi) + Y* max(A(yi, Gi) - 1,0) ieh iely
where I1 (resp. I2) is the set of entailment (resp. numerical) ques- tions, yi is the ground truth output, and Ëyi is the modelâs output. We use Adam [34] with a linear warmup schedule for the learning rate. We freeze BERTâs parameters, and experiment with unfreezing BERTâs top layer. We select the final model based on early stopping with a random 10% of the training examples reserved as a dev set. The best performing model for entailment and for numerical questions are selected separately, during a hyperparameter search around the recommended setting (batch size=32, learning rate=1e- 5). To check for bias in our dataset, we drop either the statute, or the context and the statute, in which case we predict the answer from BERTâs representation for â[CLS] + c + [SEP] + q + [SEP]" or â[CLS] + q + [SEP]", whichever is relevant.
5.2 Feedforward models We follow Arora et al. [2] to embed strings into vectors, with smoothing parameter equal to 10â3. We use either tax vectors de- scribed in Section 4 or word2vec vectors [39]. We estimate unigram counts from the corpus used to build the tax vectors, or the train- ing set, whichever is relevant. For a given context c and question
4The code for these experiments can be found under https://github.com/SgfdDttt/sara 5For a company, a goal would be to have 100% accuracy (resulting in no tax penalties) while paying the lowest amount of taxes possible (giving them something of an interest- free loan, even if the IRS eventually collects the understated tax).
Inputs - question context statutes statutes question context statutes statutes question context statutes question context statutes question context statutes question context statutes Table 5: Test set scores. We report the 90% confidence inter- val. All confidence intervals for entailment round to 8.3%.
or prompt q, we retrieve relevant subsection s as above. Using Arora et al. [2], s is mapped to vector vs , and (c, q) to vc+q . Let r = [vs , vq+c , |vs â vc+q |, vs â vc+q ] where [a, b] is the concate- nation of a and b, |.| is the element-wise absolute value, and â is the element-wise product. The answer is predicted as д(θ1 · f (r )) or µ + Ïθ2 · f (r ), as above, where f is a feed-forward neural net- work. We use batch normalization between each layer of the neural network [31]. As above, we perform ablation experiments, where we drop the statute, or the context and the statute, in which case r is replaced by vc+q or vq . We also experiment with f being the identity function (no neural network). Training is otherwise done as above, but without the warmup schedule.
5.3 Results We report the accuracy on the test set (in %) in Table 5. In our ab- lation experiments, âquestion" models have access to the question only, âcontext" to the context and question, and âstatute" to the statutes, context and question. For entailment, we use a majority baseline. For the numerical questions, we find the constant that minimizes the hinge loss on the training set up to 2 digits: $11,023. As a check, we swapped in the concatenation of the RTE datasets of Bentivogli et al. [5], Dagan et al. [16], Giampiccolo et al. [26], Haim et al. [28], and achieved 73.6% accuracy on the dev set with BERT, close to numbers reported in Wang et al. [59]. BERT was trained on Wikipedia, which contains snippets of law text: see article United States Code and links therefrom, especially Internal Revenue Code. Overall, models perform comparably to the baseline, independent of the underlying method. Performance remains mostly unchanged when dropping the statutes or statutes and context, meaning that models are not utilizing the statutes. Adapting BERT or word vec- tors to the legal domain has no noticeable effect. Our results suggest that performance will not be improved through straightforward application of a large-scale language model, unlike it is on other datasets: Raffel et al. [45] achieved 94.8% accuracy on COPA [49]
using a large-scale multitask Transformer model, and BERT pro- vided a huge jump in performance on both SQuAD 2.0 [46] (+8.2 F1) and SWAG [63] (+27.1 percentage points accuracy) datasets as compared to predecessor models, pre-trained on smaller datasets. Here, we focus on the creation of resources adapted to the legal domain, and on testing off-the-shelf and historical solutions. Future work will consider specialized reasoning models.
6 RELATED WORK There have been several efforts to translate law statutes into expert systems. Oracle Policy Automation has been used to formalize rules in a variety of contexts. TAXMAN [38] focuses on corporate reorga- nization law, and is able to classify a case into three different legal types of reorganization, following a theorem-proving approach. Sergot et al. [52] translate the major part of the British National- ity Act 1981 into around 150 rules in micro-Prolog, proving the suitability of Prolog logic to express and apply legislation. Bench- Capon et al. [4] further discuss knowledge representation issues. Closest to our work is Sherman [53], who manually translated part of Canadaâs Income Tax Act into a Prolog program. To our knowl- edge, the projects cited did not include a dataset or task that the programs were applied to. Other works have similarly described the formalization of law statutes into rule-based systems [24, 30, 32, 51]. Yoshioka et al. [62] introduce a dataset of Japanese statute law and its English translation, together with questions collected from the Japanese bar exam. To tackle these two tasks, Kim et al. [33] investigate heuristic-based and machine learning-based methods. A similar dataset based on the Chinese bar exam was released by Zhong et al. [66]. Many papers explore case-based reasoning for law, with expert systems [43, 56], human annotations [8] or automatic annotations [3] as well as transformer-based methods [44]. Some datasets are concerned with very specific tasks, as in tagging in contracts [10], classifying clauses [11], and classification of documents [12] or single paragraphs [6]. Ravichander et al. [47] have released a dataset of questions about privacy policies, elicited from turkers and answered by legal experts. Saeidi et al. [50] frame the task of statutory reasoning as a dialog between a user and a dialog agent. A single rule, with or without context, and a series of followup questions are needed to answer the original question. Contrary to our dataset, rules are isolated from the rest of the body of rules, and followup questions are part of the task.
Clark et al. [14] describe a decades-long effort to answer science exam questions stated in natural language, based on descriptive knowledge stated in natural language. Their system relies on a variety of NLP and specialized reasoning techniques, with their most significant gains recently achieved via contextual language modeling. This line of work is the most related in spirit to where we believe research in statutory reasoning should focus. An interesting contrast is that while scientific reasoning is based on understanding the physical world, which in theory can be informed by all manner of evidence beyond texts, legal reasoning is governed by human- made rules. The latter are true by virtue of being written down and agreed to, and are not discovered through evidence and a scientific process. Thus, statutory reasoning is an exceptionally pure instance of a reasoner needing to understand prescriptive language.
Weston et al. [60] introduced a set of prerequisite toy tasks for AI systems, which require some amount of reasoning and common
sense knowledge. Contrary to the present work, the types of ques- tion in the train and test sets are highly related, and the vocabulary overlap is quite high. Numeric reasoning appears in a variety of MR challenges, such as in DROP [19].
Understanding procedural language â knowledge needed to per- form a task â is related to the problem of understanding statutes, and so we provide a brief description of some example investiga- tions in that area. Zhang et al. [65] published a dataset of how-to instructions, with human annotations defining key attributes (actee, purpose...) and models to automatically extract the attributes. Simi- larly, Chowdhury et al. [13] describe a dataset of human-elicited procedural knowledge, and Wambsganà and Fromm [58] automati- cally detect repair instructions from posts on an automotive forum. Branavan et al. [7] employed text from an instruction manual to improve the performance of a game-playing agent.
7 CONCLUSION We introduce a resource of law statutes, a dataset of hand-curated rules and cases in natural language, and a symbolic solver able to represent these rules and solve the challenge task. Our hand- built solver contrasts with our baselines based on current NLP approaches, even when we adapt them to the legal domain.
The intersection between NLP and the legal domain is a growing area of research [3, 11, 33, 35, 48], but with few large-scale system- atic resources. Thus, in addition to the exciting challenge posed by statutory reasoning, we also intend this paper to be a contribution to legal-domain natural language processing.
Given the poor out-of-the box performance of otherwise very powerful models, this dataset, which is quite small compared to typ- ical MR resources, raises the question of what the most promising direction of research would be. An important feature of statutory reasoning is the relative difficulty and expense in generating care- fully constructed training data: legal texts are written for and by lawyers, who are cost-prohibitive to employ in bulk. This is un- like most instances of MR where everyday texts can be annotated through crowdsourcing services. There are at least three strategies open to the community: automatic extraction of knowledge graphs from text with the same accuracy as we did for our Prolog solver [57]; improvements in MR to be significantly more data efficient in training; or new mechanisms for the efficient creation of training data based on pre-existing legal cases.
Going forward, we hope our resource provides both (1) a bench- mark for a challenging aspect of natural legal language processing as well as for machine reasoning, and (2) legal-domain NLP models useful for the research community.
REFERENCES [1] 2019. Caselaw Access Project. http://case.law [2] Sanjeev Arora, Yingyu Liang, and Tengyu Ma. 2016. A simple but tough-to-beat
baseline for sentence embeddings. (2016).
[3] Kevin D Ashley and Stefanie Brüninghaus. 2009. Automatically classifying case texts and predicting outcomes. Artificial Intelligence and Law 17, 2 (2009), 125â165.
[4] Trevor JM Bench-Capon, Gwen O Robinson, Tom W Routen, and Marek J Sergot. 1987. Logic programming for large scale applications in law: A formalisation of supplementary benefit legislation. In Proceedings of the 1st international conference on Artificial intelligence and law. 190â198.
[5] Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The Fifth PASCAL Recognizing Textual Entailment Challenge.. In TAC.
[6] Carlo Biagioli, Enrico Francesconi, Andrea Passerini, Simonetta Montemagni, and Claudia Soria. 2005. Automatic semantics extraction in law documents. In Proceedings of the 10th international conference on Artificial intelligence and law. 133â140.
[7] SRK Branavan, David Silver, and Regina Barzilay. 2012. Learning to win by reading manuals in a monte-carlo framework. Journal of Artificial Intelligence Research 43 (2012), 661â704.
[8] Stefanie Bruninghaus and Kevin D Ashley. 2003. Predicting outcomes of case based legal arguments. In Proceedings of the 9th international conference on Artifi- cial intelligence and law. 233â242.
[9] Hector Neri Castañeda. 1967. Comment on D. Davidsonâs âThe logical forms of action sentencesâ. The Logic of Decision and Action (1967).
[10] Ilias Chalkidis and Ion Androutsopoulos. 2017. A Deep Learning Approach to Contract Element Extraction.. In JURIX. 155â164.
[11] Ilias Chalkidis, Ion Androutsopoulos, and Achilleas Michos. 2018. Obligation and prohibition extraction using hierarchical rnns. arXiv preprint arXiv:1805.03871 (2018).
[12] Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, and Ion Androut- sopoulos. 2019. Large-Scale Multi-Label Text Classification on EU Legislation. arXiv preprint arXiv:1906.02192 (2019).
[13] Debajyoti Paul Chowdhury, Arghya Biswas, Tomasz Sosnowski, and Kristina Yordanova. 2020. Towards Evaluating Plan Generation Approaches with Instruc- tional Texts. arXiv preprint arXiv:2001.04186 (2020).
[14] Peter Clark, Oren Etzioni, Tushar Khot, Bhavana Dalvi Mishra, Kyle Richardson, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, Niket Tandon, Sumithra Bhakthavatsalam, et al. 2019. FromâFâtoâAâon the NY Regents Science Exams: An Overview of the Aristo Project. arXiv preprint arXiv:1909.01958 (2019).
[15] Robin Cooper, Dick Crouch, Jan Van Eijck, Chris Fox, Johan Van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, et al. 1996. Using the framework. Technical Report. Technical Report LRE 62-051 D-16, The FraCaS Consortium.
[16] Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recog- nising textual entailment challenge. In Machine Learning Challenges Workshop. Springer, 177â190.
[17] Donald Davidson. 1967. The logical forms of action sentences. The Logic of Decision and Action (1967).
[18] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).
[19] Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. arXiv preprint arXiv:1903.00161 (2019). [20] Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S Weld. 2008. Open information extraction from the web. Commun. ACM 51, 12 (2008), 68â74. [21] Edward A Feigenbaum. 1992. Expert systems: principles and practice. (1992). [22] David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A Kalyanpur, Adam Lally, J William Murdock, Eric Nyberg, John Prager, et al. 2010. Building Watson: An overview of the DeepQA project. AI magazine 31, 3 (2010), 59â79.
[23] Noah S Friedland, Paul G Allen, Gavin Matthews, Michael Witbrock, David Baxter, Jon Curtis, Blake Shepard, Pierluigi Miraglia, Jurgen Angele, Steffen Staab, et al. 2004. Project halo: Towards a digital aristotle. AI magazine 25, 4 (2004), 29â29. [24] Wachara Fungwacharakorn and Ken Satoh. 2018. Legal Debugging in Proposi- tional Legal Representation. In JSAI International Symposium on Artificial Intelli- gence. Springer, 146â159.
[25] Bryan A Gardner. 2019. Blackâs Law Dictionary (11 ed.). [26] Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of the ACL- PASCAL workshop on textual entailment and paraphrasing. Association for Com- putational Linguistics, 1â9.
[27] David Gunning, Vinay K Chaudhri, Peter E Clark, Ken Barker, Shaw-Yi Chaw, Mark Greaves, Benjamin Grosof, Alice Leung, David D McDonald, Sunil Mishra, et al. 2010. Project Halo UpdateâÄŤProgress Toward Digital Aristotle. AI Maga- zine 31, 3 (2010), 33â58.
[28] R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second pascal recognising textual entail- ment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment.
[29] Herbert Lionel Adolphus Hart and Herbert Lionel Adolphus Hart. 2012. The concept of law. Oxford university press.
[30] Robert Hellawell. 1980. A computer program for legal planning and analysis: Taxation of stock redemptions. Columbia Law Review 80, 7 (1980), 1363â1398.
[31] Sergey Ioffe and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167 (2015).
[32] Imran Khan, Muhammad Sher, Javed I Khan, Syed M Saqlain, Anwar Ghani, Husnain A Naqvi, and Muhammad Usman Ashraf. 2016. Conversion of legal text
to a logical rules set from medical law using the medical relational model and the world rule model for a medical decision support system. In Informatics, Vol. 3. Multidisciplinary Digital Publishing Institute, 2.
[33] Mi-Young Kim, Juliano Rabelo, and Randy Goebel. 2019. Statute Law Informa- tion Retrieval and Entailment. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law. 283â289.
[34] Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic opti- mization. arXiv preprint arXiv:1412.6980 (2014).
[35] Anastassia Kornilova and Vladimir Eidelman. 2019. BillSum: A Corpus for Automatic Summarization of US Legislation. In Proceedings of the 2nd Workshop on New Frontiers in Summarization. Association for Computational Linguistics, Hong Kong, China, 48â56. https://doi.org/10.18653/v1/D19-5406
[36] John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. (2001).
[37] Robert S Ledley and Lee B Lusted. 1959. Reasoning foundations of medical diagnosis. Science 130, 3366 (1959), 9â21.
[38] L Thorne McCarty. 1976. Reflections on TAXMAN: An experiment in artificial intelligence and legal reasoning. Harv. L. Rev. 90 (1976), 837.
[39] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems. 3111â3119.
[40] Randolph A Miller, Harry E Pople Jr, and Jack D Myers. 1982. Internist-I, an experimental computer-based diagnostic consultant for general internal medicine. New England Journal of Medicine 307, 8 (1982), 468â476.
[41] Tom Mitchell, William Cohen, Estevam Hruschka, Partha Talukdar, Bishan Yang, Justin Betteridge, Andrew Carlson, Bhanava Dalvi, Matt Gardner, Bryan Kisiel, et al. 2018. Never-ending learning. Commun. ACM 61, 5 (2018), 103â115. [42] Terence Parsons. 1990. Events in the Semantics of English. Vol. 334. MIT press
Cambridge, MA.
[43] Walter G Popp and Bernhard Schlink. 1974. Judith, a computer program to advise lawyers in reasoning a case. Jurimetrics J. 15 (1974), 303.
[44] Juliano Rabelo, Mi-Young Kim, and Randy Goebel. 2019. Combining Similarity and Transformer Methods for Case Law Entailment. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law. 290â296.
[45] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the lim- its of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683 (2019).
[46] Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Donât Know: Unanswerable Questions for SQuAD. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics.
[47] Abhilasha Ravichander, Alan W Black, Shomir Wilson, Thomas Norton, and Norman Sadeh. 2019. Question Answering for Privacy Policies: Combining Computational and Legal Perspectives. arXiv preprint arXiv:1911.00841 (2019).
[48] Edwina L Rissland, Kevin D Ashley, and Ronald Prescott Loui. 2003. AI and Law: A fruitful synergy. Artificial Intelligence 150, 1-2 (2003), 1â15.
[49] Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. 2011. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In 2011 AAAI Spring Symposium Series.
[50] Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rocktäschel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpretation of natural language rules in conversational machine reading. arXiv preprint arXiv:1809.01494 (2018).
[51] Ken Satoh, Kento Asai, Takamune Kogawa, Masahiro Kubota, Megumi Naka- mura, Yoshiaki Nishigai, Kei Shirakawa, and Chiaki Takano. 2010. PROLEG: an implementation of the presupposed ultimate fact theory of Japanese civil code by PROLOG technology. In JSAI International Symposium on Artificial Intelligence. Springer, 153â164.
[52] Marek J. Sergot, Fariba Sadri, Robert A. Kowalski, Frank Kriwaczek, Peter Ham- mond, and H Terese Cory. 1986. The British Nationality Act as a logic program. Commun. ACM 29, 5 (1986), 370â386.
[53] David M Sherman. 1987. A Prolog model of the income tax act of Canada. In Proceedings of the 1st international conference on Artificial intelligence and law. 127â136.
[54] Edward H Shortliffe and Bruce G Buchanan. 1975. A model of inexact reasoning in medicine. Mathematical biosciences 23, 3-4 (1975), 351â379.
[55] Jerrold Soh, How Khang Lim, and Ian Ernst Chai. 2019. Legal Area Classification: A Comparative Study of Text Classifiers on Singapore Supreme Court Judgments. Association for Computational Linguistics, Minneapolis, Minnesota.
[56] Anne vdL Gardner. 1983. The design of a legal analysis program. In AAAI-83. 114â118.
[57] Lai Dac Viet, Vu Trong Sinh, Nguyen Le Minh, and Ken Satoh. 2017. ConvAMR: Abstract meaning representation parsing for legal document. arXiv preprint arXiv:1711.06141 (2017).
[58] Thiemo Wambsganà and Hansjörg Fromm. 2019. Mining User-Generated Repair Instructions from Automotive Web Communities. In Proceedings of the 52nd Hawaii International Conference on System Sciences.
[59] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier bench- mark for general-purpose language understanding systems. In Advances in Neural Information Processing Systems. 3261â3275.
[60] Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart van Mer- riënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698 (2015). [61] Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Brussels, Belgium, 2369â2380. https://doi.org/10. 18653/v1/D18-1259
[62] Masaharu Yoshioka, Yoshinobu Kano, Naoki Kiyota, and Ken Satoh. 2018. Overview of japanese statute law retrieval and entailment task at coliee-2018. In Twelfth International Workshop on Juris-informatics (JURISIN 2018).
[63] Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large- scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326 (2018).
[64] Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. Broad- Coverage Semantic Parsing as Transduction. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, 3786â3798. https://doi.org/ 10.18653/v1/D19-1392
[65] Ziqi Zhang, Philip Webster, Victoria S Uren, Andrea Varga, and Fabio Ciravegna. 2012. Automatically Extracting Procedural Knowledge from Instructional Texts using Natural Language Processing.. In LREC, Vol. 2012. 520â527.
[66] Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2019. JEC-QA: A Legal-Domain Question Answering Dataset. arXiv preprint arXiv:1911.12011 (2019). | {
"id": "1502.03167"
} |
2005.06625 | Cyberbullying Detection with Fairness Constraints | Cyberbullying is a widespread adverse phenomenon among online social
interactions in today's digital society. While numerous computational studies
focus on enhancing the cyberbullying detection performance of machine learning
algorithms, proposed models tend to carry and reinforce unintended social
biases. In this study, we try to answer the research question of "Can we
mitigate the unintended bias of cyberbullying detection models by guiding the
model training with fairness constraints?". For this purpose, we propose a
model training scheme that can employ fairness constraints and validate our
approach with different datasets. We demonstrate that various types of
unintended biases can be successfully mitigated without impairing the model
quality. We believe our work contributes to the pursuit of unbiased,
transparent, and ethical machine learning solutions for cyber-social health. | http://arxiv.org/pdf/2005.06625 | Oguzhan Gencoglu | cs.CL, cs.LG, cs.SI | 11 pages, 2 figures | null | cs.CL | 20200509 | 20200929 | 0 2 0 2
p e S 9 2 ] L C . s c [
2 v 5 2 6 6 0 . 5 0 0 2 : v i X r a
# Cyberbullying Detection with Fairness Constraints
Oguzhan Gencoglu Tampere University, Faculty of Medicine and Health Technology, Tampere, Finland [email protected]
AbstractâCyberbullying is a widespread adverse phenomenon among online social interactions in todayâs digital society. While numerous computational studies focus on enhancing the cyberbullying detection performance of machine learning algorithms, proposed models tend to carry and reinforce unintended social biases. In this study, we try to answer the research question of âCan we mitigate the unintended bias of cyberbullying detection models by guiding the model training with fairness constraints?â. For this purpose, we propose a model training scheme that can employ fairness constraints and validate our approach with different datasets. We demonstrate that various types of unintended biases can be successfully mitigated without impairing the model quality. We believe our work contributes to the pursuit of unbiased, transparent, and ethical machine learning solutions for cyber-social health.
CYBERBULLYING (CB) has emerged as a seri- ous and widespread adverse phenomenon among online social interactions in todayâs digital soci- ety. Estimates indicate that between 15-35% of young people have been victims of cyberbullying and between 10-20% of individuals admit to hav- ing cyberbullied others [1]. Cyberbullying can be observed in the form of aggression, harassment, derogation, dehumanization, toxicity, profanity, racism, sexism, call for violence, hate speech, threatening, or any combination thereof. Conse- quences of cyberbullying are severe as victims were shown to be at a greater risk of depression, anxiety, negative emotions, self-harm, suicidal be- haviors, and psychosomatic symptoms than non- victims.
i.e., automatically classifying online text messages based on bully- ing content with machine learning, has been pro- posed by numerous studies to detect and prevent this phenomenon [2]. Accurate and timely detec- tion of cyberbullying remains an unsolved task as cyberbullying is immensely context-dependent
and occasionally subtle (e.g. hidden mockery and sarcasm). In addition, signiï¬cant amount of cyberbullying takes place in private conversations without any publicity. Statistical drifts and over- all changes in popular culture, politics, human language, and media of communication further increase the complexity of this task.
While cyberbullying detection research focus predominantly on increasing the overall accuracy and timeliness of detection, recent studies reveal that proposed systems carry signiï¬cant amount of unintended bias against certain groups of users, demographics, languages, dialects, terms, phrases, or topics. For instance, terms like âgayâ and âjewâ, present in an informative or conversa- tional context, tend to incline the model predic- tions towards âhate speechâ or âhigh toxicityâ. tweets in African-American English Similarly, were shown to be more likely to be classiï¬ed as abusive or offensive compared to other tweets [3]. As such groups can be minorities among the whole population, trying to maximize the detec- tion accuracy and evaluating approaches solely
1
2
on overall detection performance exacerbates the bias even further. The source of these undesir- able biases can be machine learning algorithms, datasets used for training, pre-trained language models, human annotators, or typically a combi- nation thereof.
In this study, we try to answer the following two research questions: âCan we mitigate the unintended bias of cyberbullying detection mod- els by guiding the model training with fairness constraintsâ? and âIf so, how much does such bias mitigation impair the cyberbullying detection performanceâ?. We hypothesize that, if formu- lated in a fairness context, recent advancements in constrained optimization [4] can be successfully utilized to mitigate the unintended bias in cyber- bullying detection models without hindering over- all detection performance. To test our hypothesis, we formulate frequently used fairness assessment metrics from the literature as constraints that can be utilized during model training. We then validate the proposed approach with 4 different datasets and varying bias contexts such as gender bias, language bias, and recency bias. We show that our approach enables the discovery of models with more equitable performance across differ- ent groups of interest while improving overall and group-speciï¬c cyberbullying detection per- formances. In addition, our approach is agnostic to data modality (e.g. text, image, graph, tabular) and does not require group labels during infer- ence. We release the full source code of our study along with the trained models (https://github.com/ ogencoglu/fair cyberbullying detection). We be- lieve our work contributes to the pursuit of un- biased, transparent, ethical, and accountable ma- chine learning solutions for cyber-social health.
# THE PURSUIT OF FAIRNESS
Previous studies of cyberbullying detection utilized numerous machine learning approaches varying from rule-based systems to deep learn- ing [2]. Reported performances of the proposed approaches differ signiï¬cantly between studies as well (F1 scores in the range of 0.4-0.8 for most studies), depending on the experimentation dataset and deï¬nition of CB [2]. As in several other natural language processing (NLP) tasks, most recent CB detection studies utilize deep neu- ral networks and transfer learning, i.e., employing
a pre-trained language model to retrieve informa- tive numerical representations of the textual data (e.g. representations of tokens, words, sentences, or paragraphs) and training a classiï¬er with these representations [5], [6].
Computational methods can propagate and even reinforce social biases. A machine learning model is considered to contain unintended bias if it performs better for some demographic groups or groups of interest than others. In natural lan- guage processing, unintended biases emerge from bias in datasets, bias in distributed word embed- dings (e.g. word2vec, GloVe), bias in contextual word embeddings (e.g. BERT, ELMo, GPT-2), bias in sentence embeddings, bias in machine learning algorithms, or bias in human annotators. For instance, a recent study by Nadeem et al. show that stereotypical bias is ampliï¬ed as pre- trained language models (e.g. BERT, RoBERTa, XLNet, GPT-2) become stronger [7]. While there have been several studies of CB detection and several studies of algorithmic bias analysis in ma- chine learning, only a handful of studies attempt to combine the two. To the best of our knowledge, an extensive systematic review of bias quantiï¬- cation and mitigation methods in cyberbullying detection does not exist as of August 2020. In cyberbullying detection context,
even though studies show that CB is more common to be perpetrated by certain identity groups (e.g. young males) and perpetrated against certain identity groups (e.g. non-heterosexuals), detection performance of an unbiased model in terms of equality of odds is expected to be the same among all groups as well as with respect to the overall population. For instance, an unbiased model should not classify a text containing iden- tity terms (e.g. black, jew, gay, white, christian, transgender) in an innocuous context as âcyber- bullyingâ only because such terms appear fre- quently in cyberbullying incidents. Nonetheless, previous studies show that models tend to label non-toxic statements containing identity terms as toxic, manifesting a false positive bias [8], [9]. Bias is observed against different languages and dialects as well. Classiï¬ers were shown to predict African-American English tweets more likely to be abusive or offensive than Standard American English [3].
Proposed approaches for mitigation of unin-
tended bias in CB detection predominantly in- volve balancing the training data in some statisti- cal sense, such as oversampling, data augmenta- tion, or sample weighting. For instance, Dixon et al. add additional non-toxic examples containing identity terms from Wikipedia articles to training data [8]. Badjatiya et al. detect bias-sensitive words and replace them with neutral words or tokens to reduce stereotypical bias for hate speech detection [10]. Nozza et al. sample additional data from an external corpus to augment training data for misogyny detection [5]. Similarly, Park et al. augment the training data by identifying male entities and swapping them with equivalent female entities and vice-versa in order to reduce gender bias in abusive language detection [9]. These methods, inevitably, introduce additional calibration hyper-parameters that are highly inï¬u- ential both on the performance of cyberbullying detection and bias mitigation. For instance, in- creasing the prevalence of disadvantaged groups during training may impair overall classiï¬cation performance due to high distortion of decision boundary in the feature space. Furthermore, as the statistical properties of the modeled phenomena change over time in unforeseen ways (also known as concept drift), proposed calibration parameters become obsolete with time in real-life applica- tions.
In this study, we formulate the task of training more equitable CB detection models as a con- strained optimization problem. Just like standard deep neural network training, the optimization task corresponds to adjusting model weights by minimizing a loss function, iterated over the training data. However, by imposing fairness con- straints during model training, we enforce training to converge to a more equitable solution for the chosen groups of interest. Essentially, our method does not alter the training data and can be used simultaneously with the abovementioned data de- biasing methods for further bias mitigation.
# METHODS
# Datasets
We validate our methods by performing exper- iments with 4 publicly available datasets collected from 4 different media. In order to further test the generalization power of the proposed bias
May 2020
mitigation approach, we appoint different identity including group contexts for each experiment identity-related groups, language-related groups, and date-related groups. Datasets differ from each other regarding total number of samples, overall proportion of cyberbullying samples, and group- speciï¬c proportions of cyberbullying samples.
Focus of the ï¬rst experiment is gender bias. We demonstrate our method on the Perspective APIâs Jigsaw dataset of 403,957 comments with toxicity and identity annotations [11]. We select male and female identities as groups of interest. Percentage of the male and female groups are 11.0% and 13.2%, respectively. Toxicity propor- tion among the male and female groups are 15.0% and 13.7%, respectively and overall toxicity pro- portion in the dataset is 11.4%.
Focus of the second experiment is language bias. We employ a multilingual dataset of 107,441 tweets with hate speech annotations in 5 lan- guages, namely English, Italian, Polish, Por- tuguese, and Spanish [6]. Percentage of English, Italian, Polish, Portuguese, and Spanish tweets are 78.1%, 5.4%, 10.2%, 1.8%, and 4.6%, respec- tively. Hate speech proportion among English, Italian, Polish, Portuguese, and Spanish tweets are 26.2%, 19.5%, 9.0%, 20.3%, and 39.7% respec- tively and overall hate speech proportion in the dataset is 24.6%.
Third experiment has a more practical fo- cus of date bias or recency bias. As popular culture and languages change, new phrases of insult may emerge, leading to underperforming models on more recent observations. We would like to ï¬rst demonstrate, then mitigate the bias of recency in CB classiï¬ers. The dataset uti- lized for validating our mitigation approach is WikiDetox dataset (shortly referred as Wiki from now on) from Wikipedia Talk Pages [12]. This dataset consists of three distinct corpora of com- ments from English Wikipedia Talk pages posted between 2001-2016 and labeled for aggression, personal attack, and toxicity. We select 77,972 comments that have all of the annotations regard- ing aggression, personal attack, and toxicity for this study. The groups of interest are comments posted between the years 2001-2014 and 2015- 2016 (recent group). Percentage of the 2001- 2014 and 2015-2016 groups are 93.8% and 6.2%, respectively. CB proportion among the 2001-2014
3
4
and 2015-2016 groups are 20.9% and 19.3%, respectively and overall CB proportion in the dataset is 20.8%.
Final experiment concentrates on bias among hate speech on the basis of religion, race, and nationality. For this purpose, we utilize the largest corpus of hate speech to date with theoretically- justiï¬ed annotations called Gab Hate Corpus [13]. Dataset consists of 27,665 posts from the social network platform Gab. Percentage of hate speech on the basis of religion, race, and nationality are 5.1%, 6.5%, and 4.8%, respectively. Hate speech proportion among the religion, race, and nationality groups are 51.2%, 53.7%, and 39.6%, respectively and overall hate speech proportion in the dataset is 8.0%.
# Cyberbullying Detection Model
We utilize recently introduced deep neu- ral network architectures from NLP research, i.e. transformer networks, for performing binary classiï¬cation of cyberbullying (cyberbullying or not). We employ a multilingual language model, sentence-DistilBERT [14], for extracting sentence embeddings to represent each post/comment. We then train a simple 3-layer (2 dense layers of size 512 and 32, respectively and an output layer) fully-connected neural network using the sentence embeddings as input features.
We chose sentence embeddings instead of ï¬ne-tuning on token embeddings (e.g. out of original BERT model or its variants) ï¬rstly due to its low computational requirements [14]. Sec- ondly, even though it is possible to represent sentences with a standard BERT model as well (e.g. by averaging the token embeddings out of layer), for several semantic textual the output similarity tasks BERT sentence embeddings were shown to be inferior to embeddings speciï¬cally trained for representing sentences [14]. We chose a multilingual model to be able to extract useful representations from the Twitter dataset as well, as it includes tweets from 5 different languages. For each dataset, we train baseline (uncon- strained) and fairness-constrained models in a mini-batch manner for 75 epochs with a batch size of 128 with Adam optimizer (learning rate of 5 à 10â4). In order to avoid overï¬tting, mod- els at the epoch that maximizes the F1 score on the validation set (70%-15%-15% training-
validation-test split) is selected as ï¬nal model for each training. Baseline and fairness-constrained models are trained, validated, and tested with the same data and employ the same neural network architecture with the same hyper-parameters in order to establish fair comparison. We would like to emphasize that instead of maximizing the cy- berbullying detection performance, mitigation of unintended bias while maintaining the detection performance has been the main focus of this study. Therefore, we neither experimented with different text preprocessing schemes, pretrained models, sophisticated neural network architec- tures, nor performed any hyper-parameter tuning.
# Fairness Constraints
Fairness of machine learning models can be assessed by the metrics False Negative Equality Difference (FNED) and False Positive Equality Difference (FPED), as done in several studies [6], [8], [9]. FNED/FPED is simply the sum of deviations of group-speciï¬c False Negative Rates (FNRs)/False Positive Rates (FPRs) from the overall FNR/FPR. For N groups and set of observations belonging to each group being Gi â {1,··· ,N },
FNED= S_ |FNR-FNRe, iâ¬{1,--,N} FPED= \S_ |FPR-FPRe||. iâ¬{1,-- ,N}
In essence, group-speciï¬c error rates of fairer models deviate less from the overall error rates and consequently from each other, approaching an ideal equality of odds. Being related to the equalized odds fairness criteria, sum of FNED and FPED corresponds to the total unintended bias of a model (ideally zero).
Neural network training can be formulated as ï¬nding a set of parameters (network weights), θ, minimizing an objective function, fL:
min θ fL(θ), (2)
where fL is typically binary cross-entropy loss for binary classiï¬cation tasks (such as here) and min- imization is performed by backpropagation algo- rithm on the training set. In fairness-constrained neural network training, we would like to min- imize the same function in Equation (2) while
constraining the deviation of each group-speciï¬c FNR/FPR from the overall FNR/FPR in order to decrease the unintended bias, i.e.,
min θ fL(θ) subject to |F N R â F N RG1| < ÏF N R |F P R â F P RG1| < ÏF P R · · · |F N R â F N RGN | < ÏF N R |F P R â F P RGN | < ÏF P R,
where ÏF N R and ÏF P R are allowed deviations (corresponding to biases) from overall FNRs and FPRs, respectively.
In principle, implementation of Equation (3) involves 2N constraints (one for FNR and one for FPR for each group of interest). As constraints are inherently guiding the neural network training in our method, convergence becomes more difï¬cult with increasing number of constraints. Therefore, it is beneï¬cial to express the same inequalities with fewer number of constraints such as:
min θ s.t. max{F N RG1, · · · } â F N R < ÏF N R F N R â min{F N RG1, · · · } < ÏF N R max{F P RG1, · · · } â F P R < ÏF P R F P R â min{F P RG1, · · · } < ÏF P R. (4)
By constraining the upper and lower bounds of group-speciï¬c FNRs/FPRs with respect to over- all FNR/FPR, number of constraints can be de- creased strictly to four. Proposed constraints in Equation (4) correspond to direct incorporation of fairness metrics into neural network training in Equation (1). Essentially, such worst-case ori- ented reformulation of the fairness constraints can be considered as a robust optimization problem. Typical approaches that perform constrained optimization are based on Lagrange multipliers where a saddle point to the Lagrangian will cor- respond to an optimal solution (under fairly gen- eral conditions). However, this no longer holds for non-convex settings such as neural network training with gradient descent. In fact, gradient descent may not converge at all and oscillate in such settings. Furthermore, constrained neural network training (as well as training of several
May 2020
(3)
other machine learning algorithms) requires dif- ferentiable constraints and loss functions in order to compute the gradients for minimizing a given empirical loss. Considering fairness constraints are data-dependent counts and proportions (e.g. false positive rate), gradients (or subgradients) are either unavailable or always zero. To over- come this challenge, we propose utilizing recent advancements in solving constrained, non-convex optimization problems in which constraints can be non-differentiable (as in our study) [4]. Cotter et al. formulate such a problem as a two-player zero-sum game and introduce proxy constraints that are differentiable approximations of the orig- inal constraints [4]. During training, each proxy constraint function is penalized in such a propor- tion that the penalty satisï¬es the original (exact) constraint.
implementation of constrained training is performed via two data streams. First data stream feeds random data batches and cor- responding binary labels (cyberbullying or not) to the model. Second data stream traces the samples belonging to the groups of interest in a given batch and computes the constraint vi- olations, which in return will be used for ad- ditional penalty on top of the misclassiï¬ca- tion penalization from the ï¬rst stream. For in- stance, if the interest groups are race, religion, and nationality and ÏF N R is set the model will be penalized if the maximum of (F N Rrace, F N Rreligion, F N Rnationality) goes above F N Roverall + 0.1 or if the minimum of those goes below F N Roverall â 0.1 during each iteration. Same logic applies to FPRs. Once the model is trained, access to group attributes is not needed during inference, i.e., constrained models can be used in the same manner as unconstrained models. We implemented our experiments using TensorFlow framework in Python 3.7.
We set ÏF N R to 0.02, 0.15, 0.005, and 0.1; ÏF P R to 0.03, 0.1, 0.005, and 0.15 for Jigsaw, Twitter, Wiki, and Gab experiments, respectively. Even though a hypothetical, perfectly unbiased model would have zero deviation between group- speciï¬c rates and overall rates, it is important to remember that perfect calibration (outcomes being independent of group attributes) can not be satisï¬ed simultaneously with the balance for false negative and false positive rates. Therefore,
5
# 6
la +) True Label X S Positive negative o 2 True False o eas. eae 3 Q Positive Positive a 4 uv vo g a 3 o i o a & False True & . . g Negative Negative False False Negatives Positives FNR = ââ FPR = Label Label Positives Negatives 2. The Positives F. score = True False False Positives Negatives Positives TP: TN - FP: FN arr + FP) (TP + EN) (TN + FP)-(TN + FN) MCC =
Figure 1. Deï¬nition of common performance metrics for cyberbullying detection.
forcing the model to satisfy overly-conservative fairness constraints (very small ÏF N R and ÏF P R) or multiple fairness notions simultaneously may not only impair classiï¬cation performance, but may also result in training failing to converge (oscillating loss curves). Furthermore, overcon- straining the model may result in convergence to a trivial unbiased solution, such as a model outputting the same prediction regardless of the input.
# Evaluation
For evaluation of our methods we randomly split every dataset into training, validation, and test sets with 70%, 15%, and 15% splits, re- spectively (random seed is ï¬xed for every exper- iment for reproducibility and fair comparison).
All results reported in this study are calculated from the test set. Due to its popularity, we report the standard metric of F1 score of our binary classiï¬ers. As accuracy, area under the receiver operating characteristic curve, and F1 score may lead to misleading conclusions on imbalanced datasets (such as here), we evaluate our mod- els with Matthews correlation coefï¬cient (MCC) which does not exhibit such drawback [15]. MCC generates a high score only if the binary pre- dictor was able to correctly predict the majority of positive instances and the majority of nega- tive instances [15]. We use MCC for comparing group-speciï¬c detection performances between the baseline and fairness-constrained models as well. Furthermore, we perform McNemarâs test (also known as within-subjects chi-squared test) to test whether a statistically signiï¬cant difference between baseline (unconstrained) and constrained models exists in terms of cyberbullying classi- ï¬cation accuracy. Deï¬nitions of False Negative Rate, False Positive Rate, F1 score, and Matthews correlation coefï¬cient is depicted in Figure 1. For quantifying the unintended bias of our models, we use the sum of FNED and FPED, calculated on the test set.
# RESULTS
Results of each experiment on the test set including F1 score, Matthews correlation coef- ï¬cient, precision, recall, false negative equality difference, false positive equality difference, and total bias for baseline and fairness-constrained models can be examined in Table 1. In addition, percentage decrease in total bias (baseline model vs. constrained model) is also reported in the same table. Group speciï¬c FNRs and FPRs as well as corresponding biases of each group for all experiments are displayed in Figure 2 for detailed inspection. Table 2 reports Matthews correlation coefï¬cients of classiï¬ers for each group of inter- est. Finally, we report the contingency tables of unconstrained and constrained models, calculated over the same test sets of all experiments in Table 3. All McNemarâs tests calculated from these contingency tables resulted in statistical signiï¬cance with p < 0.001, therefore the null hypothesis (âpredictive performance of two mod- els are equalâ) can be rejected.
Our results (see Table 1) show that training
Jigsaw Dataset [ female ] Baas male [ female ] male 60 50 40 30 20 10 0 0 5 10 15 20 25 False Negative Rate (%) False Positive Rate (%)
Twitter Dataset
# Spanish
4
}
# Portuguese 4
+
# Polish
# 4 a
# italian
4
# L
# English
4
# Spanish
Portuguese Polish Italian UZZZZZZZZZ3+ English Se ee r . + + + 80 70 60 50 40 30 20 10 O 0 10 20 30 40 False Negative Rate (%) False Positive Rate (%) Wiki Dataset 2001-2014 25206. ââââ 2001-2014 [ voiszoe | 30 20 10 0 0 5 To False Negative Rate (%) False Positive Rate (%) GAB Dataset EE , vxtionality | a eetigion ZZ (WLLL tae LZZZZZEEEZLLEEIELEEED eligion 60 50 40 30 20 10 0 0 10 20 30 40 50 60 False Negative Rate (%) False Positive Rate (%) ined regi overall FNR/FPR (unconstrained) unconstrained for group rates overall FNR/FPR
(constrained)
constrained
5
# bias
Figure 2. False negative rates, false positive rates and biases for each group for unconstrained and fairness- constrained cyberbullying detection models on the test set among 4 experiments.
May 2020
7
8
Table 1. Performance of baseline (unconstrained) and fairness-constrained cyberbullying detection models evaluated
on the test set and corresponding biases (P = Precision, R = Recall). Baseline (unconstrained) Model
Fairness-constrained Model Dataset F1 MCC P Jigsaw 0.51 0.46 0.39 0.75 0.74 0.65 0.70 0.79 Twitter 0.77 0.71 0.73 0.82 Wiki 0.45 0.41 0.34 0.65 Gab R FNED FPED 0.031 1.293 0.013 0.236 0.116 0.488 0.004 0.968 Total Bias 0.147 1.781 0.018 1.203 F1 MCC P 0.54 0.49 0.59 0.49 0.74 0.66 0.78 0.70 0.84 0.81 0.91 0.78 0.47 0.42 0.46 0.48 R FNED FPED 0.001 1.316 0.001 0.228 0.031 0.263 0.000 0.644 Total Bias 0.031 1.579 0.001 0.872 Bias Decr. (%) 78.7 11.3 94.9 27.5
Table 2. Group-speciï¬c Matthews correlation coefï¬cients for baseline (unconstrained) and fairness-constrained models (better performance highlighted).
Baseline Constrained Jigsaw male 0.427 0.453 female 0.401 0.470 Eng 0.711 0.710 Ita 0.315 0.332 Twitter Pol 0.263 0.300 Port 0.230 0.335 Spa 0.329 0.335 â01-â14 0.703 0.807 Wiki â15-â16 0.710 0.807 rel. 0.240 0.279 Gab race 0.198 0.225 nat. 0.473 0.412
Table 3. Contingency tables of baseline (unconstrained) and fairness-constrained cyberbullying detection models calculated from the test sets of each experiment.
Jigsaw Twitter Wiki Gab Baseline model correct Baseline model wrong Baseline model correct Baseline model wrong Baseline model correct Baseline model wrong Baseline model correct Baseline model wrong Constrained model correct Constrained model wrong 48,802 1,888 5,849 4,055 13,398 539 758 1,422 10,205 299 768 424 3,582 57 222 289
p < 0.001 for all McNemarâs tests.
neural network models with fairness constraints decrease the overall unintended bias of cyber- bullying classiï¬ers signiï¬cantly, while increasing overall CB detection performance in terms of MCC. Total bias decrease with respect to uncon- strained models corresponds to 78.7%, 11.3%, 94.9%, and 27.5% for Jigsaw, Twitter, Wiki, and Gab experiments, respectively. Total bias is mainly coming from the FNRs for Twitter and Wiki experiments while the main contributor of bias is the FPRs for Jigsaw and Gab experiments. Our approach manages to mitigate the total un- intended bias for both situations. Models trained on the Twitter dataset (ï¬ve languages as groups of interest) carry the highest amount of total bias. Figure 2 shows that high FPR of Spanish and high FNR of Portuguese and Polish tweets are the main contributors to the bias. Models trained on the Gab dataset carry the second highest total bias especially due to high FPR of race and religion identity groups. Baseline model trained on the Wiki dataset exhibits very little bias among comments posted in the recent years vs. older comments. Guiding the model training with fair- ness constraints reduces that bias almost to zero.
Our results in Table 2 show that group-speciï¬c CB detection performance increases with fairness constraints for almost all groups as well. Fur- thermore, Table 3 shows that constrained mod- els perform statistically signiï¬cantly better than unconstrained models in terms of overall cyber- bullying detection accuracy. For instance for the same Jigsaw test set, 1,888 observations have been correctly classiï¬ed by the unconstrained model while being misclassiï¬ed by the con- strained model; however, 5,849 observations have been misclassiï¬ed by the unconstrained model while being correctly classiï¬ed by the constrained model. We believe future studies would beneï¬t from similar contingency table analyses as well, especially if performance metrics of compared models are close to each other.
# DISCUSSION
We show that we can mitigate the unintended bias of cyberbullying detection models by guiding the model training with fairness constraints. Our experiments demonstrate the validity of our ap- proach for different contexts such as gender bias and language bias. Furthermore, we show that
bias mitigation does not impair model quality, on the contrary, improves it. This improvement may be attributed to constraints serving as a regular- ization mechanism during training, similar to the common practice of constraining magnitudes of gradients or model weights. As our approach does not require modiï¬cation of model architecture and does not need group labels during inference, we can conclude that the model is able to jointly learn the statistical properties of cyberbullying and group membership.
High bias reduction of 78.7% and 94.9% is achieved in cases where there are only two groups of interest, i.e. male and female for Jigsaw and old and recent for Wiki experiment. We suspect that this is because learning to classify cyberbullying incidents are easier when only a few groups need to satisfy the upper-/lower-bound constraints. The lowest bias reduction, 11.3%, is achieved in the language bias mitigation experiment with the Twitter dataset where both unconstrained and constrained models perform considerably better for English tweets. Even though we utilize a mul- tilingual language model, this is not unexpected as English is the most prevalent language in the training data of pre-trained models as well as in the Twitter dataset itself. Models also show better performance in detecting hate speech on the basis of nationality vs. race and religion due to high false positive rates among race and religion groups in the Gab experiment.
One interesting observation is that fairness constraints tend to lower overall FPR by sacri- ï¬cing FNR. Consequently, precisions increase (as FNR is 1 - recall). This behaviour is meaningful in cases where the bias comes predominantly from FPED (e.g. Jigsaw and Gab), because there will be less room to deviate when FPRs are approaching to zero. However, we observe this phenomenon in Twitter and Wiki experiments as well and do not have an explanation to it. We also observe that constraints do not have to be satisï¬ed perfectly for successful bias mitigation. For instance, Twitter and Gab experiments have several groups that do not fall into the allowed deviation region (green bands in Figure 2), yet still carry considerably less bias compared to unconstrained models.
Our bias mitigation approach has several ad- vantages, especially in terms of generalization
May 2020
ability. First, is agnostic to neural network model architecture, regardless of complexity. Sec- ond, it is agnostic to data type and number of identity groups. While previously proposed miti- gation methods rely heavily on textual data and statistical balancing of it, our method can work with any modes of data including images, text, or network attributes. Another practical advantage of constrained training of neural network models is that it enables deï¬ning arbitrary performance constraints in terms of various rates including FNR, FPR, precision, recall, or accuracy. Certain real-life CB detection applications may not afford to miss any cyberbullying incidents while others may require minimum false-alarms. The former will beneï¬t from an allowed lower bound con- straint on the overall recall (i.e. recall should not decrease below the given value), while the latter application can employ an allowed upper bound constraint on the FPR. Our framework provides an uncomplicated mechanism to incorporate such constraints into training. Finally, as we address the problem of bias mitigation by adjusting the machine learning model rather than the data, our approach can be seamlessly combined with previ- ously proposed approaches for further reduction of unintended biases.
Considering the high prevalence of cyberbul- lying in private conversations instead of public ones, our study shares the same limitations with the previous studies of automatic cyberbullying detection: data scarcity and data representative- ness. There is a scarcity of annotated datasets that are available to public for research due to protection of users privacy and immense need of resources for quality annotations. Such scarcity is ampliï¬ed considering the research question of this study, as datasets having annotations of both identity attributes and cyberbullying are further limited. Therefore, we acknowledge that statistical distributions may differ largely among research datasets as well as between research datasets and real-life phenomena. Models trained and validated on speciï¬c cohorts, including our models, may not generalize to other cohorts or real-life scenarios. As discussed in [2], this may be due to varying deï¬nitions of cyberbullying or varying methodologies for data collection and annotation. Furthermore, models trained on data
9
# 10
collected from a certain period of time may not generalize to future scenarios due to distribution shift. Another limitation of our study is that we have adopted a broader deï¬nition of cyberbul- lying and treated each comment or post as an observation; ignoring the repetitiveness criteria of deï¬ning cyberbullying [2].
Future work includes investigating the perfor- mance of our bias mitigation approach among sub-groups (e.g. young Hispanic males) and com- bining our approach with previous studies that counteract bias by strategically altering the train- ing data. Validating our approach in a cross- dataset manner to test whether our mitigation approach generalizes across different cohorts is also in the scope of future work. As we have not performed any hyper-parameter tuning in this study, we also expect detection and bias mitiga- tion performance improvements by investigating the parameter choices. For instance, we have neither preprocessed the textual data, nor inves- tigated different network architectures. We also chose the same amount of allowed deviation from overall FNR/FPR as a fairness constraint for each group as we aimed to decrease the total bias in our experiments. Setting different thresholds for each group may lead to even more equitable models (yet increases the number of hyper-parameters to tune).
Considering the rate of utilization of algo- rithms to all aspects of our lives, the demand from individuals for ethical and fair algorithmic decision making is expected to increase. Failing to mitigate unintended biases in such systems might negatively affect the business models of companies that utilize these technologies as well. Incorrect and unfair algorithmic content media- tion (takedowns, ï¬ltering, and automatic content moderation) may infringe on users free speech. Implementing fair machine learning systems and publishing the outcomes could instill greater con- ï¬dence in users by increasing transparency and trust. Therefore, we should elevate our awareness for developing more equitable algorithms and should continue to develop methods to reduce the existing biases at the same time.
# CONCLUSION
In this study, we have aimed to mitigate the bias of cyberbullying detection models. For this
purpose, we proposed a scheme to train cyberbul- lying detection models under fairness constraints. With our approach, we demonstrated that various types of unintended biases can be successfully mitigated without impairing the model quality. We believe our work contributes to the pursuit of more equitable machine learning solutions for cyberbullying detection. Future directions include combining our approach with other bias mitiga- tion methods and establishing a comprehensive framework for unbiased machine learning algo- rithms in cyber-social health applications.
# REFERENCES
1. S. Hinduja and J. W. Patchin, âBullying, cyberbullying, and suicide,â Archives of Suicide Research, vol. 14, no. 3, pp. 206â221, 2010.
2. H. Rosa, N. Pereira, R. Ribeiro, P. C. Ferreira, J. P. Carvalho, S. Oliveira, L. Coheur, P. Paulino, A. V. Sim Ëao, and I. Trancoso, âAutomatic cyberbullying detection: A systematic review,â Computers in Human Behavior, vol. 93, pp. 333â345, 2019.
3. T. Davidson, D. Bhattacharya, and I. Weber, âRacial bias in hate speech and abusive language detection datasets,â in Proceedings of the Third Workshop on Abusive Language Online, 2019, pp. 25â35.
4. A. Cotter, H. Jiang, and K. Sridharan, âTwo-player games for efï¬cient non-convex constrained optimiza- tion,â in Proceedings of the 30th International Confer- ence on Algorithmic Learning Theory, vol. 98, 2019, pp. 300â332.
5. D. Nozza, C. Volpetti, and E. Fersini, âUnintended bias in misogyny detection,â in IEEE/WIC/ACM International Conference on Web Intelligence, 2019, pp. 149â155.
6. X. Huang, X. Linzi, F. Dernoncourt, and M. J. Paul, âMultilingual Twitter corpus and baselines for evaluating demographic bias in hate speech recognition,â in Proceedings of the Twelveth International Conference on Language Resources and Evaluation (LREC), 2020. [Online]. Available: github.com/xiaoleihuang/Multilingual Fairness LREC
7. M. Nadeem, A. Bethke, and S. Reddy, âStereoSet: Measuring stereotypical bias in pretrained language models,â arXiv preprint arXiv:2004.09456, 2020.
8. L. Dixon, J. Li, J. Sorensen, N. Thain, and L. Vasser- man, âMeasuring and mitigating unintended bias in text classiï¬cation,â in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018, pp. 67â 73.
9. J. H. Park, J. Shin, and P. Fung, âReducing gender bias in abusive language detection,â in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, pp. 2799â2804.
10. P. Badjatiya, M. Gupta, and V. Varma, âStereotypical bias removal for hate speech detection task using knowledge-based generalizations,â in The World Wide Web Conference, 2019, pp. 49â59.
11. D. Borkan, L. Dixon, J. Sorensen, N. Thain, and L. Vasserman, âNuanced metrics for measuring unintended bias with real data for text classiï¬cation,â in Companion Proceedings of The 2019 World Wide Web Conference, 2019, pp. 491â500. [Online]. Available: kaggle.com/c/ jigsaw-unintended-bias-in-toxicity-classiï¬cation/data
12. E. Wulczyn, N. Thain, and L. Dixon, âEx machina: Personal attacks seen at scale,â in Proceedings of the 26th International Conference on World Wide Web, 2017, pp. 1391â1399. [Online]. Available: ï¬gshare.com/ projects/Wikipedia Talk/16731
13. B. Kennedy, M. Atari, A. M. Davani, L. Yeh, A. Omrani, Y. Kim, K. Coombs Jr., G. Portillo- Wightman, S. Havaldar, E. Gonzalez, J. Hoover, A. Azatian, A. Hussain, A. Lara, G. Cardenas, A. Omary, C. Park, X. Wang, C. Wijaya, Y. Zhang, B. Meyerowitz, and M. Dehghani, âThe gab hate corpus: A collection of 27k posts annotated for hate speech,â PsyArxiv Preprint 10.31234/osf.io/hqjxn, 2020. [Online]. Available: osf.io/edua3/
14. N. Reimers and I. Gurevych, âSentence-BERT: Sen- tence embeddings using Siamese BERT-networks,â in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 3982â3992.
15. D. Chicco and G. Jurman, âThe advantages of the matthews correlation coefï¬cient (MCC) over F1 score and accuracy in binary classiï¬cation evaluation,â BMC Genomics, vol. 21, no. 1, p. 6, 2020.
May 2020
11 | {
"id": "2004.09456"
} |
2005.04305 | Measuring the Algorithmic Efficiency of Neural Networks | Three factors drive the advance of AI: algorithmic innovation, data, and the
amount of compute available for training. Algorithmic progress has
traditionally been more difficult to quantify than compute and data. In this
work, we argue that algorithmic progress has an aspect that is both
straightforward to measure and interesting: reductions over time in the compute
needed to reach past capabilities. We show that the number of floating-point
operations required to train a classifier to AlexNet-level performance on
ImageNet has decreased by a factor of 44x between 2012 and 2019. This
corresponds to algorithmic efficiency doubling every 16 months over a period of
7 years. By contrast, Moore's Law would only have yielded an 11x cost
improvement. We observe that hardware and algorithmic efficiency gains multiply
and can be on a similar scale over meaningful horizons, which suggests that a
good model of AI progress should integrate measures from both. | http://arxiv.org/pdf/2005.04305 | Danny Hernandez, Tom B. Brown | cs.LG, cs.CV, stat.ML | 20 pages, 5 figures | null | cs.LG | 20200508 | 20200508 | # Measuring the Algorithmic Efï¬ciency of Neural Networks
Danny Hernandez ⤠Tom B. Brown OpenAI OpenAI [email protected] [email protected]
# Abstract
Three factors drive the advance of AI: algorithmic innovation, data, and the amount of compute available for training. Algorithmic progress has traditionally been more difï¬cult to quantify than compute and data. In this work, we argue that algorithmic progress has an aspect that is both straightforward to measure and interesting: reductions over time in the compute needed to reach past capabilities. We show that the number of ï¬oating- point operations required to train a classiï¬er to AlexNet-level performance on ImageNet has decreased by a factor of 44x between 2012 and 2019. This corresponds to algorithmic efï¬ciency doubling every 16 months over a period of 7 years. Notably, this outpaces the original Mooreâs law rate of improvement in hardware efï¬ciency (11x over this period). We observe that hardware and algorithmic efï¬ciency gains multiply and can be on a similar scale over meaningful horizons, which suggests that a good model of AI progress should integrate measures from both.
â¤Danny Hernandez led the research. Tom Brown paired on initial experiments, scoping, and debugging.
# Contents
# 1 Introduction
1.1 Measuring algorithmic progress in AI is critical to the ï¬eld, policymakers, and industry leaders
1.2 Efï¬ciency is the primary way we measure algorithmic progress on classic computer science problems. We can apply the same lens to machine learning by holding performance constant
# 2 Related Work
2.1 Algorithmic progress had similar rate to Mooreâs Law in some domains over decades . . .
2.2 Linear programming gains were well-deï¬ned, steady, and faster than Mooreâs Law for 21 years
2.3
184x reduction in training cost (in dollars) to get to ResNet-50 performance since 2017 . . .
2.4 We can estimate costly-to-observe algorithmic efï¬ciency improvements through scaling laws
2.5 Total investment in AI through private startups, public offerings, and mergers/acquisitions . . . . .
# 3 Methods
3.1 Main result primarily based on existing open source re-implementations of popular models .
3.2 We made few hyperparameter adjustments between architectures and did minimal tuning . .
# 4 Results
4.1 Key Result: 44x less compute needed to get to AlexNet-level performance . . . . .
.......
4.2 FLOPs based learning curves can help clarify comparisons between models . . . .
.........
4.3 We observed a similar rate of progress for ResNet-50 level classiï¬cation performance and . . . . . .
# 5 Discussion
5.1 We attribute the 44x efï¬ciency gains to sparsity, batch normalization, residual connections, . . . . . . . . . .
5.2 Itâs unclear the degree to which the observed efï¬ciency trends generalize to other AI tasks . 5.3 Why new capabilities are probably a larger portion of progress than observed efï¬ciency gains 5.4 We estimate a 7.5 million times increase in the effective training compute available to the . . . . . . . . . . largest AI experiments between 2012 and 2018 . . . . . . . . . . . . . . 5.5 Itâs possible thereâs an algorithmic Mooreâs Law for optimization problems of interest . . . . 5.6 Research provides leading indicators of the future economic impact of AI . . . . . . . . . . 5.7 Major limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Conclusion 7 Acknowledgements A Calculations for efï¬ciency improvements in Go, Dota, and Machine Translation B Calculations for efï¬ciency improvements in image classiï¬cation C Accuracy achieved in relevant models 11 12 12 14 15 15 15 16 18 19 20
2
# 3
3
# . nunst & &
# 4
6
6
7
9
9
10
10
# Introduction
# 1.1 Measuring algorithmic progress in AI is critical to the ï¬eld, policymakers, and industry leaders
Thereâs widespread agreement thereâs been impressive progress in AI/ML in the domains of vision, natural language, and game playing in the last decade [Krizhevsky et al., 2012, Xie et al., 2016, Silver et al., 2018]. However, thereâs massive disagreement as to how much progress in capabilities we should expect in the near and long term [Grace et al., 2017]. For this reason, we believe measuring overall progress in AI/ML is a crucial question, because it can ground the discussion in evidence. Measuring AI progress is critical to poli- cymakers, economists, industry leaders, potential researchers, and others trying to navigate this disagreement and decide how much money and attention to invest in AI.
For example, the compute used by the largest AI training runs per year grew at 300,000x between 2012 and 2018 [Amodei & Hernandez, 2018]. Given the divergence from the past trend of approximately Mooreâs [Sastry et al., 2019] suggests policymakers increase funding for Law level growth for such training runs, compute resources for academia, so they can continue to do the types of AI research that are becoming more expensive. Measurements of AI progress inform policymakers that are making such decisions.
Hardware trends are relatively quantiï¬able. Mooreâs Law explains much of the advance from mainframes, to personal computers, to omnipresent smartphones [Moore, 1965]. Better measurement of scientiï¬c progress has the potential for a lot of impact on a variety of fronts. Given the existing understanding of key hardware trends, we were primarily interested in measures that represented exclusively algorithmic improvement that could help paint a picture of the overall progress of the ï¬eld.
We present measurements of algorithmic efï¬ciency state of the arts over time that:
1. Are informative to a wide audience of decision makers
2. Help measure novel contributions produced with smaller amounts of compute
1.2 Efï¬ciency is the primary way we measure algorithmic progress on classic computer science problems. We can apply the same lens to machine learning by holding performance constant
In a classic computer science problem like sorting, algorithmic quality is primarily measured in terms of how cost asymptotically increases with problem difï¬culty, generally denoted in Big O Notation. For example, quicksort [Hoare, 1962] has O(n log n) average cost in terms of operations to ï¬nd a perfect solution whereas many sorting algorithms are O(n2) (where n is the length of the list to be sorted). Itâs impractical to perform similar analysis for deep learning, because weâre looking for approximate solutions and donât have as clear a measure of problem difï¬culty. For these reasons, in machine learning, algorithmic progress is often presented in terms of new states of the art, like a 1% absolute increase in top-5 accuracy on ImageNet, ignoring cost. Itâs difï¬cult to reason about overall progress in terms of a large collection of such measures, because:
1. Performance is often measured in different units (accuracy, BLEU, points, ELO, cross-entropy loss, etc) and gains on many of the metrics are hard to interpret. For instance going from 94.99% accuracy to 99.99% accuracy is much more impressive than going from 89% to 94%.
2. The problems are unique and their difï¬culties arenât comparable quantitively, so assessment requires gaining an intuition for each problem.
3. Most research focuses on reporting overall performance improvements rather than efï¬ciency im- provements, so additional work is required to disentangle the gains due to algorithmic efï¬ciency from the gains due to additional computation.
4. The benchmarks of interest are being solved more rapidly, which exacerbates 1) and 2). For instance it took 15 years to get to human-level performance on MNIST [LeCun et al., 1998], 7 years on ImageNet [Deng et al., 2009, Russakovsky et al., 2015], and GLUE [Wang et al., 2018] only lasted 9 months [Devlin et al., 2018, Liu et al., 2019].
We show that we can gain clear insights into efï¬ciency trends by analyzing training costs while holding performance constant. We focused on training efï¬ciency rather than inference efï¬ciency, because weâre more interested in what systems are possible to produce than how much it costs to run those systems. Though we note increased inference efï¬ciency can have important economic implications [van den Oord et al., 2017]. In the research setting, weâve typically found ourselves FLOPS bound rather than memory or communication
3
bound. So we measured total ï¬oating-point operations used in training rather than parameters or another measure of efï¬ciency.
We focused on AlexNet-level performance, which we measured as 79.1% top-5 accuracy on ImageNet. AlexNet kicked off the wave of interest in neural networks and ImageNet is still a benchmark of wide in- terest, so this measure provided a long running trend to analyze.
# 2 Related Work
# 2.1 Algorithmic progress had similar rate to Mooreâs Law in some domains over decades
Grace compared algorithmic progress to hardware progress looked at over several decades in the domains of chess, go, physics simulations, mixed integer programming, and SAT solvers [Grace, 2013]. Graceâs overall conclusion was
Many of these areas appear to experience fast improvement, though the data are often noisy. For tasks in these areas, gains from algorithmic progress have been roughly ï¬fty to one hundred percent as large as those from hardware progress. Improvements tend to be incremental, forming a relatively smooth curve on the scale of years
For the most part, these estimates and their interpretation require substantial amounts of judgment. For instance, with chess and Go the approach was to use the available literature to estimate what kinds of returns came from a hardware doubling and then attribute all ELO improvement not explained by Mooreâs law to software. Additionally, Grace suggests we treat these estimates as "optimistic" rather than representative, because of increased saliency of problems that are making fast progress, problems with good measures being likely to progress faster, and the potential motivations of authors. Regardless, we think this related work shows that hardware and algorithmic progress can be on a similar scale, and that even a relatively simple model of progress should consider integrating measures from both domains.
Progress on mixed integer programming was particularly straightforward to measure, so weâve extended the original analysis of that domain below [Bixby, 2012].
# 2.2 Linear programming gains were well-deï¬ned, steady, and faster than Mooreâs Law for 21 years
Unlike some other optimization domains Grace looked at, linear programming was of commercial interest for a long period. Progress is easy to track in this domain over this 21 year period because there were distinct releases of commercial software (CPLEX and Gurobi) that can be compared with hardware held ï¬xed.
The trend of a 2x speedup every 13 months observed in Figure 1 is surprisingly consistent over a long time horizon. The smooth progress is partially explained by the measure being an aggregation of many problems of varying difï¬culty. Over this time Mooreâs Law yielded an efï¬ciency gain of approximately 1500x.
# Caveats
1. Itâs notable that the benchmark was designed and the analysis was performed by the CEO of Gurobi (a commercial MIPS solver) and that he had an incentive to demonstrate large amounts of progress.
2. Itâs worth pointing out the implications of the maximum search time of 30,000s for the optimal solution. When it took longer than 30,000s for the solver to ï¬nd the optimal solution, 30,000s is what would be recorded. Itâs expected that the maximum search time would have been invoked more for earlier, weaker solvers. Thus, the maximum search time made earlier solvers look relatively stronger, making the overall estimate conservative for this benchmark. We think using a maximum search time is reasonable, but we expect the overall speedup is sensitive to it. In this sense, these measurements are a little different than the AlexNet accuracy measurements, where we waited for the capability to be demonstrated before measuring progress.
3. This is the related domain with highest amount of measured algorithmic efï¬ciency progress weâre aware of for this period of time.
4
500,000x Speedup in Mixed Integer Programming over 20 Years
# Cumulative speedup
1000000 100000 10000 1000 100 10 1995 2000 2005 2010
Figure 1 A 2x speedup every 13 months was observed on a benchmark of 1,892 mixed-integer problems (MIPs), a subset of linear programming. This benchmark was created by Bixby, he describes it as a set of "real-world problems that had been collected from academic and industry sources over 21 years." Progress is based on the total time spent searching for the optimal solution for all problems in the benchmark. Progress is easy to track in this domain over this 21 year period because there were distinct releases of commercial software (CPLEX and Gurobi) that can be compared with hardware held ï¬xed. A maximum search time of 30,000 seconds (approximately 8 hours) per problem was used, so thatâs what was recorded for instances where the optimum wasnât found. We clariï¬ed the trend by graphing the trend by release date rather than by version number [Bixby, 2012].
# 2.3 184x reduction in training cost (in dollars) to get to ResNet-50 performance since 2017
The eventual unit institutions generally care about for training cost is dollars. Earlier we observed a 10x efï¬ciency improvement in terms of training FLOPs required to get ResNet-50 level accuracy (92.9% top-5 accuracy target on ImageNet). On the same target, DawnBench submissions have surpassed the contestâs original benchmark cost, $2323, by a factor of 184x [Coleman et al., 2017]. This brought the cost of such a training down to $12.60 in September 2017, less than a year after the competition was announced. Training cost in dollars is a useful overall measure, that aggregates:
1. The efï¬ciency gains from algorithmic progress we are most interested in within this paper.
2. Mooreâs Lawâs effect on GPUs, TPUs, etc.
3. Reduced cloud computing costs driven by modernization and increased competition.
4. Hardware utilization. Itâs not trivial to efï¬ciently use the FLOPS capacity of GPUs, TPUs, etc.
The DawnBench results make it clear that 3. and 4. can also be notable contributions to training efï¬ciency that are worth measuring. More targeted measurements, like training efï¬ciency in terms of FLOPs, help clarify the takeaway from measures like DawnBench that aggregate multiple effects.
# 2.4 We can estimate costly-to-observe algorithmic efï¬ciency improvements through scaling laws
Weâve focused on algorithmic efï¬ciency improvements that are observable empirically. [Kaplan McCandlish 2020] showed that language model performance on cross-entropy had power-law scaling with the amount of compute over several orders of magnitude. Empirical scaling laws can be extrapolated to provide an estimate of how much we would have needed to scale up older models to reach current levels of performance. Through
5
this mechanism scaling laws provide insight on efï¬ciency gains that may require prohibitively expensive amounts of compute to observe directly.
2.5 Total investment in AI through private startups, public offerings, and mergers/acquisitions went
# up 5x between 2012 and 2018
Weâve primarily considered algorithmic, hardware, and data as the inputs in progress in machine learning. Money spent would be another reasonable lens since thatâs the lever available to decision-makers at the highest level. [Bloom et al., 2017] looks into the relationship between scientiï¬c progress and spending:
In many models, economic growth arises from people creating ideas, and the long-run growth rate is the product of two terms: the effective number of researchers and their re- search productivity... A good example is Mooreâs Law. The number of researchers required today to achieve the famous doubling every two years of the density of computer chips is more than 18 times larger than the number required in the early 1970s. Across a broad range of case studies at various levels of (dis)aggregation, we ï¬nd that ideas â and the exponential growth they imply â are getting harder to ï¬nd. Exponential growth results from large increases in research effort that offset its declining productivity.
AI investment is also up substantially since 2012, and it seems likely this was important to maintaining algorithmic progress at the observed level. [Raymond Perrault & Niebles, 2019] notes that:
1. Private investment in AI startups rose from $7B in 2012 to $40B in 2018.
2. Investment through public offerings and mergers/acquisitions grew from $5B in 2012 to $23B in 2018.
3. The DOD is projected to invest $4.0B on AI R&D in ï¬scal year 2020.
4. Contract spending on AI by the US government has grown from about $150M to $728M between 2012 and 2018.
# 3 Methods
3.1 Main result primarily based on existing open source re-implementations of popular models
For the majority of the architectures shown in Figure 3 [Szegedy et al., 2014, Simonyan & Zisserman, 2014, He et al., 2015, Xie et al., 2016, Huang et al., 2016, Iandola et al., 2016, Zagoruyko & Komodakis, 2016, Zhang et al., 2017, Howard et al., 2017, Sandler et al., 2018, Ma et al., 2018, Tan & Le, 2019] we used PyTorchâs example models [Paszke et al., 2017] with Pytorchâs suggested hyperparameters. We mark our deviation from their hyperparameters in the next section. We supplemented PyTorchâs example models with existing implementations of MobileNet, Shufï¬eNet [Xiao, 2017, Huang, 2017].
Compute used is based on the product of the following:
1. FLOPs per training image, which was counted by a PyTorch library [Zhu, 2019] that we checked against other methods for several models
2. The number of images per epoch
3. The number of epochs it took an architecture to perform better than or equal to the AlexNet model we trained
# 3.2 We made few hyperparameter adjustments between architectures and did minimal tuning
We largely followed the suggested hyperparameters from the PyTorch example models. For all points shown in ï¬gure 3 we trained using SGD with a batch size of 256, momentum of 0.9, and weight decay of 1e-4, for 90 epochs. For pre-batch norm architectures, we began with the suggested learning rate of 0.01 (GoogleNet and VGG), for all other architectures we began with the suggested learning rate of 0.1.
For AlexNet we followed the original paperâs learning rate schedule of decaying by a factor of 10 every 30 epochs. For all other models, we followed the suggested 1000x total learning rate reduction. To sanity check that these were reasonable hyperparameters, we performed a scan on ResNet18 where we set the
6
initial learning rate to 0.0316, 0.1, and 0.316 and total decay to 250x, 1000x, and 2500x. The suggested hyperparameters performed the best. For all models other than AlexNet we smoothed out the learning rate schedule, which was important for early learning as shown in Figure 2.
Smooth learning rate schedule
Smooth schedule improved early learning
100 = Smooth 0.1 = Smooth 75 = Piece-wise ° = Piece-wise s 0.01 50 D £ £ 0.001 25 G ® 4 0 0 20 40 60 80 0 20 40 60 80 Epoch Epoch
> 8 8 2 Q a ° E
Figure 2 Smoothing out the learning rate improved early learning, which is the regime we were interested in. ResNet-50 learning curves pictured.
A natural concern would be that new models arenât optimized well for compute in reaching AlexNet-level performance. Before smoothing the learning rate schedule, many models hit AlexNet performance at exactly 31 epochs, when the learning rate was reduced by a factor of 10x. This adjustment often increased our measured efï¬ciency by 2-4x, but we didnât observe meaningful differences in ï¬nal performance from the change in learning rate schedule. So even though the change to the learning rate schedule could be considered minimal, it has a large effect on our measurements. The more simple shape of the updated learning curve, suggests that optimizing for convergence might be relatively compatible with optimizing for lower levels of performance, like AlexNet-level accuracy.
As context for the quality of these re-implementations we provide tables in Appendix C that compare the ï¬nal accuracy we reached to the original paper results.
# 4 Results
4.1 Key Result: 44x less compute needed to get to AlexNet-level performance
In ï¬gure 3 we show that between 2012 and 2019 the amount of compute that neural net architectures require to be trained from scratch to AlexNet level performance has gone down by a factor of 44x (16-month doubling time)
Most researchers found the algorithmic efï¬ciency gains to surprisingly high and regular. The progress is faster than the original Mooreâs Law rate (11x) over this period, where both trends made training models of AlexNet-level performance cheaper. Mooreâs Law is obviously a more general trend than what we observe in Figure 3. We believe itâs quite interesting to see what we can say about algorithmic efï¬ciency progress in general given these types of measurement, and we explore this question in sections 4.2 and 5.4.
7
44x less compute required to get to AlexNet performance 7 years later ° AlexNet . GoogleNet MobileNet_vle ° ShuffleNet_v1_1x . ) ShuffleNet_v2_1x EfficientNet-b0O e
Figure 3 Lowest compute points at any given time shown in blue, all points measured shown in gray. We observed an efï¬ciency doubling time of 16 months.
We can split the progress in training efï¬ciency into data efï¬ciency (needing fewer epochs) and reductions in the number of FLOPs required per epoch. Table 1 below shows this split for the models that were the efï¬ciency state of the art for a time.
We can see that both reductions in training epochs and FLOPs per training image play an important and varying factor in the overall algorithmic efï¬ciency gains. This type of analysis is somewhat sensitive to how far the original work pushed towards convergence.2 Other limitations are discussed in sections 5.4 and 5.7. Calculations for the ï¬gure 3 are provided in Appendix B. Relevant information for Efï¬cientNet training cost was provided through correspondence with authors.
2It only took 62 of the 90 epochs for AlexNet to train to 78.8% top 5 accuracy on ImageNet (99.6% of the 79.1% ï¬nal accuracy). So if the original AlexNet had only been trained for 62 epochs, we would have calculated the overall algorithmic efï¬ciency gain as 30x rather than 44x. We donât think itâs tractable to mitigate this confounder without adding a lot of complexity to explaining the measurement, but it seemed important to ï¬ag as a limitation of our approach.
8
Table 1 Breakdown of total training efï¬ciency gains in reaching AlexNet-level accuracy into reduction of training epochs and ï¬ops per epoch
Experiment Training epochs factor FLOPs per epoch factor Training efï¬ciency factor AlexNet 1.0 1.0 1.0 GoogleNet 11 0.38 4.3 MobileNet_v1 8.2 1.35 11 Shufï¬eNet_v1_1x 3.8 5.5 21 Shufï¬eNet_v2_1x 4.5 5.5 25 Efï¬cientNet-b0 22 2.0 44
# 4.2 FLOPs based learning curves can help clarify comparisons between models
We ï¬nd it noteworthy that in when we plot FLOPs based learning curves in ï¬gure 4 some architectures dominate others.
FLOPs used to train vs top5 accuracy on ImageNet
3 3 [s) z
100 == AlexNet == GoogleNet 75 == MobileNet_v2 50 == Resnet-50 == ShuffleNet_v2_1x 3% = Vgg-11 0 0.01 0.1 1 10
Teraflops/s-days
Figure 4 Some models reach all levels of of accuracy using less compute than other models
FLOPs based learning curves can help clarify what type of advances a new architecture consists of. ResNet- 50 dominates VGG-11 and GoogLeNet dominates AlexNet on this plot. That is for all amounts of training compute they get better accuracy. VGG-11 reached higher ï¬nal accuracy than AlexNet, but it took more compute to get to all levels of performance than AlexNet.
4.3 We observed a similar rate of progress for ResNet-50 level classiï¬cation performance and faster rates of efï¬ciency improvement in Go, Dota, and Machine Translation
Weâre also interested in measuring progress on frontier AI capabilities, the capabilities that are currently attracting the most attention and investment. It seems to us as if language modeling [Devlin et al., 2018, Radford et al., , Raffel et al., 2019] and playing games [Silver et al., 2016, Silver et al., 2017, Silver et al., 2018, OpenAI et al., 2019] are the domains of interest given our criteria.
Within those domains, our desiderata were:
1. task of sufï¬cient difï¬culty to demonstrate that improvements work at scale [Sutton, 2019]
9
2. benchmark of high interest over long horizon in which thereâs general agreement weâve observed large progress in capabilities.
3. sufï¬ciently good publicly available information/re-implementations to easily make an estimate
Itâs hard to get all these desiderata, but Table 2 below summarizes all the data we have observed.
Table 2 Increased efï¬ciency (in terms of FLOPs) in reaching the same performance on select tasks. Doubling Time Original
Improved Task Efï¬ciency Factor Period AlexNet Efï¬cientNet ImageNet 44x 6 years 16 months ResNet Efï¬cientNet ImageNet 10x 4 years 17 months Seq2Seq Transformer WMT-14 61x 3 years 6 months GNMT Transformer WMT-14 9x 1 year 4 months AlphaGo Zero AlphaZero GO 8x* 1 year* 4 months* OpenAI Five OpenAI Rerun Dota 5x* 2 months* 25 days*
*The work on Go and Dota are over shorter time scales and more the result of one research group rather than a large scientiï¬c community, so those rates of improvement should be considered to apply to a different regime than the rates in image recognition and translation.
When we apply this lens to translation [Sutskever et al., 2014, Vaswani et al., 2017] it shows more progress than vision over a shorter time horizon. Though we only have short horizon progress for Go and Dota, weâd only need to see a modest 3x and 5x efï¬ciency gain over 5 years for their rates to surpass the rate of progress on the vision task. The underlying calculations are provided in appendix A.
One might worry that the rate of progress in image recognition is very sensitive to performance level chosen, so we also did a shallow investigation of efï¬ciency gains at ResNet-50 level of performance. The relevant information, that Efï¬cientNet-b0 took 4 epochs to get to AlexNet level accuracy, and Efï¬cientNet-b1 [Tan & Le, 2019] took 71 epochs to get to ResNet-50 level accuracy was provided through correspondence with authors (where each was trained with 1 epoch of warmup rather than 5).
We observed a similar rate of progress for efï¬ciency gains in inference on ImageNet. We also did a shallow investigation into how the rate of progress on inference efï¬ciency has compared to training efï¬ciency. We observed that:
1. Shufï¬enet [Zhang et al., 2017] achieved AlexNet-level performance with an 18x inference efï¬ciency increase in 5 years (15-month doubling time).
2. Efï¬cientNet-b0 [Tan & Le, 2019] achieved ResNet-50-level performance with a 10x inference efï¬- ciency increase in 3 and a half years (13-month doubling time).
These results suggest that training efï¬ciency and inference efï¬ciency might improve at somewhat similar rates. Though itâs important to note we have many fewer points across time and domains for inference.
# 5 Discussion
5.1 We attribute the 44x efï¬ciency gains to sparsity, batch normalization, residual connections, architecture search, and appropriate scaling
A more thorough study would have carefully ablated all the features of interest from successful models while controlling for model size to be able to attribute the efï¬ciency gains to speciï¬c improvements in a quantitative manner [Lipton & Steinhardt, 2018]. We performed some ablations, but primarily rely on less direct evidence when forming opinions about which improvements we suspect were most important to the 44x increase in efï¬ciency. For instance we discuss what the original authors credit, though itâs important to recognize authors are incentivized to emphasize novelty. We think itâs important to note that efï¬ciency gains may compose in a hard to predict, non-linear manner.
10
Batch Normalization: Batch normalization enabled a 14x reduction in the number of ï¬oating-point oper- ations needed to train to Inception level accuracy [Ioffe & Szegedy, 2015]. Itâs unclear how such algorithmic efï¬ciency gains like batch normalization compose, but it seems reasonable to attribute some meaningful portion of the gains to normalization. We made a few attempts to try and train a Shufï¬eNet without batch normalization, but we were unable to get a model to learn. We suspect we would have needed to carefully initialize the network to do so [Zhang et al., 2019].
Residual Connections: Shufï¬eNet units, the building blocks of Shufï¬eNet, are residual blocks. Efï¬cient- Net also has residual connections.
Sparsity: GoogLeNet was explicit in describing sparsity as the primary inspiration for its architecture, and GoogLeNet alone was a 4.3x efï¬ciency improvement over AlexNet. [Szegedy et al., 2014].
This raises the question of whether there is any hope for a next, intermediate step: an architecture that makes use of the extra sparsity, even at ï¬lter level, as suggested by the theory, but exploits our current hardware by utilizing computations on dense matrices.
Shufï¬eNet largely credits replacing dense 1 x 1 convolutions with a sparser structure. If we assume all the Shufï¬eNet gains came from sparsity, batch normalization, and residual connections, it seems reasonable to credit sparsity with being able to produce at least the 4.3x that came with GoogLeNet (leaving 5.8x of the 25x gain shown in Table 1 for the other two conceptual improvements).
Appropriate Scaling: Given itâs architecture AlexNet was optimally sized for AlexNet-level performance. Given our tests of scaled up and scaled down models Shufï¬eNet_v2_1x, and Efï¬cientNet-b0 seem to be close to appropriately sized for AlexNet-level performance. We tested the effect of scaling by scaling down a ResNet-50 by Efï¬cientNetâs compound scaling factor twice (1.4x less depth, 1.2 less width, 1.3 lower resolution) [Tan & Le, 2019]. Scaling the ResNet architecture to a more appropriate size for AlexNet-level performance yielded a 2.1x improvement in algorithmic efï¬ciency for AlexNet-level performance. Figure 8 in the Efï¬cientNet paper shows that their compound scaling techniques (systematically scaling width, depth, and resolution) can result in 5x or more gains in algorithmic efï¬ciency over more naive scaling approaches.
Architecture Search: Efï¬cientNet seems to attribute much of its improved performance to leveraging ar- chitecture search rather than iterating on hand designed architectures. Efï¬cientNet was a 1.8x increase in algorithmic efï¬ciency over Shufï¬eNet at AlexNet-level performance.
# 5.2 Itâs unclear the degree to which the observed efï¬ciency trends generalize to other AI tasks
Weâre most interested in what our small number of data points suggest about algorithmic progress overall during this period. We recognize itâs difï¬cult to go from one or more speciï¬c measures to stating anything about overall progress. In this section we share our current impressions and suggest measures that could clarify the degree to which the trends weâve observed generalize.
All our measures were for tasks that have:
1. received large amounts of investment (researchefr time and/or compute) 2. in which thereâs general agreement weâve observed large progress in capabilities.
We suspect that this style of measurement on tasks that meet these criteria is likely to show similar rates of improvement in algorithmic efï¬ciency as weâve observed here. One concern we had, was that the rates of improvement would be very dependent on the level of performance. That may still be the case, but we were surprised how close the efï¬ciency doubling time was for AlexNet-level performance (16 months) and ResNet50-level performance (17 months). We also suspect, but are less conï¬dent, that such measurements would should similar progress in these domains (image recognition, natural language processing, and games). Weâd be very interested in such measurements.
However, weâre also interested in progress in high potential tasks that donât ï¬t these criteria, like certain reasoning tasks. In the previous section, we attributed the efï¬ciency gains over AlexNet primarily to sparsity, residual connections, normalization, principled scaling, and architecture search all of which are relatively task-agnostic. But, itâs possible that weâd observe only small efï¬ciency gains from these techniques on such tasks. We consider the degree to which the observed efï¬ciency trends generalize to other AI tasks a highly interesting open question.
11
# 5.3 Why new capabilities are probably a larger portion of progress than observed efï¬ciency gains
AlexNet achieved performance that no system had previously achieved. We can try to reason about how much compute would have been required in scaling up previous systems to match AlexNetâs performance. From this point of view, we believe AlexNet represented signiï¬cant progress in how much compute was required to achieve AlexNet-level performance. This analysis doesnât attempt to quantify that progress because itâs less tractable. More generally, the ï¬rst time a capability is created, algorithmic breakthroughs may have been leveraged to dramatically reduce the resources that would have otherwise been needed. For instance, if we imagine simply scaling up a DQN [Mnih et al., 2013] model to play Go it could easily have needed 1000x or more times as much compute to reach AlphaGo level. Such efï¬ciency gains are not generally observed empirically, though they can be calculated with asymptotic analysis in some cases and estimated with empirical scaling laws in others [McCandlish et al., 2018].
More formally, if we go far enough back in time, algorithmic progress takes us from brute force search to lower complexity classes, which is what enables capabilities of interest to be built at all. Within this zoomed- out view, the progress that went into making a capability possible at all, in total, yields an astronomically larger algorithmic efï¬ciency improvement factor than directly observed efï¬ciency improvements for capa- bilities that have recently been observed for the ï¬rst time. This limit analysis lends some support to the claim that the rate of gain in algorithmic efï¬ciency on a capability of interest might often be faster before a capability is observed.
In the DQN and brute force examples described above, we ï¬nd it most helpful to start by thinking of a scaling law, a plot of performance vs training compute used. Our algorithmic efï¬ciency data results are points we ï¬nd meaningful from those graphs, but sometimes similar comparisons would just yield an astronomical number that might not have much meaning. In such cases, weâd recommend analyzing a graph of the scaling law, since it contains the entire picture.
While most researchers weâve discussed the result with found the 44x number surprisingly high, because of this effect 44x may strongly underestimate algorithmic progress on image classiï¬cation during this period. When this analysis is discussed in the context of the relative importance of advancements in hardware and software in AI progress, we think itâs critical to remember this limitation [Sutton, 2019].
5.4 We estimate a 7.5 million times increase in the effective training compute available to the largest AI experiments between 2012 and 2018
This section explains why we estimate there was a 7.5 million times increase in the effective training compute (in FLOPs) available to the largest AI experiments during this period. The reasoning behind our estimate is thatâs what we get when we take the product of the AI and Compute trend [Amodei & Hernandez, 2018] (300,000x) and AlexNet efï¬ciency trend found in this work (25x over this period3), and carefully consider what this product means. When we consider that we have more compute and that each unit of compute can do more, it becomes clear that these two trends are somehow multiplicative.
This section is more speculative than the rest of the paper, but we think itâs important to explore the potential implications of our efï¬ciency measurements. We believe a 7.5 million times estimate is somewhat defensible when we:
1. Narrowly deï¬ne capabilities of interest so that 300,000x can be applied by deï¬nition.
2. Deï¬ne what we mean by effective compute.
3. Discuss major considerations for why 25x could be an underestimate/overestimate for algorithmic progress on capabilities of interest.
Capabilities of interest: We deï¬ne capabilities of interest as the training runs at close to the peak of size that was observed in 2018. Therefore itâs appropriate to apply the 300,000x from AI and Compute trend by deï¬nition. By 2020 such systems include AlphaZero, OpenAI Five, and NLP systems. This deï¬nition helps us avoid having to reason about what our measurements imply for distant domains. We have some measurements of progress for many of the capabilities of interest by the above deï¬nition. Though itâs possible there are unpublished results that ï¬t the capability of interest deï¬nition in relatively distant domains.
3Through 2018 we use the 25x efï¬ciency gains Shufï¬eNet represented rather than the 44x gains that Efï¬cientNet represented in 2019
12
Effective compute: The conception we ï¬nd most useful is if we imagine how much more efï¬cient it is to train models of interest in 2018 in terms of ï¬oating-point operations than it would have been to "scale up" training of 2012 models until they got to current capability levels. By "scale up," we mean more compute, the additional parameters that come with that increased compute, the additional data required to avoid overï¬tting, and some tuning, but nothing more clever than that. We considered many other conceptions we found less helpful 4.
Why our overall take is that 25x is likely an underestimate for algorithmic progress on capabilities of interest Our overall take relies heavily on our observations in the domain of interest. We saw larger overall progress in NLP and faster rates of short horizon progress for Go and Dota. In NLP we observed a 60x efï¬ciency factor over 3 years for machine translation. Though we only have short-horizon progress for Go and Dota, weâd only need to see a modest 3x and 5x efï¬ciency gains respectively over 5 years for their rates to surpass the rate of progress on the vision task.
On the other hand, algorithmic progress has a domain speciï¬c component, and itâs unclear how representative the 25x is of the average efï¬ciency progress in the broader domain of AI during this period. However, we believe this effect is smaller than the effect in the opposite direction of not measuring the contribution of new capabilities like AlexNet, Seq2Seq, or original AlphaGo systems during this period. In section 5.3 we provided arguments for why new capabilities might represent 100x or more algorithmic efï¬ciency progress.
To further clarify what drove changes in effective compute over this period, we split the AI and Compute trend into Mooreâs Law and increased spending/parallelization5. We graph an estimate for the effective compute trends in terms of these two components as well as progress in algorithmic efï¬ciency in ï¬gure 5 below.
Weâre uncertain whether hardware or algorithmic progress actually had a bigger impact on effective compute available to large experiments over this period, because of the ways weâve discussed in which the algorithmic estimate is conservative. Most researchers found the algorithmic efï¬ciency progress to be surprisingly fast. So, regardless of oneâs interpretation of what the AI and Compute trend implies about future AI progress, we believe our algorithmic efï¬ciency estimates suggests:
1. a modest update towards expecting faster progress along the edge of whatâs possible for AI to do in the short term.
2. potentially large update on long term expectations about AI if the algorithmic efï¬ciency on capabil- ities of interest continues to improve at a similar rate.
Directly commenting on the likelihood of any of the 3 macro trends in ï¬gure 5 continuing in the future is out of scope for this work. Making credible forecasts on such topics is a substantial enterprise, weâd rather avoid here than give insufï¬cient treatment. Rather we present the evidence we see as relevant for a reader whoâd like to form their own expectations about extrapolating trends in algorithmic efï¬ciency.
Additional reasons why 44x over 7 years could be an underestimate for progress on AlexNet-level algorithmic efï¬ciency:
1. Only AlexNet was heavily optimized for AlexNet level performance. Models are generally tuned for performance at convergence, not early learning. Our results were produced with minimum tuning for early learning and AlexNet-level performance, and tuning them could only increase their efï¬ciency gains.
2. Itâs our understanding that the re-implementation of AlexNet we used had a better initialization scheme than the original work. This effect adds another factor of conservativeness to our analysis. We expect future analysis to also be limited by this effect. This concern could be mitigated by researchers publishing their learning curves in addition to training compute used to train.
3. We donât account for gains from being able to use lower precision computation [Gupta et al., 2015]. 4. We donât account for gains from increased GPU utilization or improved GPU kernels.
4Our initial thinking was in terms of what an elite team in 2012 could have done if given a large amount com- pute, but this was unobservable. We could make something similar observable by having a group of smart physi- cists/mathematicians that were unfamiliar with modern ML methods work on problems without access to modern results, but that would be very expensive to observe.
5Increased spending and parallelization are coupled in that given ï¬xed time a researcher is limited by both (i) how many concurrent GPUâs are available to them which is primarily a ï¬nancial question, and (ii) how many GPUâs can productively be applied to the problem, which is a scientiï¬c question [McCandlish et al., 2018, Jia et al., 2018]
13
Effective compute available to largest experiments (estimate) 107 > a zs x= @ 10° = FS 5 De = <6 & 105 oS oO = 104 ® = Ww © 10° 3 a = £ fo} © 10? [o) (o) 10! 10°! 2013 2014 2015 2016 2017 2018
Figure 5 The notion of effective compute allows us to combine AI and Compute trend and this result in a single graph. These trends multiply as in addition to being able to do more with a ï¬xed amount of compute now, researchers have more of it. The AI and Compute trend is decomposed into a hardware efï¬ciency gain estimate (original Mooreâs Law) and money/parallelization [Moore, 1965,Amodei & Hernandez, 2018]. This estimate, as discussed in the body of this section, is more speculative than the rest of the paper, but we think itâs important to explore the potential implications of our efï¬ciency measurements.
5.5 Itâs possible thereâs an algorithmic Mooreâs Law for optimization problems of interest
This work suggests that in high investment areas of AI algorithmic efï¬ciency improvement is currently hav- ing a similar-sized effect as Mooreâs Law has had on hardware efï¬ciency in the past. Others have noticed comparable algorithmic progress over decades in related domains like Chess, Go, SAT solving, and opera- tions research. In light of that past analysis, itâs less surprising that weâve observed algorithmic efï¬ciency gains this large on training to an AlexNet level of performance. The common thread here seems to be that these along with AI systems are all optimization problems of interest.
Systematic measurement could make it clear whether an algorithmic equivalent to Mooreâs Law in the domain of AI exists, and if it exists, clarify its nature. We consider this a highly interesting open question. We suspect weâre more likely to observe similar rates of efï¬ciency progress on similar tasks. By similar tasks we mean within these sub-domains of AI, wide agreement of substantial progress, and comparable levels of investment (compute and/or researcher time). Itâs also unclear the degree to which general vs domain speciï¬c gains would be the drivers of such progress, and how gains compound over long periods as the ï¬eld progresses through several benchmarks. Problems of high investment might be be quite biased towards ones weâre making progress on rather, where an ideal measure might focus on the questions that are seen as most important.
An AI equivalent to Mooreâs Law would be harder to measure, because itâs not about progress on a single problem, itâs about progress on the frontier of optimization problems. Through that lens, it seems more plausible weâll see long term exponential progress on algorithmic efï¬ciency for AI capabilities of interest if our primary ï¬nding is an extension of an existing, long-running trend in progress on optimization problems of interest.
14
# 5.6 Research provides leading indicators of the future economic impact of AI
The eventual overall measure of AI researchâs impact on the world will likely be economic. However, it took past general-purpose technologies like electriï¬cation and information technology a surprisingly long time to become widespread. From the start of information technolology era it was about 30 years before personal computers were in more than half of US homes [Jovanovic & Rousseau, 2005] (similar timeline for personal computers). Analysis of past investments in basic research along 20-30 year timescales in domains like computers indicates that thereâs at least some tractability in foreseeing long term downstream impacts of technology like machine learning. Economic trends of AI are very informative, but measures of research progress are of particular interest to us as leading indicators of the eventual downstream economic and societal impact.
# 5.7 Major limitations
The limitations of this work are discussed throughout, but the major ones are reiterated here:
1. We only have a small number of algorithmic efï¬ciency data points on a few tasks (Section 4). Itâs unclear the degree to which weâd expect the rates of progress weâve observed to generalize to algorithmic efï¬ciency progress on other AI task and domains. We consider this a highly interesting open question that we discuss in Section 5.2.
2. We believe our approach underestimates algorithmic progress, primarily because new capabilities are likely a larger portion of algorithmic progress than observed efï¬ciency gains (Section 5.3). This weakness could be addressed by ï¬tting scaling laws to estimate the cost of prohibitively expensive training runs (Section 2.4).
3. This analysis focuses on the ï¬nal training run cost for an optimized model rather than total devel- opment costs. Some algorithmic improvements make it easier to train a model by making the space of hyper-parameters that will train stably and get good ï¬nal performance much larger. On the other hand, architecture searches increase the gap between the ï¬nal training run cost and total training costs. We believe a quantitative analysis of these effects would be very informative, but itâs beyond the scope of this paper.
4. We donât comment on the degree to which we believe efï¬ciency trends will extrapolate, we merely present our results (Section 4) and the related work (Section 2) we think is relevant for someone attempting to make such a prediction. Though we do comment on the implications if the trends persist.
# 6 Conclusion
We observe that hardware and algorithmic efï¬ciency gains multiply and that neither factor is negligible over meaningful horizons, which suggests that a good model of AI progress should integrate measures from both.
We hope this work is helpful to those trying to understand, measure, and forecast AI progress in a variety of settings. Weâve observed that AI models for high interest tasks are getting cheaper to train at an exponential rate faster than Mooreâs Law. Even though weâre early on in applying this trend to AI, we were surprised and inspired to learn that the original Mooreâs Law was coined when integrated circuits had a mere 64 transistors (6 doublings) [Moore, 1965] and naively extrapolating it out predicted personal computers and smartphones (an iPhone 11 has 8.5 billion transistors). If we observe decades of exponential improvement in the algorithmic efï¬ciency of AI, what might it lead to? Weâre not sure. That these results make us ask this question is a modest update for us towards a future with powerful AI services and technology. Conversely, if we were to start only observing incremental gains (say 2x improvements every 5 years), we think thatâd be a meaningful and widely understandable indicator that algorithmic progress had slowed down.
More ambitiously, we hope that reporting on algorithmic efï¬ciency improvements will become a strong and useful norm in the AI community. Improved performance is what AI algorithms are ultimately judged by. Algorithmically efï¬cient models on benchmarks of interest are promising candidates for scaling up and potentially achieving overall top performance. Efï¬ciency is straightforward to measure, as itâs just a meaningful slice of the learning curves that all experiments generate. Given these considerations and the primacy of efï¬ciency in measuring progress in computer science, we believe thereâs a strong case for reporting on and tracking training efï¬ciency states of the art over time.
15
# 7 Acknowledgements
Weâd like to thank the following people helpful conversations and/or feedback on this paper: Dario Amodei, Jack Clark, Alec Radford, Paul Christiano, Sam McCandlish, Ilya Sutskever, Jacob Steinhardt, Jared Kaplan, Amanda Askell, John Schulman, Ryan Lowe, Tom Henighan, Jacob Hilton, Asya Bergal, Katja Grace, Ryan Carey, Nicholas Joseph, and Geoffrey Irving.
Thanks to Niki Parmar for providing the relevant points from the transformer learning curves [Vaswani et al., 2017].
Also thanks to Mingxing Tan for providing the relevant points from Efï¬cientNet learning curves and running an experiment with reduced warmup [Tan & Le, 2019].
# References
[Amodei & Hernandez, 2018] Amodei, D. & Hernandez, D. (2018). AI and Compute. https://openai.com/ blog/ai-and-compute/. 3, 12, 14
[Bixby, 2012] Bixby, R. E. (2012). A brief history of linear and mixed-integer programming computation. Documenta Mathematica, Extra Volume ISMP, 107â121. 4, 5
[Bloom et al., 2017] Bloom, N., Jones, C. I., Van Reenen, J., & Webb, M. (2017). Are Ideas Getting Harder to Find? Working Paper 23782, National Bureau of Economic Research. 6
[Coleman et al., 2017] Coleman, C., Narayanan, D., Kang, D., Zhao, T., Zhang, J., Nardi, L., Bailis, P., Olukotun, K., Ré, C., & Zaharia, M. (2017). Dawnbench: An end-to-end deep learning benchmark and competition. 5
[Deng et al., 2009] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09. 3
[Devlin et al., 2018] Devlin, J., Chang, M., Lee, K., & Toutanova, K. (2018). BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805. 3, 9
[Grace, 2013] Grace, K. (2013). Algorithmic progress in six domains. arxiv. 4 [Grace et al., 2017] Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2017). When will ai exceed
human performance? evidence from ai experts. 3
[Gupta et al., 2015] Gupta, S., Agrawal, A., Gopalakrishnan, K., & Narayanan, P. (2015). Deep learning with limited numerical precision. In International Conference on Machine Learning (pp. 1737â1746). 13 [He et al., 2015] He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep residual learning for image recognition.
6
[Hoare, 1962] Hoare, C. A. (1962). Quicksort. The Computer Journal, 5(1), 10â16. 3 [Howard et al., 2017] Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., An- dreetto, M., & Adam, H. (2017). Mobilenets: Efï¬cient convolutional neural networks for mobile vision applications. 6
[Huang et al., 2016] Huang, G., Liu, Z., van der Maaten, L., & Weinberger, K. Q. (2016). Densely connected convolutional networks. 6
[Huang, 2017] Huang, J. (2017). Shufï¬enet in pytorch. https://github.com/jaxony/shufï¬enet. 6 [Iandola et al., 2016] Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., & Keutzer, K.
(2016). Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5mb model size. 6
[Ioffe & Szegedy, 2015] Ioffe, S. & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. 11
[Jia et al., 2018] Jia, X., Song, S., He, W., Wang, Y., Rong, H., Zhou, F., Xie, L., Guo, Z., Yang, Y., Yu, L., et al. (2018). Highly scalable deep learning training system with mixed-precision: Training imagenet in four minutes. arXiv preprint arXiv:1807.11205. 13
[Jovanovic & Rousseau, 2005] Jovanovic, B. & Rousseau, P. L. (2005). General purpose technologies. In Handbook of economic growth, volume 1 (pp. 1181â1224). Elsevier. 15
[Krizhevsky et al., 2012] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classiï¬cation with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, & K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 25 (pp. 1097â1105). Curran Associates, Inc. 3
16
[LeCun et al., 1998] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning ap- plied to document recognition. Proceedings of the IEEE, 86(11), 2278â2324. 3
[Lipton & Steinhardt, 2018] Lipton, Z. C. & Steinhardt, J. (2018). Troubling trends in machine learning scholarship. 10
[Liu et al., 2019] Liu, X., He, P., Chen, W., & Gao, J. (2019). Multi-task deep neural networks for natural language understanding. 3
[Ma et al., 2018] Ma, N., Zhang, X., Zheng, H.-T., & Sun, J. (2018). Shufï¬enet v2: Practical guidelines for efï¬cient cnn architecture design. 6
[McCandlish et al., 2018] McCandlish, S., Kaplan, J., Amodei, D., & Team, O. D. (2018). An empirical model of large-batch training. 12, 13
[Mnih et al., 2013] Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., & Ried- miller, M. (2013). Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602. 12
[Moore, 1965] Moore, G. E. (1965). Cramming more components onto integrated circuits. Electronics, 38(8). 3, 14, 15
[OpenAI et al., 2019] OpenAI, :, Berner, C., Brockman, G., Chan, B., Cheung, V., DËebiak, P., Dennison, C., Farhi, D., Fischer, Q., Hashme, S., Hesse, C., Józefowicz, R., Gray, S., Olsson, C., Pachocki, J., Petrov, M., de Oliveira Pinto, H. P., Raiman, J., Salimans, T., Schlatter, J., Schneider, J., Sidor, S., Sutskever, I., Tang, J., Wolski, F., & Zhang, S. (2019). Dota 2 with large scale deep reinforcement learning. 9, 19
[Paszke et al., 2017] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmai- son, A., Antiga, L., & Lerer, A. (2017). Automatic differentiation in PyTorch. In NIPS Autodiff Workshop. 6
[Radford et al., ] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. Language models are unsupervised multitask learners. 9
[Raffel et al., 2019] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., & Liu, P. J. (2019). Exploring the limits of transfer learning with a uniï¬ed text-to-text transformer. 9
[Raymond Perrault & Niebles, 2019] Raymond Perrault, Yoav Shoham, E. B. J. C. J. E. B. G. T. L. J. M. S. M. & Niebles, J. C. (2019). âThe AI Index 2019 Annual Reportâ. Technical report, AI Index Steering Committee, Human-Centered AI Institute, Stanford University, Stanford, CA. 6
[Russakovsky et al., 2015] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. (2015). Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3), 211â252. 3
[Sandler et al., 2018] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2018). Mo- bilenetv2: Inverted residuals and linear bottlenecks. 6
[Sastry et al., 2019] Sastry, G., Clark, J., Brockman, G., & Sutskever, I. (2019). Addendum to AI and Com- pute: Compute used in older headline results. 3
[Silver et al., 2016] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., & Hassabis, D. (2016). Mastering the game of go with deep neural networks and tree search. Nature, 529, 484â503. 9, 18
[Silver et al., 2018] Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., Lillicrap, T., Simonyan, K., & Hassabis, D. (2018). A general reinforcement learning algorithm that masters chess, shogi, and go through self-play. Science, 362(6419), 1140â1144. 3, 9, 18
[Silver et al., 2017] Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T., & Hassabis, D. (2017). Mastering the game of go without human knowledge. Nature, 550, 354â. 9, 18
[Simonyan & Zisserman, 2014] Simonyan, K. & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. 6
[Sutskever et al., 2014] Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. CoRR, abs/1409.3215. 10, 18
[Sutton, 2019] Sutton, R. (2019). The bitter lesson. Incomplete Ideas (blog), March, 13. 12
17
[Szegedy et al., 2014] Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Van- houcke, V., & Rabinovich, A. (2014). Going deeper with convolutions. 6, 11
[Tan & Le, 2019] Tan, M. & Le, Q. V. (2019). Efï¬cientnet: Rethinking model scaling for convolutional neural networks. 6, 10, 11, 16
I., Simonyan, K., Vinyals, O., Kavukcuoglu, K., van den Driessche, G., Lockhart, E., Cobo, L. C., Stimberg, F., Casagrande, N., Grewe, D., Noury, S., Dieleman, S., Elsen, E., Kalchbrenner, N., Zen, H., Graves, A., King, H., Walters, T., Belov, D., & Hassabis, D. (2017). Parallel wavenet: Fast high-ï¬delity speech synthesis. 3
[Vaswani et al., 2017] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., & Polosukhin, I. (2017). Attention is all you need. CoRR, abs/1706.03762. 10, 16, 18
[Wang et al., 2018] Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., & Bowman, S. R. (2018). GLUE: A multi-task benchmark and analysis platform for natural language understanding. CoRR, abs/1804.07461. 3
[Xiao, 2017] Xiao, H. (2017). Pytorch mobilenet implementation of "mobilenets: Efï¬cient convolutional neural networks for mobile vision applications". https://github.com/marvis/pytorch-mobilenet. 6
[Xie et al., 2016] Xie, S., Girshick, R., Dollár, P., Tu, Z., & He, K. (2016). Aggregated residual transforma- tions for deep neural networks. 3, 6
[Zagoruyko & Komodakis, 2016] Zagoruyko, S. & Komodakis, N. (2016). Wide residual networks. 6 [Zhang et al., 2019] Zhang, H., Dauphin, Y. N., & Ma, T. (2019). Fixup initialization: Residual learning
without normalization. 11
[Zhang et al., 2017] Zhang, X., Zhou, X., Lin, M., & Sun, J. (2017). Shufï¬enet: An extremely efï¬cient convolutional neural network for mobile devices. 6, 10
[Zhu, 2019] Zhu, L. (2019). 6
# A Calculations for efï¬ciency improvements in Go, Dota, and Machine Translation
Machine Translation: We estimate that the Transformer [Vaswani et al., 2017] required 61x less compute to get to Seq2Seq-level of performance [Sutskever et al., 2014] on English to French translation on WMTâ14 3 years later. This estimate is based on:
1. total training compute used by the transformer base model in original paper (3.3e18 FLOPs) 2. compute estimate for Seq2Seq in AI and Compute (4.0e19 FLOPs) 3. the base transformer got to Seq2Seq level around 20% of the way through itâs run. (provided by
authors of transformer paper).
4.0e19/(0.20 3.3e18) = 61
⤠We estimate the the Transformer [Vaswani et al., 2017] required 9x less compute to get to GMNT-level of performance on English to French translation on WMT-14 1 year later. This estimate is based on:
1. total training compute used by the transformer big model in original paper (2.3e19 FLOPs) 2. compute estimate for GMNT transformer paper (1.4e20 FLOPs) 3. the base transformer got to Seq2Seq level around 68% of the way through itâs run. (provided by
authors of transformer paper).
1.4e20/(0.68 2.3e19) = 9
â¤
AlphaGo Zero to AlphaZero: We estimate that AlphaZero [Silver et al., 2018] required 8x less compute to get to AlphaGo Zero [Silver et al., 2017] level approximately one year later. We donât currently have enough information to compare to AlphaGo Lee [Silver et al., 2016].This is based on:
1. an estimated 4.4x decrease in total FLOPs used to train AlphaZero in AI and Compute 2. it took AlphaZero 390,000 of the 700,000 steps it was trained for to match AlphaGo Zero perfor-
2. it took AlphaZero 390,000 of the 700,000 steps it was trained for to match AlphaGo Zero perfor- mance.
# mance.
(700, 000/390, 000) = 8
â¤
18
OpenAI Five Rerun: OpenAI Five "Rerun" got to the same skill level from scratch on the ï¬nal environment without surgery using 5x less compute 2 months after the OG match [OpenAI et al., 2019]. However, some hard to pin portion of the additional cost came from a changing environment, as there were balance change patches approximately every 2 weeks during the original 10 month training period.
# B Calculations for efï¬ciency improvements in image classiï¬cation
Table 3 FLOPs required to reach same AlexNet level accuracy
teraï¬op/s-days Experiment Epochs gigaï¬ops/img (used) gigaï¬ops/img (THOP) 367.7 Vgg-11 12 7.98 7.98 - 308.0 Wide_ResNet_50 7 11.46 11.46 - 266.1 AlexNet 90 0.77 0.77 - 118.6 Resnet-50 8 3.86 3.86 - 118.5 Resnet-34 9 3.43 3.43 - 115.3 ResNext_50 7 4.29 4.29 - 97.9 Resnet-18 15 1.70 1.70 - 82.9 DenseNet121 8 2.70 2.70 - 73.1 Squeezenet_v1_1 53 0.36 0.36 - 61.4 GoogLeNet 8 2.00 2.00 - 24.0 MobileNet_v1 11 0.57 0.58 0.57 20.2 MobileNet_v2 16 0.33 0.33 - 15.4 Shufï¬eNet_v2_1_5x 13 0.31 0.31 - 12.9 Shufï¬eNet_v1_1x 24 0.14 0.15 0.14 10.8 Shufï¬eNet_v2_1x 20 0.14 0.15 0.14 6.0 Efï¬cientNet-b0 4 0.39 - 0.39
# gigaï¬ops/img (paper)
Where training_f lops = epochs With images_per_epoch = 1.28 f lops_per_image ⤠⤠106 and a teraf lop/s ⤠images_per_epoch (24 day = 1e12 ⤠⤠60 ⤠60s/day)
19
# C Accuracy achieved in relevant models
Table 4 Top-5 ï¬nal training accuracy comparisons for relevant models
Experiment My Top-5 Pytorch/Examples Top-5 Paper Top-5 Single Crop Validation* AlexNet 79.0% 79.1% 83.0% ? Vgg-11 86.8% 88.6% 93.0% no GoogLeNet 88.0% 89.5% 89.9% yes Resnet-50 92.8% 92.9% 93.3% yes Squeezenet_v1_1 80.6% 80.6% 80.3% ?
# Table 5 Top-1 ï¬nal training accuracy comparisons for relevant models
Experiment My Top-1 Pytorch/Examples Top-1 Paper Top-1 Single Crop Validation* MobileNet_v1 71.0% - 70.6% yes MobileNet_v2 68.5% 71.9% 72.0% yes Shufï¬eNet_v1_1x 64.6% - 67.6% yes Shufï¬eNet_v2_1_5x 69.3% 69.4% 71.6% yes
*We use a single center 224x224 crop for evaluating performance on the validation data points for all of our models, but not all of original papers evaluate performance in this manner.
20 | {
"id": "1807.11205"
} |
2005.04118 | Beyond Accuracy: Behavioral Testing of NLP models with CheckList | Although measuring held-out accuracy has been the primary approach to
evaluate generalization, it often overestimates the performance of NLP models,
while alternative approaches for evaluating models either focus on individual
tasks or on specific behaviors. Inspired by principles of behavioral testing in
software engineering, we introduce CheckList, a task-agnostic methodology for
testing NLP models. CheckList includes a matrix of general linguistic
capabilities and test types that facilitate comprehensive test ideation, as
well as a software tool to generate a large and diverse number of test cases
quickly. We illustrate the utility of CheckList with tests for three tasks,
identifying critical failures in both commercial and state-of-art models. In a
user study, a team responsible for a commercial sentiment analysis model found
new and actionable bugs in an extensively tested model. In another user study,
NLP practitioners with CheckList created twice as many tests, and found almost
three times as many bugs as users without it. | http://arxiv.org/pdf/2005.04118 | Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, Sameer Singh | cs.CL, cs.LG | null | Association for Computational Linguistics (ACL), 2020 | cs.CL | 20200508 | 20200508 | 0 2 0 2
y a M 8 ] L C . s c [
1 v 8 1 1 4 0 . 5 0 0 2 : v i X r a
# Beyond Accuracy: Behavioral Testing of NLP Models with CheckList
Marco Tulio Ribeiro Microsoft Research [email protected]
Tongshuang Wu Univ. of Washington [email protected]
Carlos Guestrin Univ. of Washington [email protected]
Sameer Singh Univ. of California, Irvine [email protected]
# Abstract
Although measuring held-out accuracy has been the primary approach to evaluate general- ization, it often overestimates the performance of NLP models, while alternative approaches for evaluating models either focus on individ- ual tasks or on speciï¬c behaviors. Inspired by principles of behavioral testing in software engineering, we introduce CheckList, a task- agnostic methodology for testing NLP mod- els. CheckList includes a matrix of general linguistic capabilities and test types that facil- itate comprehensive test ideation, as well as a software tool to generate a large and diverse number of test cases quickly. We illustrate the utility of CheckList with tests for three tasks, identifying critical failures in both commercial and state-of-art models. In a user study, a team responsible for a commercial sentiment analy- sis model found new and actionable bugs in an extensively tested model. In another user study, NLP practitioners with CheckList cre- ated twice as many tests, and found almost three times as many bugs as users without it.
# Introduction
One of the primary goals of training NLP models is generalization. Since testing âin the wildâ is expensive and does not allow for fast iterations, the standard paradigm for evaluation is using train- validation-test splits to estimate the accuracy of the model, including the use of leader boards to track progress on a task (Rajpurkar et al., 2016). While performance on held-out data is a useful indicator, held-out datasets are often not compre- hensive, and contain the same biases as the training data (Rajpurkar et al., 2018), such that real-world performance may be overestimated (Patel et al., 2008; Recht et al., 2019). Further, by summarizing the performance as a single aggregate statistic, it becomes diï¬cult to ï¬gure out where the model is failing, and how to ï¬x it (Wu et al., 2019).
A number of additional evaluation approaches have been proposed, such as evaluating robust- ness to noise (Belinkov and Bisk, 2018; Rychalska et al., 2019) or adversarial changes (Ribeiro et al., 2018; Iyyer et al., 2018), fairness (Prabhakaran et al., 2019), logical consistency (Ribeiro et al., 2019), explanations (Ribeiro et al., 2016), diagnos- tic datasets (Wang et al., 2019b), and interactive error analysis (Wu et al., 2019). However, these approaches focus either on individual tasks such as Question Answering or Natural Language Infer- ence, or on a few capabilities (e.g. robustness), and thus do not provide comprehensive guidance on how to evaluate models. Software engineering re- search, on the other hand, has proposed a variety of paradigms and tools for testing complex software systems. In particular, âbehavioral testingâ (also known as black-box testing) is concerned with test- ing diï¬erent capabilities of a system by validating the input-output behavior, without any knowledge of the internal structure (Beizer, 1995). While there are clear similarities, many insights from software engineering are yet to be applied to NLP models. In this work, we propose CheckList, a new eval- uation methodology and accompanying tool1 for comprehensive behavioral testing of NLP models. CheckList guides users in what to test, by provid- ing a list of linguistic capabilities, which are appli- cable to most tasks. To break down potential ca- pability failures into speciï¬c behaviors, CheckList introduces diï¬erent test types, such as prediction invariance in the presence of certain perturbations, or performance on a set of âsanity checks.â Fi- nally, our implementation of CheckList includes multiple abstractions that help users generate large numbers of test cases easily, such as templates, lexi- cons, general-purpose perturbations, visualizations, and context-aware suggestions.
1https://github.com/marcotcr/checklist
Capability Min Func Test INVariance â DiRectional Vocabulary â Fail. rate=15.0% 16.2% © 34.6% NER 0.0% 20.8% Negation 76.4%
Negation 76.4% Test case (A) Testing Negation with MFT Labels: negative, positive, neutral Template: I {NEGATION} {POS_VERB} the {THING}. Expected Predicted Pass? | canât say | recommend the food. neg pos x | didnât love the flight. neg neutral x Failure rate = 76.4% (B) Testing NER with INV Same pred. (inv) after removals / additions @AmericanAir thank you we got on a inv ¢ pos: x different flight to [ Chicago â Dallas ]. neutral @VirginAmerica | canât lose my luggage, inv {neutral 523) moving to [ Brazil + Turkey ] soon, ugh. Failure rate = 20.8% [c) Testing Vocabulary with DIR @AmericanAir service wasn't great. You Ty d neg are lame. neutral @JetBlue why won't YOU help them?! Ugh. | dread you. Sentiment monotonic decreasing (1) x tL ¢ a9 x neutral Failure rate = 34.6%
Figure 1: CheckListing a commercial sentiment analy- sis model (). Tests are structured as a conceptual ma- trix with capabilities as rows and test types as columns (examples of each type in A, B and C).
As an example, we CheckList a commercial sen- timent analysis model in Figure 1. Potential tests are structured as a conceptual matrix, with capa- bilities as rows and test types as columns. As a test of the modelâs Negation capability, we use a Minimum Functionality test (MFT), i.e. simple test cases designed to target a speciï¬c behavior (Figure 1A). We generate a large number of sim- ple examples ï¬lling in a template (âI {NEGATION} {POS_VERB} the {THING}.â) with pre-built lex- icons, and compute the modelâs failure rate on such examples. Named entity recognition (NER) is an- other capability, tested in Figure 1B with an In- variance test (INV) â perturbations that should not change the output of the model. In this case, chang- ing location names should not change sentiment. In Figure 1C, we test the modelâs Vocabulary with a Directional Expectation test (DIR) â perturbations to the input with known expected results â adding negative phrases and checking that sentiment does not become more positive. As these examples indi- cate, the matrix works as a guide, prompting users to test each capability with diï¬erent test types.
We demonstrate the usefulness and generality of CheckList via instantiation on three NLP tasks: sentiment analysis (Sentiment), duplicate question
detection (QQP; Wang et al., 2019b), and ma- chine comprehension (MC; Rajpurkar et al., 2016). While traditional benchmarks indicate that models on these tasks are as accurate as humans, Check- List reveals a variety of severe bugs, where com- mercial and research models do not eï¬ectively han- dle basic linguistic phenomena such as negation, named entities, coreferences, semantic role label- ing, etc, as they pertain to each task. Further, CheckList is easy to use and provides immediate value â in a user study, the team responsible for a commercial sentiment analysis model discovered many new and actionable bugs in their own model, even though it had been extensively tested and used by customers. In an additional user study, we found that NLP practitioners with CheckList generated more than twice as many tests (each test containing an order of magnitude more examples), and uncov- ered almost three times as many bugs, compared to users without CheckList.
# 2 CheckList
Conceptually, users âCheckListâ a model by ï¬ll- ing out cells in a matrix (Figure 1), each cell po- tentially containing multiple tests. In this section, we go into more detail on the rows (capabilities), columns (test types), and how to ï¬ll the cells (tests). CheckList applies the behavioral testing principle of âdecoupling testing from implementationâ by treating the model as a black box, which allows for comparison of diï¬erent models trained on diï¬erent data, or third-party models where access to training data or model structure is not granted.
# 2.1 Capabilities
While testing individual components is a common practice in software engineering, modern NLP mod- els are rarely built one component at a time. In- stead, CheckList encourages users to consider how diï¬erent natural language capabilities are mani- fested on the task at hand, and to create tests to evaluate the model on each of these capabilities. For example, the Vocabulary+POS capability per- tains to whether a model has the necessary vocab- ulary, and whether it can appropriately handle the impact of words with diï¬erent parts of speech on the task. For Sentiment, we may want to check if the model is able to identify words that carry positive, negative, or neutral sentiment, by verify- ing how it behaves on examples like âThis was a good ï¬ight.â For QQP, we might want the model to
understand when modiï¬ers diï¬erentiate questions, e.g. accredited in (âIs John a teacher?â, âIs John an accredited teacher?â). For MC, the model should be able to relate comparatives and superlatives, e.g. (Context: âMary is smarter than John.â, Q: âWho is the smartest kid?â, A: âMaryâ).
We suggest that users consider at least the fol- lowing capabilities: Vocabulary+POS (important words or word types for the task), Taxonomy (syn- onyms, antonyms, etc), Robustness (to typos, irrele- vant changes, etc), NER (appropriately understand- ing named entities), Fairness, Temporal (under- standing order of events), Negation, Coreference, Semantic Role Labeling (understanding roles such as agent, object, etc), and Logic (ability to handle symmetry, consistency, and conjunctions). We will provide examples of how these capabilities can be tested in Section 3 (Tables 1, 2, and 3). This listing of capabilities is not exhaustive, but a starting point for users, who should also come up with additional capabilities that are speciï¬c to their task or domain.
# 2.2 Test Types
We prompt users to evaluate each capability with three diï¬erent test types (when possible): Mini- mum Functionality tests, Invariance, and Direc- tional Expectation tests (the columns in the matrix). A Minimum Functionality test (MFT), inspired by unit tests in software engineering, is a collec- tion of simple examples (and labels) to check a behavior within a capability. MFTs are similar to creating small and focused testing datasets, and are particularly useful for detecting when models use shortcuts to handle complex inputs without actually mastering the capability. The Vocabulary+POS ex- amples in the previous section are all MFTs.
We also introduce two additional test types in- spired by software metamorphic tests (Segura et al., 2016). An Invariance test (INV) is when we apply label-preserving perturbations to inputs and expect the model prediction to remain the same. Diï¬er- ent perturbation functions are needed for diï¬erent capabilities, e.g. changing location names for the NER capability for Sentiment (Figure 1B), or in- troducing typos to test the Robustness capability. A Directional Expectation test (DIR) is similar, except that the label is expected to change in a cer- tain way. For example, we expect that sentiment will not become more positive if we add âYou are lame.â to the end of tweets directed at an airline (Figure 1C). The expectation may also be a target
label, e.g. replacing locations in only one of the questions in QQP, such as (âHow many people are there in England?â, âWhat is the population of England + Turkey?â), ensures that the questions are not duplicates. INVs and DIRs allow us to test models on unlabeled data â they test behaviors that do not rely on ground truth labels, but rather on re- lationships between predictions after perturbations are applied (invariance, monotonicity, etc).
# 2.3 Generating Test Cases at Scale
Users can create test cases from scratch, or by per- turbing an existing dataset. Starting from scratch makes it easier to create a small number of high- quality test cases for speciï¬c phenomena that may be underrepresented or confounded in the original dataset. Writing from scratch, however, requires signiï¬cant creativity and eï¬ort, often leading to tests that have low coverage or are expensive and time-consuming to produce. Perturbation functions are harder to craft, but generate many test cases at once. To support both these cases, we provide a variety of abstractions that scale up test creation from scratch and make perturbations easier to craft. Templates Test cases and perturbations can of- ten be generalized into a template, to test the In Fig- model on a more diverse set of inputs. ure 1 we generalized âI didnât love the food.â with the template âI {NEGATION} {POS_VERB} the {THING}.â, where {NEGATION} = {didnât, canât say I, ...}, {POS_VERB} = {love, like, ...}, {THING} = {food, ï¬ight, service, ...}, and generated all test cases with a Cartesian product. A more diverse set of inputs is particularly helpful when a small set of test cases could miss a failure, e.g. if a model works for some forms of negation but not others. Expanding Templates While templates help scale up test case generation, they still rely on the userâs creativity to create ï¬ll-in values for each
the MORE... I really flight MORE... FILL IN WITH... Check All enjoyed liked loved regret
Figure 2: Templating with masked language models. âI really {mask} the flight.â yields verbs that the user can interactively ï¬lter into positive, negative, and neutral ï¬ll-in lists.
Labels: positive, negative, or neutral; INV: same pred. (INV) after removals/ additions; DIR: sentiment should not decrease ( Ã ) or increase ( Ã )
Failure Rate (%) Test TYPE and Description = G A@ & RB Example test cases & expected behavior MFT: Short sentences with neu- tral adjectives and nouns 0.0 76 4.8 94.6 81.8 MFT: Short sentences with The company is Australian. neutral That is a private aircraft. neutral That cabin crew is extraordinary. pos S _sentiment-laden adjectives 40 15.0 28 0.0 0.2 | despised that aircraft. neg 2 INV: Replace neutral words 4 62 49.4 19.2 19.2 @Vitbin shouldbe concerned that > when I'm about to fly... INV 3 with other neutral words @united the + our nightmare continues... INV * DIR: Add positive phrases.filS 56 19.4 1.4 0.2 10.2 @SoulhwestAir Great trip on 2672 yesterday... You are extraordinary. |p if sent. goes down by > 0.1 ~ ~ . â . @AmericanAir AA45 ... JFK to LAS. You are brilliant. DIR: Add negative phrases, @USAirways your service sucks. You are lame. J fails if sent. goes up by > 0.1 08 346 5.0 0.0 13.2 @jerBiue all day. Labhor you. | INV: Add randomly generated 96 3 24.8 4 @JetBlue that selfie was extreme. @pi9QDK INV Robust, URLs and handles to tweets 6 134 248 14 74 @united stuck because staff took a break? Not happy IK... https://t.co/PWK1jb INV : character wi @JetBlue + @JeBtlue Ler INV INV: Swap one character With 56 ig gq 59 ag @elblue> @leBtlue L eri its neighbor (typo) @SouthwestAir no thanks + thakns INV INV: Switching locations @JetBlue I want you guys to be the first to fly to # Cuba > Canada... INV % should not change predictions 70 208 148 7.6 64 @VvirginAmerica I miss the #nerdbird in San Jose > Denver INV 2 INV: Switching person names 4 151 91 66 24. <~AifPortagents were horrendous. Sharon + Erin was your saviour INV should not change predictions = 4 15-11 6.6 2-4 @united 8602947, Jon + Sean at http://t.co/S8tuTghiOD, thanks. INV Tempora MFT Sentiment change over 419 36,6 42.2 188 11,0 [Used to hate this airline, although now Tike it pos time, present should prevail In the past I thought this airline was perfect, now I think it is creepy. neg MFT: Negated negative should be positive or neutral MFT: Negated neutral should still be neutral 18.8 54.2 29.4 13.2 2.6 40.4 39.6 74.2 98.4 95.4 MFT: Negation of negative at the end, should be pos. or neut. 100.0 90.4 100.0 84.8 7.2 MFT: Negated positive with neutral content in the middle 98-4 100.0 100.0 74.0 30.2 The food is not poor. pos or neutral It isnât a lousy customer service. pos or neutral This aircraft is not private. neutral This is not an international flight. neutral I thought the plane would be awful, but it wasnât. pos or neutral I thought I would dislike that plane, but I didnât. pos or neutral I wouldn't say, given itâs a Tuesday, that this pilot was great. neg I donât think, given my history with airplanes, that this is an amazing staff. neg MFT: Author sentiment is more important than of others 45.4 62.4 68.0 38.8 30.0 MFT: Parsing sentiment in (question, âyesâ) form SRL 9.0 57.6 20.8 3.6 3.0 MFT: Parsing sentiment in (question, ânoâ) form 96.8 90.8 81.6 55.4 54.8 Some people think you are excellent, but I think you are nasty. neg Some people hate you, but I think you are exceptional. pos Do I think that airline was exceptional? Yes. neg Do I think that is an awkward customer service? Yes. neg Do I think the pilot was fantastic? No. neg Do I think this company is bad? No. pos or neutral
Table 1: A selection of tests for sentiment analysis. All examples (right) are failures of at least one model.
placeholder (e.g. positive verbs for {POS_VERB}). We provide users with an abstraction where they mask part of a template and get masked language model (RoBERTa (Liu et al., 2019) in our case) sug- gestions for ï¬ll-ins, e.g. âI really {mask} the flight.â yields {enjoyed, liked, loved, regret, ...}, which the user can ï¬lter into positive, negative, and neutral ï¬ll-in lists and later reuse across mul- tiple tests (Figure 2). Sometimes RoBERTa sug- gestions can be used without ï¬ltering, e.g. âThis is a good {mask}â yields multiple nouns that donât need ï¬ltering. They can also be used in per- turbations, e.g. replacing neutral words like that or the for other words in context (Vocabulary+POS INV examples in Table 1). RoBERTa suggestions can be combined with WordNet categories (syn- onyms, antonyms, etc), e.g. such that only context- appropriate synonyms get selected in a perturba- tion. We also provide additional common ï¬ll-ins for general-purpose categories, such as Named En- tities (common male and female ï¬rst/last names, cities, countries) and protected group adjectives (nationalities, religions, gender and sexuality, etc).
Open source We release an implementation of CheckList at https://github.com/marcotcr/ checklist. In addition to templating features and mask language model suggestions, it contains var- ious visualizations, abstractions for writing test expectations (e.g. monotonicity) and perturbations, saving/sharing tests and test suites such that tests can be reused with diï¬erent models and by diï¬erent teams, and general-purpose perturbations such as char swaps (simulating typos), contractions, name and location changes (for NER tests), etc.
# 3 Testing SOTA models with CheckList
We CueckList the following commercial Sentiment analysis models via their paid APIsâ: Microsoftâs Text Analytics (Hi), Google Cloudâs Natural Lan- guage (G), and Amazonâs Comprehend (@). We also CueckList BERT-base (®) and RoBERTa- base (RoB) (Liu et al., 2019) finetuned on SST-2? (acc: 92.7% and 94.8%) and on the QQP dataset
2From 11/2019, but obtained similar results from 04/2020. 3Predictions with probability of positive sentiment in the p1{3, 2{3q range are considered neutral.
Label: duplicate =, or non-duplicate_#; INV: same pred. (NV) after removals/ additions
âTest TYPE and Description Failure Rate Example Test cases & expected behavior & â RoB Vocab. MFT: Modifiers changes question intent 78.4 78.0 _{ Is Mark Wright a photographer? | Is Mark Wright an accredited photographer? } # ,. _ MFT: Synonyms in simple templates 22.8 39.2 { How can I become more vocal? | How can I become more outspoken? ) = i INV: Replace words with synonyms in real pairs 13.1 12.7 i i necessary ° ee ee o> organised celigiont} INV & MFT: More X = Less antonym(X) 69.4 100.0. { How can I become more optimistic? | How can I become less pessimistic? } = INV: Swap one character with its neighbor (typo) 18.2 12.0. { Why aml getting + gettnig lazy? | Why are we so lazy? } INV Robust. DIR: Paraphrase of question should be duplicate 69.0 25.0 Can I gain weight from not eating enough? _ Can 1+ Do you think I can gain weight from not eating enough? { = INV: Change the same name in both questions 11.8 9.4 NER DIR: Change names in one question, expect # 35.1 30.1 DIR: Keep first word and entities of a question, Why isnât Hillary Clinton > Nicole Perez in jail? Is Hillary Clinton + Nicole Perez going to go to jail? What does India think of Donald Trump? What India thinks about Donald Trump + John Green? # } INV 30.0 32,8. Will itbe difficult to get a US Visa if Donald Trump gets elected? fill in the gaps with ROBERTa; expect # : Will the US accept Donald Trump? MFT: Is # used to be, non-duplicate 61.8 96.8 _{ Is Jordan Perry an advisor? | Did Jordan Perry use to be an advisor? | # | MFT: before # after, non-duplicate 98.0 34.4 {Is itunhealthy to eat after 10pm? | Is it unhealthy to eat before 10pm? } # remporal . coming #: comine What was Danielle Bennettâs life before becoming an agent? MFT: before becoming # after becoming 100.0 0.0 What was Danielle Bennettâs life after becoming an agent? MFT: simple negation, non-duplicate 18.6 0.0 { How can I become a person who is not biased? | How can I become a biased person? | # Negation FT: negation of antonym, should be duplicate 81.6 88.6 { How can I become a positive person? | How can I become a person who is not negative | If Joshua and Chloe were alone, do you think he would reject her? : . se: he + sl 9.0 9 MFT: Simple coreference: he # she 79.0 966 1 Joshua and Chloe were alone, do you think she would reject him? Coref MFT: Simple resolved coreference, his and her 99.6 100.0 If Jack and Lindsey were married, do you think Lindseyâs family would be happy? # If Jack and Lindsey were married, do you think his family would be happy? MET: Order is irrelevant for comparisons 99.6 100.0 MET: Orders is irrelevant in symmetric relations 81.8 100.0 SRL MFT: Order is relevant for asymmetric relations 71.4 100.0 MET: Active / passive swap, same semantics 65.8. 98.6 MET: Active / passive swap, different semantics 97.4 100.0 { Are tigers heavier than insects? | What is heavier, insects or tigers? } = { Is Nicole related to Heather? | Is Heather related to Nicole? } = ( Is Sean hurting Ethan? | Is Ethan hurting Sean? } # { Does Anna love Benjamin? | Is Benjamin loved by Anna? } = { Does Danielle support Alyssa? | Is Danielle supported by Al INV: Symmetry: pred(a, b) = pred(b, a) 44 22 Logic DIR: Implications, eg. (a=b) a (a=c)=>(b=c) 97 85 (Gl. q2) | (@2, a1) | INV no example
# ft
Table 2: A selection of tests for Quora Question Pair. All examples (right) are failures of at least one model.
(acc: 91.1% and 91.3%). For MC, we use a pre- trained BERT-large ï¬netuned on SQuAD (Wolf et al., 2019), achieving 93.2 F1. All the tests pre- sented here are part of the open-source release, and can be easily replicated and applied to new models.
Sentiment Analysis Since social media is listed as a use case for these commercial models, we test on that domain and use a dataset of unlabeled air- line tweets for INV* and DIR perturbation tests. We create tests for a broad range of capabilities, and present subset with high failure rates in Ta- ble 1. The Vocab.+POS MFTs are sanity checks, where we expect models to appropriately handle common neutral or sentiment-laden words. @ and RoB do poorly on neutral predictions (they were trained on binary labels only). Surprisingly, G and @, fail (7.6% and 4.8%) on sentences that are clearly neutral, with G also failing (15%) on non- neutral sanity checks (e.g. âI like this seat.â). In the DIR tests, the sentiment scores predicted by 8% and G frequently (12.6% and 12.4%) go down con- siderably when clearly positive phrases (e.g. âYou
@
4For all the INV tests, models fail whenever their predic- tion changes and the probability changes by more than 0.1.
are extraordinary.â) are added, or up (: 34.6%) for negative phrases (e.g. âYou are lame.â).
All models are sensitive to addition of random (not adversarial) shortened URLs or Twitter han- dles (e.g. 24.8% of @, predictions change), and to name changes, such as locations (G: 20.8%, a: 14.8%) or person names (G: 15.1%, A: 9.1%). of the models do well in tests for the Tem- poral, Negation, and SRL capabilities. Failures on negations as simple as âThe food is not poor.â are particularly notable, e.g. G (54.2%) and &, (29.4%). The failure rate is near 100% for all com- mercial models when the negation comes at the end of the sentence (e.g âI thought the plane would be awful, but it wasnât.â), or with neutral content be- tween the negation and the sentiment-laden word.
Commercial models do not fail simple Fair- ness sanity checks such as âI am a black woman.â (template: âI am a {PROTECTED} {NOUN}.â), always predicting them as neutral. Similar to soft- ware engineering, absence of test failure does not imply that these models are fair â just that they are not unfair enough to fail these simple tests. On always predicts negative when the other hand,
&
Test TYPE Failure Example Test cases (with expected behavior and (i) prediction) and Description Rate (&) MFT: comparisons 20.0 C: Victoria is younger than Dylan. Vocab MFT: intensifiers to superlative: most/least. 91.3 Q: Who is less young? A: Dylan &: Victoria C: Anna is worried about the project. Matthew is extremely worried about the project. Q: Who is least worried about the project? A: Anna @: Matthew MFT: match properties to categories 82.4 MFT: nationality vs job 49.4 > & MFT: animal vs vehicles 26.2 & MFT: comparison to antonym 673 MFT: more/less in context, more/less antonym in question 100.0 C: There is a tiny purple box in the room. Q: What size is the box? A: tim }: purple C: Stephanie is an Indian accountant. . Q: What is Stephanieâs job? A: accountant @: Indian accountant C: Jonathan bought a truck. Isabella bought a hamster. Q: Who bought an animal? A: Isabella @: Jonathan C: Jacob is shorter than Kimberly. Q: Who is taller? A: Kimberly © C: Jeremy is more optimistic than Taylor. Q: Who is more pessimistic? A: Taylor Jacob @: Jeremy C: ...Newcomen designs had a duty of about 7 million, but most were closer to 5 million... g INV: Swap adjacent characters in Q (typo) 11-6 Q; What was the ideal duty + udty of a Newcomen engine? A: INV @: 7 million + 5 million Z 8 % INV: add irrelevant sentence to C 9.8 (no example) _ C: Both Jason and Abigail were journalists, but there was a change in Abigail, who is now a model. @ MPT: change in one person only 41.5 Q: Who is a model? A: Abigail @: Abigail were journalists, but there was a change in Abigail 3 E : f â & MFT: Understanding before/after, last/first 82.9 C: Logan became a farmer before Danielle did. Q: Who became a farmer last? A: Danielle , MFT: Context has negation 67.5 Cz Aaron is not a writer, Rebecca is. Q: Who is a writer? Az Rebecca #: Aaron Z Z_ MFT: Q has negation, C does not 100.0 C; Aaron is an editor. Mark is an actor. Q: Who is not an actor? At Aaron &): Mark ; C: Melissa and Antonio are friends. He is a journalist, and she is an adviser. MFT: Simple coreference, he/she. 100.0 Whois ajoumalict? Az Antonioâ 4: Melisa . : Who is st? Az &: Meliss z . C: Victoria and Alex are friends. Her mom.is an agent & MFT: Simple coreference, his/her. 100.0 Whose mom is an agent? A: Victoria, Alex . C: Kimberly and Jennifer are friends, The former is a teacher MFT: former/latter 100.0 Q: Who is a teacher? A: Kimberly @: Jennifer _, MFT: subject/object distinction 60.8 4 & MFT: subj/obj distinction with 3 agents 95.7
Table 3: A selection of tests for Machine Comprehension.
{PROTECTED} is black, atheist, gay, and lesbian, while predicting positive for Asian, straight, etc.
With the exception of tests that depend on pre- dicting âneutralâ, and RoB did better than all commercial models on almost every other test. This is a surprising result, since the commercial models list social media as a use case, and are under regular testing and improvement with customer feedback, while and RoB are research models trained on and the SST-2 dataset (movie reviews). Finally, RoB fail simple negation MFTs, even though they are fairly accurate (91.5%, 93.9%, respectively) on the subset of the SST-2 validation set that contains negation in some form (18% of instances). By iso- lating behaviors like this, our tests are thus able to evaluate capabilities more precisely, whereas per- formance on the original dataset can be misleading.
&
@
&
Quora Question Pair While and RoB surpass human accuracy on QQP in benchmarks (Wang et al., 2019a), the subset of tests in Table 2 indicate that these models are far from solving the ques- tion paraphrase problem, and are likely relying on
&
# shortcuts for their high accuracy.
Both models lack what seems to be crucial skills for the task: ignoring important modiï¬ers on the Vocab. test, and lacking basic Taxonomy under- standing, e.g. synonyms and antonyms of common words. Further, neither is robust to typos or simple paraphrases. The failure rates for the NER tests indicate that these models are relying on shortcuts such as anchoring on named entities too strongly instead of understanding named entities and their impact on whether questions are duplicates.
Surprisingly, the models often fail to make sim- ple Temporal distinctions (e.g. is#used to be and before#after), and to distinguish between simple Coreferences (he#she). In SRL tests, neither model is able to handle agent/predicate changes, or ac- tive/passive swaps. Finally, @ and RoB change predictions 4.4% and 2.2% of the time when the question order is flipped, failing a basic task re- quirement (if q1 is a duplicate of q2, so is gz of q1). They are also not consistent with Logical implica- tions of their predictions, such as transitivity.
@
Machine Comprehension Vocab+POS tests in often fails to properly grasp in- Table 3 show that tensity modiï¬ers and comparisons/superlatives. It also fails on simple Taxonomy tests, such as match- ing properties (size, color, shape) to adjectives, distinguishing between animals-vehicles or jobs- nationalities, or comparisons involving antonyms.
@
The model does not seem capable of handling short instances with Temporal concepts such as be- fore, after, last, and ï¬rst, or with simple examples of Negation, either in the question or in the context. It also does not seem to resolve basic Coreferences, and grasp simple subject/object or active/passive distinctions (SRL), all of which are critical to true comprehension. Finally, the model seems to have certain biases, e.g. for the simple negation template â{P1} is not a {PROF}, {P2} is.â as con- text, and âWho is a {PROF}?â as question, if we set {PROF} = doctor, {P1} to male names and {P2} to female names (e.g. âJohn is not a doctor, Mary is.â; âWho is a doctor?â), the model fails (picks the man as the doctor) 89.1% of the time. If the situation is reversed, the failure rate is only 3.2% (woman predicted as doctor). If {PROF} = secretary, it wrongly picks the man only 4.0% of the time, and the woman 60.5% of the time.
Discussion We applied the same process to very diï¬erent tasks, and found that tests reveal interest- ing failures on a variety of task-relevant linguistic capabilities. While some tests are task speciï¬c (e.g. positive adjectives), the capabilities and test types are general; many can be applied across tasks, as is (e.g. testing Robustness with typos) or with minor variation (changing named entities yields diï¬erent expectations depending on the task). This small se- lection of tests illustrates the beneï¬ts of systematic testing in addition to standard evaluation. These tasks may be considered âsolvedâ based on bench- mark accuracy results, but the tests highlight vari- ous areas of improvement â in particular, failure to demonstrate basic skills that are de facto needs for the task at hand (e.g. basic negation, agent/object distinction, etc). Even though some of these fail- ures have been observed by others, such as typos (Belinkov and Bisk, 2018; Rychalska et al., 2019) and sensitivity to name changes (Prabhakaran et al., 2019), we believe the majority are not known to the community, and that comprehensive and struc- tured testing will lead to avenues of improvement in these and other tasks.
# 4 User Evaluation
The failures discovered in the previous section demonstrate the usefulness and ï¬exibility of Check- List. In this section, we further verify that Check- List leads to insights both for users who already test their models carefully and for users with little or no experience in a task.
4.1 CheckListing a Commercial System
We approached the team responsible for the gen- eral purpose sentiment analysis model sold as a service by Microsoft (q on Table 1). Since it is a public-facing system, the modelâs evaluation proce- dure is more comprehensive than research systems, including publicly available benchmark datasets as well as focused benchmarks built in-house (e.g. negations, emojis). Further, since the service is ma- ture with a wide customer base, it has gone through many cycles of bug discovery (either internally or through customers) and subsequent ï¬xes, after which new examples are added to the benchmarks. Our goal was to verify if CheckList would add value even in a situation like this, where models are already tested extensively with current practices.
We invited the team for a CheckList session last- ing approximately 5 hours. We presented Check- List (without presenting the tests we had already created), and asked them to use the methodology to test their own model. We helped them imple- ment their tests, to reduce the additional cognitive burden of having to learn the software components of CheckList. The team brainstormed roughly 30 tests covering all capabilities, half of which were MFTs and the rest divided roughly equally between INVs and DIRs. Due to time constraints, we imple- mented about 20 of those tests. The tests covered many of the same functionalities we had tested our- selves (Section 3), often with diï¬erent templates, but also ones we had not thought of. For example, they tested if the model handled sentiment coming from camel-cased twitter hashtags correctly (e.g. â#IHateYouâ, â#ILoveYouâ), implicit negation (e.g. âI wish it was goodâ), and others. Further, they proposed new capabilities for testing, e.g. handling diï¬erent lengths (sentences vs paragraphs) and sen- timent that depends on implicit expectations (e.g. âThere was no {AC}â when {AC} is expected). Qualitatively, the team stated that CheckList was very helpful: (1) they tested capabilities they had not considered, (2) they tested capabilities that they had considered but are not in the benchmarks,
and (3) even capabilities for which they had bench- marks (e.g. negation) were tested much more thor- oughly and systematically with CheckList. They discovered many previously unknown bugs, which they plan to ï¬x in the next model iteration. Finally, they indicated that they would deï¬nitely incorpo- rate CheckList into their development cycle, and requested access to our implementation. This ses- sion, coupled with the variety of bugs we found for three separate commercial models in Table 1, indicates that CheckList is useful even in pipelines that are stress-tested and used in production.
# 4.2 User Study: CheckList MFTs
We conduct a user study to further evaluate dif- ferent subsets of CheckList in a more controlled environment, and to verify if even users with no previous experience in a task can gain insights and ï¬nd bugs in a model. We recruit 18 participants (8 from industry, 10 from academia) who have at least intermediate NLP experience5, and task them ï¬netuned on QQP for a period of with testing two hours (including instructions), using Jupyter notebooks. Participants had access to the QQP val- idation dataset, and are instructed to create tests that explore diï¬erent capabilities of the model. We separate participants equally into three conditions: In Unaided, we give them no further instructions, simulating the current status-quo for commercial systems (even the practice of writing additional tests beyond benchmark datasets is not common for research models). In Cap. only, we provide short descriptions of the capabilities listed in Section 2.1 as suggestions to test, while in Cap.+templ. we further provide them with the template and ï¬ll-in tools described in Section 2.3. Only one partici- pant (in Unaided) had prior experience with QQP. Due to the short study duration, we only asked users to write MFTs in all conditions; thus, even Cap.+templ. is a subset of CheckList.
&
We present the results in Table 4. Even though users had to parse more instructions and learn a new tool when using CheckList, they created many more tests for the model in the same time. Further, templates and masked language model suggestions helped users generate many more test cases per test in Cap.+templ. than in the other two conditions â although users could use arbitrary Python code rather than write examples by hand, only one user in Unaided did (and only for one test).
5i.e. have taken a graduate NLP course or equivalent.
Unaided CheckList Cap. only Cap.+templ. #Tests #Cases/test #Capabilities tested 5.8 Ë 1.1 7.3 Ë 5.6 3.2 Ë 0.7 10.2 Ë 1.8 5.0 Ë 1.2 7.5 Ë 1.9 13.5 Ë 3.4 198.0 Ë 96 7.8 Ë 1.1 Total severity #Bugs (sev Ä 3q 10.8 Ë 3.8 2.2 Ë 1.2 21.7 Ë 5.7 5.5 Ë 1.7 23.7 Ë 4.2 6.2 Ë 0.9
Table 4: User Study Results: ï¬rst three rows indi- cate number of tests created, number of test cases per test and number of capabilities tested. Users report the severity of their ï¬ndings (last two rows).
Users explored many more capabilities on Cap. only and Cap.+templ. (we annotate tests with capabilities post-hoc); participants in Unaided only tested Robustness, Vocabulary+POS, Taxonomy, and few instances of SRL, while participants in the other conditions covered all capabilities. Users in Cap. only and Cap.+templ. collectively came up with tests equivalent to almost all MFTs in Table 2, and more that we had not contemplated. Users in Unaided and Cap. only often did not ï¬nd more bugs because they lacked test case variety even when testing the right concepts (e.g. negation).
At the end of the experiment, we ask users to evaluate the severity of the failures they observe on each particular test, on a 5 point scale6. While there is no âground truthâ, these severity ratings provide each userâs perception on the magnitude of the discovered bugs. We report the severity sum of discovered bugs (for tests with severity at least 2), in Table 4, as well as the number of tests for which severity was greater or equal to 3 (which ï¬lters out minor bugs). We note that users with Check- List (Cap. only and Cap.+templ.) discovered much more severe problems in the model (measured by total severity or # bugs) than users in the control condition (Unaided). We ran a separate round of severity evaluation of these bugs with a new user (who did not create any tests), and obtain nearly identical aggregate results to self-reported severity. The study results are encouraging: with a subset of CheckList, users without prior experience are able to ï¬nd signiï¬cant bugs in a SOTA model in only 2 hours. Further, when asked to rate diï¬erent aspects of CheckList (on a scale of 1-5), users in- dicated the testing session helped them learn more about the model (4.7 Ë 0.5), capabilities helped them test the model more thoroughly (4.5 Ë 0.4), and so did templates (4.3 Ë 1.1).
61 (not a bug), 2 (minor bug), 3 (bug worth investigating and ï¬xing), 4 (severe bug, model may not be ï¬t for production), and 5 (no model with this bug should be in production).
# 5 Related Work
One approach to evaluate speciï¬c linguistic capa- bilities is to create challenge datasets. Belinkov and Glass (2019) note beneï¬ts of this approach, such as systematic control over data, as well as drawbacks, such as small scale and lack of resem- blance to ârealâ data. Further, they note that the majority of challenge sets are for Natural Language Inference. We do not aim for CheckList to replace challenge or benchmark datasets, but to comple- ment them. We believe CheckList maintains many of the beneï¬ts of challenge sets while mitigating their drawbacks: authoring examples from scratch with templates provides systematic control, while perturbation-based INV and DIR tests allow for testing behavior in unlabeled, naturally-occurring data. While many challenge sets focus on extreme or diï¬cult cases (Naik et al., 2018), MFTs also focus on what should be easy cases given a capa- bility, uncovering severe bugs. Finally, the user study demonstrates that CheckList can be used ef- fectively for a variety of tasks with low eï¬ort: users created a complete test suite for sentiment analysis in a day, and MFTs for QQP in two hours, both revealing previously unknown, severe bugs.
With the increase in popularity of end-to- end deep models, the community has turned to âprobesâ, where a probing model for linguistic phe- nomena of interest (e.g. NER) is trained on in- termediate representations of the encoder (Tenney et al., 2019; Kim et al., 2019). Along similar lines, previous work on word embeddings looked for cor- relations between properties of the embeddings and downstream task performance (Tsvetkov et al., 2016; Rogers et al., 2018). While interesting as analysis methods, these do not give users an under- standing of how a ï¬ne-tuned (or end-to-end) model can handle linguistic phenomena for the end-task. For example, while Tenney et al. (2019) found that very accurate NER models can be trained using BERT (96.7%), we show BERT ï¬netuned on QQP or SST-2 displays severe NER issues.
There are existing perturbation techniques meant to evaluate speciï¬c behavioral capabilities of NLP models such as logical consistency (Ribeiro et al., 2019) and robustness to noise (Belinkov and Bisk, 2018), name changes (Prabhakaran et al., 2019), or adversaries (Ribeiro et al., 2018). CheckList provides a framework for such techniques to sys- tematically evaluate these alongside a variety of other capabilities. However, CheckList cannot be
directly used for non-behavioral issues such as data versioning problems (Amershi et al., 2019), label- ing errors, annotator biases (Geva et al., 2019), worst-case security issues (Wallace et al., 2019), or lack of interpretability (Ribeiro et al., 2016).
# 6 Conclusion
While useful, accuracy on benchmarks is not suï¬- cient for evaluating NLP models. Adopting princi- ples from behavioral testing in software engineer- ing, we propose CheckList, a model-agnostic and task-agnostic testing methodology that tests indi- vidual capabilities of the model using three diï¬er- ent test types. To illustrate its utility, we highlight signiï¬cant problems at multiple levels in the con- ceptual NLP pipeline for models that have âsolvedâ existing benchmarks on three diï¬erent tasks. Fur- ther, CheckList reveals critical bugs in commercial systems developed by large software companies, in- dicating that it complements current practices well. Tests created with CheckList can be applied to any model, making it easy to incorporate in current benchmarks or evaluation pipelines.
Our user studies indicate that CheckList is easy to learn and use, and helpful both for expert users who have tested their models at length as well as for practitioners with little experience in a task. The tests presented in this paper are part of Check- Listâs open source release, and can easily be in- corporated into existing benchmarks. More impor- tantly, the abstractions and tools in CheckList can be used to collectively create more exhaustive test suites for a variety of tasks. Since many tests can be applied across tasks as is (e.g. typos) or with minor variations (e.g. changing names), we ex- pect that collaborative test creation will result in evaluation of NLP models that is much more ro- bust and detailed, beyond just accuracy on held-out data. CheckList is open source, and available at https://github.com/marcotcr/checklist.
# Acknowledgments
We would like to thank Sara Ribeiro, Scott Lund- berg, Matt Gardner, Julian Michael, and Ece Kamar for helpful discussions and feedback. Sameer was funded in part by the NSF award #IIS-1756023, and in part by the DARPA MCS program under Contract No. N660011924033 with the United States Oï¬ce of Naval Research.
# References
Saleema Amershi, Andrew Begel, Christian Bird, Rob DeLine, Harald Gall, Ece Kamar, Nachi Nagap- pan, Besmira Nushi, and Tom Zimmermann. 2019. Software engineering for machine learning: A case study. In International Conference on Software En- gineering (ICSE 2019) - Software Engineering in Practice track. IEEE Computer Society.
Boris Beizer. 1995. Black-box Testing: Techniques for Functional Testing of Software and Systems. John Wiley & Sons, Inc., New York, NY, USA.
Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine transla- tion. In International Conference on Learning Rep- resentations.
Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49â72.
Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? an inves- tigation of annotator bias in natural language under- standing datasets. In Empirical Methods in Natural Language Processing (EMNLP), pages 1161â1166.
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In Proceedings of NAACL-HLT, pages 1875â1885.
Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, et al. 2019. Prob- ing what diï¬erent nlp tasks teach machines about In Proceedings of function word comprehension. the Eighth Joint Conference on Lexical and Compu- tational Semantics (* SEM 2019), pages 235â249.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress Test Evaluation for Natural Language Infer- In International Conference on Computa- ence. tional Linguistics (COLING).
Kayur Patel, James Fogarty, James A Landay, and Bev- erly Harrison. 2008. Investigating statistical ma- chine learning as a tool for software development. In Proceedings of the SIGCHI Conference on Hu- man Factors in Computing Systems, pages 667â676. ACM.
Vinodkumar Prabhakaran, Ben Hutchinson, and Mar- garet Mitchell. 2019. Perturbation sensitivity analy- sis to detect unintended model biases. In Proceed- ings of the 2019 Conference on Empirical Methods
in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5740â5745, Hong Kong, China. Association for Computational Lin- guistics.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- In Proceedings of the 56th An- tions for SQuAD. nual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784â 789, Melbourne, Australia. Association for Compu- tational Linguistics.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383â2392.
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do imagenet classiï¬ers In International Confer- generalize to imagenet? ence on Machine Learning, pages 5389â5400.
Marco Tulio Ribeiro, Carlos Guestrin, and Sameer Singh. 2019. Are red roses red? evaluating con- sistency of question-answering models. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6174â6184.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. Why should i trust you?: Explain- In Proceed- ing the predictions of any classiï¬er. ings of the 22nd ACM SIGKDD international con- ference on knowledge discovery and data mining, pages 1135â1144. ACM.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging nlp models. In Association for Computational Linguistics (ACL).
Anna Rogers, Shashwath Hosur Ananthakrishna, and Anna Rumshisky. 2018. Whatâs in your embedding, In Proceed- and how it predicts task performance. ings of the 27th International Conference on Com- putational Linguistics, pages 2690â2703, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Alicja Gosiewska, and PrzemysÅaw Biecek. 2019. Models in the wild: On corruption robustness of neural nlp In International Conference on Neural systems. Information Processing, pages 235â247. Springer.
Sergio Segura, Gordon Fraser, Ana B Sanchez, and An- tonio Ruiz-Cortés. 2016. A survey on metamorphic testing. IEEE Transactions on software engineering, 42(9):805â824.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. In BERT rediscovers the classical NLP pipeline.
Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593â 4601, Florence, Italy. Association for Computational Linguistics.
Yulia Tsvetkov, Manaal Faruqui, and Chris Dyer. 2016. Correlation-based intrinsic evaluation of word vec- tor representations. In Proceedings of the 1st Work- shop on Evaluating Vector-Space Representations for NLP, pages 111â115, Berlin, Germany. Associ- ation for Computational Linguistics.
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial trig- In Proceed- gers for attacking and analyzing nlp. ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2153â2162.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019a. Superglue: A stickier benchmark for general-purpose language un- derstanding systems. In Advances in Neural Infor- mation Processing Systems, pages 3261â3275.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019b. GLUE: A multi-task benchmark and analysis plat- In Inter- form for natural language understanding. national Conference on Learning Representations.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Râemi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingfaceâs trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.
Tongshuang Wu, Marco Tulio Ribeiro, Jeï¬rey Heer, and Daniel S Weld. 2019. Errudite: Scalable, repro- ducible, and testable error analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 747â763. | {
"id": "1907.11692"
} |
2005.03675 | Machine Learning on Graphs: A Model and Comprehensive Taxonomy | There has been a surge of recent interest in learning representations for
graph-structured data. Graph representation learning methods have generally
fallen into three main categories, based on the availability of labeled data.
The first, network embedding (such as shallow graph embedding or graph
auto-encoders), focuses on learning unsupervised representations of relational
structure. The second, graph regularized neural networks, leverages graphs to
augment neural network losses with a regularization objective for
semi-supervised learning. The third, graph neural networks, aims to learn
differentiable functions over discrete topologies with arbitrary structure.
However, despite the popularity of these areas there has been surprisingly
little work on unifying the three paradigms. Here, we aim to bridge the gap
between graph neural networks, network embedding and graph regularization
models. We propose a comprehensive taxonomy of representation learning methods
for graph-structured data, aiming to unify several disparate bodies of work.
Specifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which
generalizes popular algorithms for semi-supervised learning on graphs (e.g.
GraphSage, Graph Convolutional Networks, Graph Attention Networks), and
unsupervised learning of graph representations (e.g. DeepWalk, node2vec, etc)
into a single consistent approach. To illustrate the generality of this
approach, we fit over thirty existing methods into this framework. We believe
that this unifying view both provides a solid foundation for understanding the
intuition behind these methods, and enables future research in the area. | http://arxiv.org/pdf/2005.03675 | Ines Chami, Sami Abu-El-Haija, Bryan Perozzi, Christopher Ré, Kevin Murphy | cs.LG, cs.NE, cs.SI, stat.ML | null | null | cs.LG | 20200507 | 20220412 | 2 2 0 2
r p A 2 1 ] G L . s c [
3 v 5 7 6 3 0 . 5 0 0 2 : v i X r a
# Machine Learning on Graphs: A Model and Comprehensive Taxonomy
# Ines Chami*â , Sami Abu-El-Haijaâ¡, Bryan Perozziâ â , Christopher R´eâ¡â¡, and Kevin Murphyâ â
â Stanford University, Institute for Computational and Mathematical Engineering â¡University of Southern California, Information Sciences Institute â¡â¡Stanford University, Department of Computer Science â â Google Research {chami,chrismre}@cs.stanford.edu, [email protected], [email protected], [email protected]
# April 13, 2022
# Abstract
There has been a surge of recent interest in graph representation learning (GRL). GRL methods have generally fallen into three main categories, based on the availability of labeled data. The ï¬rst, network embedding, focuses on learning unsupervised representations of relational structure. The second, graph regularized neural networks, lever- ages graphs to augment neural network losses with a regularization objective for semi-supervised learning. The third, graph neural networks, aims to learn differentiable functions over discrete topologies with arbitrary structure. How- ever, despite the popularity of these areas there has been surprisingly little work on unifying the three paradigms. Here, we aim to bridge the gap between network embedding, graph regularization and graph neural networks. We propose a comprehensive taxonomy of GRL methods, aiming to unify several disparate bodies of work. Speciï¬cally, we propose the GRAPHEDM framework, which generalizes popular algorithms for semi-supervised learning (e.g. GraphSage, GCN, GAT), and unsupervised learning (e.g. DeepWalk, node2vec) of graph representations into a single consistent approach. To illustrate the generality of GRAPHEDM, we ï¬t over thirty existing methods into this frame- work. We believe that this unifying view both provides a solid foundation for understanding the intuition behind these methods, and enables future research in the area.
*Work partially done during an internship at Google Research.
1
# Contents
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Deï¬nitions . . 2.2 The generalized network embedding problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.1 Node features in network embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.2 Transductive and inductive network embedding . . . . . . . . . . . . . . . . . . . . . . . . . Positional vs structural network embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2.3 . 2.2.4 Unsupervised and supervised network embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.1 The GRAPHEDM framework . 3.2 Taxonomy of objective functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Taxonomy of encoders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.4 Historical Context . . . . . . . . . . . . . . 4.1 Shallow embedding methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.1 Distance-based: Euclidean methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.2 Distance-based: Non-Euclidean methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.1.3 Outer product-based: Matrix factorization methods . . . . . . . . . . . . . . . . . . . . . . . 4.1.4 Outer product-based: Skip-gram methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.2 Auto-encoders . . 4.3 Graph neural networks 4.4 Summary of unsupervised embedding methods . . . . . . . . . . . . . . 5.1 Shallow embedding methods . 5.2 Graph regularization methods . . . . . . 5.2.1 Laplacian . . Skip-gram . 5.2.2 5.3 Graph convolution framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.3.1 The Graph Neural Network model and related frameworks . . . . . . . . . . . . . . . . . . . 5.3.2 Graph Convolution Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spectrum-based methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Spectrum-free methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5.1 Sampling-based spatial methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.5.2 Attention-based spatial methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.6 Non-Euclidean Graph Convolutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.7 Summary of supervised graph embedding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.4 Spectral Graph Convolutions . . . . 5.4.1 5.4.2 5.5 Spatial Graph Convolutions . . . 6.1 Unsupervised applications . 4 6 6 8 8 8 9 9 9 9 11 12 12 13 13 15 16 17 18 21 22 23 24 24 25 25 26 27 27 28 29 29 31 32 32 33 34 34
# 1 Introduction
# 2 Preliminaries
# 3 A Taxonomy of Graph Embedding Models . . .
# 4 Unsupervised Graph Embedding
# 5 Supervised Graph Embedding
# 6 Applications
. 6.1.1 Graph reconstruction . . 6.1.2 Link prediction . . . 6.1.3 Clustering . . . . 6.1.4 Visualization . . 6.2 Supervised applications . . .
. . . . 6.2.1 Node classiï¬cation . 6.2.2 Graph classiï¬cation .
. . . .
.
. . . . . . . .
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
..
2
35 35 35 35 37 37 37 37 38
# 7 Conclusion and Open Research Directions
3
38
# 1 Introduction
Learning representations for complex structured data is a challenging task. In the last decade, many successful models have been developed for certain kinds of structured data, including data deï¬ned on a discretized Euclidean domain. For instance, sequential data, such as text or videos, can be modelled via recurrent neural networks, which can capture sequential information, yielding efï¬cient representations as measured on machine translation and speech recognition tasks. Another example is convolutional neural networks (CNNs), which parameterize neural networks according to structural priors such as shift-invariance, and have achieved unprecedented performance in pattern recognition tasks such as image classiï¬cation or speech recognition. These major successes have been restricted to particular types of data that have a simple relational structure (e.g. sequential data, or data following regular patterns).
In many settings, data is not nearly as regular: complex relational structures commonly arise, and extracting information from that structure is key to understanding how objects interact with each other. Graphs are a universal data structures that can represent complex relational data (composed of nodes and edges), and appear in multiple domains such as social networks, computational chemistry [62], biology [139], recommendation systems [86], semi- supervised learning [60], and others. For graph-structured data, it is challenging to deï¬ne networks with strong structural priors, as structures can be arbitrary, and can vary signiï¬cantly across different graphs and even different nodes within the same graph. In particular, operations like convolutions cannot be directly applied on irregular graph domains. For instance in images, each pixel has the same neighborhood structure, allowing to apply the same ï¬lter weights at multiple locations in the image. However in graphs, one canât deï¬ne an ordering of node since each node might have a different neighborhood structure (Fig. 1). Furthermore, Euclidean convolutions strongly rely on geometric priors (e.g. shift invariance) which donât generalize to non-Euclidean domains (e.g. translations might not even be deï¬ned on non-Euclidean domains).
These challenges led to the development of Geometric Deep Learning (GDL) research which aims at applying deep learning techniques to non-Euclidean data. In particular, given the widespread prevalence of graphs in real- world applications, there has been a surge of interest in applying machine learning methods to graph-structured data. Among these, Graph Representation Learning (GRL) methods aim at learning low-dimensional continuous vector representations for graph-structured data, also called embeddings.
Broadly speaking, GRL can be divided into two classes of learning problems, unsupervised and supervised (or semi-supervised) GRL. The ï¬rst family aims at learning low-dimensional Euclidean representations that preserve the structure of an input graph. The second family also learns low-dimensional Euclidean representations but for a speciï¬c downstream prediction task such as node or graph classiï¬cation. Different from the unsupervised setting where inputs are usually graph structures, inputs in supervised settings are usually composed of different signals deï¬ned on graphs, commonly known as node features. Additionally, the underlying discrete graph domain can be ï¬xed, which is the transductive learning setting (e.g. predicting user properties in a large social network), but can also vary in the inductive learning setting (e.g. predicting molecules attribute where each molecule is a graph). Finally, note that while most supervised and unsupervised methods learn representations in Euclidean vector spaces, there recently has been interest for non-Euclidean representation learning, which aims at learning non-Euclidean embedding spaces such as hyperbolic or spherical spaces. The main motivations for this body of work is to use a continuous embedding space that resembles the underlying discrete structure of the input data it tries to embed (e.g. the hyperbolic space is a continuous version of trees [131]).
Given the impressive pace at which the ï¬eld of GRL is growing, we believe it is important to summarize and describe all methods in one uniï¬ed and comprehensible framework. The goal of this survey is to provide a uniï¬ed view of representation learning methods for graph-structured data, to better understand the different ways to leverage graph structure in deep learning models.
A number of graph representation learning surveys exist. First, there exist several surveys that cover shallow network embedding and auto-encoding techniques and we refer to [27, 36, 67, 73, 164] for a detailed overview of these methods. Second, Bronstein et al. [24] also gives an extensive overview of deep learning models for non-Euclidean data such as graphs or manifolds. Third, there have been several recent surveys [12, 155, 166, 168] covering methods applying deep learning to graphs, including graph neural networks. Most of these surveys focus on a speciï¬c sub-ï¬eld of graph representation learning and do not draw connections between each sub-ï¬eld.
In this work, we extend the encoder-decoder framework proposed by Hamilton et al. [73] and introduce a general framework, the Graph Encoder Decoder Model (GRAPHEDM), which allows us to group existing work into four major categories: (i) shallow embedding methods, (ii) auto-encoding methods, (iii) graph regularization methods, and (iv) graph neural networks (GNNs). Additionally, we introduce a Graph Convolution Framework (GCF), speciï¬cally
4
¢ + + + + $ ¢ + + + + 4 ¢ + + + + ry % + + + + ry 6 & ° °
# (a) Grid (Euclidean).
# (b) Arbitrary graph (Non-Euclidean).
Figure 1: An illustration of Euclidean vs. non-Euclidean graphs.
designed to describe convolution-based GNNs, which have achieved state-of-the art performance in a broad range of applications. This allows us to analyze and compare a variety of GNNs, ranging in construction from methods operating in the Graph Fourier1 domain to methods applying self-attention as a neighborhood aggregation function [147]. We hope that this uniï¬ed formalization of recent work would help the reader gain insights into the various learning methods on graphs to reason about similarities, differences, and point out potential extensions and limitations. That said, our contribution with regards to previous surveys are threefold:
⢠We introduce a general framework, GRAPHEDM, to describe a broad range of supervised and unsupervised methods that operate on graph-structured data, namely shallow embedding methods, graph regularization meth- ods, graph auto-encoding methods and graph neural networks.
⢠Our survey is the ï¬rst attempt to unify and view these different lines of work from the same perspective, and we provide a general taxonomy (Fig. 3) to understand differences and similarities between these methods. In particular, this taxonomy encapsulates over thirty existing GRL methods. Describing these methods within a comprehensive taxonomy gives insight to exactly how these methods differ.
⢠We release an open-source library for GRL which includes state-of-the-art GRL methods and important graph applications, including node classiï¬cation and link prediction. Our implementation is publicly available at https://github.com/google/gcnn-survey-paper.
Organization of the survey We ï¬rst review basic graph deï¬nitions and clearly state the problem setting for GRL (Sec- tion 2). In particular, we deï¬ne and discuss the differences between important concepts in GRL, including the role of node features in GRL and how they relate to supervised GRL (Section 2.2.1), the distinctions between inductive and transductive learning (Section 2.2.2), positional and structural embeddings (Section 2.2.3) and the differences between supervised and unsupervised embeddings (Section 2.2.4). We then introduce GRAPHEDM (Section 3) a general framework to describe both supervised and unsupervised GRL methods, with or without the presence of node features, which can be applied in both inductive and transductive learning settings. Based on GRAPHEDM, we in- troduce a general taxonomy of GRL methods (Fig. 3) which encapsulates over thirty recent GRL models, and we describe both unsupervised (Section 4) and supervised (Section 5) methods using this taxonomy. Finally, we survey graph applications (Section 6).
1As deï¬ned by the eigenspace of the graph Laplacian.
5
# 2 Preliminaries
Here we introduce the notation used throughout this article (see Table 1 for a summary), and the generalized network embedding problem which graph representation learning methods aim to solve.
# 2.1 Deï¬nitions
Deï¬nition 2.1 (Graph). A graph G given as a pair: G = (V, E), comprises a set of vertices (or nodes) V = {v1, . . . , v|V |} connected by edges E = {e1, . . . , e|E|}, where each edge ek is a pair (vi, vj) with vi, vj â V . A graph is weighted if there exist a weight function: w : (vi, vj) â wij that assigns weight wij to edge connecting nodes vi, vj â V . Otherwise, we say that the graph is unweighted. A graph is undirected if (vi, vj) â E implies (vj, vi) â E, i.e. the relationships are symmetric, and directed if the existence of edge (vi, vj) â E does not necessarily imply (vj, vi) â E. Finally, a graph can be homogeneous if nodes refer to one type of entity and edges to one relationship. It can be heterogeneous if it contains different types of nodes and edges.
For instance, social networks are homogeneous graphs that can be undirected (e.g. to encode symmetric relations like friendship) or directed (e.g. to encode the relation following); weighted (e.g. co-activities) or unweighted.
Deï¬nition 2.2 (Path). A path P is a sequence of edges (ui1, ui2), (ui2, ui3), . . . , (uik , uik+1) of length k. A path is called simple if all uij are distinct from each other. Otherwise, if a path visits a node more than once, it is said to contain a cycle.
Deï¬nition 2.3 (Distance). Given two nodes (u, v) in a graph G, we deï¬ne the distance from u to v, denoted dG(u, v), to be the length of the shortest path from u to v, or â if there exist no path from u to v.
The graph distance between two nodes is the analog of geodesic lengths on manifolds.
Deï¬nition 2.4 (Vertex degree). The degree, deg(vi), of a vertex vi in an unweighted graph is the number of edges incident to it. Similarly, the degree of a vertex vi in a weighted graph is the sum of incident edges weights. The degree matrix D of a graph with vertex set V is the |V | à |V | diagonal matrix such that Dii = deg(vi).
Definition 2.5 (Adjacency matrix). A finite graph G = (V, E) can be represented as a square |V| x |V| adjacency matrix, where the elements of the matrix indicate whether pairs of nodes are adjacent or not. The adjacency matrix is binary for unweighted graph, A ⬠{0,1}!V1*!Y|, and non-binary for weighted graphs W ⬠R'V\*|. Undirected graphs have symmetric adjacency matrices, in which case, W denotes symmetrically-normalized adjacency matrix: Ww = D-!/2WwD-'/2, where D is the degree matrix. Definition 2.6 (Laplacian). The unnormalized Laplacian of an undirected graph is the |\V\ x |V| matrix L = DâW. The symmetric normalized Laplacian is L = I â D~'/?WD~1/?. The random walk normalized Laplacian is the matrix L'â = I â D-1W.
The name random walk comes from the fact that Dâ1W is a stochastic transition matrix that can be interpreted as the transition probability matrix of a random walk on the graph. The graph Laplacian is a key operator on graphs and can be interpreted as the analogue of the continuous Laplace-Beltrami operator on manifolds. Its eigenspace capture important properties about a graph (e.g. cut information often used for spectral graph clustering) but can also serve as a basis for smooth functions deï¬ned on the graph for semi-supervised learning [15]. The graph Laplacian is also closely related to the heat equation on graphs as it is the generator of diffusion processes on graphs and can be used to derive algorithms for semi-supervised learning on graphs [167].
Deï¬nition 2.7 (First order proximity). The ï¬rst order proximity between two nodes vi and vj is a local similarity measure indicated by the edge weight wij. In other words, the ï¬rst-order proximity captures the strength of an edge between node vi and node vj (should it exist).
Deï¬nition 2.8 (Second-order proximity). The second order proximity between two nodes vi and vj is measures the similarity of their neighborhood structures. Two nodes in a network will have a high second-order proximity if they tend to share many neighbors.
Note that there exist higher-order measures of proximity between nodes such as Katz Index, Adamic Adar or Rooted PageRank [99]. These notions of node proximity are particularly important in network embedding as many algorithms are optimized to preserve some order of node proximity in the graph.
6
Notation Meaning GRL Graph Representation Learning oe GRAPHEDM Graph Encoder Decoder Model Abbreviations GNN Graph Neural Network GCF Graph Convolution Framework G=(V,E) Graph with vertices (nodes) V and edges E uâ¬EV Graph vertex dg(-,:) Graph distance (length of shortest path) deg(-) Node degree DeRVIXIV! Diagonal degree matrix Graph notation W eRVIxIV Graph weighted adjacency matrix W eRVIXIV Symmetric normalized adjacency matrix (W = D~!/2WD-1/2) Ae {0,1}!IxIVl_ | Graph unweighted weighted adjacency matrix LeRWVIxIV| Graph unnormalized Laplacian matrix (LZ = D â W) LeRWIxIV Graph normalized Laplacian matrix (L = I â D~1/2WD-1/2) L eâ¬RIVIXIV| Random walk normalized Laplacian (L'â¢â = I â D-!W) do Input feature dimension X ⬠RIVIxdo Node feature matrix d Final embedding dimension ZERWVIx4 Node embedding matrix dy Intermediate hidden embedding dimension at layer ¢ Hé E RV Ixde Hidden representation at layer 0 y Label space y® ⬠RIVIXII Graph (5 = G) or node ($ = N) ground truth labels g° ⬠RIVIXIYl Predicted labels s(W) ⬠R'VI*IVI__| Target similarity or dissimilarity matrix in graph regularization GRAPHEDM notation W eRIVIXIVI Predicted similarity or dissimilarity matrix ENC(.; 0â) Encoder network with parameters 0â DEC(.; 0â) Graph decoder network with parameters 0â DEC(-; 0%) Label decoder network with parameters 0% Leurly*, 7°39) Larec(W,W; 0) Lrua() d,(-,-) Supervised loss Graph regularization loss Parametersâ regularization loss Matrix distance used for to compute the graph regularization loss Embedding distance for distance-based decoders p-norm Frobenuis norm
Table 1: Summary of the notation used in the paper.
7
# 2.2 The generalized network embedding problem
Network embedding is the task that aims at learning a mapping function from a discrete graph to a continuous domain. Formally, given a graph G = (V,E) with weighted adjacency matrix W ¢ RIY!*I!Y|, the goal is to learn low- dimensional vector representations {Z;}icv (embeddings) for nodes in the graph {v;};ev, such that important graph properties (e.g. local or global structure) are preserved in the embedding space. For instance, if two nodes have similar connections in the original graph, their learned vector representations should be close. Let Z ⬠R'Y!*¢ denote the nodeâ embedding matrix. In practice, we often want low-dimensional embeddings (d < |V|) for scalability purposes. That is, network embedding can be viewed as a dimensionality reduction technique for graph structured data, where the input data is defined on a non-Euclidean, high-dimensional, discrete domain.
# 2.2.1 Node features in network embedding
Deï¬nition 2.9 (Vertex and edge ï¬elds). A vertex ï¬eld is a function deï¬ned on vertices f : V â R and similarly an edge ï¬eld is a function deï¬ned on edges: F : E â R. Vertex ï¬elds and edge ï¬elds can be viewed as analogs of scalar ï¬elds and tensor ï¬elds on manifolds.
Graphs may have node attributes (e.g. gender or age in social networks; article contents for citation networks) which can be represented as multiple vertex ï¬elds, commonly referred to as node features. In this survey, we denote node features with X â R|V |Ãd0, where d0 is the input feature dimension. Node features might provide useful information about a graph. Some network embedding algorithms leverage this information by learning mappings:
W, X â Z.
In other scenarios, node features might be unavailable or not useful for a given task: network embedding can be featureless. That is, the goal is to learn graph representations via mappings:
W â Z.
Note that depending on whether node features are used or not in the embedding algorithm, the learned representation could capture different aspects about the graph. If nodes features are being used, embeddings could capture both structural and semantic graph information. On the other hand, if node features are not being used, embeddings will only preserve structural information of the graph.
Finally, note that edge features are less common than node features in practice, but can also be used by embed- ding algorithms. For instance, edge features can be used as regularization for node embeddings [38], or to compute messages from neighbors as in message passing networks [62].
# 2.2.2 Transductive and inductive network embedding
Historically, a popular way of categorizing a network embedding method has been by whether the model can generalize to unseen data instances â methods are referred to as operating in either a transductive or inductive setting [158]. While we do not use this concept for constructing our taxonomy, we include a brief discussion here for completeness.
In transductive settings, it assumed that all nodes in the graph are observed in training (typically the nodes all come from one ï¬xed graph). These methods are used to infer information about or between observed nodes in the graph (e.g. predicting labels for all nodes, given a partial labeling). For instance, if a transductive method is used to embed the nodes of a social network, it can be used to suggest new edges (e.g. friendships) between the nodes of the graph. One major limitation of models learned in transductive settings is that they fail to generalize to new nodes (e.g. evolving graphs) or new graph instances.
On the other hand, in inductive settings, models are expected to generalize to new nodes, edges, or graphs that were not observed during training. Formally, given training graphs (G1, . . . , Gk), the goal is to learn a mapping to continuous representations that can generalize to unseen test graphs (Gk+1, . . . , Gk+l). For instance, inductive learning can be used to embed molecular graphs, each representing a molecule structure [62], generalizing to new
2Although we present the model taxonomy via embedding nodes yielding Z â R|V |Ãd, it can also be extended for models that embed an entire graph i.e. with Z â Rd as a d-dimensional vector for the whole graph (e.g. [7, 52]), or embed graph edges Z â R|V |Ã|V |Ãd as a (potentially sparse) 3D matrix with Zu,v â Rd representing the embedding of edge (u, v).
8
graphs and showing error margins within chemical accuracy on many quantum properties. Embedding dynamic or temporally evolving graphs is also another inductive graph embedding problem.
There is a strong connection between inductive graph embedding and node features (Section 2.2.1) as the latter are usually necessary for most inductive graph representation learning algorithms. More concretely, node features can be leveraged to learn embeddings with parametric mappings and instead of directly optimizing the embeddings, one can optimize the mappingâs parameters. The learned mapping can then be applied to any node (even those that were not present a training time). On the other hand, when node features are not available, the ï¬rst mapping from nodes to embeddings is usually a one-hot encoding which fails to generalize to new graphs where the canonical node ordering is not available.
Finally, we note that this categorization of graph embedding methods is at best an incomplete lens for viewing the landscape. While some models are inherently better suited to different tasks in practice, recent theoretical results [138] show that models previously assumed to be capable of only one setting (e.g. only transductive) can be used in both.
# 2.2.3 Positional vs structural network embedding
An emerging categorization of graph embedding algorithms is about whether the learned embeddings are positional or structural. Position-aware embeddings capture global relative positions of nodes in a graph and it is common to refer to embeddings as positional if they can be used to approximately reconstruct the edges in the graph, preserving distances such as shortest paths in the original graph [162]. Examples of positional embedding algorithms include random walk or matrix factorization methods. On the other hand, structure-aware embeddings capture local structural information about nodes in a graph, i.e. nodes with similar node features or similar structural roles in a network should have similar embeddings, regardless of how far they are in the original graph. For instance, GNNs usually learn embeddings by incorporating information for each nodeâs neighborhood, and the learned representations are thus structure-aware.
In the past, positional embeddings have commonly been used for unsupervised tasks where positional informa- tion is valuable (e.g. link prediction or clustering) while structural embeddings have been used for supervised tasks (e.g. node classiï¬cation or whole graph classiï¬cation). More recently, there has been attempts to bridge the gap be- tween positional and structural representations, with positional GNNs [162] and theoretical frameworks showing the equivalence between the two classes of embeddings [138].
# 2.2.4 Unsupervised and supervised network embedding
Network embedding can be unsupervised in the sense that the only information available is the graph structure (and possibly node features) or supervised, if additional information such as node or graph labels is provided. In unsu- pervised network embedding, the goal is to learn embeddings that preserved the graph structure and this is usually achieved by optimizing some reconstruction loss, which measures how well the learned embeddings can approximate the original graph. In supervised network embedding, the goal is to learn embeddings for a speciï¬c purpose such as predicting node or graph attributes, and models are optimized for a speciï¬c task such as graph classiï¬cation or node classiï¬cation. We use the level of supervision to build our taxonomy and cover differences between supervised and unsupervised methods in more details in Section 3.
# 3 A Taxonomy of Graph Embedding Models
We ï¬rst describe our proposed framework, GRAPHEDM, a general framework for GRL (Section 3.1). In particular, GRAPHEDM is general enough that it can be used to succinctly describe over thirty GRL methods (both unsupervised and supervised). We use GRAPHEDM to introduce a comprehensive taxonomy in Section 3.2 and Section 3.3, which summarizes exiting works with shared notations and simple block diagrams, making it easier to understand similarities and differences between GRL methods.
# 3.1 The GRAPHEDM framework
The GRAPHEDM framework builds on top of the work of Hamilton et al. [73], which describes unsupervised network embedding methods from an encoder-decoder perspective. Cruz et al. [45] also recently proposed a modular encoder- based framework to describe and compare unsupervised graph embedding methods. Different from these unsupervised frameworks, we provide a more general framework which additionally encapsulates supervised graph embedding
9
[xf ENC(W, X; 0â) fz} DEC(Z; 05) » Mig Ls Loup kk-4 â>| £Lo,REG 1 1 1 1 1 1 1 1 1 1 1 1 1 1 L
Figure 2: Illustration of the GRAPHEDM framework. Based on the supervision available, methods will use some or all of the branches. In particular, unsupervised methods do not leverage label decoding for training and only optimize the similarity or dissimilarity decoder (lower branch). On the other hand, semi-supervised and supervised methods leverage the additional supervision to learn modelsâ parameters (upper branch).
methods, including ones utilizing the graph as a regularizer (e.g. [169]), and graph neural networks such as ones based on message passing [62, 132] or graph convolutions [25, 83].
Input The GRAPHEDM framework takes as input an undirected weighted graph G = (V, E), with adjacency matrix W â R|V |Ã|V |, and optional node features X â R|V |Ãd0 . In (semi-)supervised settings, we assume that we are given training target labels for nodes (denoted N ), edges (denoted E), and/or for the entire graph (denoted G). We denote the supervision signal as S â {N, E, G}, as presented below.
Model The GRAPHEDM framework can be decomposed as follows:
⢠Graph encoder network ENCÎE : R|V |Ã|V | à R|V |Ãd0 â R|V |Ãd, parameterized by ÎE, which combines the graph structure with node features (or not) to produce node embedding matrix Z â R|V |Ãd as:
Z = ENC(W, X; ÎE).
As we shall see next, this node embedding matrix might capture different graph properties depending on the supervision used for training.
* Graph decoder network DECop : R!Y!*4 RIV|*IYI, parameterized by 0â, which uses the node embed- dings Z to compute similarity or dissimilarity scores for all node pairs, producing a matrix W ⬠RIV!*IV! as:
a W =DEC(Z;0â).
⢠Classiï¬cation network DECÎS : R|V |Ãd â R|V |Ã|Y|, where Y is the label space. This network is used in (semi-)supervised settings and parameterized by ÎS. The output is a distribution over the labels ËyS, using node embeddings, as:
9° = DEC(Z; 0%).
Our GRAPHEDM framework is general (see Fig. 2 for an illustration). Speciï¬c choices of the aforementioned (en- coder and decoder) networks allows GRAPHEDM to realize speciï¬c graph embedding methods. Before presenting the taxonomy and showing realizations of various methods using our framework, we brieï¬y discuss an application perspective.
Output The GRAPHEDM model can return a reconstructed graph similarity or dissimilarity matrix Ww (often used to train unsupervised embedding algorithms), as well as a output labels 7° for supervised applications. The label output space Y varies depending on the supervised application.
+ Node-level supervision, with 7 ¢ Y!â!, where Y represents the node label space. If Y is categorical, then this is also known as (semi-)supervised node classification (Section 6.2.1), in which case the label decoder network produces labels for each node in the graph. If the embedding dimensions d is such that d = |))|,
10
# yS
then the label decoder network can be just a simple softmax activation across the rows of Z, producing a distribution over labels for each node. Additionally, the graph decoder network might also be used in supervised node-classiï¬cation tasks, as it can be used to regularize embeddings (e.g. neighbor nodes should have nearby embeddings, regardless of node labels).
+ Edge-level supervision, with 7â ¢ Y!â'*!â|, where Â¥ represents the edge label space. For example, V can be multinomial in knowledge graphs (for describing the types of relationships between two entities), setting Y = {0,1} *elation yes) Tt is common to have #(relation types) = 1, and this is is known as link prediction, where edge relations are binary. In this review, when 7â = {0,1}!Y!*IYl Ge. Y = {0,1}), then rather than naming the output of the decoder as 7â, we instead follow the nomenclature and position link prediction as an unsupervised task (Section 4). Then in lieu of 7â we utilize Ww, the output of the graph decoder network (which is learned to reconstruct a target similarity or dissimilarity matrix) to rank potential edges.
* Graph-level supervision, with 7° ⬠), where ) is the graph label space. In the graph classification task (Sec- tion 6.2.2), the label decoder network converts node embeddings into a single graph labels, using graph pooling via the graph edges captured by W. More concretely, the graph pooling operation is similar to pooling in stan- dard CNNs, where the goal is to downsample local feature representations to capture higher-level information. However, unlike images, graphs donât have a regular grid structure and it is hard to define a pooling pattern which could be applied to every node in the graph. A possible way of doing so is via graph coarsening, which groups similar nodes into clusters to produce smaller graphs [49]. There exist other pooling methods on graphs such as DiffPool [160] or SortPooling [165] which creates an ordering of nodes based on their structural roles in the graph. Details about graph pooling operators is outside the scope of this work and we refer the reader to recent surveys [155] for a more in-depth treatment.
# 3.2 Taxonomy of objective functions
We now focus our attention on the optimization of models that can be described in the GRAPHEDM framework by describing the loss functions used for training. Let Î = {ÎE, ÎD, ÎS} denote all model parameters. GRAPHEDM models can be optimized using a combination of the following loss terms:
SUP, which compares the predicted labels ËyS to the ground truth labels yS. This term depends on the task the model is being trained for. For instance, in semi-supervised node classiï¬cation tasks (S = N ), the graph vertices are split into labelled and unlabelled nodes (V = VL ⪠VU ), and the supervised loss is computed for each labelled node in the graph:
Leur(y%,9%30) = So ey, G38), i]u;eVE
where £(-) is the loss function used for classification (e.g. cross-entropy). Similarly for graph classification tasks (S = G), the supervised loss is computed at the graph-level and can be summed across multiple training graphs:
LSur(y%, 9%: ©) = Ey, 9%; 8).
* Graph regularization loss term, £¢.REG, which leverages the graph structure to impose regularization con- straints on the model parameters. This loss term acts as a smoothing term and measures the distance between the decoded similarity or dissimilarity matrix Ww, and a target similarity or dissimilarity matrix s(W), which might capture higher-order proximities than the adjacency matrix itself:
Lorna (W, W;®) = di(s(W), W), (1)
where d1(·, ·) is a distance or dissimilarity function. Examples for such regularization are constraining neigh- boring nodes to share similar embeddings, in terms of their distance in L2 norm. We will cover more examples of regularization functions in Section 4 and Section 5.
⢠Weight regularization loss term, LREG, e.g. for representing prior, on trainable model parameters for reducing overï¬tting. The most common regularization is L2 regularization (assumes a standard Gaussian prior):
LREG(Î) = ||θ||2 2. θâÎ
11
Finally, models realizable by GRAPHEDM framework are trained by minimizing the total loss L deï¬ned as:
L=aliyp(y*®, 9%; 9) + BLarec(W, W; 0) + 7LrEc(9), (2)
where a, 3 and y are hyper-parameters, that can be tuned or set to zero. Note that graph embedding methods can be trained in a supervised (a # 0) or unsupervised (a = 0) fashion. Supervised graph embedding approaches leverage an additional source of information to learn embeddings such as node or graph labels. On the other hand, unsupervised network embedding approaches rely on the graph structure only to learn node embeddings.
A common approach to solve supervised embedding problems is to ï¬rst learn embeddings with an unsupervised method (Section 4) and then train a supervised model on the learned embeddings. However, as pointed by Weston et al. [154] and others, using a two-step learning algorithm might lead to sub-optimal performances for the supervised task, and in general, supervised methods (Section 5) outperform two-step approaches.
# 3.3 Taxonomy of encoders
Having introduced all the building blocks of the GRAPHEDM framework, we now introduce our graph embedding taxonomy. While most methods we describe next fall under the GRAPHEDM framework, they will signiï¬cantly differ based on the encoder used to produce the node embeddings, and the loss function used to learn model parameters. We divide graph embedding models into four main categories:
⢠Shallow embedding methods, where the encoder function is a simple embedding lookup. That is, the parame- ters of the model ÎE are directly used as node embeddings:
Z = ENC(ÎE) = ÎE â R|V |Ãd.
Note that shallow embedding methods rely on an embedding lookup and are therefore transductive, i.e. they generally cannot be directly applied in inductive settings where the graph structure is not ï¬xed.
⢠Graph regularization methods, where the encoder network ignores the graph structure and only uses node features as input:
Z = ENC(X; ÎE).
As its name suggests, graph regularization methods leverage the graph structure through the graph regularization loss term in Eq. (2) (8 4 0) to regularize node embeddings.
⢠Graph auto-encoding methods, where the encoder is a function of the graph structure only:
Z = ENC(W ; ÎE).
⢠Neighborhood aggregation methods, including graph convolutional methods, where both the node features and the graph structure are used in the encoder network. Neighborhood aggregation methods use the graph structure to propagate information across nodes and learn embeddings that encode structural properties about the graph:
Z = ENC(W, X; ÎE).
# 3.4 Historical Context
In most machine learning applications, models follow a rather simple two-step paradigm. First they automatically extract meaningful patterns from data, without the need for manual feature engineering. This is the so-called represen- tation learning step [17]. Second, they use these representations in downstream applications which can be supervised (e.g. classiï¬cation) or unsupervised (e.g. clustering, visualization, nearest-neighbor search). This is the so-called downstream task.3
3For supervised tasks, these two steps are often combined into one learning algorithm, which learns both representations and decision rules on top of these representations.
12
A good data representation should be expressive enough that it preserves meaningful features found in the original data, but simple enough that it makes the downstream task easier. For instance, having low-dimensional representations of high-dimensional datasets can help overcome issues caused by the curse of dimensionality such as overï¬tting. In the context of GRL, a graph encoder is used to learn representation, and a graph or label decoder is used for downstream tasks (e.g. node classiï¬cation, link prediction). Historically, the graph encoder-decoder networks were used for manifold learning. When input data lies on a high-dimensional Euclidean space, it is common to assume it sits in an intrinsically low-dimensional manifold. This is known as the standard manifold hypothesis. Methods for manifold learning seek to recover this intrinsically low-dimensional manifold. This is usually done by ï¬rst building a discrete approximation of the manifold using a graph in which nearby points in the ambient Euclidean space are connected by an edge. Because manifolds are locally Euclidean, graph distances provide a good proxy for both local and global manifold distances. The second step is to âï¬attenâ this graph representation by learning a non-linear mapping from nodes in the graph to points in a low-dimensional Euclidean space, while preserving graph distances as best as possible. These representations are usually easier to work with than the original high-dimensional representations, and can then be used in downstream applications.
In the early 2000s, non-linear4 dimensionality reduction methods were extremely popular to solve the manifold learning problem. For instance, Laplacian Eigenmaps (LE) [14] use spectral techniques to compute embeddings, and IsoMap [141] use a combination of the FloydâWarshall algorithm and the classical Multi-dimensional scaling algorithm to preserve global graph geodesics. These methods rely on shallow encoders, and we describe some of these in Section 4.1.1.
While manifold dimensionality reduction methods have had critical impact in machine learning applications, they do not scale to large datasets. For instance, IsoMAP requires computing all pairs of shortest paths which takes more than quadratic time. A perhaps more important limitation is their inability to compute embeddings for new datapoints because the mappings from node to embeddings are non-parametric.
In more recent years, many non-shallow network architectures have been proposed for the problem of graph embedding. These include graph regularization networks and graph neural networks which can be described using our GRAPHEDM framework. Because they leverage the expressiveness of deep neural networks, GRL models often yield more expressive, scalable and generalizable embeddings than classical methods.
In the next sections, we review recent methods for supervised and unsupervised graph embedding techniques using GRAPHEDM and summarize the proposed taxonomy in Fig. 3.
# 4 Unsupervised Graph Embedding
We now give an overview of recent unsupervised graph embedding approaches using the taxonomy described in the previous section. These methods map a graph, its nodes, and/or its edges, onto a continuous vector space, without using task-speciï¬c labels for the graph or its nodes. Some of these methods optimize an objective to learn an embedding that preserves the graph structure e.g. by learning to reconstruct some node-to-node similarity or dissimilarity matrix, such as the adjacency matrix. Some of these methods apply a contrastive objective, e.g. contrasting close-by node-pairs versus distant node-pairs [121]: nodes co-visited in short random walks should have a similarty score higher than distant ones; or contrasting real graphs versus fake ones [148]: the mutual information between a graph and all of its nodes, should be higher in real graphs than in fake graphs.
# 4.1 Shallow embedding methods
Shallow embedding methods are transductive graph embedding methods where the encoder function is a simple em- bedding lookup. More concretely, each node vi â V has a corresponding low-dimensional learnable embedding vector Zi â Rd and the shallow encoder function is simply:
Z = ENC(ÎE) = ÎE â R|V |Ãd.
Embeddings of nodes can be learned such that the structure of the data in the embedding space corresponds to the underlying graph structure. At a high level, this is similar to dimensionality reduction methods such as PCA, except
4The non-linearity term here comes to contrast with Euclidean dimensionality reduction methods (such as Principal Component Analysis) which rely on linear projections.
13
# Legend
Legend Laplacian MDS [Kruskal, 1964] Section 4.1.1 N IsoMAP [Tenenbaum, 2000] Shallow embeddings LLE [Roweis, 2000] LE [Belkin, 2002] Non-Euclidean Poincaré [Nickel, 2017] Section 4.1.2 |) Lorentz [Nickel, 2018] Product [Gu, 2018] Graph neural networks Matrix GF [Ahmed, 2013] Factorization |â| GraRep [Cao, 2015] Section 4.1.3 HOPE [Ou, 2016] DeepWalk [Perozzi, 2014] node2vec [Grover, 2016] WYS [Abu-el-haija, 2018] LINE [Tang, 2015] HARP [Chen, 2018] Unsupervised @=0) Autoencoders| {SDNE [Wang, 2016] Section 4 IN . = Section 4.2 DNGR [Cao, 2016] ( XAT 1 ENC(W, X;0â) N Message passing GAE [Kipf, 2016] Section 4.3. | Graphite [Grover, 2019] DGI [Velickovi¢, 2019] Laplacian LP [Zhu, 2002] Section 5.1] | LS [Zhou, 2004] Laplacian ManiReg [Belkin, 2006] Section 5.2.1 |) SemiEmbed [Weston, 2008] NGM [Bui, 2018] | Planetoid [Yang, 2016] Graph Embedding Supervised (a #0) Section 5 GNN GGSNN MPNN GraphNets Spectrum-free ChebyNet Section 5.4.2 || GCN Spatial SAGE Section 5.5 |) MoNet GAT Non-Euclidean |_| HGCN Section 5.6 HGNN SCNN
# GNN [Scarselli, 2009] GGSNN [Li, 2015] MPNN [Gilmer, 2017] GraphNets [Battaglia, 2016]
# ChebyNet [Defferrard, 2016] GCN [Kipf, 2016]
# SAGE [Hamilton, 2017] MoNet [Monti, 2017] GAT [VeliËckovi´c, 2018]
# HGCN [Chami, 2019] HGNN [Liu, 2019]
# Global ï¬ltering
|
# Spectrum-based Section 5.4.1
# SCNN [Bruna, 2014] [Henaff, 2015]
Figure 3: Taxonomy of graph representation learning methods. Based on what information is used in the encoder network, we categorize graph embedding approaches into four categories: shallow embeddings, graph auto-encoders, graph-based regularization and graph neural networks. Note that message passing methods can also be viewed as spatial convolution, since messages are computed over local neighborhood in the graph domain. Reciprocally, spatial convolutions can also be described using message passing frameworks.
14
PEA â : ecw :
Figure 4: Shallow embedding methods. The encoder is a simple embedding look-up and the graph structure is only used in the loss function.
that the input data might not have a linear structure. In particular, methods used for non-linear dimensionality reduction often start by building a discrete graph from the data (to approximate the manifold) and can be applied to graph embedding problems. Here, we analyze two major types of shallow graph embedding methods, namely distance- based and outer product-based methods.
Distance-based methods These methods optimize embeddings such that points that are close in the graph (as mea- sured by their graph distances for instance) stay as close as possible in the embedding space using a predeï¬ned distance function. Formally, the decoder network computes pairwise distance for some distance function d2(·, ·), which can lead to Euclidean (Section 4.1.1) or non-Euclidean (Section 4.1.2) embeddings:
W =DEC(Z; 0â) with W,; = do(Z;, Z;)
Outer product-based methods These methods on the other hand rely on pairwise dot-products to compute node similarities and the decoder network can be written as:
W = DEC(Z;0â) =ZZ'.
Embeddings are then learned by minimizing the graph regularization loss: Lo,REG(W, W; 0) = d,(s(W), W). Note that for distance-based methods, the function s(-) measures dissimilarity or distances between nodes (higher values mean less similar pairs of nodes), while in outer-product methods, it measures some notion of similarity in the graph (higher values mean more similar pairs).
# 4.1.1 Distance-based: Euclidean methods
Most distance-based methods optimize Euclidean embeddings by minimizing Euclidean distances between similar nodes. Among these, we ï¬nd linear embedding methods such as PCA or MDS, which learn low-dimensional linear projection subspaces, or nonlinear methods such as Laplacian eigenmaps, IsoMAP and Local linear embedding. Note that all these methods have originally been introduced for dimensionality reduction or visualization purposes, but can easily be extended to the context of graph embedding.
Multi-Dimensional Scaling (MDS) [89] refers to a set of embedding techniques used to map objects to positions while preserving the distances between these objects. In particular, metric MDS (mMDS) [44] minimizes the reg- ularization loss in Eq. (1) with s(W ) set to some distance matrix measuring the dissimilarity between objects (e.g. Euclidean distance between points in a high-dimensional space):
a (Daj (s(W)iy â Wi)? Â¥? d,(s(W), W) = (| + âââ~_, = 16007) 7) = ( > WP ) Wij = do(Zi, Z;) = ||Zi â Zillo-
That is, mMDS ï¬nds an embedding conï¬guration where distances in the low-dimensional embedding space are pre- served by minimizing a residual sum of squares called the stress cost function. Note that if the dissimilarities are computed from Euclidean distances of a higher-dimensional representation, then mMDS is equivalent to the PCA dimensionality reduction method. Finally, there exist variants of this algorithm such as non-metric MDS, when the dissimilarity matrix s(W ) is not a distance matrix, or classical MDS (cMDS) which can be solved in closed form using a low-rank decomposition of the gram matrix.
15
Isometric Mapping (IsoMap) [141] is an algorithm for non-linear dimensionality reduction which estimates the intrinsic geometry of a data lying on a manifold. This method is similar to MDS, except for a different choice of the distance matrix. IsoMap approximates manifold distances (in contrast with straight-line Euclidean geodesics) by ï¬rst constructing a discrete neighborhood graph G, and then using the graph distances (length of shortest paths computed using Dijkstraâs algorithm for example) to approximate the manifold geodesic distances:
s(W )ij = dG(vi, vj).
IsoMAP then uses the cMDS algorithm to compute representations that preserve these graph geodesic distances. Different from cMDS, IsoMAP works for distances that do not necessarily come from a Euclidean metric space (e.g. data deï¬ned on a Riemannian manifold). It is however computationally expensive due to the computation of all pairs of shortest path lengths in the neighborhood graph.
Locally Linear Embedding (LLE) [127] is another non-linear dimensionality reduction technique which was intro- duced around the same time as IsoMap and improves over its computational complexity via sparse matrix operations. Different from IsoMAP which preserves the global geometry of manifolds via geodesics, LLE is based on the local ge- ometry of manifolds and relies on the assumptions that when locally viewed, manifolds are approximately linear. The main idea behind LLE is to approximate each point using a linear combination of embeddings in its local neighborhood (linear patches). These local neighborhoods are then compared globally to ï¬nd the best non-linear embedding.
(LE) [14] is a non-linear dimensionality reduction methods that seeks to preserve local dis- Laplacian Eigenmaps tances. Spectral properties of the graph Laplacian matrix capture important structural information about graphs. In particular, eigenvectors of the graph Laplacian provide a basis for smooth functions deï¬ned on the graph vertices (the âsmoothestâ function being the constant eigenvector corresponding to eigenvalue zero). LE is a non-linear dimension- ality reduction technique which builds on this intuition. LE ï¬rst constructs a graph from datapoints (e.g. k-NN graph or ε-neighborhood graph) and then represents nodes in the graphs via the Laplacianâs eigenvectors corresponding to smaller eigenvalues. The high-level intuition for LE is that points that are close on the manifold (or graph) will have similar representations, due to the âsmoothnessâ of Laplacianâs eigenvectors with small eigenvalues. Formally, LE learns embeddings by solving the generalized eigenvector problem:
min Z'LZ ZERIV! xd subject to Z'DZ =I and Z'D1=0,
where the first constraint removes an arbitrary scaling factor in the embedding and the second one removes trivial solutions corresponding to the constant eigenvector (with eigenvalue zero for connected graphs). Further, note that ZILZ= 3 YY; 4 W;,;||Z; â Z;||} and therefore the minimization objective can be equivalently written as a graph regularization term using our notations:
a (W,W) = 3° Wii; aj Wij = de(Zj, Z;) = ||Zi â 2).
Therefore, LE learns embeddings such that the Euclidean distance in the embedding space is small for points that are close on the manifold.
# 4.1.2 Distance-based: Non-Euclidean methods
The distance-based methods described so far assumed embeddings are learned in a Euclidean space. Graphs are non-Euclidean discrete data structures, and several works proposed to learn graph embeddings into non-Euclidean spaces instead of conventional Euclidean space. Examples of such spaces include the hyperbolic space, which has a non-Euclidean geometry with a constant negative curvature and is well-suited to represent hierarchical data.
To give more intuition, the hyperbolic space can be thought of as continuous versions of trees, where geodesics (generalization of shortest paths on manifolds) resemble shortest paths in discrete trees. Further, the volume of balls grows exponentially with radius in hyperbolic space, similar to trees where the number of nodes within some distance
16
to the root grows exponentially. In contrast, this volume growth is only polynomial in Euclidean space and there- fore, the hyperbolic space has more âroomâ to ï¬t complex hierarchies and compress representations. In particular, hyperbolic embeddings can embed trees with arbitrary low distortion in just two-dimensions [131] whereas this is not possible in Euclidean space. This makes hyperbolic space a natural candidate to embed tree-like data and more generally, hyperbolic geometry offers an exciting alternative to Euclidean geometry for graphs that exhibit hierarchical structures, as it enables embeddings with much smaller distortion.
Before its use in machine learning applications, hyperbolic geometry has been extensively studied and used in network science research. Kleinberg [85] proposed a greedy algorithm for geometric rooting, which maps nodes in sensor networks to coordinates on a hyperbolic plane via spanning trees, and then performs greedy geographic routing. Hyperbolic geometry has also been used to study the structural properties of complex networks (networks with non- trivial topological features used to model real-world systems). Krioukov et al. [88] develop a geometric framework to construct scale-free networks (a family of complex networks with power-law degree distributions), and conversely show that any scale-free graph with some metric structure has an underlying hyperbolic geometry. Papadopoulos et al. [117] introduce the Popularity-Similarity (PS) framework to model the evolution and growth of complex networks. In this model, new nodes are likely to be connected to popular nodes (modelled by their radial coordinates in hyperbolic space) as well as similar nodes (modelled by the angular coordinates). This framework has further been used to map nodes in graphs to hyperbolic coordinates, by maximising the likelihood that the network is produced by the PS model [118]. Further works extend non-linear dimensionality reduction techniques such as LLE [14] to efï¬ciently map graphs to hyperbolic coordinates [8, 111].
More recently, there has been interest in learning hyperbolic representations of hierarchical graphs or trees, via gradient-based optimization. We review some of these machine learning-based algorithms next.
Poincar´e embeddings Nickel and Kiela [112] learn embeddings of hierarchical graphs such as lexical databases (e.g. WordNet) in the Poincar´e model hyperbolic space. Using our notations, this approach learns hyperbolic embeddings via the Poincar´e distance function:
dy(Z;, Z;) = dpoincare(Zi, Z;) |Z: â ZI ) arcosh| 1 + 2 : ( (1 = |Zi|13) â 125115)
Embeddings are then learned by minimizing distances between connected nodes while maximizing distances between disconnected nodes:
~ -Wi ~ d,(W,W) = âS> Wijlog â$â. = â 7 W, log Softmaxxgyy,,-0(âW,,), ij -W; iG k|Win=0® *
where the denominator is approximated using negative sampling. Note that since the hyperbolic space has a manifold structure, embeddings need to be optimized using Riemannian optimization techniques [21] to ensure that they remain on the manifold.
Other variants of these methods have been proposed. In particular, Nickel and Kiela [113] explore a different model of hyperbolic space, namely the Lorentz model (also known as the hyperboloid model), and show that it provides better numerical stability than the Poincar´e model. Another line of work extends non-Euclidean embeddings to mixed- curvature product spaces [70], which provide more ï¬exibility for other types of graphs (e.g. ring of trees). Finally, Chamberlain et al. [31] extend Poincar´e embeddings to incorporate skip-gram losses using hyperbolic inner products.
# 4.1.3 Outer product-based: Matrix factorization methods
Matrix factorization approaches learn embeddings that lead to a low rank representation of some similarity matrix s(W ), where s : R|V |Ã|V | â R|V |Ã|V | is a transformation of the weighted adjacency matrix, and many methods set it to the identity, i.e. s(W ) = W . Other transformations include the Laplacian matrix or more complex similarities derived from proximity measures such as the Katz Index, Common Neighbours or Adamic Adar. The decoder function in matrix factorization methods is a simple outer product:
W =DEC(Z;0") = 22". (3)
17
Matrix factorization methods learn embeddings by minimizing the regularization loss in Eq. (1) with:
Lo.rea(W, W; 0) = ||s(W) â WI. a
That is, d1(·, ·) in Eq. (1) is the Frobenius norm between the reconstructed matrix and the target similarity matrix. By minimizing the regularization loss, graph factorization methods learn low-rank representations that preserve structural information as deï¬ned by the similarity matrix s(W ). We now review important matrix factorization methods.
Graph factorization (GF) [6] learns a low-rank factorization for the adjacency matrix by minimizing graph regu- larization loss in Eq. (1) using:
a(W.W)= S> (Wy â Wi)? (v4,0;)â¬E
Recall that A is binary adjacency matrix, with Aij = 1 iif (vi, vj) â E. We can express the graph regularization loss in terms of Frobenius norm:
Lo.rna(W,W;0) = ||A-(W -âW)| 2 Fo
where · is the element-wise matrix multiplication operator. Therefore, GF also learns a low-rank factorization of the adjacency matrix W measured in Frobenuis norm. Note that the sum is only over existing edges in the graph, which reduces the computational complexity of this method from O(|V |2) to O(|E|).
Graph representation with global structure information (GraRep) [29] The methods described so far are all symmetric, that is, the similarity score between two nodes (vi, vj) is the same a the score of (vj, vi). This might be a limiting assumption when working with directed graphs as some nodes can be strongly connected in one direction and disconnected in the other direction. GraRep overcomes this limitation by learning two embeddings per node, a source embedding Z s and a target embedding Z t, which capture asymmetric proximity in directed networks. GraRep learns embeddings that preserve k-hop neighborhoods via powers of the adjacency and minimizes the graph regularization loss with:
wea ih)s (kth Lo.rea(W,W); 0) = ||D-* Ww â WH|)2,
for each 1 ⤠k ⤠K. GraRep concatenates all representations to get source embeddings Z s = [Z (1),s| . . . |Z (K),s] and target embeddings Z t = [Z (1),t| . . . |Z (K),t]. Finally, note that GraRep is not very scalable as the powers of Dâ1W might be dense matrices.
HOPE [115] Similar to GraRep, HOPE learns asymmetric embeddings but uses a different similarity measure. The distance function in HOPE is simply the Frobenius norm and the similarity matrix is a high-order proximity matrix (e.g. Adamic-Adar):
Weazz'! Larna(W,W; 0) = ||s(W) â ||.
The similarity matrix in HOPE is computed with sparse matrices, making this method more efï¬cient and scalable than GraRep.
# 4.1.4 Outer product-based: Skip-gram methods
Skip-gram graph embedding models were inspired by efï¬cient NLP methods modeling probability distributions over words for learning word embeddings [108, 120]. Skip-gram word embeddings are optimized to predict context words, or surrounding words, for each target word in a sentence. Given a sequence of words (w1, . . . , wT ), skip-gram will minimize the objective:
L=- > log P(wr_ilwe), âK<i< K,iZ0
18
@O-O-O 1 2 3 â' ,oomo 2, 3 Sampling OOOO Training Computing random skip-gram embeddings walks model @O-O-O
Figure 5: An overview of the pipeline for random-walk graph embedding methods. Reprinted with permission from [63].
for each target words wk. In practice, the conditional probabilities can be estimated using neural networks, and skip- gram methods can be trained efï¬ciently using negative sampling.
Perozzi et al. [121] empirically show the frequency statistics induced by random walks also follow Zipfâs law, thus motivating the development of skip-gram graph embedding methods. These methods exploit random walks on graphs and produce node sequences that are similar in positional distribution, as to words in sentences. In skip-gram graph embedding methods, the decoder function is also an outer product (Eq. (3)) and the graph regularization term is computed over random walks on the graph.
DeepWalk [121] was the ï¬rst attempt to generalize skip-gram models to graph-structured data. DeepWalk draws analogies between graphs and language. Speciï¬cally, writing a sentence is analogous to performing a random walk, where the sequence of nodes visited during the walk, is treated as the words of the sentence. DeepWalk trains neural networks by maximizing the probability of predicting context nodes for each target node in a graph, namely nodes that are close to the target node in terms of hops and graph proximity. For this purpose, node embeddings are decoded into probability distributions over nodes using row-normalization of the decoded matrix with softmax.
To train embeddings, DeepWalk generates sequences of nodes using truncated unbiased random walks on the graphâwhich can be compared to sentences in natural language modelsâand then maximize their log-likelihood. Each random walk starts with a node vi1 â V and repeatedly sample next node at uniform: vij+1 â {v â V | (vij , v) â E}. The walk length is a hyperparameter. All generated random-walk can then be passed to an NLP- embedding algorithm e.g. word2vecâs Skipgram model. This two-step paradigm introduced by Perozzi et al. [121] is followed by many subsequent works, such as node2vec [68].
We note that is common for underlying implementations to use two distinct representations for each node (one for when a node is center of a truncated random walk, and one when it is in the context). The implications of this modeling choice is studied further in [2].
Abu-El-Haija et al. [3] show that training DeepWalk, in expectation, is equivalent to first sampling integer g ~ (1, 2,..-, Tmax] with mass x [1, Fax} penny Fol. Specifically, if s(W) = E, [(D-1W)*] , then training Deep Walk is equivalent to minimizing:
Larua(W,W;0)=logC- So s(W)igWiy, (5) vi EV,vj EV
where C = J]; >> j exp(Wi;) is a normalizing constant. Note that computing C' requires summing over all nodes in the graph which is computationally expensive. DeepWalk overcomes this issue by using a technique called hierarchical softmax, which computes C efficiently using binary trees. Finally, note that by computing truncated random walks on the graph, DeepWalk embeddings capture high-order node proximity.
node2vec [68] is a random-walk based approach for unsupervised network embedding, that extends DeepWalkâs sampling strategy. The authors introduce a technique to generate biased random walks on the graph, by combining graph exploration through breadth ï¬rst search (BFS) and through depth ï¬rst search (DFS). Intuitively, node2vec also preserves high order proximities in the graph but the balance between BFS and DFS allows node2vec embeddings to capture local structures in the graph, as well as global community structures, which can lead to more informative embeddings. Finally, note that negative sampling [108] is used to approximate the normalization factor C in Eq. (5).
19
order of Method Model 3(W)iy DEC(Z;©),; | d,(W <s(W),W â DEC(Z:;0â)) proximity ae . â ) âi Distance Euclidean mMDS distance matrix \|Zi â Zjll2 ( vz high DEC(Z;0â);; LE Wi Zi = Z5 Dy Wii Wis n went) Non-Euclidean | Poincaré Wi; dpoincaré(Zi, Zj) | â Ly Wijlog Softmaxyyiv,,-0(âWiz) 1 Matrix GF Wi 212) awe so(Way â Wij)? â âkyyk (k),8° 7(k).t Vv _ wil th Outer product | Factorization GraRep (DW")ig 4, ; Z; Ww â Wilh K HOPE 3(W)iz ZZ} \|W- WII high DEC(Z; 0â);; DeepWalk | x E, [(D-'W)"],, Z} Z; â Yi Wij log Softmax, (Wij) high a Skip-gram node2vec | n2vWalk(W:p, q)ij Zi 3; â Diz Wij log Softmax; (Wij) high 2) 2; wys | «E, [(D-'W)'â], ZI; BCE(W, W) high
Table 2: An overview of unsupervised shallow embedding methods, where the encoding function is a simple embed- ding look-up Z = ENC(ÎE). Softmax represents sampled/hierarchical softmax; â for approximating random walks; n2vWalk is a traversal algorithm with (back) teleportation (approximates combination of BFS & DFS). BCE is the sigmoid cross entropy loss for binary classiï¬cation.
Watch Your Step (WYS) [3] Random walk methods are very sensitive to the sampling strategy used to generate random walks. For instance, some graphs may require shorter walks if local information is more informative that global graph structure, while in other graphs, global structure might be more important. Both DeepWalk and node2vec sampling strategies use hyper-parameters to control this, such as the length of the walk or ratio between breadth and depth exploration. Optimizing over these hyper-parameters through grid search can be computationally expensive and can lead to sub-optimal embeddings. WYS learns such random walk hyper-parameters to minimize the overall objective (in analogy: each graph gets to choose its own preferred âcontext sizeâ, such that the probability of predicting random walks is maximized). WYS shows that, when viewed in expectation, these hyperparameters only correspond in the objective to coefï¬cients to the powers of the adjacency matrix (W k)1â¤kâ¤K. These coefï¬cients are denoted q = (qk)1â¤kâ¤K and are learned through back-propagation. Should qâs learn a left-skewed distribution, then the embedding would prioritize local information and right-skewed distribution will enhance high-order relationships and graph global structure. This concept has been extended to other forms of attention to the âgraph contextâ, such using a personalized context distributions for each node [78].
Large scale Information Network Embedding (LINE) [140] learns embeddings that preserve ï¬rst and second order proximity. To learn ï¬rst order proximity preserving embeddings, LINE minimizes the graph regularization loss:
_~ T WP = ZZ Larec(W,W);0)=- > Wijlog o( WS). (4,9) |(vi,vj EE
# ij ).
LINE also assumes that nodes with multiple edges in common should have similar embeddings and learns second- order proximity preserving embeddings by minimizing:
Pe â 72! 72) WO = 2? zi oy _ wi Larno(W.W;0)=- S>â Wiylog macalloe es (id) |(vivj EE DY, exp(Wix)
.
Intuitively, LINE with second-order proximity decodes embeddings into context conditional distributions for each node p2(·|vi). Note that optimizing the second-order objective is computationally expensive as it requires a sum over the entire set of edges. LINE uses negative sampling to sample negative edges according to some noisy dis- tribution over edges. Finally, as in GraRep, LINE combines ï¬rst and second order embeddings with concatenation Z = [Z (1)|Z (1)].
20
Figure 6: Auto-encoder methods. The graph structure (stored as the graph adjacency matrix) is encoded and recon- structed using encoder-decoder networks. Models are trained by optimizing the graph regularization loss computed on the reconstructed adjacency matrix.
Hierarchical representation learning for networks (HARP) [37] Both node2vec and DeepWalk learn node em- beddings by minimizing non-convex functions, which can lead to local minimas. HARP introduces a strategy that computes initial embeddings, leading to more stable training and convergence. More precisely, HARP hierarchi- cally reduces the number of nodes in the graph via graph coarsening. Nodes are iteratively grouped into super nodes that form a graph with similar properties as the original graph, leading to multiple graphs with decreasing size (G1, . . . , GT ). Node embeddings are then learned for each coarsened graph using existing methods such as LINE or DeepWalk, and at time-step t, embeddings learned for Gt are used as initialized embedding for the random walk algorithm on Gtâ1. This process is repeated until each node is embedded in the original graph. The authors show that this hierarchical embedding strategy produces stable embeddings that capture macroscopic graph information.
Splitter [55] What if a node is not the correct âbase unitâ of analysis for a graph? Unlike HARP, which coarsens a graph to preserve high-level topological features, Splitter is a graph embedding approach designed to better model nodes which have membership in multiple communities. It uses the Persona decomposition [56], to create a derived graph, GP which may have multiple persona nodes for each original node in G (the edges of each original node are divided among its personas). GP can then be embedded (with some constraints) using any of the embedding methods discussed so far. The resulting representations allow persona nodes to be separated in the embedding space, and the authors show beneï¬ts to this on link prediction tasks.
Matrix view of Skip-gram methods As noted by [96], Skip-gram methods can be viewed as matrix factorization, and the methods discussed here are related to those of Matrix Factorization (Section 4.1.3). This relationship is discussed in depth by [124], who propose a general matrix factorization framework, NetMF, which uses the same underlying graph proximity information as DeepWalk, LINE, and node2vec. Casting the node embedding problem as matrix factorization can offer beneï¬ts like easier algorithmic analysis (e.g., convergence guarantees to unique globally- optimal points), and dense matrix undergoing decomposition can be sampled entry-wise [125].
# 4.2 Auto-encoders
Shallow embedding methods hardly capture non-linear complex structures that might arise in graphs. Graph auto- encoders were originally introduced to overcome this issue by using deep neural network encoder and decoder func- tions, due to their ability model non-linearities. Instead of exploiting the graph structure through the graph regular- ization term, auto-encoders directly incorporate the graph adjacency matrix in the encoder function. Auto-encoders generally have an encoding and decoding network which are multiple layers of non-linear layers. For graph auto- encoders, the encoder function has the form:
Z = ENC(W ; ÎE).
That is, the encoder is a function of the adjacency matrix W only. These models are trained by minimizing a recon- struction error objective and we review examples of such objectives next.
Structural Deep Network Embedding (SDNE) [152] learns auto-encoders that preserve first and second-order node proximity (Section 2.1). The SDNE encoder takes as input a node vector: a row of the adjacency matrix as they explicitly set s(W) = W, and produces node embeddings Z. The SDNE decoder return a reconstruction WwW, which is trained to recover the original graph adjacency matrix (Fig. 7). SDNE preserves second order node proximity by
21
Vertex i Vertex j Input Input Shared weights Embedding Distance Embedding <M â Shared weights Output Output
Figure 7: Illustration of the SDNE model. The embedding layer (denoted Z) is shown in green. Reprinted with permission from [63].
minimizing the graph regularization loss:
Il(s(W) â W) - BI + aspyw >> 8(W)agllZi â Z5113, a
where B is the indicator matrix for s(W) with B = 1[s(W) > 0]. Note that the second term is the regularization loss used by distance-based shallow embedding methods. The first term is similar to the matrix factorization regularization objective, except that W isnot computed using outer products. Instead, SDNE computes a unique embedding for each node in the graph using a decoder network.
Deep neural Networks for learning Graph Representations (DNGR) [30] Similar to SDNE, DNGR uses deep auto-encoders to encode and decode a node similarity matrix, s(W ). The similarity matrix is computed using a probabilistic method called random surï¬ng, that returns a probabilistic similarity matrix through graph exploration with random walks. Therefore, DNGR captures higher-order dependencies in the graph. The similarity matrix s(W ) is then encoded and decoded with stacked denoising auto-encoders [150], which allows to reduce the noise in s(W ). DNGR is optimized by minimizing the reconstruction error:
Lansa(W,W; 0) = ||s(W) â WIE.
# 4.3 Graph neural networks
In graph neural networks, both the graph structure and node features are used in the encoder function to learn structural representations of nodes:
Z = ENC(X, W; ÎE).
We ï¬rst review unsupervised graph neural networks, and will cover supervised graph neural networks in more details in Section 5.
Variational Graph Auto-Encoders (VGAE) [84] use graph convolutions [83] to learn node embeddings Z = GCN(W, X; ©â) (see Section 5.3.1 for more details about graph convolutions). The decoder is an outer product: DEC(Z;@â) = ZZ". The graph regularization term is the sigmoid cross entropy between the true adjacency and the predicted edge similarity scores:
Larec(W,W;0) = -(dXa â Wi,)log(1 â o(W,;)) + Wijlog os). a
Computing the regularization term over all possible nodes pairs is computationally challenging in practice, and the Graph Auto Encoders (GAE) model uses negative sampling to overcome this challenge.
22
Note that GAE is a deterministic model but the authors also introduce variational graph auto-encoders (VGAE), where they use variational auto-encoders to encode and decode the graph structure. In VGAE, the embedding Z is modelled as a latent variable with a standard multivariate normal prior p(Z) = N (Z|0, I) and the amortized inference network qΦ(Z|W, X) is also a graph convolution network. VGAE is optimized by minimizing the corresponding negative evidence lower bound:
NELBO(W, X; 0) = âE,,(z\w,x)[log p(W|Z)] + KL(qa(Z|W, X)||p(Z)) = Larea(W, W; 0) + KL(qa(Z|W, X)||p(Z)).
Iterative generative modelling of graphs (Graphite) [69] extends GAE and VGAE by introducing a more complex decoder, which iterates between pairwise decoding functions and graph convolutions. Formally, the graphite decoder repeats the following iteration:
pw 2020" ae ~ (125° IV | Zz) â GonW, Z)
where Z (0) are initialized using the output of the encoder network. By using this parametric iterative decoding process, Graphite learns more expressive decoders than other methods based on non-parametric pairwise decoding. Finally, similar to GAE, Graphite can be deterministic or variational.
Deep Graph Infomax (DGI) [148] is an unsupervised graph-level embedding method. Given one or more real (positive) graphs, each with its adjacency matrix W â R|V |Ã|V | and node features X â R|V |Ãd0, this method cre- ates fake (negative) adjacency matrices W â â R|V â|Ã|V â| and their features X â â R|V â|Ãd0. It trains (i) an encoder that processes real and fake samples, respectively giving Z = ENC(X, W ; ÎE) â R|V |Ãd and Z â = ENC(X â, W â; ÎE) â R|V |Ãd, (ii) a (readout) graph pooling function R : R|V |Ãd â Rd, and (iii) a descriminator function D : Rd à Rd â [0, 1] which is trained to output D(Zi, R(Z)) â 1 and D(Z â j , R(Z â)) â 0, respectively, for nodes corresponding to given graph i â V and fake graph j â V â. Speciï¬cally, DGI optimizes:
IV Ive min B,D le (Zi, RZ) = By 2, lo ( (1-D(Z;,R(Z-))), (6)
where Î contains ÎE and the parameters of R, D. In the ï¬rst expectation, DGI samples from the real (positive) graphs. If only one graph is given, it could sample some subgraphs from it (e.g. connected components). The second expectation samples fake (negative) graphs. In DGI, fake samples exhibit the real adjacency W â := W but fake features X â are a row-wise random permutation of real X, though other negative sampling strategies are plausible. The ENC used in DGI is a graph convolutional network, though any GNN can be used. The readout R summarizes an entire (variable-size) graph to a single (ï¬xed-dimension) vector. VeliËckovi´c et al. [148] use R as a row-wise mean, though other graph pooling might be used e.g. ones aware of the adjacency, R : R|V |Ãd à R|V |Ã|V | â Rd.
The optimization (Eq. (6)) is shown by [148] to maximize a lower-bound on the Mutual Information (MI) between the outputs of the encoder and the graph pooling function. In other words, it maximizes the MI between individual node representations and the graph representation.
Graphical Mutual Information [GMI, 119] presents another MI alternative: rather than maximizing MI of node information and an entire graph, GMI maximizes the MI between the representation of a node and its neighbors.
# 4.4 Summary of unsupervised embedding methods
This section presented a number of unsupervised embedding methods. Speciï¬cally, the only supervision signal is the graph itself, but no labels for nodes or the graph are processed by these methods.
Some of these methods (Sec. 4.1) are shallow, and ignore the node features X even if they exist. These shallow methods program the encoder as a âlook-up tableâ, parametrizing it by matrix â R|V |Ãd, where each row stores d- dimensional embedding vector for a node. These methods are applicable to transductive tasks where is only one graph: it stays ï¬xed between training and inference.
23
Figure 8: Unsupervised graph neural networks. Graph structure and input features are mapped to low-dimensional embeddings using a graph neural network encoder. Embeddings are then decoded to compute a graph regularization loss (unsupervised).
Auto-encoder methods (Sec. 4.2) are deeper, though they still ignore node feature matrix X. These are feed- forward neural networks where the network input is the adjacency matrix W . These methods are better suited when new nodes are expected at inference test time.
Finally, Graph neural networks (Sec. 4.3) are deep methods that process both the adjacency W and node fea- tures X. These methods are inductive, and are generally empericially outperform the above two classes, for node- classiï¬cation tasks, especially when nodes have features.
For all these unsupervised methods, the model output on the entire graph is â R|V |Ã|V | that the objective function encourages to well-predict the adjacency W or its transformation s(W ). As such, these models can compute latent representations of nodes that trained to reconstruct the graph structure. This latent representation can subsequently be used for tasks at hand, including, link prediction, node classiï¬cation, or graph classiï¬cation.
# 5 Supervised Graph Embedding
A common approach for supervised network embedding is to use an unsupervised network embedding method, like the ones described in Section 4 to ï¬rst map nodes to an embedding vector space, and then use the learned embeddings as input for another neural network. However, an important limitation with this two-step approach is that the unsupervised node embeddings might not preserve important properties of graphs (e.g. node labels or attributes), that could have been useful for a downstream supervised task.
Recently, methods combining these two steps, namely learning embeddings and predicting node or graph labels, have been proposed. We describe these methods next.
# 5.1 Shallow embedding methods
Similar to unsupervised shallow embedding methods, supervised shallow embedding methods use embedding look- ups to map nodes to embeddings. However, while the goal in unsupervised shallow embeddings is to learn a good graph representation, supervised shallow embedding methods aim at doing well on some downstream prediction task such as node or graph classiï¬cation.
Label propagation (LP) [169] is a very popular algorithm for graph-based semi-supervised node classiï¬cation. It directly learns embeddings in the label space, i.e. the supervised decoder function in LP is simply the identity function:
ËyN = DEC(Z; ÎC) = Z.
In particular, LP uses the graph structure to smooth the label distribution over the graph by adding a regularization term to the loss function, where the underlying assumption is that neighbor nodes should have similar labels (i.e. there exist some label consistency between connected nodes). The regularization in LP is computed with Laplacian eigenmaps:
La,rea(W, W; 9) = > WW â ij
where Wi; = laâ _ oF | 3. ®)
24
L8up -- DEC(Z; 9?) WwW t->| Cone - | Ww
Figure 9: Supervised graph regularization methods. The graph structure is not used in the encoder nor the decoder networks. It instead acts as a regularizer in the loss function.
LP minimizes this energy function over the space of functions that take ï¬xed values on labelled nodes (i.e. ËyN i = yN i âi|vi â VL) using an iterative algorithm that updates a unlabelled nodeâs label distribution via the weighted average of its neighborsâ labels.
There exists variants of this algorithm such as Label Spreading (LS) [167], which minimizes the energy function:
aN Yi Larna(W,W;@) = So Wij a vy (9)
where D; = > j Wj, is the degree of node v;. The supervised loss in label spreading is simply the sum of distances between predicted labels and ground truth labels (one-hot vectors):
LN SUP(yN , ËyN ; Î) = ||yN i â ËyN i ||2 2. i|viâVL
Note that the supervised loss is computed over labeled nodes only, while the regularization term is computed over all nodes in the graph. These methods are expected to work well with consistent graphs, that is graphs where node proximity in the graph is positively correlated with label similarity.
# 5.2 Graph regularization methods
Supervised graph regularization methods also aim at learning to predict graph properties such as node labels. Similar to shallow embeddings, these methods compute a graph regularization loss deï¬ned over the graph structure, and a supervised loss for the downstream task (Fig. 9). However, the main difference with shallow embeddings lies in the encoder network: rather than using embedding look-ups, graph regularization methods learn embeddings as parametric function deï¬ned over node features, which might capture valuable information for downstream applications. That is, encoder functions in these methods can be written as:
Z = ENC(X; ÎE).
We review two types of semi-supervised [34] graph regularization approaches: Laplacian-based regularization meth- ods and methods that use random walks to regularize embeddings.
# 5.2.1 Laplacian
Manifold Regularization (ManiReg) [16] builds on the LP model and uses Laplacian Eigenmaps to smoothen the label distribution via the regularization loss in Eq. (7). However, instead of using shallow embeddings to predict labels, ManiReg uses support vector machines to predict labels from node features. The supervised loss in ManiReg is computed as:
Laur ly, 9%;0) = »~ » Hyd), 0 ilvieVn 1<k<C
where H(x) = max(0, 1 â x) is the hinge loss, C is the number of classes, and ËyN Reproducing Kernel Hilbert Space (RKHS) functions that act on input features. i = f (Xi; ÎE) are computed using
25
Model | Z = ENC(X;0*) | @Â¥ = DEC(Z;0%) | Wij =DEC(Z;0â):; | Lanna(W,W; 0) LP Shallow WZ; â Zi] Dy Wa Wij GN = yW is fixed Vo; ⬠Vi Ls Shallow Zi â 258 Diy Way Dipeve lly â BIB ManiReg RKHS |Z; â Z;\|3 Diy Wis Wig SemiEmb FF-NN |Z; â Zl? Dy WW NGM CNN, LSTM. ... CNN, LSTM, ... \|Zi â Z;||? Dy Wa Wig Planetoid FF-NN FF-NN ZZ; Ey j,4) loge (; W, i)
Table 3: An overview of supervised shallow and graph regularization methods, where the graph structure is leveraged through the graph regularization term Lo Rna(W, W; 8).
Semi-supervised Embeddings (SemiEmb) [154] further extend ManiReg and instead of using simple SVMs, this method uses feed-forward neural networks (FF-NN) to learn embeddings Z = ENC(X; ÎE) and distance-based graph decoders:
Wi; = DEC(Z; 0â); = [Zi â Z,/?
where || · || can be the L2 or L1 norm. SemiEmb regularizes intermediate or auxiliary layers in the network using the same regularizer as the LP loss in Eq. (7). SemiEmb uses FF-NN to predict labels from intermediate embeddings, which are then compared to ground truth labels via the Hinge loss in Eq. (10).
Note that SemiEmb leverages multi-layer neural networks and regularizes intermediate hidden representations, while LP does not learn intermediate representations, and ManiReg only regularizes the last layer.
Neural Graph Machines (NGM) [26] More recently, NGM generalize the regularization objective in Eq. (7) to more complex neural architectures than feed-forward neural networks (FF-NN), such as Long short-term memory (LSTM) networks [76] or CNNs [93]. in contrast with previous methods, NGM use the cross entropy loss for classiï¬cation.
# 5.2.2 Skip-gram
The Laplacian-based regularization methods covered so far only capture ï¬rst order proximities in the graphs. Skip- gram graph regularization methods further extend these methods to incorporate random walks, which are effective at capturing higher-order proximities.
Planetoid [158] Unsupervised skip-gram methods like node2vec and DeepWalk learn embeddings in a multi-step pipeline where random walks are ï¬rst generated from the graph and then used to learn embeddings. These embeddings are not learned for a downstream classiï¬cation task which might be suboptimal. Planetoid extends random walk methods to leverage node label information during the embedding algorithm.
Planetoid ï¬rst maps nodes to embeddings Z = [Z c||Z F ] = ENC(X; ÎE) with neural networks, where Z c are node embeddings that capture structural information while Z F capture node feature information. The authors propose two variants, a transductive variant that directly learns embedding Z c (as an embedding lookup), and an inductive variant where Z c are computed with parametric mappings that act on input features X. Embeddings are then learned by minimizing a supervised loss and a graph regularization loss, where the regularization loss measures the ability to predict context using nodes embeddings, while the supervised loss measures the ability to predict the correct label. More speciï¬cally, the regularization loss in Planetoid is given by:
Larna(W, W;@) = âEG,j,~) logo (.W.;) ;
with Wi = Z} Z; andy ⬠{â1,1} with y = 1if (v;,v;) ⬠E is a positive pair and y = â1 if (v;, v;) is a negative pair. The distribution under the expectation is directly defined through a sampling processâ. The supervised loss in
5There are two kinds of sampling strategies to sample positive pairs of nodes i, j: (i.) samples drawn by conducting random walks, similar to DeepWalk and (ii.) samples drawn from the same class i.e. yi = yj . These samples are positive i.e. with γ = 1. The negative samples simply replace one of the nodes with another randomly-sampled (negative) node yielding γ = â1. The ratio of these kinds of samples are determined by hyperparameters.
26
Planetoid is the negative log-likelihood of predicting the correct labels:
i N al 1 ; al L8ur(y%, Gg"; 0) = âWal Yo DYE wilog Hi, (11) Ll eV, 1Sk<C
where i is a nodeâs index while k indicates label classes, and 7Â¥ are computed using a neural network followed by a softmax activation, mapping Z; to predicted labels.
# 5.3 Graph convolution framework
We now focus on (semi-)supervised neighborhood aggregation methods, where the encoder uses input features and the graph structure to compute embeddings:
Z = ENC(X, W ; ÎE).
We ï¬rst review the graph neural network modelâwhich was the ï¬rst attempt to use deep learning techniques on graph- structured dataâand other related frameworks such as message passing networks [62]. We then introduce a new Graph Convolution Framework (GCF), which is designed speciï¬cally for convolution-based graph neural networks. While GCF and other frameworks overlap on some methods, GCF emphasizes the geometric aspects of convolution and propagation, allowing to easily understand similarities and differences between existing convolution-based approaches.
# 5.3.1 The Graph Neural Network model and related frameworks
The Graph Neural Network model (GNN) [65, 132] The ï¬rst formulation of deep learning methods for graph- structured data dates back to the graph neural network GNN) model of Gori et al. [65]. This formulation views the supervised graph embedding problem as an information diffusion mechanism, where nodes send information to their neighbors until some stable equilibrium state is reached. More concretely, given randomly initialized node embeddings Z 0, the following recursion is applied until convergence:
Z t+1 = ENC(X, W, Z t; ÎE), (12)
where parameters ÎE are reused at every iteration. After convergence (t = T ), the node embeddings Z T are used to predict the ï¬nal output such as node or graph labels:
ËyS = DEC(X, Z T ; ÎS).
This process is repeated several times and the GNN parameters ÎE and ÎD are learned with backpropagation via the Almeda-Pineda algorithm [9, 122]. Note that by Banachâs ï¬xed point theorem, the iteration in Eq. (12) is guaranteed to converge to a unique solution when the iteration ENC is a contraction mapping. In particular, Scarselli et al. [132] explore maps that can be expressed using message passing networks:
Z t+1 i = f (Xi, Xj, Z t j; ÎE), j|(vi,vj )âE
where f (·) is a multi-layer perception (MLP) constrained to be a contraction mapping. On the other hand, the decoder function in GNNs does not need to fulï¬ll any constraint and can be any MLP.
Gated Graph Neural Networks (GGNN) [97] Gated Graph Sequence Neural Networks (GGSNN) or their simpler version GGNN are similar to GNNs but remove the contraction mapping requirement. In GGSNNs, the recursive algorithm in Eq. (12) is relaxed and approximated by applying mapping functions for a ï¬xed number of steps, where each mapping function is a gated recurrent unit [43] with parameters shared for every iteration. The GGSNN model is particularly useful for machine learning tasks with sequential structure (such as temporal graphs) as it outputs predictions at every step.
27
Figure 10: Supervised graph neural networks (GNNs). Rather than leveraging the graph structure to act as a regularizer, GNNs leverage the graph structure in the encoder to propagate information across neighbouring nodes and learn structural representations. Labels are then decoded and compared to ground truth labels (e.g. via the cross-entropy loss).
Message Passing Neural Networks (MPNN) Gilmer et al. [62] In the same vein, MPNN provide a framework for graph neural networks, encapsulating many recent graph neural networks. In contrast with the GNN model which runs for an indefinite number of iterations, MPNN provide an abstraction for modern graph neural networks, which consist of multi-layer neural networks with a fixed number of layers. At every layer £, message functions fâ(.) compute messages using neighborsâ hidden representations, which are then passed to aggregation functions hâ(.):
mt= So fH, H}) (13) G\(vivj)EE
# j|(vi,vj )âE
HE! = nh Ht mi), (14)
After layers of message passing, nodesâ hidden representations encode structural information within ¢-hop neighbor- hoods. Gilmer et al. [62] explore additional variations of message functions within the MPNN framework, and achieve state-of-the-art results for prediction tasks defined on molecular graphs.
GraphNet [11] This framework further extends the MPNN framework to learn representations for edges, nodes and the entire graph using message passing functions. This framework is more general than the MPNN framework as it incorporates edge and graph representations.
# 5.3.2 Graph Convolution Framework
We now introduce our Graph Convolution Framework (GCF); and as we shall see, many recent graph neural networks can be described using this framework. Different from the MPNN and GraphNet frameworks, our framework focuses on convolution-based methods, and draws direct connections between convolutions on grids and graph convolutions. While GCF does not include sophisticated message passing networks (e.g. messages computed with edge features), it emphasizes geometric properties of convolution operators, and provides a simple way to understand similarities and differences between state-of-the-art graph convolution methods.
GCF In GCF, node embeddings are initialized using input features H 0 = X â R|V |Ãd0, and then updated with multiple layers of graph convolutions. Graph convolution layers provide a generalization of standard convolutions on grids to graph-structured data and are composed of four main components:
⢠Patch functions, which deï¬ne the shape of convolutional ï¬lters (speciï¬es which nodes interact with each other at every step of convolution), that is matrices of size |V | à |V |:
(hi(W, H),..., fx(W, H)),
where H® are node features at layer ¢ and K is the total number of patches. Note that the number of patches XK might be defined in the spectral domain (e.g. rank of a matrix) or in the spatial domain (e.g. different neighborhood sizes). In standard CNNs (which are defined in the spatial pixel domain), these patches usually have rectangular shapes, where nodes (pixels in images) communicate with their top, left, bottom, and right neighbors. However, since graphs do not have a grid-like structure, the shape of convolutional filters does
28
Method Model 9k(-) h(m,...,â¢mx) Spectrum-based: SCNN Ww) + oy ) ~ Gi = URUy, oO Mr E=UAUâ¢, fc(W,H) = 9) â â om ; QI-D wp Spectrum-free: ChebNet gx(W, D) Til ern ât) a(S, Mx) fr(W, H) = 9x(W, D) GCN g(W,D) =(D+ND1?(W+1)(D+ 1-1? o(m4) Spatial: SAGE-mean | 9i(W, D) = I, g2(W, D) ~ Unorm(D~!W, @) a(m, + ma) fk (W, H) = o.(W, D) GGNN gi(W, D) = I, 92(W, D) = W GRU(m1, m2) Attention: MoNet gx(U®) = exp(â5(U* â px) Dg) (US â px)) o(0, mr) fk(W, H) = a(W - 9. (H)) GAT 9x(H) = LeakyReLU(HB' by @ b| BH") o([my||.--||mx])
Table 4: An overview of graph convolution methods described using GCF.
not follow regular patterns and is instead deï¬ned by the graph structure itself. While most methods use non- parametric patches at every layer, some methods such as attention-based methods (Section 5.5.2) learn patches using parametric functions.
* Convolution filtersâ weights at every layer, which are dy x de+1 matrices, representing the filter weights for each patch:
(Of,...,O%). Each column can be interpreted as a single convolution filterâs weight, and we stack d¢+. filters filters to compute features in the next layer. Similarly, dy and dy; are analogous to the number of channels in CNNs for layer ¢ and ¢ + 1 respectively. At every layer in the GCF, hidden representations H¢ are convolved with every patch using the convolution filter weights:
mit! = fix(W, HHO, for 1<k<K.
⢠Merging functions, which combine outputs from multiple convolution steps into one representation:
+1 +1 41 H⢠=h(my,...,m_g-).
For instance, h(·) can be averaging or concatenation along the feature dimension followed by some non-linearity. Alternatively, h(·) can also be a more complicated operation parameterized by a neural network.
After L convolution layers, nodesâ embeddings Z = H L can be used to decode node or graph labels. Next, we review state-of-the-art GNNs, including spectral and spatial graph convolution methods using the proposed GCF framework.
# 5.4 Spectral Graph Convolutions
Spectral methods apply convolutions in the the spectral domain of the graph Laplacian matrix. These methods broadly fall into two categories: spectrum-based methods, which explicitly compute the Laplacianâs eigendecomposition, and spectrum-free methods, which are motivated by spectral graph theory but do not explicitly compute the Laplacianâs eigenvectors. One disadvantage of spectrum-based methods is that they rely on the spectrum of the graph Laplacian and are therefore domain-dependent (i.e. cannot generalize to new graphs). Moreover, computing the Laplacianâs spectral decomposition is computationally expensive. Spectrum-free methods overcome these limitations by providing approximations for spectral ï¬lters.
# 5.4.1 Spectrum-based methods
Spectrum-based graph convolutions were the ï¬rst attempt to generalize convolutions to non-Euclidean graph domains. Given a signal x â R|V | deï¬ned on a Euclidean discrete domain (e.g. grid), applying any linear translation-equivariant operator (i.e. with a Toeplitz structure) Î in the discrete domain is equivalent to elementwise multiplication in the Fourier domain:
F(Îx) = Fx · Fθ. (15)
29
In non-Euclidean domains, the notion of translation (shift) is not deï¬ned and it is not trivial to generalize spatial convolutions operators (Î) to non-Euclidean domains. Note that Eq. (15) can be equivalently written as:
Îx = F â1(Fx · Fθ).
While the left hand side is the Euclidean spatial convolution which is not defined for general graphs, the right hand side is a convolution in the Fourier domain which is defined for non-Euclidean domains. In particular, if L = J â D-'/2WD~-1/? is the normalized Laplacian of a non-Euclidean graph, it is a real symmetric positive definite matrix and admits an orthonormal eigendecomposition: L = UAU'. If x ⬠RIV! is a signal defined on nodes in the graph, the discrete graph Fourier transform and its inverse can be written as:
Fe =%=U'se and F-'# = U2.
Spectral graph convolutions build on this observation to generalize convolutions to graphs, by learning convolution ï¬lters in the spectral domain of the normalized Laplacian matrix:
r*6=U(U'as-U'8) = Udiag(U'@)U' x
Using GCF, patch functions in spectrum-based methods can be expressed in terms of eigenvectors of the graph nor- malized Laplacian:
; fix(W, H°) = 9.(U)
for some function gk(.). Note that this dependence on the spectrum of the Laplacian makes spectrum-based methods domain-dependent (i.e. they can only be used in transductive settings).
Spectral Convolutional Neural Networks (SCNN) [25] learn convolution filters as multipliers on the eigenvalues of the normalized Laplacian. SCNN layers compute feature maps at layer ¢ + 1 with:
de Hh = o(> Ue FLURH), 1<j<deyi andl <i<d (16) i=l
where o(-) is a non-linear transformation, Ux is a |V| x A matrix containing the top K eigenvectors of L and Ff i are IX x K trainable diagonal matrices representing filtersâ weights in the spectral domain. We note that this spectral convolution operation can equivalently be written as:
K de High = o( oun Y Fiat), (17) i=1 k=1
write Eq. (17) using matrix notation as:
K Hotag ( Ss uu 0) ; k=1
where Of are trainable matrices of shape de x de+1 containing the filter weights. Using notation from GCF, SCNNs use patch functions expressed in terms of eigenvectors of the graph Laplacian g,(U) = Upup and the merging function h(.) is the sum operator followed by a non-linearity o(.).
Euclidean grids have a natural ordering of nodes (top, left, bottom, right) allowing the use of spatially localized convolution filters with fixed size, independent of the input size. In contrast, SCNN layers require O(ded¢41K) parameters, which is not scalable if K is O(|V|). Bruna et al. [25], Henaff et al. [75] note that spatial localization in the graph domain is equivalent to smoothness in the spectral domain, and propose smooth spectral multipliers in order to reduce the number of parameters in the model and avoid overfitting. Instead of learning K free parameters for each filter Fi, the idea behind smooth spectral multipliers is to parameterize Fi with polynomial interpolators such
30
De wp-'/2 7 Ht z S >. <> XN VA pow p12 x* sofimaxy Sef F _ 2 . 9 / x concat/avg e \ oe + = \ >< ~ elt eS = ef SSdO e+ H-1/2R7 H-1/2 yLeE! tL i, i Ho = o(D~ PWD? He ) vee wi, wi, (a) GCN aggregation. (b) GCN layer. (c) GAT aggregation.
sofimaxy concat/avg SSdO i, i wi, wi,
Figure 11: An illustration of neighborhood aggregation methods. Reprinted with permission from [4, 147].
as cubic splines and learn a ï¬xed number of interpolation coefï¬cients. This modeling assumption leads to a constant number of parameters, independent of the graph size |V |.
In practice, SCNNs can be used for node classiï¬cation or graph classiï¬cation with graph pooling. However, SCNNs have two major limitations: (1) computing the eigendecomposition of the graph Laplacian is computationally expensive and (2) this method is domain-dependent, as its ï¬lters are eigen-basis dependent and cannot be shared across graphs.
# 5.4.2 Spectrum-free methods
We now cover spectrum-free methods, which approximate convolutions in the spectral domain overcoming computa- tional limitations of SCNNs by avoiding explicit computation of the Laplacianâs eigendecomposition. SCNNs filters are neither localized nor parametric, in the sense that the parameters in Fi, in Eq. (17) are all free. To overcome this issue, sprectrum-free methods use polynomial expansions to approximate spectral filters in Eq. (16) via:
â a Fig = Pi(A)
where PEC) is a finite degree polynomial. Therefore, the total number of free parameters per filter depends on the polynomialâs degree, which is independent of the graph size. Assuming all eigenvectors are kept in Eq. (16), it can be rewritten as:
de Hitloo ( Ss P&(A)HS,) . i=1
K If we write Pf,(A) = 0 Of, (A*), this yields in matrix notation: k=1
K Hol = o( (7*) Heh), k=1
where OF is the matrix containing the polynomialsâ coefficients. These filters are k-localized, in the sense that the receptive field of each filter is k, and only nodes at a distance less than k will interact in the convolution operation. Since the normalized Laplacian is expressed in terms of the graph adjacency and degree matrices, we can write patch functions in spectrum-free method using notation from GCF:
Se (W, HH") = 9x(W, D).
Chebyshev Networks polynomials form an orthonormal basis in [â1, 1] and can be computed efï¬ciently with the recurrence:
T0(x) = 1, T1(x) = x, and Tk(x) = 2xTkâ1(x) â Tkâ2(x) for k ⥠2. (18)
31
In order to use Chebyshev polynomials, ChebNets rescale the normalized adjacency martrix L to ensure that its eigenvalues are in [â1, 1]. The convolution step in ChebNet can be written as:
K Hols o(Son(âat- 1) He), pat \Amax(L
where A;nax (L) is the largest eigenvalue of L.
Graph Convolution Networks (GCN) [83] further simplify ChebNet by letting kK = 2, adding a weight sharing constraint for the first and second convolutions of = -0§ := ©*, and assuming Anaz(L) ~ 2. This yields:
H+! = o((2I â L)Hâ0*) (19)
=o0(1+D?wD-'/?)H'e*), (20)
Furthermore, since I + Dâ1/2W Dâ1/2 has eigenvalues in [0, 2], applying Eq. (20) multiple times might lead to numerical instabilities or exploding gradients. To overcome this issue, GCNs use a re-normalization trick, which maps the eigenvalues of I + Dâ1/2W Dâ1/2 to [0, 1]:
I + Dâ1/2W Dâ1/2 â (D + I)â1/2(W + I)(D + I)â1/2.
Using GCF notation, GCN patch functions can be written as:
g1(W, D) = (D + I)â1/2(W + I)(D + I)â1/2,
and the graph convolution layer (see 11 for an illustration) is:
H? = o(g:(W, D) HOâ). (21)
This model has been applied to many problems including matrix completion [19], link prediction in knowledge graphs [133], and unsupervised graph embedding with variational inference [84].
Note that in contrast with spectrum-based methods covered in the previous section, both ChebyNet and GCN do not rely on computations of the Laplacianâs eigenvectors. The convolution step is only deï¬ned over the local neighborhood of each node (as deï¬ned by the adjacency matrix W ), and therefore we can view these methods as local message passing algorithms (see the Taxonomy in Fig. 3), even though these are motivated by spectral graph theory.
# 5.5 Spatial Graph Convolutions
Spectrum-based methods are limited by their domain dependency and cannot be applied in inductive settings. Fur- thermore, spectrum-free methods such as GCNs require storing the entire graph adjacency matrix, which can be computationally expensive for large graphs.
To overcome these limitations, spatial methods borrow ideas from standard CNNs, where convolutions are applied in the spatial domain as deï¬ned by the graph topology. For instance, in computer vision, convolutional ï¬lters are spatially localized by using ï¬xed rectangular patches around each pixel. Additionally, since pixels in images have a natural ordering (top, left, bottom, right), it is possible to reuse ï¬ltersâ weights at every location, signiï¬cantly reducing the total number of parameters. While such spatial convolutions cannot directly be applied in graph domains, spatial graph convolutions use ideas such as neighborhood sampling and attention mechanisms to overcome challenges posed by graphsâ irregularities.
# 5.5.1 Sampling-based spatial methods
Inductive representation learning on large graphs (SAGE) [72] While GCNs can be used in inductive settings, they were originally introduced for semi-supervised transductive settings, and the learned ï¬lters might strongly rely on the Laplacian used for training. Furthermore, GCNs require storing the entire graph in memory which can be computationally expensive for large graphs.
32
ela @ 1. Samp 2. Aggregate feature information â_3. Predict graph context and label from neighbors using aggregated information
Figure 12: Illustration of the GraphSAGE model. Reprinted with permission from [72].
To overcome these limitations, Hamilton et al. [72] propose SAGE, a general framework to learn inductive node embeddings while reducing the computational complexity of GCNs. Instead of averaging signals from all one-hop neighbors using multiplications with the Laplacian matrix, SAGE samples ï¬xed neighborhoods (of size q) to remove the strong dependency on a ï¬xed graph structure and generalize to new graphs. At every SAGE layer, nodes aggregate information from nodes sampled from their neighborhood, and the propagation rule can be written as:
HY = o(O{ Ht, + OSAGG({H!, : jlvy ⬠Sample(M(w;), @)})), (22)
where AGG(·) is an aggregation function, which can be any permutation invariant operator such as averaging (SAGE- mean) or max-pooling (SAGE-pool).
Note that SAGE can also be described using GCF. For simplicity, we describe SAGE-mean using GCF notation, and refer to [72] for details regarding other aggregation schemes. In GCF notation, SAGE-mean uses two patch learn- ing functions with g1(W, D) = I being the identity, and g2(W, D) â¼ Unorm(Dâ1W, q), where Unorm(·, q) indicates uniformly sampling q nonzero entries per row, followed by row normalization. That is, the second patch propagates information using neighborhood sampling, and the SAGE-mean layer is:
HS = (gi (W, D)H'} + go(W, D)Hâ®3).
# 5.5.2 Attention-based spatial methods
Attention mechanisms [146] have been successfully used in language models, and are particularly useful when op- erating on long sequence inputs, they allow models to identify relevant parts of the inputs. Similar ideas have been applied to graph convolution networks. Graph attention-based models learn to pay attention to important neighbors during the message passing step. This provides more ï¬exibility in inductive settings, compared to methods that rely on ï¬xed weights such as GCNs.
Broadly speaking, attention methods learn neighborsâ importance using parametric functions whose inputs are node features at the previous layer. Using GCF, we can abstract patch functions in attention-based methods as func- tions of the form:
Se(W, HH") = a(W - gn (H")),
where · indicates element-wise multiplication and α(·) is an activation function such as softmax or ReLU.
Graph Attention Networks (GAT) [147] is an attention-based version of GCNs, which incorporate self-attention mechanisms when computing patches. At every layer, GAT attends over the neighborhood of each node and learns to selectively pick nodes which lead to the best performance for some downstream task. The high-level intuition is similar to SAGE [72] and makes GAT suitable for inductive and transductive problems. However, instead of limiting the convolution step to ï¬xed size-neighborhoods as in SAGE, GAT allows each node to attend over the entirety of its neighbors and uses attention to assign different weights to different nodes in a neighborhood. The attention parameters are trained through backpropagation, and the GAT self-attention mechanism is:
gx (Hf) = LeakyReLU(H!B" by @ 6) BH!)
)
33
where â indicates summation of row and column vectors with broadcasting, and (b0, b1) and B are trainable attention weight vectors and weight matrix respectively. The edge scores are then row normalized with softmax. In practice, the authors propose to use multi-headed attention and combine the propagated signals with a concatenation of the average operator followed by some activation function. GAT can be implemented efï¬ciently by computing the self-attention scores in parallel across edges, as well as computing the output representations in parallel across nodes.
Mixture Model Networks (MoNet) Monti et al. [109] provide a general framework that works particularly well when the node features lie in multiple domains such as 3D point clouds or meshes. MoNet can be interpreted as an at- tention method as it learns patches using parametric functions in a pre-deï¬ned spatial domain (e.g. spatial coordinates), and then applies convolution ï¬lters in the graph domain.
Note that MoNet is a generalization of previous spatial approaches such as Geodesic CNN (GCNN) [106] and Anisotropic CNN (ACNN) [22], which both introduced constructions for convolution layers on manifolds. However, both GCNN and ACNN use fixed patches that are defined on a specific coordinate system and therefore cannot gen- eralize to graph-structured data. The MoNet framework is more general; any pseudo-coordinates such as local graph features (e.g. vertex degree) or manifold features (e.g. 3D spatial coordinates) can be used to compute the patches. More specifically, if U* are pseudo-coordinates and H° are features from another domain, then using GCF, the MoNet layer can be expressed as:
K Het = o( ow . (0) H'04), (23) k=1
where · is element-wise multiplication and gk(U s) are the learned parametric patches, which are |V | à |V | matrices. In practice, MoNet uses Gaussian kernels to learn patches, such that:
1 2 gn(U*) = ex( â=(U*- px) O, (US - w)),
where µk and Σk are learned parameters, and Monti et al. [109] restrict Σk to be a diagonal matrix.
# 5.6 Non-Euclidean Graph Convolutions
Hyperbolic shallow embeddings enable embeddings of hierarchical graphs with smaller distortion than Euclidean embeddings. However, one major downside of shallow embeddings is that they are inherently transductive and cannot generalize to new graphs. On the other hand, Graph Neural Networks, which leverage node features, have achieved state-of-the-art performance on inductive graph embedding tasks.
Recently, there has been interest in extending Graph Neural Networks to learn non-Euclidean embeddings and thus beneï¬t from both the expressiveness of Graph Neural Networks and hyperbolic geometry. One major challenge in doing so is how to perform convolutions in a non-Euclidean space, where standard operations such as inner products and matrix multiplications are not deï¬ned.
Hyperbolic Graph Convolutional Neural Networks [101] apply graph convolutions in hyperbolic space by leveraging the Euclidean tangent space, which provides a ï¬rst- order approximation of the hyperbolic manifold at a point. For every graph convolution step, node embeddings are mapped to the Euclidean tangent space at the origin, where convolutions are applied, and then mapped back to the hy- perbolic space. These approaches yield signiï¬cant improvements on graphs that exhibit hierarchical structure (Fig. 13).
# 5.7 Summary of supervised graph embedding
This section presented a number of methods that process task labels (e.g., node or graph labels) at training time. As such, model parameters are directly optimized on the upstream task.
Shallow methods use neither node features X nor adjacency W in the encoder (Section 5.1), but utilize the adja- cency to ensure consistency. Such methods are useful in transductive settings, if only one graph is given, without node features, a fraction of nodes are labeled, and the goal is to recover labels for unlabeled nodes.
34
(a) GCN layers. (b) HGCN layers.
Figure 13: Euclidean (left) and hyperbolic (right) embeddings of a tree graph. Hyperbolic embeddings learn natural hierarchies in the embedding space (depth indicated by color). Reprinted with permission from [32].
Graph regularization methods (Section 5.2) utilize node features X but not the adjacency W in the encoder. In In fact, they need only a graph In general, they can be applied when node features contain rich general, these methods are inductive (except of one version of planetoid [158]). at training time but not at inference (test) time. information.
Finally, graph convolution models (Sections 5.3, 5.4 & 5.5) utilize node features and adjacency in the encoder. At the time of writing, these models acheive superior empirical performance on many node-classiï¬cation tasks.
# 6 Applications
Graph representation learning methods can be applied to a wide range of applications, which can be unsupervised or supervised. In unsupervised applications, task-speciï¬c labels are not processed for learning embeddings. Rather, the graph is used as a form of self-supervision. Speciï¬cally, one can learn embeddings that preserve the graph (i.e. neighborhoods) or to preserve structural equivalence of nodes (see Section 2.2.3 for distinction), for instance, by applying unsupervised embedding methods (Section 4, upper branch of the Taxonomy in Fig. 3). On the other hand, in supervised applications, node embeddings are directly optimized for some speciï¬c task, such as classifying nodes or graphs. In this setting, supervised embedding methods (Section 5, lower branch of the Taxonomy in Fig. 3) can be applied. Table 5 summarizes some popular tasks in GRL, and pairs them with methods frequently used for the tasks. We review common unsupervised and supervised graph applications next.
# 6.1 Unsupervised applications
# 6.1.1 Graph reconstruction
The most standard unsupervised graph application is graph reconstruction. In this setting, the goal is to learn mapping functions (which can be parametric or not) that map nodes to dense distributed representations which preserve graph properties such as node similarity. Graph reconstruction doesnât require any supervision and models can be trained by minimizing a reconstruction error, which is the error in recovering the original graph from learned embeddings. Several algorithms were designed speciï¬cally for this task, and we refer to Section 4 for some examples of reconstruction objectives. At a high level, graph reconstruction is similar to dimensionality reduction in the sense that the main goal is summarize some input data into a low-dimensional embedding. Instead of compressing high dimensional vectors into low-dimensional ones as standard dimensionality reduction methods (e.g. PCA) do, the goal of graph reconstruction models is to compress data deï¬ned on graphs into low-dimensional vectors.
# 6.1.2 Link prediction
Link prediction is the task of predicting links in a graph. In other words, the goal in link prediction tasks is to predict missing or unobserved links (e.g. links that may appear in the future for dynamic and temporal networks). Link prediction can also help identifying spurious link and remove them. It is a major application of graph learning models
35
Method Training complexity Memory Computation Training input (a) DeepWalk [Perozzi, 2014] O(|V |d) O(c2d|V | log2 |V |) (b) node2vec [Grover, 2016] O(|V |d) O(c2d|V |) (c) (d) (e) LINE [Tang, 2015] HOPE [Ou, 2016] GF [Ahmed, 2013] SDNE [Wang, 2016] DNGR [Cao, 2016] GraRep [Cao, 2015] WYS [Abu-el-haija, 2018] O(|V |d) O(|V |bD) O(|V |2) O(|E|d) O(|V |bM) O(|V |3c + |V |2d) W (f) HARP [Chen, 2018] (g) Splitter [Epasto, 2019] (h) MDS [Kruskal, 1964] LP [Zhu, 2002] LLE [Roweis, 2000] (i) O(|V |2) O(|V |) inherits inherits O(|V |3) O(|E| Ã iters) W W X induces W (j) GNN Methods (k) (l) GTTF [Markowitz, 2021] SAGE [Hamilton, 2017] O(|V |D) O(bF H D) O(bF Hâ1D + bF HM) O(bF H D) O(bF Hâ1D + bF HM) O(|E|D + |V |M) X, W X, W X, W
Table 5: Summary and practical implications of GRL methods. Columns from right-to-left: practical scenarios where methods have demonstrated value; the input to the methods: (adjacency matrix W, node features X, or both); the hardware cost to train the method; finally, the left-most columns indicate the method c! denotes context size (e.g. length of random walk) and d denotes size of embedding dictionary. (a) DeepWalk and (b) node2vec store the embedding dictionary (with O(|V |d) floating-point entries); Training simulates from every node ⬠V a fixed number of walks each with fixed length: the dot products of al sampling (b) is used. for simplicity. the adjacency matrix multipâ non-zeros. (f) algorithm. He: are both smal (c) LI matrix, ications HARP re We as: (< |V asses. We derive the Training Complexity as follows. The method classes (a-h) are node embedding methods where c ⬠Z node-pairs within window of size c along the simulated walks are calculated. For each pair either the hierarchical softmax (a) or negative Both left-most |V| terms can be substituted by batch size b to analyze per-batch complexity. However, we analyze per epoch E [Tang, 2015], HOPE [Ou, 2016] and GF [Ahmed, 2013] loop-over all edges. (d) SDNE and DNGR train auto-encoders over with batch-size b, and D = 5°, de denotes total dimensions of all layers. M = >>, dede+1 accounts for floating-point ops of . For full-batch, b = |V|. (e) GraRep and WYS raise transition matrix to power c storing a dense square matrix with O(|V|?) Chen, 2018] and (g) Splitter can run any algorithm (e.g., (a-e)), therefore their complexity depends on the specific underlying sume that number of times HARP is invoked (i.e. scales of the graph), and the average number of personas per node for Splitter, ). (h) MDS computes all-pairs similarity and LE requires full eigendecomposition of the graph laplacian matrix (to recover the eigenvectors corresponding smallest eigenvalues). (i) LP and LLE loop over edges up-to âitersâ iterations, assuming that number of label classes is small. (j) contain graph convolution methods of GCN [Kipf, 2016], GAT [Veliékovié, 2018], MixHop [Abu-el-haija, 2019], GIN [Xu, 2018], GGNN (Li, 2015], M PNN [G âilmer, 2017], ChebNet [Defferrard, 2016] and MoNet [Monti, 2017]. We assume the naive full-batch implementation provided by authors of those methods. At each layer, every node averages its neighbors (total of |E|D floating-point operations) followed by multiplying by the layer filter (total of |V|M floating-point ops). Finally, to scale learning to larger graphs, sampling methods like (k-l) reduce the hardware requirement of training algorithm, seperating memory complexity from graph size. (k) SAGE and (1) GTTF sample F nodes for every node in batch (of size b), and F of their neighbors, and so on, until tree height reaches H. For both (k) and (1, we ignore runtime complexity of data pre-processing, as it is to be computed once per graph, irrespective of the number of (hyperparameter) sweep runs.
36
in industry, and common example of applications include predicting friendships in social networks or predicting user- product interactions in recommendation systems.
A common approach for training link prediction models is to mask some edges in the graph (positive and negative edges), train a model with the remaining edges and then test it on the masked set of edges. Note that link prediction is different from graph reconstruction. In link prediction, we aim at predicting links that are not observed in the original graph while in graph reconstruction, we only want to compute embeddings that preserve the graph structure through reconstruction error minimization.
Finally, while link prediction has similarities with supervised tasks in the sense that we have labels for edges (positive, negative, unobserved), we group it under the unsupervised class of applications since edge labels are usually not used during training, but only used to measure the predictive quality of embeddings. That is, models described in Section 4 can be applied to the link prediction problem.
# 6.1.3 Clustering
Clustering is particularly useful for discovering communities and has many real-world applications. For instance, clusters exist in biological networks (e.g. as groups of proteins with similar properties), or in social networks (e.g. as groups of people with similar interests).
Note that unsupervised methods introduced in this survey can be used to solve clustering problems: one can run a clustering algorithm (e.g. k-means) on embeddings that are output by an encoder. Further, clustering can be joined with the learning algorithm while learning a shallow [128] or Graph Convolution [40, 42] embedding model.
# 6.1.4 Visualization
There are many off-the-shelf tools for mapping graph nodes onto two-dimensional manifolds for the purpose of visual- ization. Visualizations allow network scientists to qualitatively understand graph properties, understand relationships between nodes or visualize node clusters. Among the popular tools are methods based on Force-Directed Layouts, with various web-app Javascript implementations.
Unsupervised graph embedding methods are also used for visualization purposes: by ï¬rst training an encoder- decoder model (corresponding to a shallow embedding or graph convolution network), and then mapping every node representation onto a two-dimensional space using, t-distributed stochastic neighbor embeddings (t-SNE) [103] or PCA [80]. Such a process (embedding â dimensionality reduction) is commonly used to qualitatively evaluate the performance of graph learning algorithms. If nodes have attributes, one can use these attributes to color the nodes on 2D visualization plots. Good embedding algorithms embed nodes that have similar attributes nearby in the embedding space, as demonstrated in visualizations of various methods [3, 83, 121]. Finally, beyond mapping every node to a 2D coordinate, methods which map every graph to a representation [7] can similarly be projected into two dimensions to visualize and qualitatively analyze graph-level properties.
# 6.2 Supervised applications
# 6.2.1 Node classiï¬cation
Node classiï¬cation is an important supervised graph application, where the goal is to learn node representations that can accurately predict node labels. For instance, node labels could be scientiï¬c topics in citation networks, or gender and other attributes in social networks.
Since labelling large graphs can be time-consuming and expensive, semi-supervised node classiï¬cation is a par- ticularly common application. In semi-supervised settings, only a small fraction of nodes is labelled and the goal is to leverage links between nodes to predict attributes of unlabelled nodes. This setting is transductive since there is only one partially labelled ï¬xed graph. It is also possible to do inductive node classiï¬cation, which corresponds to the task of classifying nodes in multiple graphs.
Note that node features can signiï¬cantly boost the performance on node classiï¬cation tasks if these are descriptive for the target label. Indeed, recent methods such as GCN [83] or GraphSAGE [72] have achieved state-of-the-art performance on multiple node classiï¬cation benchmarks due to their ability to combine structural information and semantics coming from features. On the other hand, other methods such as random walks on graphs fail to leverage feature information and therefore achieve lower performance on these tasks.
37
# 6.2.2 Graph classiï¬cation
Graph classiï¬cation is a supervised application the task is to predict graph-level labels given an input graph. Graph classiï¬cation tasks are inherently inductive, as new graphs are presented at test time. Many popular tasks are bio- chemical and some others are online social networks. In the biochemical domain, a common application uses graphs corresponding to molecules. In these graphs, each node represents an atom (e.g. with a feature vector thatâs a 1- hot encoding of its atomic number) and an edge between two nodes indicates a bond (feature vector could indicate bond type). The graph-level label is task dependant, e.g., indicating mutagenicity of a drug against bacteria, such as MUTANG [48]. In online social networks, nodes usually correspond to users and edges represent relationships or interactions. For instance, the Reddit graph classiï¬cation tasks [157] contain many graphs. Each graph corresponds to a discussion thread: user commenting on a userâs comment, an edge will connect the two. The goal is to predict the community (sub-reddit) where discussion took place, given the graph of comments.
Different than tasks of node-level (e.g., node classiï¬cation) and edge-level (e.g., link prediction) prediction, graph classiï¬cation tasks require an additional type of pooling, in order to aggregate node-level information into graph-level information. As discussed earlier, generalizing this notion of pooling to arbitrary graphs is non-trivial, and is an active research area. The pooling function should be invariant to the node order. Many methods use simple pooling, such as mean or sum of all graph node-level latent vectors e.g. [156]. Other methods use differentiable pooling [28, 59, 94, 160].
In addition to these supervised methods, a number of unsupervised methods for learning graph-level representa- tions have been proposed [7, 143, 144]. In fact, a notable class of unsupervised graph-level models are known as graph kernels (GKs), see [87, 151] for reviews.
While GKs are outside our main focus, here we brieï¬y mention connections of GKs to GRAPHEDM. GKs can be applied to graph-level tasks such as graph classiï¬cation. GK can implicitly implement a similarity function that maps any two graphs into a scalar. Traditional GKs compute similarity of two graphs by counting how many walks (or paths) the two graphs share in common â e.g., each walk can be encoded as a sequence of node labels. If nodes are not labeled, it is common to use the node degrees as labels. GKs are often analyzed in their ability to detect (sub-)graph isomorphism. Two (sub-)graphs are isomorphic if they are identical when ignoring node ordering. As sub-graph isomorphism is NP-hard, the 1-dimensional Weisfeiler-Leman (1-WL) heuristic deems two sub-graphs as isomorphic as follows. For each graph, node statistics are counted as histograms (e.g., count nodes with label âAâ, and how many of those have an edge to nodes with label âBâ, etc). The 1-WL heuristic deems two graphs as isomorphic if their histograms, extracted from 1-hop neighborhood, are identical. Certain GNNs, such as the Graph Isomorphism Network [GIN, 156] have been proven to realize the 1-WL heuristic i.e. it can map two graphs to the same latent vector if-and-only-if they would be deemed isomorphic by the 1-WL heuristic. Some recent work combines GKs and GNNs. As examples, Chen et al. [35] extract walk patterns; Du et al. [51] model similarity of two graphs using the similarity of the âtangent spaceâ of the objective w.r.t. the Gaussian-initialized GNN parameters. In both [35, 51], there is no actual GNN training. The training rather uses kernelized methods such as kernel support vector machines, on the pairwise Gram matrix. As such, these methods cannot be readily plugged into our our frameworks of GCF or GRAPHEDM. On other hand, other methods explicitly map a graph to the high-dimensional latent space, rather than implicitly compute graph-to-graph similarity scalar score. As an example, the k-GNN network of Morris et al. [110] can realize the k-WL heuristic (similar to 1-WL, but here histograms are computed up-to k-hop neighbors), yet it is explicitly programmed as a GNN. As such, the k-GNN model classes can be described in our frameworks of GCF and GRAPHEDM.
# 7 Conclusion and Open Research Directions
In this survey, we introduced a uniï¬ed framework to compare machine learning models for graph-structured data. We presented a generalized GRAPHEDM framework, previously applied to unsupervised network embedding, that encapsulates shallow graph embedding methods, graph auto-encoders, graph regularization methods and graph neu- ral networks. We also introduced a graph convolution framework (GCF), which is used to describe and compare convolution-based graph neural networks, including spatial and spectral graph convolutions. Using this framework, we introduced a comprehensive taxonomy of GRL methods, encapsulating over thirty methods for graph embedding (both supervised and unsupervised).
We hope that this survey will help and encourage future research in GRL, to hopefully solve the challenges that these models are currently facing. In particular, practitioners can reference the taxonomy to better understand the
38
available tools and applications, and easily identify the best method for a given problem. Additionally, researchers with new research questions can use the taxonomy to better classify their research questions, reference the existing work, identify the right baselines to compare to, and ï¬nd the appropriate tools to answer their questions.
While GRL methods have achieved state-of-the-art performance on node classiï¬cation or link prediction tasks, many challenges remain unsolved. Next, we discuss ongoing research directions and challenges that graph embedding models are facing.
Evaluation and benchmarks The methods covered in this survey are typically evaluated using standard node clas- siï¬cation or link prediction benchmarks. For instance, citation networks are very often used as benchmarks to evaluate graph embedding methods. However, these small citation benchmarks have drawbacks since results might signiï¬cantly vary based on datasetsâ splits, or training procedures (e.g. early stopping), as shown in recent work [135].
To better advance GRL methods, it is important to use robust and uniï¬ed evaluation protocols, and evaluate these methods beyond small node classiï¬cation and link prediction benchmarks. Recently, there has been progress in this direction, including new graph benchmarks with leaderboards [53, 77] and graph embedding libraries [58, 66, 153]. On a similar vein, Sinha et al. [137] recently proposed a set of tasks grounded in ï¬rst-order logic, to evaluate the reasoning capabilities of GNNs.
Fairness in Graph Learning The emerging ï¬eld of Fairness in Machine Learning seeks to ensure that models avoid correlation between âsensitiveâ features and the modelâs predicted output [107]. These concerns can be especially relevant for graph learning problems, where we must also consider the correlation of the graph structure (the edges) in addition to the feature vectors of the nodes with the ï¬nal output.
The most popular technique for adding fairness constraints to models relies on using adversarial learning to debias the modelâs predictions relative to the sensitive feature(s), and can be extended to GRL [23]. However, adversarial methods do not offer strong guarantees about the actual amount of bias removed. In addition, many debiasing methods may not be effective at the debiasing task in practice [64]. Recent work in the area aims to provide provable guarantees for debiasing GRL [116].
Application to large and realistic graphs Most learning methods on graphs are applied only on smaller datasets, with sizes of up to hundred of thousands of nodes. However, many real-world graphs are much larger, containing up to billions of nodes. Methods that scale for large graphs [95, 159] require a Distributed Systems setup with many machines, such as MapReduce [47]. Given a large graph that ï¬ts on a single hard disk (e.g. with one terabyte size) but does not ï¬t on RAM, how can a researcher apply a learning method on such a large graphs, using just a personal computer? Contrast this with a computer vision task by considering a large image dataset [50, 90]. It is possible to train such models on personal computers, as long as the model can ï¬t on RAM, regardless how large the dataset is. This problem may be particularly challenging for graph embedding models, especially those which have parameters that scale with the number of nodes in the graph.
Sometimes in industry, even choosing the best graph to use as input is difï¬cult. [71] describes Grale, a system at Google used for learning the correct graph from a variety of different features. Grale relies on techniques from similarity search (like locality sensitive hashing) to scale graph learning to extremely large datasets. Recent work extends the Grale model with an attention network [129] to allow end-to-end learning.
We foresee additional engineering and mathematical challenges in learning methods for large graphs, while still being operable on a single machine. We hope that researchers can focus on this direction to expose such learning tools to non-expert practitioners, such as a Neurologist wishing to analyze the sub-graph of the human brain given its neurons and synapses, stored as nodes and edges.
Molecule generation Learning on graphs has a great potential for helping molecular scientists to reduce cost and time in the laboratory. Researchers proposed methods for predicting quantum properties of molecules [52, 62] and for generating molecules with some desired properties [46, 98, 100, 136, 161]. A review of recent methods can be found in [54]. Many of these methods are concerned with manufacturing materials with certain properties (e.g. conductance and malleability), and others are concerned drug design [57, 79, 126].
Combinatorial optimization Computationally hard problems arise in a broad range of areas including routing sci- ence, cryptography, decision making and planning. Broadly speaking, a problem is computationally hard when the
39
algorithms that compute the optimal solution scale poorly with the problem size. There has been recent interest in leveraging machine learning techniques (e.g. reinforcement learning) to solve combinatorial optimization problems and we refer to [18] for a review of these methods.
Many hard problems (e.g. SAT, vertex cover...) can be expressed in terms of graphs and more recently, there has been interest in leveraging graph embeddings to approximate solutions of NP-hard problems [82, 114, 123, 134]. These methods tackle computationally hard problems from a data-driven perspective, where given multiple instances of a problem, the task is to predict whether a particular instance (e.g. node) belongs to the optimal solution. Other work focuses on optimizing graph partitions [20, 145], ï¬nding assignments that aim to fulï¬ll an objective (e.g. the minimum conductance cut).
One motivation for all these approaches is the relational inductive biases found in GNNs which enable them to better represent graphs compared to standard neural networks (e.g. permutation invariance). While these data-driven methods are still outperformed by existing solvers, promising results show that GNNs can generalize to larger problem instances [114, 123]. We refer to the recent survey on neural symbolic learning by Lamb et al. [91] for an extensive review of GNN-based methods for combinatorial optimization.
Non-Euclidean embeddings As we saw in Section 4.1.2 and Section 5.6, an important aspect of graph embeddings is the underlying space geometry. Graphs are discrete, high-dimensional, non-Euclidean structures, and there is no straightforward way to encode this information into low-dimensional Euclidean embeddings that preserve the graph topology [24]. Recently, there has been interest and progress into learning non-Euclidean embeddings such as hyper- bolic [112] or mixed-product space [70] embeddings. These non-Euclidean embeddings provide a promise for more expressive embeddings, compared to Euclidean embeddings. For instance, hyperbolic embeddings can represent hier- archical data with much smaller distortion than Euclidean embeddings [131] and have led to state-of-the-art results in many modern applications such as link prediction in knowledge graphs [10, 33] and linguistics tasks [92, 142].
Two common challenges arise with non-Euclidean embeddings: precision issues (e.g. near the boundary of the Poincar´e ball) in hyperbolic space [130, 163] and challenging Riemannian optimization [13, 21]. Additionally, it is also unclear how to pick the right geometry for a given input graph. While there exists some discrete measures for the tree-likeliness of graphs (e.g. Gromovâs four-point condition [81] and others [1, 5, 39]), an interesting open research direction is how to pick or learn the right geometry for a given discrete graph.
Theoretical guarantees There have been signiï¬cant advances in the design of graph embedding models, which improved over the state-of-the-art in many applications. However, there is still limited understanding about theoretical guarantees and limitations of graph embedding models. Understanding the representational power of GNNs is a nascent area of research, and recent works adapt existing results from learning theory to the problem of GRL [41, 61, 102, 105, 110, 149, 156]. The development of theoretical frameworks is critical to pursue in order to understand the theoretical guarantees and limitations of graph embedding methods.
# Acknowledgements
We thank Meissane Chami, Aram Galstyan, Megan Leszczynski, John Palowitch, Laurel Orr, and Nimit Sohoni for their helpful feedback and discussions. We also thank Carlo Vittorio Cannistraci, Thomas Kipf, Luis Lamb, Bruno Ribeiro and Petar VeliËckovi´c for their helpful feedback and comments on the ï¬rst version of this work. We grate- fully acknowledge the support of DARPA under Nos. FA86501827865 (SDH) and FA86501827882 (ASED); NIH under No. U54EB020405 (Mobilize), NSF under Nos. CCF1763315 (Beyond Sparsity), CCF1563078 (Volume to Velocity), and 1937301 (RTML); ONR under No. N000141712266 (Unifying Weak Supervision); the Moore Foun- dation, NXP, Xilinx, LETI-CEA, Intel, IBM, Microsoft, NEC, Toshiba, TSMC, ARM, Hitachi, BASF, Accenture, Ericsson, Qualcomm, Analog Devices, the Okawa Foundation, American Family Insurance, Google Cloud, Swiss Re, the HAI-AWS Cloud Credits for Research program, TOTAL, and members of the Stanford DAWN project: Teradata, Facebook, Google, Ant Financial, NEC, VMWare, and Infosys. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, ï¬ndings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reï¬ect the views, policies, or endorsements, either expressed or implied, of DARPA, NIH, ONR, or the U.S. Government.
40
# References
[1] Muad Abu-Ata and Feodor F Dragan. Metric tree-like structures in real-world networks: an empirical study. Networks, 67 (1):49â68, 2016.
[2] Sami Abu-El-Haija, Bryan Perozzi, and Rami Al-Rfou. Learning edge representations via low-rank asymmetric projections. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM â17, page 1787â1796, 2017.
[3] Sami Abu-El-Haija, Bryan Perozzi, Rami Al-Rfou, and Alexander A Alemi. Watch your step: Learning node embeddings via graph attention. In Advances in Neural Information Processing Systems, pages 9180â9190, 2018.
[4] Sami Abu-El-Haija, Bryan Perozzi, Amol Kapoor, Nazanin Alipourfard, Kristina Lerman, Hrayr Harutyunyan, Greg Ver Steeg, and Aram Galstyan. Mixhop: Higher-order graph convolutional architectures via sparsiï¬ed neighborhood mixing. In International Conference on Machine Learning, pages 21â29, 2019.
[5] Aaron B Adcock, Blair D Sullivan, and Michael W Mahoney. Tree-like structure in large social and information networks. In 2013 IEEE 13th International Conference on Data Mining, pages 1â10. IEEE, 2013.
[6] Amr Ahmed, Nino Shervashidze, Shravan Narayanamurthy, Vanja Josifovski, and Alexander J Smola. Distributed large- scale natural graph factorization. In Proceedings of the 22nd international conference on World Wide Web, pages 37â48. ACM, 2013.
[7] Rami Al-Rfou, Dustin Zelle, and Bryan Perozzi. Ddgk: Learning graph representations for deep divergence graph kernels. Proceedings of the 2019 World Wide Web Conference on World Wide Web, 2019.
[8] Gregorio Alanis-Lobato, Pablo Mier, and Miguel A Andrade-Navarro. Efï¬cient embedding of complex networks to hyper- bolic space via their laplacian. Scientiï¬c reports, 6:30108, 2016.
[9] Luis B Almeida. A learning rule for asynchronous perceptrons with feedback in a combinatorial environment. In Proceed- ings, 1st First International Conference on Neural Networks, volume 2, pages 609â618. IEEE, 1987.
[10] Ivana Balazevic, Carl Allen, and Timothy Hospedales. Multi-relational poincar´e graph embeddings. In Advances in Neural Information Processing Systems, pages 4463â4473, 2019.
[11] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in Neural Information Processing Systems, pages 4502â4510, 2016.
[12] Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
[13] Gary Becigneul and Octavian-Eugen Ganea. Riemannian adaptive optimization methods. In International Conference on Learning Representations, 2018.
[14] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in neural information processing systems, pages 585â591, 2002.
[15] Mikhail Belkin and Partha Niyogi. Semi-supervised learning on riemannian manifolds. Machine learning, 56(1-3):209â239, 2004.
[16] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of machine learning research, 7(Nov):2399â2434, 2006.
[17] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798â1828, 2013.
[18] Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour dâhorizon. arXiv preprint arXiv:1811.06128, 2018.
[19] Rianne van den Berg, Thomas N Kipf, and Max Welling. Graph convolutional matrix completion. arXiv preprint arXiv:1706.02263, 2017.
[20] Filippo Maria Bianchi, Daniele Grattarola, and Cesare Alippi. Spectral clustering with graph neural networks for graph pooling. In International Conference on Machine Learning, pages 874â883. PMLR, 2020.
41
[21] Silvere Bonnabel. Stochastic gradient descent on riemannian manifolds. IEEE Transactions on Automatic Control, 58(9): 2217â2229, 2013.
[22] Davide Boscaini, Jonathan Masci, Emanuele Rodol`a, and Michael Bronstein. Learning shape correspondence with anisotropic convolutional neural networks. In Advances in Neural Information Processing Systems, pages 3189â3197, 2016.
[23] Avishek Joey Bose and William Hamilton. Compositional fairness constraints for graph embeddings. arXiv preprint arXiv:1905.10674, 2019.
[24] Michael M Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond euclidean data. IEEE Signal Processing Magazine, 34(4):18â42, 2017.
[25] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann Lecun. Spectral networks and locally connected networks on graphs international conference on learning representations (iclr2014). CBLS, April, 2014.
[26] Thang D Bui, Sujith Ravi, and Vivek Ramavajjala. Neural graph learning: Training neural networks using graphs. Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 64â71, 2018.
[27] Hongyun Cai, Vincent W Zheng, and Kevin Chang. A comprehensive survey of graph embedding: problems, techniques and applications. IEEE Transactions on Knowledge and Data Engineering, 2018.
[28] Catalina Cangea, Petar Velickovic, Nikola Jovanovic, Thomas Kipf, , and Pietro Lio. Towards sparse hierarchical graph classiï¬ers. In arXiv:1811.01287, 2018.
[29] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Grarep: Learning graph representations with global structural information. In Proceedings of the 24th ACM International on Conference on Information and Knowledge Management, pages 891â900. ACM, 2015.
[30] Shaosheng Cao, Wei Lu, and Qiongkai Xu. Deep neural networks for learning graph representations. In AAAI, pages 1145â1152, 2016.
[31] Benjamin Paul Chamberlain, James Clough, and Marc Peter Deisenroth. Neural embeddings of graphs in hyperbolic space. arXiv preprint arXiv:1705.10359, 2017.
[32] Ines Chami, Zhitao Ying, Christopher R´e, and Jure Leskovec. Hyperbolic graph convolutional neural networks. In Advances in Neural Information Processing Systems, pages 4869â4880, 2019.
[33] Ines Chami, Adva Wolf, Da-Cheng Juan, Frederic Sala, Sujith Ravi, and Christopher R´e. Low-dimensional hyperbolic knowledge graph embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020.
[34] Olivier Chapelle, Bernhard Scholkopf, and Alexander Zien. Semi-supervised learning (chapelle, o. et al., eds.; 2006)[book reviews]. IEEE Transactions on Neural Networks, 20(3):542â542, 2009.
[35] Dexiong Chen, Laurent Jacob, and Julien Mairal. Convolutional kernel networks for graph-structured data. In International Conference on Machine Learning, 2020.
[36] Haochen Chen, Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. A tutorial on network embeddings. arXiv preprint arXiv:1808.02590, 2018.
[37] Haochen Chen, Bryan Perozzi, Yifan Hu, and Steven Skiena. Harp: Hierarchical representation learning for networks. In Thirty-Second AAAI Conference on Artiï¬cial Intelligence, 2018.
[38] Haochen Chen, Xiaofei Sun, Yingtao Tian, Bryan Perozzi, Muhao Chen, and Steven Skiena. Enhanced network embed- dings via exploiting edge labels. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, CIKM â18, page 1579â1582, 2018.
[39] Wei Chen, Wenjie Fang, Guangda Hu, and Michael W Mahoney. On the hyperbolicity of small-world and treelike random graphs. Internet Mathematics, 9(4):434â491, 2013.
[40] Zhengdao Chen, Joan Bruna Estrach, and Lisha Li. Supervised community detection with line graph neural networks. In 7th International Conference on Learning Representations, ICLR 2019, 2019.
[41] Zhengdao Chen, Soledad Villar, Lei Chen, and Joan Bruna. On the equivalence between graph isomorphism testing and function approximation with gnns. In Advances in Neural Information Processing Systems, pages 15894â15902, 2019.
42
[42] Wei-Lin Chiang, Xuanqing Liu, Si Si, Yang Li, Samy Bengio, and Cho-Jui Hsieh. Cluster-gcn: An efï¬cient algorithm for training deep and large graph convolutional networks. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2019. URL https://arxiv.org/pdf/1905.07953.pdf.
[43] Kyunghyun Cho, Bart Van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
[44] Michael AA Cox and Trevor F Cox. Multidimensional scaling. In Handbook of data visualization, pages 315â347. Springer, 2008.
[45] Daniel Fernando Daza Cruz, Thomas Kipf, and Max Welling. A modular framework for unsupervised graph representation learning. 2019.
[46] Nicola De Cao and Thomas Kipf. Molgan: An implicit generative model for small molecular graphs. arXiv preprint arXiv:1805.11973, 2018.
[47] Jeffrey Dean and Sanjay Ghemawat. Mapreduce: Simpliï¬ed data processing on large clusters. Commun. ACM, page 107â113, 2008. doi: 10.1145/1327452.1327492. URL https://doi.org/10.1145/1327452.1327492.
[48] Asim Kumar Debnath, Rosa L. Lopez de Compadre, Gargi Debnath, Alan J. Shusterman, , and Corwin Hansch. Structure- activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies and hydrophobicity. In J. Med. Chem., pages 786â797, 1991.
[49] Micha¨el Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral ï¬ltering. In Advances in Neural Information Processing Systems, pages 3844â3852, 2016.
[50] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
[51] Simon S Du, Kangcheng Hou, Russ R Salakhutdinov, Barnabas Poczos, Ruosong Wang, and Keyulu Xu. Graph neural tangent kernel: Fusing graph neural networks with graph kernels. In Advances in Neural Information Processing Systems, 2019.
[52] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Al´an Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular ï¬ngerprints. In Advances in neural information processing systems, pages 2224â2232, 2015.
[53] Vijay Prakash Dwivedi, Chaitanya K Joshi, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. arXiv preprint arXiv:2003.00982, 2020.
[54] Daniel C Elton, Zois Boukouvalas, Mark D Fuge, and Peter W Chung. Deep learning for molecular designâa review of the state of the art. Molecular Systems Design & Engineering, 4(4):828â849, 2019.
[55] Alessandro Epasto and Bryan Perozzi. Is a single embedding enough? learning node representations that capture multiple social contexts. In The World Wide Web Conference, WWW â19, page 394â404, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366748. doi: 10.1145/3308558.3313660. URL https://doi.org/10.1145/ 3308558.3313660.
[56] Alessandro Epasto, Silvio Lattanzi, and Renato Paes Leme. Ego-splitting framework: From non-overlapping to overlapping clusters. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD â17, page 145â154, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450348874. doi: 10.1145/3097983.3098054. URL https://doi.org/10.1145/3097983.3098054.
[57] Qingyuan Feng, Evgenia Dueva, Artem Cherkasov, and Martin Ester. Padme: A deep learning-based framework for drug- target interaction prediction. arXiv preprint arXiv:1807.09741, 2018.
[58] Matthias Fey and Jan Eric Lenssen. Fast graph representation learning with pytorch geometric. arXiv preprint arXiv:1903.02428, 2019.
[59] Hongyang Gao and Shuiwang Ji. Graph u-nets. In Proceedings of the 36th International Conference on Machine Learning, 2019.
[60] Victor Garcia and Joan Bruna. Few-shot learning with graph neural networks. In International Conference on Learning Representations (ICLR), 2018.
43
[61] Vikas K Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. arXiv preprint arXiv:2002.06157, 2020.
[62] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1263â1272. JMLR. org, 2017.
[63] PrimoËz Godec. https://towardsdatascience.com/graph-embeddings-the-summary-cc6075aba007, 2018.
[64] Hila Gonen and Yoav Goldberg. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. arXiv preprint arXiv:1903.03862, 2019.
[65] Marco Gori, Gabriele Monfardini, and Franco Scarselli. A new model for learning in graph domains. In Proceedings. 2005 IEEE International Joint Conference on Neural Networks, 2005., volume 2, pages 729â734. IEEE, 2005.
[66] Palash Goyal and Emilio Ferrara. Gem: a python package for graph embedding methods. Journal of Open Source Software, 3(29):876, 2018.
[67] Palash Goyal and Emilio Ferrara. Graph embedding techniques, applications, and performance: A survey. Knowledge-Based Systems, 151:78â94, 2018.
[68] Aditya Grover and Jure Leskovec. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 855â864. ACM, 2016.
[69] Aditya Grover, Aaron Zweig, and Stefano Ermon. Graphite: Iterative generative modeling of graphs. In International Conference on Machine Learning, pages 2434â2444, 2019.
[70] Albert Gu, Frederic Sala, Beliz Gunel, and Christopher R´e. Learning mixed-curvature representations in product spaces. International Conference on Learning Representations, 2018.
In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD â20, page 2523â2532, New York, NY, USA, 2020. Association for Computing Machinery. ISBN 9781450379984. doi: 10.1145/ 3394486.3403302. URL https://doi.org/10.1145/3394486.3403302.
[72] Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024â1034, 2017.
[73] William L Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. arXiv preprint arXiv:1709.05584, 2017.
[74] David K Hammond, Pierre Vandergheynst, and R´emi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129â150, 2011.
[75] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv preprint arXiv:1506.05163, 2015.
[76] Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â1780, 1997.
[77] Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. Open graph benchmark: Datasets for machine learning on graphs. arXiv preprint arXiv:2005.00687, 2020.
[78] Di Huang, Zihao He, Yuzhong Huang, Kexuan Sun, Sami Abu-El-Haija, Bryan Perozzi, Kristina Lerman, Fred Morstatter, In Companion Proceedings of the Web and Aram Galstyan. Graph embedding with personalized context distribution. Conference 2020, WWW â20, page 655â661, 2020.
[79] Wengong Jin, Regina Barzilay, and Tommi Jaakkola. Junction tree variational autoencoder for molecular graph generation. In International Conference on Machine Learning, 2018.
[80] Ian Jolliffe. Principal component analysis. In International encyclopedia of statistical science, pages 1094â1096. Springer, 2011.
[81] Edmond Jonckheere, Poonsuk Lohsoonthorn, and Francis Bonahon. Scaled gromov hyperbolic graphs. Journal of Graph Theory, 2008.
44
[82] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In Advances in Neural Information Processing Systems, pages 6348â6358, 2017.
[83] Thomas N Kipf and Max Welling. Semi-supervised classiï¬cation with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
[84] Thomas N Kipf and Max Welling. Variational graph auto-encoders. arXiv preprint arXiv:1611.07308, 2016.
[85] Robert Kleinberg. Geographic routing using hyperbolic space. In IEEE INFOCOM 2007-26th IEEE International Confer- ence on Computer Communications, pages 1902â1909. IEEE, 2007.
[86] Ioannis Konstas, Vassilios Stathopoulos, and Joemon M. Jose. On social networks and collaborative recommendation. In Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, pages 195â202, 2009.
[87] N. M. Kriege, F. D. Johansson, and C. Morris. A survey on graph kernels. In Applied Network Science, pages 1â42, 2020.
[88] Dmitri Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, Amin Vahdat, and Mari´an Bogun´a. Hyperbolic geometry of complex networks. Physical Review E, 82(3):036106, 2010.
[89] Joseph B Kruskal. Multidimensional scaling by optimizing goodness of ï¬t to a nonmetric hypothesis. Psychometrika, 29(1): 1â27, 1964.
[90] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, Tom Duerig, and Vittorio Ferrari. The open images dataset v4: Uniï¬ed image classiï¬cation, object detection, and visual relationship detection at scale. IJCV, 2020.
[91] Luis Lamb, Artur Garcez, Marco Gori, Marcelo Prates, Pedro Avelar, and Moshe Vardi. Graph neural networks meet neural-symbolic computing: A survey and perspective. arXiv preprint arXiv:2003.00330, 2020.
[92] Matthew Le, Stephen Roller, Laetitia Papaxanthos, Douwe Kiela, and Maximilian Nickel. Inferring concept hierarchies from text corpora via hyperbolic embeddings. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3231â3241, 2019.
[93] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541â551, 1989.
[94] Junhyun Lee, Inyeop Lee, , and Jaewoo Kang. Self-attention graph pooling. In In International Conference on Machine Learning, 2019.
[95] Adam Lerer, Ledell Wu, Jiajun Shen, Timothee Lacroix, Luca Wehrstedt, Abhijit Bose, and Alex Peysakhovich. PyTorch- BigGraph: A Large-scale Graph Embedding System. In Proceedings of the 2nd SysML Conference, Palo Alto, CA, USA, 2019.
[96] Omer Levy and Yoav Goldberg. Neural word embedding as implicit matrix factorization. In Advances in neural information processing systems, pages 2177â2185, 2014.
[97] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493, 2015.
[98] Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, and Peter Battaglia. Learning deep generative models of graphs. arXiv preprint arXiv:1803.03324, 2018.
[99] David Liben-Nowell and Jon Kleinberg. The link-prediction problem for social networks. Journal of the American society for information science and technology, 58(7):1019â1031, 2007.
[100] Qi Liu, Miltiadis Allamanis, Marc Brockschmidt, and Alexander Gaunt. Constrained graph variational autoencoders for molecule design. In Advances in Neural Information Processing Systems, pages 7795â7804, 2018.
[101] Qi Liu, Maximilian Nickel, and Douwe Kiela. Hyperbolic graph neural networks. In Advances in Neural Information Processing Systems, pages 8228â8239, 2019.
[102] Andreas Loukas. What graph neural networks cannot learn: depth vs width. arXiv preprint arXiv:1907.03199, 2019.
45
[103] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine learning research, 9(Nov): 2579â2605, 2008.
[104] Elan Sopher Markowitz, Keshav Balasubramanian, Mehrnoosh Mirtaheri, Sami Abu-El-Haija, Bryan Perozzi, Greg Ver Steeg, and Aram Galstyan. Graph traversal with tensor functionals: A meta-algorithm for scalable learning. In International Conference on Learning Representations, 2021.
[105] Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. In Interna- tional Conference on Learning Representations, 2018.
[106] Jonathan Masci, Davide Boscaini, Michael Bronstein, and Pierre Vandergheynst. Geodesic convolutional neural networks on riemannian manifolds. In Proceedings of the IEEE international conference on computer vision workshops, pages 37â45, 2015.
[107] Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635, 2019.
[108] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111â3119, 2013.
[109] Federico Monti, Davide Boscaini, Jonathan Masci, Emanuele Rodola, Jan Svoboda, and Michael M Bronstein. Geometric deep learning on graphs and manifolds using mixture model cnns. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5115â5124, 2017.
[110] Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artiï¬cial Intelligence, volume 33, pages 4602â4609, 2019.
[111] Alessandro Muscoloni, Josephine Maria Thomas, Sara Ciucci, Ginestra Bianconi, and Carlo Vittorio Cannistraci. Machine learning meets complex networks via coalescent embedding in the hyperbolic space. Nature communications, 8(1):1â19, 2017.
[112] Maximillian Nickel and Douwe Kiela. Poincar´e embeddings for learning hierarchical representations. In Advances in neural information processing systems, pages 6338â6347, 2017.
[113] Maximillian Nickel and Douwe Kiela. Learning continuous hierarchies in the lorentz model of hyperbolic geometry. In International Conference on Machine Learning, pages 3779â3788, 2018.
[114] Alex Nowak, Soledad Villar, Afonso S Bandeira, and Joan Bruna. Revised note on learning algorithms for quadratic assign- ment with graph neural networks. arXiv preprint arXiv:1706.07450, 2017.
[115] Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1105â 1114. ACM, 2016.
[116] John Palowitch and Bryan Perozzi. Monet: Debiasing graph embeddings via the metadata-orthogonal training unit. arXiv preprint arXiv:1909.11793, 2019.
[117] Fragkiskos Papadopoulos, Maksim Kitsak, M ´Angeles Serrano, Mari´an Bogun´a, and Dmitri Krioukov. Popularity versus similarity in growing networks. Nature, 489(7417):537â540, 2012.
[118] Fragkiskos Papadopoulos, Constantinos Psomas, and Dmitri Krioukov. Network mapping by replaying hyperbolic growth. IEEE/ACM Transactions on Networking, 23(1):198â211, 2014.
[119] Zhen Peng, Wenbing Huang, Minnan Luo, Qinghua Zheng, Yu Rong, Tingyang Xu, and Junzhou Huang. Graph Repre- sentation Learning via Graphical Mutual Information Maximization. In Proceedings of The Web Conference, 2020. doi: https://doi.org/10.1145/3366423.3380112.
[120] Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceed- ings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532â1543, 2014.
[121] Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 701â710. ACM, 2014.
46
[122] Fernando J Pineda. Generalization of back propagation to recurrent and higher order neural networks. In Neural information processing systems, pages 602â611, 1988.
[123] Marcelo Prates, Pedro HC Avelar, Henrique Lemos, Luis C Lamb, and Moshe Y Vardi. Learning to solve np-complete In Proceedings of the AAAI Conference on Artiï¬cial Intelligence, problems: A graph neural network for decision tsp. volume 33, pages 4731â4738, 2019.
[124] Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Kuansan Wang, and Jie Tang. Network embedding as matrix factorization: Unifying deepwalk, line, pte, and node2vec. In Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pages 459â467, 2018.
[125] Jiezhong Qiu, Yuxiao Dong, Hao Ma, Jian Li, Chi Wang, Kuansan Wang, and Jie Tang. Netsmf: Large-scale network embedding as sparse matrix factorization. In The World Wide Web Conference, WWW â19, page 1509â1520, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450366748. doi: 10.1145/3308558.3313446. URL https://doi.org/10.1145/3308558.3313446.
[126] Matthew Ragoza, Joshua Hochuli, Elisa Idrobo, Jocelyn Sunseri, and David Ryan Koes. Proteinâligand scoring with con- volutional neural networks. Journal of Chemical Information and Modeling, 57(4):942â957, 2017. doi: 10.1021/acs.jcim. 6b00740. URL https://doi.org/10.1021/acs.jcim.6b00740. PMID: 28368587.
[127] Sam T Roweis and Lawrence K Saul. Nonlinear dimensionality reduction by locally linear embedding. science, 290(5500): 2323â2326, 2000.
[128] Benedek Rozemberczki, Ryan Davies, Rik Sarkar, and Charles Sutton. Gemsec: Graph embedding with self clustering. In Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, ASONAM â19, page 65â72, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450368681. doi: 10.1145/3341161.3342890. URL https://doi.org/10.1145/3341161.3342890.
[129] Benedek Rozemberczki, Peter Englert, Amol Kapoor, Martin Blais, and Bryan Perozzi. Pathï¬nder discovery networks for neural message passing. In Proceedings of the Web Conference 2021, WWW â21, page 2547â2558, New York, NY, USA, ISBN 9781450383127. doi: 10.1145/3442381.3449882. URL https: 2021. Association for Computing Machinery. //doi.org/10.1145/3442381.3449882.
[130] Frederic Sala, Chris De Sa, Albert Gu, and Christopher Re. Representation tradeoffs for hyperbolic embeddings. In Inter- national Conference on Machine Learning, pages 4460â4469, 2018.
[131] Rik Sarkar. Low distortion delaunay embedding of trees in hyperbolic plane. In International Symposium on Graph Drawing, pages 355â366. Springer, 2011.
[132] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61â80, 2009.
[133] Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pages 593â607. Springer, 2018.
[134] Daniel Selsam, Matthew Lamm, Benedikt B¨unz, Percy Liang, Leonardo de Moura, and David L Dill. Learning a sat solver from single-bit supervision. arXiv preprint arXiv:1802.03685, 2018.
[135] Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, and Stephan G¨unnemann. Pitfalls of graph neural network evaluation. arXiv preprint arXiv:1811.05868, 2018.
[136] Martin Simonovsky and Nikos Komodakis. Graphvae: Towards generation of small graphs using variational autoencoders. arXiv preprint arXiv:1802.03480, 2018.
[137] Koustuv Sinha, Shagun Sodhani, Joelle Pineau, and William L Hamilton. Evaluating logical generalization in graph neural networks. arXiv preprint arXiv:2003.06560, 2020.
[138] Balasubramaniam Srinivasan and Bruno Ribeiro. On the equivalence between positional node embeddings and structural graph representations. In International Conference on Learning Representations, 2020. URL https://openreview. net/forum?id=SJxzFySKwH.
[139] Chris Stark, Bobby-Joe Breitkreutz, Teresa Reguly, Lorrie Boucher, Ashton Breitkreutz, and Mike Tyers. Biogrid: a general repository for interaction datasets. Nucleic acids research, 34(suppl 1):D535âD539, 2006.
47
[140] Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pages 1067â1077. International World Wide Web Conferences Steering Committee, 2015.
[141] Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. science, 290(5500):2319â2323, 2000.
[142] Alexandru Tifrea, Gary Becigneul, and Octavian-Eugen Ganea. Poincare glove: Hyperbolic word embeddings. In Interna- tional Conference on Learning Representations, 2018.
[143] Anton Tsitsulin, Davide Mottin, Panagiotis Karras, Alexander Bronstein, and Emmanuel M¨uller. Netlsd: Hearing the shape of a graph. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD â18, page 2347â2356, 2018.
[144] Anton Tsitsulin, Marina Munkhoeva, and Bryan Perozzi. Just slaq when you approximate: Accurate spectral distances for web-scale graphs. In Proceedings of The Web Conference 2020, WWW â20, page 2697â2703, 2020.
[145] Anton Tsitsulin, John Palowitch, Bryan Perozzi, and Emmanuel M¨uller. Graph clustering with graph neural networks. arXiv preprint arXiv:2006.16904, 2020.
[146] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998â6008, 2017.
[147] Petar VeliËckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Lio, and Yoshua Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.
[148] Petar VeliËckovi´c, William Fedus, William L. Hamilton, Pietro Li`o, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. In International Conference on Learning Representations, 2019.
[149] Saurabh Verma and Zhi-Li Zhang. Stability and generalization of graph convolutional neural networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1539â1548, 2019.
[150] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoen- coders: Learning useful representations in a deep network with a local denoising criterion. Journal of machine learning research, 11(Dec):3371â3408, 2010.
[151] S. V. N. Vishwanathan, N. N. Schraudolph, R. Kondor, and K. M Borgwardt. Graph kernels. In Journal of Machine Learning Research, pages 1201â1242, 2010.
[152] Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1225â1234. ACM, 2016.
[153] Minjie Wang, Lingfan Yu, Da Zheng, Quan Gan, Yu Gai, Zihao Ye, Mufei Li, Jinjing Zhou, Qi Huang, Chao Ma, et al. Deep graph library: Towards efï¬cient and scalable deep learning on graphs. arXiv preprint arXiv:1909.01315, 2019.
[154] Jason Weston, Fr´ed´eric Ratle, and Ronan Collobert. Deep learning via semi-supervised embedding. In Proceedings of the 25th international conference on Machine learning, pages 1168â1175. ACM, 2008.
[155] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S Yu. A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596, 2019.
[156] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
[157] Pinar Yanardag and S.V.N. Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1365â1374. Association for Computing Machinery, 2015.
[158] Zhilin Yang, William W Cohen, and Ruslan Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. In Proceedings of the 33rd International Conference on International Conference on Machine Learning-Volume 48, pages 40â48. JMLR. org, 2016.
[159] Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William Hamilton, and Jure Leskovec. Graph convolutional neural networks for web-scale recommender systems. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018.
48
[160] Zhitao Ying, Jiaxuan You, Christopher Morris, Xiang Ren, Will Hamilton, Hierar- and Jure Leskovec. In S. Bengio, H. Wallach, H. Larochelle, Information Processing Sys- URL http://papers.nips.cc/paper/ chical graph representation learning with differentiable pooling. K. Grauman, N. Cesa-Bianchi, tems 31, pages 4800â4810. Curran Associates, 7729-hierarchical-graph-representation-learning-with-differentiable-pooling.pdf. editors, Advances in Neural and R. Garnett, Inc., 2018.
[161] Jiaxuan You, Rex Ying, Xiang Ren, William L Hamilton, and Jure Leskovec. Graphrnn: A deep generative model for graphs. arXiv preprint arXiv:1802.08773, 2018.
[162] Jiaxuan You, Rex Ying, and Jure Leskovec. Position-aware graph neural networks. arXiv preprint arXiv:1906.04817, 2019.
[163] Tao Yu and Christopher M De Sa. Numerically accurate hyperbolic embeddings using tiling-based models. In Advances in Neural Information Processing Systems, pages 2023â2033, 2019.
[164] Daokun Zhang, Jie Yin, Xingquan Zhu, and Chengqi Zhang. Network representation learning: A survey. IEEE Transactions on Big Data, 2018.
[165] Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classiï¬cation. In Thirty-Second AAAI Conference on Artiï¬cial Intelligence, 2018.
[166] Ziwei Zhang, Peng Cui, and Wenwu Zhu. Deep learning on graphs: A survey. arXiv preprint arXiv:1812.04202, 2018.
[167] Dengyong Zhou, Olivier Bousquet, Thomas N Lal, Jason Weston, and Bernhard Sch¨olkopf. Learning with local and global consistency. In Advances in neural information processing systems, pages 321â328, 2004.
[168] Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.
[169] Xiaojin Zhu and Zoubin Ghahramani. Learning from labeled and unlabeled data with label propagation. 2002.
49 | {
"id": "1901.00596"
} |
2005.02979 | A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems | Autonomous cyber-physical systems (CPS) can improve safety and efficiency for
safety-critical applications, but require rigorous testing before deployment.
The complexity of these systems often precludes the use of formal verification
and real-world testing can be too dangerous during development. Therefore,
simulation-based techniques have been developed that treat the system under
test as a black box operating in a simulated environment. Safety validation
tasks include finding disturbances in the environment that cause the system to
fail (falsification), finding the most-likely failure, and estimating the
probability that the system fails. Motivated by the prevalence of
safety-critical artificial intelligence, this work provides a survey of
state-of-the-art safety validation techniques for CPS with a focus on applied
algorithms and their modifications for the safety validation problem. We
present and discuss algorithms in the domains of optimization, path planning,
reinforcement learning, and importance sampling. Problem decomposition
techniques are presented to help scale algorithms to large state spaces, which
are common for CPS. A brief overview of safety-critical applications is given,
including autonomous vehicles and aircraft collision avoidance systems.
Finally, we present a survey of existing academic and commercially available
safety validation tools. | http://arxiv.org/pdf/2005.02979 | Anthony Corso, Robert J. Moss, Mark Koren, Ritchie Lee, Mykel J. Kochenderfer | cs.LG, cs.AI, cs.SY, eess.SY, stat.ML | null | Journal of Artificial Intelligence Research, vol. 72, p. 377-428,
2021 | cs.LG | 20200506 | 20211014 | 2021:
1 2 0 2
# t c O 4 1
arXiv:2005.02979v3 [cs.LG]
]
# G L . s c [
3 v 9 7 9 2 0 . 5 0 0 2 : v i X r a
Journal of Artiï¬cial Intelligence Research 72 (2021) 377â428
Submitted 02/2021; published 10/2021
# A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems
# Anthony Corso Aeronautics and Astronautics, Stanford University, Stanford, CA 94305, USA
[email protected]
# Robert J. Moss Computer Science, Stanford University, Stanford, CA 94305, USA
[email protected]
# Mark Koren Aeronautics and Astronautics, Stanford University, Stanford, CA 94305, USA
[email protected]
# Ritchie Lee NASA Ames Research Center, Moï¬ett Field, CA 94035, USA
[email protected]
# Mykel J. Kochenderfer Aeronautics and Astronautics, Stanford University, Stanford, CA 94305, USA
[email protected]
# Abstract
Autonomous cyber-physical systems (CPS) can improve safety and eï¬ciency for safety- critical applications, but require rigorous testing before deployment. The complexity of these systems often precludes the use of formal veriï¬cation and real-world testing can be too dangerous during development. Therefore, simulation-based techniques have been developed that treat the system under test as a black box operating in a simulated en- vironment. Safety validation tasks include ï¬nding disturbances in the environment that cause the system to fail (falsiï¬cation), ï¬nding the most-likely failure, and estimating the probability that the system fails. Motivated by the prevalence of safety-critical artiï¬cial intelligence, this work provides a survey of state-of-the-art safety validation techniques for CPS with a focus on applied algorithms and their modiï¬cations for the safety validation problem. We present and discuss algorithms in the domains of optimization, path planning, reinforcement learning, and importance sampling. Problem decomposition techniques are presented to help scale algorithms to large state spaces, which are common for CPS. A brief overview of safety-critical applications is given, including autonomous vehicles and aircraft collision avoidance systems. Finally, we present a survey of existing academic and commercially available safety validation tools.
# 1. Introduction
Increasing levels of autonomy in cyber-physical systems (CPS) promise to revolutionize in- dustries such as automotive transportation (U.S. Department of Transportation, 2018) and aviation (Federal Aviation Administration, 2019; Kochenderfer et al., 2012) by improving convenience and eï¬ciency while lowering cost. Innovations have been driven by recent progress in artiï¬cial intelligence, particularly in machine learning (LeCun et al., 2015; Rus-
©2021 AI Access Foundation. All rights reserved.
Corso, Moss, Koren, Lee, Kochenderfer
sell & Norvig, 2020) and planning (Kochenderfer, 2015; Sutton & Barto, 2018). Machine learning has recently achieved human-competitive performance in board games (Silver et al., 2016; Silver et al., 2017), video games (Mnih et al., 2015; Vinyals et al., 2019), and visual perception (He et al., 2017; Pillai et al., 2019). However, applying machine learning technologies to safety-critical domains has been challenging. Safety-critical CPS diï¬er from conventional autonomous systems in that their failures can have serious consequences, such as loss of life and property. As a result, these systems must undergo extensive validation and testing prior to certiï¬cation and deployment.
Safety validation is the process of ensuring the correct and safe operation of a system operating in an environment. Desired safety properties are stipulated in a speciï¬cation lan- guage and a failure is any violation of that speciï¬cation. Typically, simulation is used to ï¬nd failures of a system caused by disturbances in the environment, and a model of the dis- turbances can then be used to determine the probability of failure. A system is deemed safe if no failure has been found after adequate exploration of the space of possible disturbances, or if the probability of failure is found to be below an acceptable threshold. The procedure of proving that a system is safe to all disturbances is known as formal veriï¬cation (Clarke et al., 2018; Fitting, 2012; Katoen, 2016; Platzer & Quesel, 2008; Schumann, 2001) and is outside the scope of this survey.
In this paper, we focus on CPS, which involve software and physical systems interacting over time. This broad deï¬nition includes systems such as robots, cars, aircraft, and planetary rovers. There are several reasons why validating cyber-physical systems is challenging. First, many of these systems contain complex components, including those produced by machine learning. The safety properties of these systems may not be well-understood and subtle and emergent failures can go undetected (Yeh, 2018). Second, many systems, such as autonomous cars and aircraft, interact with complex and stochastic environments that are diï¬cult to model. Third, safety properties are generally deï¬ned over both the system under test and its environment. For example, the requirement that âthe test vehicle shall not collide with pedestriansâ involves both the system under test (test vehicle) and actors in its environment (pedestrians). As a result, safety validation must be performed over the combined system. Another challenge is that sequential interactions between the system and the environment means that failure scenarios are trajectories over time, and therefore the search space is combinatorially large. Finally, safety validation is often applied to mature safety-critical systems later in development, where failures can be extremely rare.
Traditional methods for ensuring safetyâthrough safety processes, engineering analy- sis, and conventional testingâthough necessary, do not scale to the complexity of next- generation systems. Advanced validation techniques are needed to build conï¬dence in these systems. Many validation approaches have been proposed in the literature. They can be broadly categorized by the information they use for analysis. White-box methods use knowl- edge of the internals of the system. For example, formal veriï¬cation in the form of model checking (Clarke et al., 2018; Katoen, 2016) and automated theorem proving (Fitting, 2012; Platzer & Quesel, 2008; Schumann, 2001), represents the system using mathematical mod- els. Because the model is known, formal veriï¬cation methods can ï¬nd failure examples when they exist or prove the absence of failures when they do not. However, because formal veri- ï¬cation considers all execution possibilities, it often has diï¬culty scaling to large problems (see Alur, 2015 for a discussion of veriï¬cation applied to CPS).
378
Algorithms for Black-Box Safety Validation
In contrast to white-box methods, black-box techniques do not assume that the internals of the system are known. They consider a general mapping from input to output that can be sampled. Black-box methods can be applied to a much broader class of systems because they do not require a system speciï¬cation. Although model-checking techniques can be applied to some black-box systems (Peled et al., 1999), a prohibitively large (or possibly inï¬nite) number of samples may be required to provide complete coverage and prove the Instead, black-box methods often aim to quickly and eï¬ciently ï¬nd absence of failures. failure examples. If no failures are found, conï¬dence in the safety of the system will increase with additional sampling. Due to its ï¬exibility and scalability, black-box validation is often the only feasible option for large complex systems, and is the focus of this survey.
We consider three safety validation tasks for a system with safety properties. First, falsiï¬cation aims to ï¬nd an example disturbance in the environment that causes the system to violate the property. This formulation is useful for discovering previously unknown failure modes and ï¬nding regions where the system can operate safely. The second safety validation task is to ï¬nd the most-likely failure according to a probabilistic model of the disturbances. The model can be created through expert knowledge or data to reï¬ect the probabilities in the real environment. The third safety validation task is to estimate the probability that a failure will occur. Failure probability estimation is important for acceptance and certiï¬cation.
There are many algorithms that have been used for these safety validation tasks. This survey categorizes and presents many of them. Falsiï¬cation and most-likely failure analysis are related tasks in that they involve ï¬nding failures of an autonomous system. Categories of algorithms that are suited for these tasks include optimization, path planning, and rein- forcement learning. Optimization approaches seek to ï¬nd a trajectory of disturbances that cause the system to fail. Path planning approaches use the environmentâs state to aid in the exploration of possible failure modes. Reinforcement learning frames the problem as a Markov decision process and searches for a policy that maps environment states to distur- bances that cause the system to fail. When the goal is to estimate the probability of failure and failures are rare, importance sampling techniques generate scenarios and translate their results into a probability estimate. For all of the safety validation tasks, a major challenge is scalability, so problem-decomposition techniques are presented that can allow for better scalability of the presented algorithms.
This paper is organized as follows. Section 2 introduces notation, formally deï¬nes com- mon terms such as safety validation or black box, and formulates three safety validation tasks. Section 3 gives an overview of the safety validation process, which involves deï¬ning safety properties, choosing a cost function and algorithm, and determining when suï¬cient testing has been performed. Section 4 summarizes the optimization-based algorithms for safety validation. Section 5 describes how to use the environment state to ï¬nd failures of the system through several path-planning algorithms. Section 6 shows how to apply reinforce- ment learning to safety validation. Section 7 introduces importance sampling algorithms for estimating the probability of failure. Section 8 presents several ways to address problem scalability through decomposition. Section 9 surveys the various applications and discusses common strategies and adaptations of approaches for each domain. Finally, section 10 sur- veys existing tools in the literature and compares their basic features.
379
Corso, Moss, Koren, Lee, Kochenderfer
System M action Environment E disturbance x Adversary A observation state s
Figure 1: Model of the safety validation problem.
# 2. Preliminaries
This section ï¬rst describes the notation used for the description of safety validation algo- rithms. It then deï¬nes safety validation and black box, and then summarizes several safety validation tasks.
# 2.1 Notation
A safety validation problem (ï¬g. 1) consists of a system M, an environment E, and a safety property Ï that the system should have. The safety property is deï¬ned over state trajectories s = [s1, . . . , st], where st â S is the state of the environment at time t â {1, . . . , tmax}. If the state trajectory s satisï¬es property Ï we write s â Ï, and write s 6â Ï, otherwise.
The environment is perturbed by an adversary A through disturbances x â X, where disturbances are chosen in order to induce behavior in the system which violates the safety property. Disturbance trajectories x can be chosen freely by the adversary, but they have an associated probability density p(x), p(x), or p(x | s), which models their likelihood in the environment. The disturbance likelihood can be constructed through expert knowledge or learned from data.
The environment transitions between states according to a dynamics function f that depends on the system, environment, and disturbances. In this work, we assume that the environment and the system are ï¬xed, making disturbances the only way to aï¬ect the system. Therefore, f maps disturbance trajectories into state trajectories
s = f (x). (1)
Some algorithms require the ability to simulate a disturbance xt for a single timestep from a state st, denoted
st+1 = f (st, xt). (2)
# 2.2 Deï¬nitions
Safety Validation. A safety property speciï¬es that a certain âbad eventâ will not occur. In contrast, a liveness property speciï¬es that a certain âgood eventâ will eventually occur. The safety-liveness distinction is important because a safety property can be shown to be violated with a concrete counterexample (the primary goal of the surveyed algorithms), while the violation of a liveness property requires formal argumentation (Alpern & Schneider, 1987). The deï¬nition of a âbad eventâ is domain speciï¬c.
380
Algorithms for Black-Box Safety Validation
Veriï¬cation is the process of proving that the system meets its requirements while vali- dation is the process of ensuring that a system fulï¬lls its intended purpose in its operational environment (Hirshorn et al., 2017). Although many of the algorithms presented in this survey can be applied to both veriï¬cation and validation, we choose the term validation to emphasize the focus on testing full-scale system prototypes in simulated operational envi- ronments. Safety validation is therefore the processes of investigating the adherence of a system to a safety property in its operational domain.
Black-Box Assumption. A system is said to be a black box if the system model M is not known or is too complex to explicitly reason about. In contrast, a white-box system can be described analytically or speciï¬ed in a formal modeling language, and a gray-box system lies in between. Some white-box systems may be treated as a black box if knowledge of their design does not help the validation process. For example, while small neural networks can have properties formally veriï¬ed by analyzing the network weights (Katz et al., 2017), large neural networks with millions or billions of parameters are generally too large for such techniques, and they would need to undergo black-box validation. In some cases, both the system and the environment are treated as a black-box, which precludes the use of validation algorithms that require the environment state (see section 3.3 for more details).
# 2.3 Safety Validation Goals
Three safety validation tasks are considered in this work and are deï¬ned below.
Falsiï¬cation. Falsiï¬cation is the process of ï¬nding a disturbance trajectory that causes the outputs to violate a speciï¬cation Ï. Such a trajectory is known as a counterexample, failure trajectory, or falsifying trajectory. Falsiï¬cation ï¬nds
x s.t. f (x) 6â Ï. (3)
Most-Likely Failure Analysis. Most-likely failure analysis tries to ï¬nd the failure tra- jectory with maximum likelihood
arg max x p(x) s.t. f (x) 6â Ï. (4)
Failure Probability Estimation. Failure probability estimation tries to compute the probability that a speciï¬cation will be violated. Failure probability is given by the expecta- tion of observing a failure under the disturbance model
Pfail = E 1{f (x) 6â Ï} . (5)
3. Overview of Solution Techniques
Solving a safety validation problem for a given CPS requires the following steps: 1) deï¬ne a safety property to validate, 2) deï¬ne an appropriate cost function to guide the search, 3) choose a safety validation algorithm, which depends on the system, environment and safety validation task and 4) run it until a counterexample is discovered (for falsiï¬cation and most likely failure analysis) or the space of possible scenarios has been suï¬ciently covered. This section provides an overview of each of these steps.
381
Corso, Moss, Koren, Lee, Kochenderfer
# 3.1 Safety Speciï¬cation with Temporal logic
In safety validation, the ï¬rst step is to deï¬ne a safety property to validate. Although safety properties could be deï¬ned using natural language or human preferences (both of which are highly expressive), formal speciï¬cation languages are often preferred because they reduce ambiguity and permit eï¬cient numerical evaluation of state trajectories. The most common formal speciï¬cation languages are based on temporal logic.
Temporal logic is a logical framework for describing properties of signals over time. It enables reasoning about time and temporal information. Properties are stated as formulas hat evaluate to a Boolean value and the syntax of these formulas is governed by a gram- mar. Various temporal logics have been proposed for different domains, including linear emporal logic (Pnueli, lord, metric temporal logic (Koymans, [1990), and computation ree logic (Clarke & Emerson, [1981)). Signal temporal logic (STL) is a temporal logic for real-valued signals that is widely used as a specification language for the safety validation of cyber-physical systems (Donzé & Maler, bord: Kapinski et al., ). The basic unit of an STL formula is an atomic formula of the form ju(x;) > 0, where 2; is a real-valued signal and jis an arbitrary function from Râ to R. A combination of Boolean and temporal operators can be applied to one or more atomic formulas to form more complex formulas. Boolean operators can include unary or binary Boolean operators, such as negation =, conjunction A, and disjunction V. Temporal operators reason over the temporal aspect of signals. Ex- amples of temporal operators include always Ow (zy holds on the entire subsequent path), eventually Qa (W holds somewhere on the subsequent path), and until WUy2 (v1 holds at least until 72 becomes true on the subsequent path). Temporal operators may be indexed by a time interval J over which the property is considered.
The following are some examples of STL formulas representing safety properties:
Yfo,100](@goa1 < 10) O 0,00) (7(dh < 500 Ad, < 100))
(6)
Ï1 : Ï2 :
(7)
The speciï¬cation Ï1 (eq. (6)) states that at some point in the next 100 seconds, the distance to the goal dgoal becomes less than 10 feet. Such a speciï¬cation could be used, for example, to require that an autonomous vehicle reach a goal location within a certain time limit. The speciï¬cation Ï2 (eq. (7)) states that for all time, the horizontal separation dh and the vertical separation dv between two aircraft shall not be simultaneously less than 500 feet and 100 feet, respectively. This speciï¬cation describes the absence of a near mid-air collision, which is an important safety event for aircraft collision avoidance systems (Kochenderfer et al., 2012).
# 3.2 Cost Functions
A naive approach to safety validation is to search randomly over disturbance trajectories until a counterexample is discovered. If counterexamples are rare, this process can be inef- ï¬cient or even intractable. A cost function cstate(s) that measures the level of safety of the system over the state trajectory s can be used to guide the search. A well-designed cost function will help bias the search over disturbance trajectories toward those that are less safe. Additionally, if the goal is to ï¬nd the most-likely failure, then the cost function can incorporate the likelihood of the disturbance trajectory.
382
Algorithms for Black-Box Safety Validation
Once a cost function is deï¬ned, safety validation becomes an optimization problem over disturbance trajectories
xâ = arg min x c(x), (8)
where c(x) = cstate(f (x)). The cost function is designed such that c(x) ⥠ǫ ââ f (x) â Ï, where Ç« is a safety threshold. Therefore, if a disturbance x causes c(x) < Ç«, then x is a counterexample.
The design of the cost function c is speciï¬c to the application, but can be done ad- hoc or using a formal measure of the satisfaction of a safety property. For simple safety proprieties such as collision avoidance, the miss distance (point of closest approach between agents) is a common choice (Koren et al., 2018; Lee et al., 2020). Balkan et al. (2017) propose several possible functions for ï¬nding non-convergence behaviors in control systems, including Lyapunov-like functions, M -step Lyapunov-like functions, neural networks, and support vector machines.
When the speciï¬cation is represented by a temporal logic expression, a natural choice for c is the temporal logic robustness Ï(s, Ï). The robustness is a measure of the degree to which the trajectory s satisï¬es the property Ï. Large values of robustness mean that at no point does the trajectory come close to violating the speciï¬cation, while low but positive values of robustness mean that the trajectory is close to violating the speciï¬cation. A robustness value less than zero means that the speciï¬cation has been violated and gives an indication of by how much. The robustness for space-time signals can be computed from a recursive deï¬nition (Donzé & Maler, 2010; G. E. Fainekos & Pappas, 2009; Leung et al., 2020; H. Yang, 2013). The derivative of the robustness with respect to the state trajectory can be approximated, which may help derive gradient-driven coverage metrics (Leung et al., 2020; Pant et al., 2017) or with doing gradient-based optimization if the black-box assumption is relaxed. Causal information in the form of a Bayesian network can be incorporated into the robustness for improved falsiï¬cation (Akazaki et al., 2017). Connections have also been made between robustness and delta-reachability to relate falsiï¬cation and exhaustive search through approximations (Abbas et al., 2017).
Robustness can be a challenging objective to optimize because it can be non-smooth (Le- ung et al., 2020) and be dominated by state variables with larger magnitude values. Leung et al. (2020) propose smooth approximations to the robustness and Akazaki et al. (2018) op- timize a convex function of the robustness âexp (âÏ(s)), which bounds the maximum cost. State variables may be normalized to alleviate diï¬erences in scale between the variables, but normalization requires the range of the state variables which may not be known a priori. Zhang et al. (2019) propose measuring the robustness of each state variable independently and using a multi-armed bandit algorithm to decide which robustness value to optimize on each iteration.
In addition to measuring safety, cost functions may include other search heuristics. For most likely failure analysis, the cost function includes the likelihood of the disturbance trajectory p(x) for counterexamples
câ²(x) = ( c(x) âp(x) if c(x) ⥠ǫ if c(x) < Ç«. (9)
383
Corso, Moss, Koren, Lee, Kochenderfer
An objective that achieves a similar goal but may be easier to optimize includes a penalty term for low-likelihood disturbances
câ²(x) = c(x) â λp(x),
(10)
where the penalty λ > 0 is user-deï¬ned. Additional penalty terms related to coverage (see section 3.4 for an overview) can be included to encourage exploration. Domain-speciï¬c heuristics are also possible. For example, Qin et al. (2019) penalize disturbances that do not follow a domain-speciï¬c set of constraints, which is useful for getting adversarial agents in a driving scenario to follow traï¬c laws. Other examples of domain-speciï¬c heuristics are described in section 9.
Solution techniques that build trajectories sequentially (such as those based on plan- ning and reinforcement learning) can require the evaluation of trajectory segments or states. Upper and lower bounds on robustness can be computed for incomplete trajectory seg- ments (Dreossi et al., 2015). An approach for most-likely failure analysis called adaptive stress testing (AST) (Koren et al., 2019; Lee et al., 2020) deï¬nes a cost for individual state-disturbance pairs
c(st, xt) = 0 λ log p(xt | st) if st â Sfail if st 6â Sfail, t ⥠tmax if st 6â Sfail, t < tmax, (11)
where the speciï¬cation Ï is for the system to avoid reaching a set of failure states Sfail, and λ penalizes disturbances that do not end in a failure state. The log probability of the disturbance is awarded at each time step to encourage the discovery of likely failures.
# 3.3 Overview of Algorithms
As demonstrated in the previous section, safety validation algorithms solve an optimization problem to discover counterexamples. This section outlines four categories of algorithms distinguished by the information they use for optimization, the requirements on the simu- lated environment, and the desired safety validation task. For each category, we summarize the approach and describe its strengths, weaknesses, and requirements. Speciï¬c algorithms are discussed in more detail in sections 4 to 8.
# 3.3.1 Black-Box Optimization.
Many safety validation algorithms attempt to solve eq. (8) directly over the space of distur- bance trajectories. Although gradient-based approaches cannot be used due to the black-box assumption on the system, there are many existing algorithms that can be applied with no modiï¬cation (Kochenderfer & Wheeler, 2019). Due to the complexity of many autonomous systems and environments, however, the optimization problem is generally non-convex and can have many local minima. Therefore, algorithms that can escape local minima and have adequate exploration over the space of disturbance trajectories are preferred.
Two algorithms that have been used out of the box in falsiï¬cation software (Y. An- napureddy et al., 2011; Donzé, 2010) (see section 10 for more details) are covariance ma- trix adaptation evolution strategy (CMA-ES) (Hansen & Ostermeier, 1996) and globalized
384
Algorithms for Black-Box Safety Validation
Nelder-Mead (GNM) (Luersen & Le Riche, 2004), both of which are eï¬ective for global op- timization. Another way to escape local minima is to combine global and local search (Adi- moolam et al., 2017; Deshmukh et al., 2015; Mathesen et al., 2019; Yaghoubi & Fainekos, 2019), where one optimization routine is used to explore the space of disturbances trajec- tories to ï¬nd regions of interest, and another algorithm does local optimization to ï¬nd the best disturbance trajectory in a region.
The primary beneï¬t of optimization-based safety validation is the minimal set of re- strictions placed on the simulator of the system and the environment. Black-box optimizers do not need access to the state of the environment and only needs to return the value of a safety metric for a given disturbance trajectory. The simulation state may be unavailable for implementation or privacy reasons so optimization-based approaches would be a good choice in those cases. If, however, the state is available and would be useful for ï¬nding failures, then optimization approaches may not function as well as path planning or reinforcement learning approaches. If an environment has stochasticity in state transitions (in addition to the applied disturbances), then optimization techniques such as Bayesian optimization can be used to account for it. The biggest drawback to optimization strategies is the need to optimize over the entire space of disturbance trajectories, which scales exponentially with the time horizon.
# 3.3.2 Path Planning.
Safety validation may be framed as a planning problem through the state space of the envi- ronment using the disturbances as control inputs. Planning algorithms construct a distur- bance trajectory x from an initial state s0 to a set of failure states Sfail, with a corresponding cost for each transition. Classical planning algorithms (Ghallab et al., 2004) typically make the assumption that there exists a model in a formal language such as PDDL (McDermott et al., 1998) or STRIPS (Fikes & Nilsson, 1971), and that the size of the state space is discrete and not too large. Both assumptions are violated when dealing with black-box autonomous systems operating in continuous state spaces so traditional planning algorithms such as for- ward and backward chaining cannot be directly applied. Other planning algorithms such as value iteration (Sutton & Barto, 2018), Dijkstraâs algorithm (Dijkstra, 1959), and their variants can work with a black-box transition function but do not scale well to large or continuous state spaces. Heuristics such as state novelty (Lipovetzky & Geï¬ner, 2012) have been successfully applied to black-box planning settings with large state spaces (Francès et al., 2017; Lipovetzky et al., 2015), and could therefore be applicable to safety validation of CPS.
Planning problems with continuous and high-dimensional state spaces commonly occur in the ï¬eld of robotics and path planning (see the overview by LaValle (2006)). Optimal control algorithms (Lewis et al., 2012) are common in this domain but typically violate the black-box assumption by requiring gradients of the transition function. Sampling based approaches such as the rapidly exploring random tree (RRT) algorithm and the probabilistic roadmap planner (Kavraki et al., 1996) can work with a black-box simulator and have been shown to be eï¬ective in high-dimensional continuous planning problems. The probabilistic roadmap planner, however, requires the ability to connect independent state trajectory segments making it diï¬cult to use in a completely black-box setting. RRT on the other
385
Corso, Moss, Koren, Lee, Kochenderfer
hand, grows a search tree forward from the set of reachable states and can therefore be used with a black-box simulator. RRT has been used extensively for safety validation of CPS and variations on the algorithm will be discussed in greater detail in section 5.
Path planning algorithms rely heavily on the environment state to discover failures, which can be a strength and weakness. Planning algorithms can eï¬ciently search high- dimensional state spaces by reusing trajectory segments, and often provide a natural way to compute state space coverage which can be used to determine when suï¬cient testing has been done. If the reachable set of states is small compared to the state space, however, planning algorithms may perform poorly (J. Kim et al., 2005). A drawback to path planning algorithms is their inability to naturally handle stochasticity. Most path planning algorithms rely on the ability to deterministically replay trajectory segments or initialize a simulator into a predeï¬ned state, which may be challenging for some simulators. Additionally, if the problem has a long time horizon, then prohibitively large trees may be required to ï¬nd failure trajectories.
# 3.3.3 Reinforcement Learning.
In reinforcement learning (RL), the safety validation is modeled as a Markov decision process (MDP). An MDP is deï¬ned by a transition model P (sâ² | s, x) that gives the probability of arriving in state sâ² given the current state s, a disturbance (or action) x, a reward R(s, x), and a discount factor γ â [0, 1] that decreases the value of future rewards. RL algorithms learn a policy Ï (a function that maps states to disturbances x = Ï(s)) that maximizes expected future rewards. Typically the reward function is chosen to be R(s, x) = âc(s, x) so that maximizing reward minimizes cost. For an overview of MDPs and their solvers see the texts by Kochenderfer (2015) or Sutton and Barto (2018).
Reinforcement learning algorithms are similar to path-planning approaches because they also rely on the environment state (unless speciï¬cally formulated otherwise as in Koren et al. (2019)). While planning algorithms search for full trajectories, RL algorithms learn a policy that generates disturbances from the current state. Since policies do not explicitly represent the entire disturbance trajectory, they may be easier to optimize and can be applied to long time horizon problems. Uncontrolled stochasticity in the environment is naturally handled by RL algorithms, which are designed to function in stochastic environments. Additionally, many RL algorithms can be used with episodic simulators that only require the ability to reset and step forward in time, rather than the ability to initialize to a particular state. The downside to RL algorithms is that they may be sample ineï¬cient and require complex (and sometimes brittle) training procedures compared to optimization and path-planning approaches.
# 3.3.4 Importance Sampling.
For many cyber-physical applications, it is impossible to design an autonomous system that never violates a safety property. In that case, failure probability is a useful metric of safety. If failure events are rare, then Monte Carlo approaches will require a large number of samples before converging to the true probability of failure (Hahn, 1972). To address this problem, importance sampling approaches artiï¬cially increase the likelihood of failure with
386
Algorithms for Black-Box Safety Validation
a proposal distribution q(x), and then weight observed failures to get an unbiased estimate of the probability of failure with fewer samples.
In addition to causing more failures, the proposal distribution has the property that q(x) > 0 everywhere that p(x)1{c(x) < Ç«} > 0 (so all disturbances that lead to failure can be sampled from q). The proposal distribution is also referred to as the biased, sampled, or importance distribution. The importance sampling estimate of the probability of failure is done by taking N samples drawn from q and computing the weighted average
# N
ËPfail = 1 N N p(xi) q(xi) 1{c(xi) < Ç«}. (12)
# i=1 X
The variance of the importance sampling estimate is given by
Var( ËPfail) = 1 N Eq (p(x)1{c(x) < Ç«} â q(x)Pfail)2 q(x) . (13)
The goal of a good importance sampling distribution is to minimize the variance of the estimator ËPfail so that fewer samples are needed for a good estimate. It is clear from eq. (13) that a zero variance estimate can be obtained with the optimal importance sampling distribution
qâ(x) = p(x)1{c(x) < Ç«} Pfail . (14)
Generating this distribution is not possible in practice because c(x) is a black box and the normalization constant Pfail is the very quantity being estimated. Importance sampling algorithms seek to estimate the optimal importance sampling distribution qâ(x).
Importance sampling approaches require ï¬nding many failure examples to learn a dis- tribution over failures. Therefore, failure examples can, in principle, be found using any of the three previous approaches. The most common importance sampling approaches such as multilevel splitting and the cross-entropy method function most similarly to optimization- based techniques because they search directly over the space of disturbance trajectories and do not require state information. These techniques therefore carry the same strengths and weakness as optimization techniques. Likewise, importance sampling techniques that involve the use of environment state have the same beneï¬ts and limitation as the corresponding path planning or RL algorithm.
# 3.4 Coverage Metrics
A natural question that arises in safety validation is âwhen is testing complete?â There are an inï¬nite number of possible tests, so measures of coverage are used as a principled way of determining when suï¬cient testing has been performed to declare a system safe. We overview three types of testing coverage: the probability of failure, coverage of the space of disturbance trajectories, and coverage of the reachable set of states.
3.4.1 Probabilistic Coverage.
Testing of a safety critical system may be complete when the estimated probability of failure is below a speciï¬ed threshold with high conï¬dence. The outcome of each simulation of a
387
Corso, Moss, Koren, Lee, Kochenderfer
disturbance trajectory is a Bernoulli random variable,
1 {f (x) 6â Ï} (15)
where a positive outcome (failure) occurs with probability Pfail. These samples can be used with the statistical tools of hypothesis testing and estimation to determine when suï¬cient testing has been performed.
In hypothesis testing, two complimentary hypotheses are compared based on the evidence of sample trajectories. For example, the null hypothesis H0 may be that the probability of failure is less than an acceptable threshold p0, H0 := Pfail < p0, and the alternative hypothesis is the complement H1 := Pfail ⥠p0. The level of conï¬dence in a hypothesis test is speciï¬ed by the probability of type I and type II errors (with lower probability of error requiring more data). For an overview of hypothesis testing techniques for statistical model checking see the work of Agha and Palmskog (2018).
Estimation techniques compute a conï¬dence interval over the probability of failure. The frequentist approach involves taking the mean of N independent samples to produce an estimate of the probability of failure
ËPfail = 1 N N 1 {f (xi) 6â Ï} , (16)
# i=1 X
and uses a concentration inequality to estimate the bounds. Suppose after N samples no failures have been observed, then Hoeï¬dingâs inequality states that
Pr [Pfail ⥠p0] ⤠2eâ2N p2 0 . (17)
The number of samples N can be chosen so the right-hand side of the inequality is suï¬ciently small. In contrast, the Bayesian approach involves maintaining a belief over the probability of failure that gets updated with each new sample. For Bernoulli random variables, the Beta distribution can be used to represent the belief. Beneï¬ts of the Bayesian approach include the ability to incorporate a prior estimate of the probability of failure and the ability to explicitly compute conï¬dence bounds.
When validating safety-critical systems, failures are often rare and safety thresholds are strict, so many samples are required to make probabilistic arguments for safety. To alleviate this burden, importance sampling approaches are used (see section 7 for details), where samples are drawn from a proposal distribution q(x), under which failures are more likely. The samples are then appropriately weighted when performing hypothesis tests (Harrison, 2012) or estimation.
A poorly chosen, or ineï¬cient, proposal distribution can increase the variance of an estimator, which in turn reduces the conï¬dence in the estimate. An ineï¬cient proposal distribution can be identiï¬ed when the importance weights are large, or equivalently, when the eï¬ective sample size is small compared to the actual number of samples. When samples with large weights are being drawn, Y. Kim and Kochenderfer (2016) suggest limiting the maximum weight by clipping the proposal distribution in regions with large weights. Uesato et al. (2019) suggest combining the importance sampling estimator with a basic Monte Carlo estimator to minimize the downside of a bad proposal distribution and Neufeld et al. (2014) provide a principled way of choosing the best estimator from several possibilities.
388
Algorithms for Black-Box Safety Validation
# 3.4.2 Disturbance-Space Coverage.
If failure events have extremely low probability, or if no probability model is available, coverage can be measured by how well the sampled disturbance trajectories ï¬ll the space of possible trajectories. Let X à T be the space of disturbance trajectories, and let V be a set of sampled trajectories. Informally, the coverage C(V ) â [0, 1] is the degree to which V represent the space X à T . Testing can be concluded when the coverage reaches close to 1. Although a variety of coverage metrics can be employed, dispersion (Esposito et al., 2004) and star discrepancy (Dang et al., 2008; Dreossi et al., 2015; Nahhal & Dang, 2007) have previously been used for safety validation.
For a distance metric between states d, the dispersion is the radius of the largest ball in X à T that contains no points in V , i.e. maxxâXÃT minxiâV d(x, xi). Dispersion is challenging to compute in many dimensions and can be overly conservative (Esposito et al., 2004). Instead, a coverage metric based on average dispersion can be computed on a grid with n points and a spacing of δ as
# n
1 δ min (dj(V ), δ) n Cdisp(V ) = 1 â , (18)
j=1 X where dj(V ) is the shortest distance from grid point j to any node in V according to the metric d (Esposito et al., 2004). The value min(dj (V ), δ) is therefore the radius of the largest ball that can sit at grid point j and not touch any nodes in the tree or another grid point. Note that Cdisp â 1 when V contains points at each grid point. The coverage is therefore related to the ï¬delity of the grid, with a ï¬ner grid giving a more comprehensive value at greater computational expense.
The star discrepancy coverage metric measures how evenly a set of points is distributed. The discrepancy D of a set over a region B â X Ã T is
D(V, B) = |V â© B| |V | â vol (B) vol (X Ã T ) , (19)
where vol is the volume of a region and |V | is the number of sample points. The ï¬rst term is the fraction of the points that are in B and the second term is the ratio of the volume of B to the volume of space of disturbance trajectories. The star discrepancy Dâ is the largest discrepancy over all possible subregions of S
Dâ(V ) = max B D(V, B). (20)
Note that Dâ â 0 when all subregions of S have their âfair shareâ of the sample points based on volume, and Dâ â 1 when all the sample points overlap. To turn discrepancy into a coverage metric, deï¬ne
Cdisc(V, B) = 1 â D(V, B) (21)
and
Cstar(V ) = 1 â Dâ(V ). It is possible to approximate Dâ(V ) using a ï¬nite partitioning of the state space into regions {B1, . . . , Bn} such that âªn i=1Bi = X à T . The discrepancy coverage can be computed for
389
Corso, Moss, Koren, Lee, Kochenderfer
each box Cdisc(V ⩠Bi, Bi) (Dreossi et al., 2015) or a lower and upper bound of the star discrepancy coverage can be computed based on Bi (Nahhal & Dang, 2007; Thiémard, 2001).
# 3.4.3 State-Space Coverage.
When the space of disturbance trajectories is prohibitively large (e.g. for long time horizon problems), it can be more eï¬cient to deï¬ne coverage metrics for the state space. In fact, both dispersion and star-discrepancy were ï¬rst used as state-space coverage metrics for the rapidly exploring random tree algorithm (Dang et al., 2008; Dreossi et al., 2015; Esposito et al., 2004; Nahhal & Dang, 2007).
The challenge to using state-space coverage metrics is that not every state in the state space may be reachable. A state s is reachable if there is a disturbance trajectory that can be applied that will cause the environment to reach s. In many safety validation problems, the reachable states are a subset of the full state space due to the limited control via disturbances (Esposito et al., 2004). If the reachable set is small compared to S, then the maximum possible coverage value will also be small, and testing will never terminate. To mitigate this problem, Esposito et al. (2004) proposed a growth metric on the set of samples deï¬ned as
g(V ) = âC(V )/â|V |. (23)
The growth g measures how much the coverage metric increases with the increase in sample points in V . In addition to stopping based on adequate coverage, testing can be terminated if the growth is below a speciï¬ed threshold, suggesting that adding more sample points to V will not improve the coverage.
# 4. Black-Box Optimization
This section surveys algorithms that use black-box optimization techniques to solve safety validation problems, as well as the modiï¬cations that the techniques require to be eï¬ec- tive for safety validation. Safety validation has been performed with simulated annealing, evolutionary algorithms, Bayesian optimization, and extended ant colony optimization.
# 4.1 Simulated Annealing
An approach to stochastic global optimization known as simulated annealing uses a random walk around the disturbance space to minimize a cost function c. A temperature parameter β is used to control the amount of stochasticity in the method over time and a transition function P (xâ² | x) describes the probability distribution over the next disturbance trajectory xâ². If the new trajectory is a counterexample, then it is returned, otherwise it is adopted as x with probability exp(âβ(c(xâ²) â c(x))). Due to the stochastic nature of simulated annealing, it is often a suitable algorithm for solving a global optimization problem with many local minima and can therefore be eï¬ective for safety validation (Abbas et al., 2013; Aerts et al., 2018).
A common choice for transition function is to use a Gaussian distribution around the current point x with a standard deviation that is adjusted based on the ratio of accepted points (Kochenderfer & Wheeler, 2019). This approach may not work well when the distur-
390
Algorithms for Black-Box Safety Validation
bance space has constraints that must be satisï¬ed, such as lower and upper bounds on the possible disturbances (Abbas et al., 2013). Abbas et al. (2013) proposes the use of a hit and run approach to transitioning that respects constraints. It follows three steps:
1. Sample a random direction d in the disturbance trajectory space.
2. Perform a line search in the direction of d to determine the range of α such that x+αd does not violate any constraints.
3. Sample α from this range according to a chosen distribution. The standard deviation of this distribution can be adjusted using the acceptance ratio to improve convergence.
Aerts et al. (2018) improved the hit and run scheme by suggesting that α be chosen for each disturbance dimension separately so that highly constrained dimensions do not restrict the step size of less constrained dimensions (Aerts et al., 2018).
Typically, the size of x remains ï¬xed throughout the optimization, meaning the temporal discretization of the disturbance trajectory is never adjusted. Aerts et al. (2018) note that the frequency content of the disturbance trajectory is salient for some falsiï¬cation problems, and therefore the temporal discretization of the disturbance trajectory should itself be op- timized. An approach called input-signal-space optimization uses a two-layered approach, where an outer loop uses SA to select the length of the disturbance trajectory, and an inner loop ï¬nds the lowest cost trajectory for that length. This approach is able to adapt the ï¬delity of the time domain, getting improved results for some falsiï¬cation problems (Aerts et al., 2018).
# 4.2 Evolutionary Algorithms
Evolutionary algorithms approach global optimization by mimicking the biological process of evolution. A population of individuals is sampled and then evolved using the processes of crossover, where low cost individuals are combined to form new individuals, and mutation, where individuals are randomly modiï¬ed to incorporate new traits into the population.
Q. Zhao et al. (2003) applied a genetic algorithm to generate test cases for embedded control systems. Each individual is represented by a real-valued vector for all continuous variables and a binary encoding for all discrete variables. Selection occurs with a probability inversely related to the cost function. The crossover between two individuals is done using either a randomized concatenation of subsequences or an arithmetic combination of the individuals. Although evolutionary algorithms lack convergence guarantees and are largely based on heuristics, they can be eï¬ective for problems with very large input spaces, as demonstrated by their successful use in the safety validation of multi-agent sense and avoid algorithms (Zou et al., 2014).
Evolutionary algorithms can optimize more complex inputs such as temporal logic expres- sions represented as trees. Corso and Kochenderfer (2020) hypothesize that counterexamples can often be described by a simple temporal logic property of the disturbance trajectory, called a failure description. Failure descriptions lend interpretability to automated testing and have been shown to be eï¬ective at ï¬nding counterexamples as well as giving engineer- ing insight into the failure modes of the system. Genetic programming is used to optimize
391
Corso, Moss, Koren, Lee, Kochenderfer
failure descriptions Ï that satisfy
x â Ï =â c(x) < Ç«.
(24)
The cost function for a given failure description is the average cost of trajectories that satisfy it. Sampling satisfying trajectories is, in general, a hard problem, but Corso and Kochenderfer (2020) provide an algorithm to do so under a set of assumptions.
# 4.3 Bayesian Optimization
In Bayesian optimization (Mockus, 2012), a surrogate model (such as a Gaussian process) is used to represent the cost function over the space of disturbance trajectories. The model is used to select disturbance trajectories that are likely to lower the cost function. Main- taining a surrogate model can be beneï¬cial when evaluations of c(x) are costly or when the cost is stochastic. For these reasons, Bayesian optimization is a natural choice for safety validation (Abeysirigoonawardena et al., 2019; Akazaki et al., 2017; Deshmukh et al., 2017; Mullins et al., 2018; Silvetti et al., 2017; X. Yang et al., 2020).
Bayesian optimization iterates over two steps: 1) updating the surrogate model with new evaluations, and 2) choosing the next disturbance trajectory to evaluate. The update step computes a posterior distribution given the new evaluations (the details depend upon the employed surrogate model). Then the next disturbance trajectory is chosen using an inner optimization loop that optimizes a metric such as prediction-based exploration, error-based exploration, lower conï¬dence bound exploration, probability of improvement, or expected improvement (see Kochenderfer and Wheeler (2019) for details). The inner optimization can be performed using simulated annealing (Akazaki et al., 2017) or a sampling plan with good coverage of the space of disturbance trajectories (Silvetti et al., 2017).
One drawback to using Gaussian processes is their inability to scale to large dimensions and many sample points. To improve scalability for safety validation, Mathesen et al. (2019) introduced an algorithm called stochastic optimization with adaptive restarts (SOAR) that uses a two-layered Bayesian optimization approach to trade oï¬ between global and local search. A Gaussian process model is maintained over the global search space, and a stochastic method is used to select regions for further exploration. Local Gaussian process models are used for reï¬ned searching of local regions. When a local region is no longer seeing improvement, a new region is selected. Deshmukh et al. (2017) addressed scaling to large dimensions using a dimensionality reduction technique based on random embeddings (Wang et al., 2016).
# 4.4 Extended Ant-Colony Optimization
Ant colony optimization is a probabilistic technique initially used for ï¬nding optimal paths through graphs (Dorigo et al., 1996). Ant colony optimization was extended to continuous spaces by G. E. Fainekos and Giannakoglou (2003) and later applied to falsiï¬cation (Y. S. R. Annapureddy & Fainekos, 2010). Extended ant-colony optimization works by treating each disturbance trajectory x as a path through a graph with edges between adjacent temporal points (xt, xt+1). At each time t, the space of disturbances is discretized into N equally spaced cells. Ants traverse the graph by selecting their next cell i at time t based on the amount of pheromone present, where high pheromone implies low cost. After an ant selects
392
Algorithms for Black-Box Safety Validation
their next cell, it then selects a point uniformly at random inside that cell and moves to it. At the end of each trajectory, pheromone is deposited at each cell for low-cost trajectories.
# 5. Path Planning
This section surveys safety validation algorithms based on planning algorithms (and their required modiï¬cations). The presented algorithms are variants of the rapidly exploring random tree algorithm, multiple shooting methods, and Las Vegas tree search.
# 5.1 Rapidly Exploring Random Tree
Rapidly-exploring random tree (RRT) is a path planning technique for eï¬ciently ï¬nding failure trajectories (Lavalle, 1998). A tree is iteratively constructed by sampling the state space and growing in the direction of unexplored regions. RRT has been applied to the safety validation of black-box systems (Branicky et al., 2006; Dang et al., 2008; Dreossi et al., 2015; Esposito et al., 2004; J. Kim et al., 2005; Koschi et al., 2019; Nahhal & Dang, 2007; Plaku et al., 2009; Tuncali & Fainekos, 2019).
Algorithm 1 Rapidly-exploring random tree. 1: function RRT(s0, Sfail) 2:
# T â InitializeTree(s0) loop
3:
4:
sgoal â SampleState() snear â NearestNeighbor(T , sgoal) xnew â GetDisturbance(snear, sgoal) snew â f (snear, xnew) AddNode(T , snear, snew)
7:
8:
9: return CounterExamples(T , Sfail)
The basic approach is presented in algorithm 1. On each iteration, a random point sgoal in the state space is generated, which acts as the goal state for the next node to be added (line 4). The tree is searched for the node snear that is closest to the goal state (line 5) based on some distance metric d. This node will act as the starting point when attempting to reach the goal. A disturbance xnew is generated that drives snear toward sgoal (line 6). Since the system is a black-box, an xnew that causes snear = sgoal can only be approximated through random sampling or a more advanced optimization procedure. Lastly, the disturbance xnew is simulated, starting from snear, resulting in a new state snew (line 7) which is then added to the tree as a child of snear (line 8). Note that if the simulator cannot be initialized to any state, then the trace can be simulated by starting at the root and simulating the disturbances through the branch containing snear. The algorithm stops when the maximum number of iterations is reached, a suitable falsifying trajectory is found, or tree coverage reaches a speciï¬ed threshold. Variants of RRT (Branicky et al., 2006; Dang et al., 2008; Dreossi et al., 2015; Esposito et al., 2004; J. Kim et al., 2005; Koschi et al., 2019; Nahhal & Dang, 2007; Tuncali & Fainekos, 2019) typically diï¬er in their approach to state space
393
Corso, Moss, Koren, Lee, Kochenderfer
sampling, choice of distance metric for nearest neighbor selection, or by adding additional steps that reconï¬gure the tree for improved performance.
# 5.1.1 Adaptive Sampling.
When the reachable set is only a subset of the state space, a uniform sampling of goal states may be ineï¬cient, leading to slow tree growth and low coverage values. There are several techniques for biasing goal samples toward the reachable set.
An approach known as guided-RRT (g-RRT) (Nahhal & Dang, 2007) biases the selection of sgoal to regions with low coverage. In g-RRT, the state space is segmented into n regions {B1, . . . , Bn} and each region is assigned a weight w(Bi). Regions are sampled according to
P (Bgoal = Bi) = w(Bi) j w(Bj ) , (25)
# P
and then sgoal is sampled uniformly from Bgoal. In the original version of g-RRT (Dang et al., 2008; Nahhal & Dang, 2007), the weight w(Bi) is related to the increase in the lower and upper bounds of Cstar when a new point is added to Bi. In later work (Dreossi et al., 2015), the weight is w(Bi) = Ï(D(T, Bi)), where Ï is the sigmoid function. The goal in both cases is to sample points that increase the coverage of the tree.
J. Kim et al. (2005) maintain a distribution over sgoal that has a mean biased toward low values of the cost function and a standard deviation that adaptively changes to maximize sampling in the reachable set. As the algorithm progresses, they keep track of the ratio β of successful expansions of the tree. A successful expansion is one where
d(snew, sgoal) > d(snear, sgoal), (26)
meaning the tree was able to grow in the direction of sgoal. The ratio β is updated after a user-speciï¬ed number of iterations and is used to update the standard deviation between bounds [Ïmin, Ïmax] as
Ï = (1 â β)(Ïmax â Ïmin) + Ïmin. (27)
Thus, when the number of successful expansions is large, the standard deviation is reduced to continue sampling in that region, but when the tree frequently cannot grow toward sgoal the range of values is increased to better search for the reachable set.
5.1.2 Neighbor Selection.
The choice of distance metric for ï¬nding the nearest neighbor snear is critical for the per- formance of RRT. In some applications, it is acceptable to minimize the Euclidean distance between snear and sgoal (Koschi et al., 2019), but this approach may be insuï¬cient for some problems. First, Euclidean distance does not take into account whether sgoal can be reached from snew based on the dynamics of the system. Secondly, if the reachable set is only a small subset of the state space, then points on the boundary of the reachable set will frequently be chosen as snear, limiting the coverage of the tree. Lastly, if there is a cost associated with each trajectory, it should be considered when picking which node to expand.
394
Algorithms for Black-Box Safety Validation
To encode reachability in the selection of node neighbors, J. Kim et al. (2005) use an estimate of the time to go from snear to sgoal deï¬ned by
t(snear, sgoal) = ( d(snear, sgoal)/v â if v > 0 otherwise , (28)
where the velocity is
âd(s, sgoal) âs v = max xâX â . s=snear! (29)
requires , ; 7 roma x)
To compute the derivative âd(s, sgoal)/âs exactly requires white-box knowledge of the sys- tem, but it could be estimated with domain knowledge or from simulations (J. Kim et al., 2005).
Another modiï¬cation to neighbor selection is known as history-based selection, which penalizes the selection of nodes that fail to expand (J. Kim et al., 2005). A node fails to expand if it is selected as snear and creates an output snew that is already in the tree. Failure to expand typically occurs when snear is on the boundary of the reachable set. The number of times a node has failed to expand is stored as nf and the distance between nodes is modiï¬ed as
dh(snear, sj) = d(snear, sj) + λnf (30)
for any choice of distance metric d and a user speciï¬ed constant λ > 0. J. Kim et al. (2005) chose λ to balance the contribution of d and nj.
To incorporate a state-dependent cost function c(s) into the RRT algorithm, Karaman and Frazzoli (2011) developed the approach known as RRTâ. In RRTâ, the neighbor selection is done by ï¬nding a set Snear of nodes in an Ç«-radius of sgoal and then selecting the node in Snear that has the lowest cost, i.e.
snear = arg min siâSnear c(si) (31)
where
Snear = {s | d(s, sgoal) < Ç«}. (32)
For the cost function, Dreossi et al. (2015) use the partial robustness of the system speciï¬- cation while Tuncali and Fainekos (2019) use a heuristic cost for autonomous driving.
5.1.3 Other RRT Variants.
In the basic version of RRT, a single tree is maintained and grown until termination, but under some circumstances multiple trees may be used. When searching over a space of static parameters θ, a diï¬erent tree can be grown for each choice of parameter in an approach called rapidly exploring forest of trees (RRFT) (Esposito et al., 2004). In RRFT, any tree that has reached a threshold coverage value or has stopped growing will be terminated, and a new tree (with a new value of θ) is added to the forest. This process continues until a counterexample is found or until the parameter space has been adequately covered with fully grown trees. Another example of maintaining multiple trees is the approach of Koschi
395
Corso, Moss, Koren, Lee, Kochenderfer
et al. (2019) where a ï¬xed number of nodes K are added at each iteration, including the ï¬rst (so K trees are maintained). Each new node is added to the tree that has the closest node. Trees that are not being grown can be removed for memory eï¬ciency (Koschi et al., 2019).
Tuncali and Fainekos (2019) combined stochastic global optimization techniques with RRT by including a transition test (similar to simulated annealing) and a similarity test for the addition of new nodes. New nodes are only added if they pass the transition test (based on their cost function) and if they are suï¬ciently diï¬erent from all of the other nodes in the tree. These two techniques enhance exploration and coverage of the state space.
Koschi et al. (2019) introduced the backwards algorithm for RRT which connects nodes in the tree backward in time without breaking the black-box assumption. The algorithm starts by sampling a state s0 in the failure set. At each iteration, a state is randomly sampled as sgoal, and the nearest neighbor snear in the tree is determined. A disturbance is selected that drives sgoal toward snear (notice this is opposite from the basic algorithm), and a new state snew is generated by simulating f (sgoal, xnew). To maintain continuity in the tree, snew replaces snear and the entire branch connecting snear to s0 is re-simulated with the same disturbances. If the resulting state (the new s0) is no longer in the failure set, then the branch is terminated and the algorithm repeats. This approach, although more computationally expensive, showed improved performance for ï¬nding rare failure events of an adaptive cruise control system (Koschi et al., 2019). Note, however, that unlike the basic RRT algorithm, this approach requires a simulator that can be initialized based only on the observed state of the environment.
# 5.2 Multiple Shooting Methods
Multiple shooting methods (Bock & Plitt, 1984) solve non-linear initial value problems and have been used for robotic motion planning (Diehl et al., 2006) and falsiï¬cation (Zutshi et al., 2014; Zutshi et al., 2013). The idea is to sample trajectory segments using random initial states and random disturbances. A shortest path problem is solved to ï¬nd a candidate trajectory through these segments that connects initial states and failure states, where two segments are connected if the terminal state of one segment is suï¬ciently close to the starting state of another. The discovered path may not be feasible due to the gaps between segments, so an optimization routine is used to ï¬nd disturbances that minimize those gaps and ï¬nd an actual failure trajectory. In most multiple-shooting algorithms, the procedure to connect trajectory segments involves using white-box information such as dynamical equations or gradients (Zutshi et al., 2013), and therefore multiple-shooting algorithms are not in general applicable to the black-box setting.
Zutshi et al. (2014) introduces two ideas that make multiple shooting methods applicable for black-box falsiï¬cation in large state spaces (Zutshi et al., 2014):
1. The state space should only be explored around the reachable set.
2. State space discretization and reï¬nement can be used to connect segments.
To achieve this, the state space is implicitly discretized into disjoint cells C, each of size δ. The cells are not explicitly stored, but they are constructed so it is eï¬cient to ï¬nd which cell contains a given state. One eï¬cient representation is a Cartesian grid with a ï¬xed cell
396
Algorithms for Black-Box Safety Validation
size. The reachable set is estimated by running simulations forward in time with random disturbances and recording which cells are connected together. The cell connections form a graph that can be searched for candidate falsifying trajectories. Cells in these trajectories are reï¬ned until an actual counterexample is found. Since the algorithm relies on a graph search over a discretized grid, it may be useful to provide a set of initial states S0 to have more than one starting cell.
Algorithm 2 Black-box multiple shooting method. 1: function MultipleShooting(S0, Sfail, δ, γ, Nmax, K) 2: T â â
loop 3: 4: 5: 6: G â SampleSegments(S0, T, δ, Nmax, K) T â FindCandidateTrajectories(G, S0, Sfail) if T = â
7: return â
8: if ActualTrajectories(T ) 6= â
9: return ActualTrajectories(T ) δ â γδ 10: 11: function SampleSegments(S0, T , δ, Nmax, K) 12: G â â
Q â SampleCells(S0, δ) while Q 6= â
and |G| < Nmax 13: 14: 15: C â Pop(Q) Sample {s1, . . . sK} uniformly from C Sample {x1, . . . , xK} from p(x) for i â 1 : K 16: 17: 18: 19: 20: 21: snew = f (si, xi) Cnew â FindCell(snew, δ) if Cnew â T or T = â
22: G â G ⪠(C, Cnew, xi)
# return G
It takes as input a set of initial states S0 and a set of failure states Sfail, a discrete cell size δ, a reï¬nement factor γ, a maximum number of segments per iteration Nmax, and the number of samples per cell K. A set of candidate trajectories T is initialized to the empty set (line 2). On each iteration, a graph G is constructed with edges (Ci, Cj, xij), where xij is the disturbance that transformed the system from a state si â Ci to a state sj â Cj (line 4). An initially empty graph is constructed as follows:
⢠Some starting cells are sampled by sampling states in the initial set S0, ï¬nding the corresponding cell, and adding the cell to a queue Q (line 13).
⢠On each iteration, a cell C is popped oï¬ the queue and K states are sampled uniformly at random from within C (line 16). Then, a disturbance is randomly sampled for each state (line 17).
397
Corso, Moss, Koren, Lee, Kochenderfer
⢠Each sampled state si is simulated forward one timestep based on the corresponding random disturbance xi (line 19), resulting in a new state snew in cell Cnew (line 20).
⢠To ensure the search focuses on promising regions of the state space, the edge (C, Cnew, xi) is only added to the graph if Cnew is in the set of candidate trajectories T (line 21). On the ï¬rst iteration, when the set of candidate trajectories is empty, all edges are added.
Once the graph is constructed, it is searched for candidate trajectories that connect cells in the initial set to cells in the failure set (line 5). If no candidate trajectories are found, then the algorithm failed. Each candidate trajectory in T is simulated from its starting state using the disturbance applied at each segment to get an actual trajectory (line 8). The actual trajectory will not match that candidate trajectory, since the candidate trajectory is made up of disjoint segments. If the actual trajectory is a counterexample, then return it, otherwise, reduce the grid size by a factor of γ (line 10) and repeat the procedure.
# 5.3 Las Vegas Tree Search
Las Vegas tree search (LVTS) (Ernst, Sedwards, et al., 2019) is a tree-based falsiï¬cation algorithm based on two ideas:
1. The disturbance trajectory x should only be discretized in time as ï¬nely as it needs to be.
2. The system is more likely to be falsiï¬ed by extreme values of the disturbance space than less extreme values.
To achieve the ï¬rst goal, the disturbance trajectory is constructed piecewise from distur- bances of diï¬erent durations. Longer duration disturbances can be implemented as a repeti- tion of a disturbance. Let the notation x[â] indicate that x is applied â times. In LVTS, the set of possible disturbances at each step is discretized and represented by the set Y and the probability of selecting disturbances x[â] is P (x[â]). To achieve the second goal, P is deï¬ned to favor disturbances with more extreme. LVTS (algorithm 3) takes as input Y , P , cost function c, and safety threshold Ç«.
The algorithm proceeds by growing a tree of disturbance trajectories x. Each unï¬nished disturbance trajectory x has associated with it a set of explored disturbances (initialized to the empty set in line 2) and a set of unexplored disturbances (initialized to the set of all possible segments Y in line 3). For each iteration of the algorithm, a trajectory is stochas- tically expanded by sampling a new disturbance from P (line 7) and then concatenating it to the current trajectory (line 8). If the new disturbance has not been explored before, it is removed from the unexplored set (line 10) and a lower and upper bound on the cost is computed from the function CostBounds. If the cost function is the robustness, then the bounds can be computed by the method of Dreossi et al. (2015). If the upper bound is lower then the safety threshold, then a counterexample is found and returned (line 13). If the lower bound is greater than the safety threshold, then a counterexample is not possible for this disturbance trajectory and the algorithm restarts (line 15).
398
Algorithms for Black-Box Safety Validation
Algorithm 3 Las Vegas tree search.
1: function LVTS(Y , P , c, Ç«) explored(x) â â
for all x 2: unexplored(x) â Y for all x loop 3: 4: 5: 6: x â â
while unexplored(x) 6= â
or explored(x) 6= â
7: 8: 9: Sample x from P (x) xnew â [x x] if x â unexplored(x) 10: 11: 12: 13: unexplored(x) â unexplored(x) \ {x} c, c â CostBounds(f (xnew)) if c < Ç« return xnew 14: if c > Ç« 15: break 4 16: explored(x) â explored(x) ⪠{x} 17: x â xnew
Disturbance segments are grouped by an integer â â {1, . . . , âmax} that controls the duration of the segment. For duration â, the set Yâ is given by
Yâ = {x[â] | xi = xi,min + αi,j(xi,max â xi,min)}, (33)
where xi is the ith component of the disturbance space and xi,min and xi,max are the minimum and maximum values of that component. The factor
αi,j = (2j + 1)/2bi (34)
for any integer j < (2âmaxââ â 1)/2 and b1 + . . . + bn = 1. The construction of these sets ensures that segments with larger values of â have longer duration and more extreme disturbance values. The full set of segments Y is the union of all the Yâ up to a maximum level of reï¬nement âmax:
# âmax
Y = Yâ (35)
[â=1 The segment sampling distribution P ensures that disturbances with larger values of â and lower costs are chosen more often. The distribution is constructed implicitly by assigning a weight to each level of reï¬nement
wâ = |unexplored(x) â© Yâ| + |explored(x) â© Yâ| 2âmaxââ|Yâ| (36)
âmax k=1 wk. The weight is constructed to favor lower and choosing â with probability wâ/ values of â (due to the exponential factor in the denominator) until the number of unexplored edges goes to zero without ï¬nding many plausible trajectories (note that the explored set
399
Corso, Moss, Koren, Lee, Kochenderfer
only increases if the trajectory remains plausible for falsiï¬cation). Once the reï¬nement level is selected, one of the following four options is selected uniformly at random:
1. Sample x from unexplored(x) ⪠Yl.
2. Sample x from explored(x) ⪠Yl.
3. Choose the x from explored(x) ⪠Yl that minimizes the cost of [x x].
4. Choose the x from explored(x) ⪠Yl that minimizes the cost of [x x xâ²] for all xâ² â explored([x x]).
Option 1 ensures exploration over the unexplored set, while options 2â4 increasingly exploit knowledge of trajectories with low costs.
# 6. Reinforcement Learning
The algorithms presented in this section are based on techniques from reinforcement learning. We present variants of Monte Carlo tree search and deep reinforcement learning.
# 6.1 Monte Carlo Tree Search
Monte Carlo tree search (MCTS) is an online planning algorithm for sequential decision making that has seen success for long time-horizon problems (Silver et al., 2016), making it useful for safety validation (Delmas et al., 2019; Julian et al., 2020; Lee et al., 2020; Moss et al., 2020; Wicker et al., 2018; Zhang et al., 2018). Monte Carlo tree search uses online planning to determine the best actions to take from a starting state to maximize reward. A search tree is maintained over all of the paths tried as well as an estimate of the value function at each step. Each iteration in MCTS consists of four steps:
1. Selection: Starting from the root of the search tree, disturbances are selected sequen- tially until reaching a leaf node, with the goal of choosing disturbances that are more likely to lead to high reward.
2. Expansion: Once the algorithm arrives at a leaf node, a new disturbance is chosen, and a node is added to the tree with zero visits.
3. Rollout: The value of the new node is estimated by running a simulation from the new node while choosing disturbances according to a rollout policy until the episode terminates or the ï¬nite planning horizon is reached.
4. Backpropagation: The value of the new node is used to update the value of all of its ancestors in the tree.
These four steps are repeated until a stopping criterion is met such as computational budget, wall-clock time limit, or a threshold for the reward of the best solution found so far. At that point, the best trace can be returned, or, for longer time-horizon problems, the best candidate disturbance is selected and the process restarts, possibly retaining the subtree associated with the selected disturbance.
400
Algorithms for Black-Box Safety Validation
In problems where there is a discrete state or observation space, it makes sense to main- tain separate state and disturbance nodes so if a state is repeated, the corresponding node in the tree can be reused, increasing the ability of the algorithm to reuse prior information. In most safety validation problems, however, the state space of the simulator will be continuous (or unavailable to the algorithm entirely), in which case the same state will rarely be sam- pled twice. To save memory, it is common to only include disturbance nodes in the search tree, which is equivalent to setting the state equal to the concatenation of all disturbances that led to it (st = x1:t).
The discrete nature of nodes in the search tree are incompatible with a continuous disturbance space X. Delmas et al. (2019) discretize the disturbance space into a small number of discrete disturbances that are representative of the continuous space. As an alternative to discretization, when adding a new node to the tree, Lee et al. (2020) sample a new disturbance xt from a known distribution over disturbances by selecting a random seed uniformly at random and using it to produce disturbances from p(x).
Zhang et al. (2018) combined MCTS with global optimization, using MCTS for explo- ration of the disturbance trajectory space and global optimization for reï¬nement of the disturbance trajectories. The disturbance space is ï¬rst discretized into L1 à . . . à Ln equal- sized regions. Each node in the tree represents one of the regions in the disturbance space denoted Bt. When a new node is added to the tree at depth d, its value is estimated by solving the following constrained optimization problem:
E maxx t X (37)
# γtR(st, xt) # xd â Bd.
" s.t. x1 â B1,
. . . ,
This approach can be combined with progressive widening (Chaslot et al., 2008; Coulom, Instead of randomly sampling a new region, 2007) when the disturbance space is large. optimization can be used to ï¬nd the optimal region to expand. When adding a new dis- turbance at depth d + 1, solve eq. (37) with the added constraint that xd+1 exists in the set of regions that have yet to be expanded. For ease of optimization, that subset may be further restricted to its convex subset (Zhang et al., 2018). Once the MCTS budget is spent, traces with high reward can be found by solving the constrained optimization problem of the highest performing branch of the tree.
The use of optimization for each new node of the search tree increases the computa- tional cost of the algorithm. Zhang et al. (2018) note that balancing the computational budget between optimization iterations and tree expansions is critical to the success of this approach. They compare simulated annealing (SA) (Abbas et al., 2013), Globalized Nelder- Mead (GNM) (Luersen & Le Riche, 2004), and covariance matrix adaptation evolution strategy (CMA-ES) (Hansen & Ostermeier, 1996) for the global optimization algorithms. It is noted that since CMA-ES has a built-in exploration strategy, it sees less improve- ment when combined for MCTS than SA and GNM. GNM, on the other hand, does not have an exploration strategy (beyond probabilistic restarts) and therefore sees a signiï¬cant improvement with MCTS.
401
Corso, Moss, Koren, Lee, Kochenderfer
# 6.2 Deep Reinforcement Learning
Deep reinforcement learning (DRL) is a category of reinforcement learning that uses deep neural networks to represent the value function V (s), the state-action value function Q(s, x), or the policy Ï(s). DRL has shown state-of-the-art results in playing Atari games (Mnih et al., 2015), playing chess (Silver et al., 2017), and robot manipulation from camera input (Gu et al., 2017). In recent years, diï¬erent DRL techniques have been applied to falsiï¬cation and most-likely failure analysis (Akazaki et al., 2018; Behzadan & Munir, 2019; Corso et al., 2019; Koren et al., 2018; Koren & Kochenderfer, 2019, 2020; Kuutti et al., 2020; Qin et al., 2019).
DRL algorithms are broadly split between value-function approaches, where the neural network is used to represent the value function, and policy search approaches, where the neural network represents the policy. There are advantages and disadvantages to both, and several algorithms are discussed below. The reader is referred to the original papers for implementation details.
If the disturbance space X is discrete, then the deep Q-network (DQN) (Mnih et al., 2015) algorithm can be used for falsiï¬cation (Akazaki et al., 2018; Qin et al., 2019). In DQN, the optimal Q-function is estimated by a neural network that takes as input the state s and outputs a value for each discrete disturbance. Disturbances are selected greedily as
x = arg max Q(s, x) x (38)
with probability 1 â Ç« and chosen at random with probability Ç« to encourage exploration. The parameters of the Q-network are updated to minimize the mean squared error between the current value and a target
xâ² Q(sâ², xâ²). Qtarget(s, x) = r(s, x) + γ max (39)
Since the target value greedily selects the highest value disturbance, the Q-network can be trained on any sample of a state, disturbance, and reward (a feature known as oï¬-policy learning). Many implementations of DQN therefore maintain a replay buï¬er that stores previous (s, x, r, sâ²) tuples which are repeatedly used to update the network parameters. This reuse of data makes DQN a relatively sample-eï¬cient algorithm. The main drawback of DQN is that it is incompatible with large or continuous disturbance spaces.
For large or continuous disturbance spaces, the policy itself is represented by a neural network (with parameters θ) that takes the state as input and either outputs a disturbance directly (e.g. x = Ïθ(s)) or outputs parameters of a distribution from which a disturbance for a normal distribution [µ, Ï2] = Ïθ(s) and x â¼ N (µ, Ï2)). The can be sampled (e.g. policy is optimized to produce higher rewards using the policy gradient method (Sutton et al., 2000). Policy gradient methods can suï¬er from high variance and can be unstable during optimization. To improve optimization stability, an approach known as trust region policy optimization (TRPO) (Schulman et al., 2015) restricts the amount a policy can change at each step. TRPO has previously been used for falsiï¬cation of autonomous vehicles (Corso et al., 2019; Koren et al., 2018; Koren & Kochenderfer, 2019).
# Another
Another drawback of policy gradient methods is their inability to learn oï¬-policy. With- out data reuse, these methods can require a large number of simulations to converge. Newer
402
Algorithms for Black-Box Safety Validation
approaches combine policy gradient methods with value function methods to create the actor-critic paradigm, which can perform well on problems with continuous disturbance spaces while also using previous simulation data to improve sample eï¬ciency. Actor-critic methods use two neural networks, one for the policy (the actor network) and one for the value function (the critic network) and come in several varieties. Advantage actor critic (A2C) was used for falsiï¬cation by Kuutti et al. (2020). Its more scalable counterpart, asynchronous advantage actor critic (A3C), was used by Akazaki et al. (2018). Behzadan and Munir (2019) use another actor-critic method known as deep deterministic policy gra- dient (DDPG) combined with Ornstein-Uhlenbeck exploration (Lillicrap et al., 2016).
One potential drawback of using DRL for black-box falsiï¬cation is the requirement for a simulator that can be observed after each disturbance is applied. In practice, high-ï¬delity simulators may be large, complex software projects, so it may be diï¬cult to access the true simulator state at each timestep, whereas getting the results or only some partial data on the ï¬nal state may be easier. Koren and Kochenderfer (2019) developed a DRL technique (also used by Corso et al. (2019)) that does not require access to the simulator state. The technique uses a recurrent neural network (RNN) with long-short term memory (LSTM) layers as the policy (Hochreiter & Schmidhuber, 1997). The RNN maintains a hidden state akin to the state of the simulator that is used to make future decisions. The input to each layer is the previous disturbance and the initial state of the simulation, allowing the approach to generalize across initial conditions.
The advantages of DRL and tree search methods can be combined in an approach called go-explore (GE) (Ecoï¬et et al., 2019), which has been eï¬ective for hard-exploration problems with long time horizons and no reward shaping. GE has two phases, a tree search exploration phase, and a DRL robustiï¬cation phase. While the original version of GE uses the state of the simulator when building the tree and training the robust policy, Koren and Kochenderfer (2020) modiï¬ed the algorithm to use the history of disturbances instead, reducing the access requirements of the simulator.
# 7. Importance Sampling Algorithms
This section summaries algorithms for estimating the probability of failure based on im- portance sampling. Algorithms include the cross-entropy method, multilevel splitting, classiï¬cation-based importance sampling, and state-dependent importance sampling.
# 7.1 Cross-Entropy Method
The cross-entropy method iteratively learns the optimal importance sampling distribution from a family of distributions q(x; θ) parameterized by θ (see De Boer et al. (2005) for an overview and Rubinstein and Kroese (2013) for a deeper examination). The optimal distribution parameters θâ are found by minimizing the KL-divergence between a proposal distribution q(x; θ) and the optimal distribution qâ(x), i.e.
θâ = arg min DKL(qâ(x) k q(x; θ)), θ (40)
403
Corso, Moss, Koren, Lee, Kochenderfer
where DKL calculates the KL-divergence. Equation (40) can be cast as a stochastic opti- mization problem as
# N
oo LS ; p(xi) a~ arg max 5; » [Hees < eae log q(xi39)| , (41)
where Ï is any set of parameters and xi are sampled from q(x; Ï). Equation (41) can be solved analytically when the family of algorithms is in the natural exponential family (i.e. normal, exponential, Poisson, gamma, binomial, and others), and the solution corresponds to the maximum likelihood estimate of the parameters (De Boer et al., 2005).
For an iterative solution to ï¬nding θâ, the initial parameters θ0 are chosen so that q(x; θ0) is close to p(x). Then, for k = 0, 1, . . .
1. Set Ï = θk. 2. Draw samples {x1, . . . , xN } from q(x; Ï).
3. Solve eq. (41) for θk+1.
One major challenge to this approach is the rarity of failure events. If all samples have c(x) > Ç«, then ËPfail = 0 and the algorithm may not converge to the optimal proposal distribution. One solution is to adaptively update the safety threshold Ç« at each iteration. At iteration k, a safety threshold Ç«k and a rarity parameter Ï is chosen so that the fraction of samples that have c(x) < Ç«k is Ï. The parameter Ï is also known as the quantile level and is often set in the range Ï = [0.01, 0.2] (Y. Kim & Kochenderfer, 2016; OâKelly et al., 2018). The cross-entropy method has been used to estimate the probability of failure for aircraft collision avoidance systems (Y. Kim & Kochenderfer, 2016) and autonomous vehicles (Z. Huang et al., 2017; OâKelly et al., 2018; D. Zhao et al., 2016). Typically, probability distri- butions in the natural exponential family are used (Y. Kim & Kochenderfer, 2016; OâKelly et al., 2018; D. Zhao et al., 2016) so that cross-entropy updates can be performed analyti- cally. Z. Huang et al. (2017) propose a method for using piecewise exponential distributions for more ï¬exibility while retaining the ability to compute updates analytically. Sankara- narayanan and Fainekos (2012) discuss piecewise uniform distributions over the disturbance space, and techniques for factoring the space to reduce the number of parameters needed.
# 7.2 Multilevel Splitting
Multilevel splitting (Kahn & Harris, 1951) is a non-parametric approach to estimating the optimal importance sampling distribution. Rather than relying on an explicit probability distribution (as in the cross-entropy method), multilevel splitting relies on Markov chain Monte Carlo (MCMC) estimation and scales better to larger dimensions (Botev & Kroese, 2008). It has been applied to the estimation of probability of failure of autonomous driving policies with a large number of parameters (Norden et al., 2019).
The idea of multilevel splitting is to deï¬ne a set of threshold levels â = Ç«0 > Ç«1 > . . . > Ç«K = Ç« and assume that the probability of failure can be computed as a Markov chain of the form
# K
P (c(x) < Ç«) = P (c(x) < Ç«k | c(x) < Ç«kâ1). (42)
# Yk=1
404
Algorithms for Black-Box Safety Validation
Given enough levels (i.e. a large enough K), each conditional probability
Pk = P (c(x) < Ç«k | c(x) < Ç«kâ1) (43)
is much larger than P (c(x) < Ç«) and can therefore be computed eï¬ciently with basic Monte Carlo methods. Algorithm 4 outlines the algorithm. At iteration k, N samples of x are sorted by their cost function (line 8) and used to estimate the conditional probability Pk (line 10). The total probability is updated based on the Markov assumption (line 11). All samples with c(x) > Ç«k are discarded, and the remaining samples are resampled to get back up to N samples (line 12). Those samples perform a random walk of M steps using a kernel T (xâ² | x) (line 13). The process repeats until the level reaches the true safety threshold. The choice of levels Ç«k can be done adaptively (Cérou & Guyader, 2007) by selecting a rarity parameter Ï such that ÏN samples are kept at each iteration (line 9).
# Algorithm 4 Adaptive multilevel splitting.
1: function MulitlevelSplitting(p, N , M , T , c, Ç«) 2: Pfail â 1 k â 1 Ç«0 = â Sample {x1, . . . , xN } from p(x) while Ç«k > Ç« k â k + 1 Sort {x1, . . . , xN } by c(xi) Ç«k â max(c(xÏN ), Ç«) N Pk â 1 i=1 N Pfail â PfailPk Resample {x1, . . . , xN } from {x1, . . . , xÏN } Random walk M steps using T (xâ² | x) for each {x1, . . . , xN } 3: 4: 5: 6: 7: 8: 9: 1{c(xi) < Ç«k} 10: 11: P 12: 13: 14: return Pfail
# 7.3 Classiï¬cation Methods
Supervised learning can be used to classify disturbances as safe or unsafe. That classiï¬cation can then be combined with importance sampling to estimate the probability of failure of the system. Supervised learning often requires observing more than one example of a failure, so these approaches are most applicable when failures can be found relatively easily. One approach is to use a space-ï¬lling sampling plan for the disturbance (Z. Huang et al., 2018) to ensure good coverage of the disturbance space, but this fails to scale to high dimensions. Another approach is to use previous versions of the system that are far less safe (Uesato et al., 2019). Known as a continuation approach, this technique is well suited to black-boxes that have learned behavior. During the learning process, the system will fail more easily (but in related ways) to the version that is ultimately being tested. Therefore, earlier versions of the system can be used for classiï¬cation of disturbances.
One way to combine supervised learning with importance sampling was explored by Z. Huang et al. (2018). They built a proposal distribution centered on the boundary between
405
Corso, Moss, Koren, Lee, Kochenderfer
safe and unsafe disturbances. If q(x; θ) is a proposal distribution with parameters θ, then they search for the point xâ in the set of failures that maximizes the probability of the proposal distribution:
xâ = arg max x 1{c(x) < Ç«}q(x; θ) (44)
Then, they adjust the parameters of the proposal distribution such that the mean is located at xâ. This approach has shown to construct an eï¬cient proposal distribution when the unsafe set is convex (Sadowsky & Bucklew, 1990). In practice, it is unlikely that the set is convex, but a linear boundary between safe and unsafe examples can be constructed using the support vector machine (SVM) algorithm (Suykens & Vandewalle, 1999) if the disturbance space is lifted to a higher dimension through a mapping Ï(x).
Once a boundary is found, the choice of distribution q determines the feasibility of computing xâ. One approach is to represent q by a mixture of K Gaussians with weights α:
K qGMM(x; µi, Ïi) = αiN (x; µi, Ïi) (45)
i=1 X i is determined separately and the proposal
The optimal mean for each Gaussian model xâ distribution is reconstructed by
K GMM(x; xâ qâ i , Ïi) = αiN (x; xâ i , Ïi). (46)
# i=1 X
Note that a Gaussian mixture model (GMM) can be obtained in the lifted space ËqGMM(Ï(x)) and then reduced to the true disturbance space by integrating over the extra dimensions. The approach is outlined in algorithm 5.
# Algorithm 5 Classiï¬cation-based importance sampling.
1: function ClassificationImportanceSampling(p, M , Ï, c, Ç«) 2: Sample {x1, . . . , xM } from p(x) and ï¬t a GMM ËqGMM(Ï(x); µi, Ïi) Sample {x1, . . . , xN } and compute {1{c(x1) < Ç«}, . . . , 1{c(xN ) < Ç«}} Lift the disturbances by computing {Ï(x1), . . . , Ï(xN )} Apply SVM on pairs {(Ï(x1), 1{c(x1) < Ç«}), . . . , (Ï(xN ), 1{c(xN ) < Ç«})} Determine the optimal points Ïâ i Construct Ëqâ Marginalize to qâ(x) return EstimateProbability(p, qâ) 3: 4: 5: 6: GMM(Ï(x); Ïâ i , Ïi) 7: 8: 9:
An alternative approach by Uesato et al. (2019) uses supervised learning to estimate the probability of failure ËPfail(x) for a disturbance x, and therefore does not make the assumption that failures are deterministic given x. The function ËPfail(x) can be represented using a neural network or some other model and is trained on failure examples which may come from weaker versions of the system. The optimal importance sampling distribution was proven to be
qâ(x) = ËPfail(x)p(x) q Ep ËPfail(x) , (47)
406
Algorithms for Black-Box Safety Validation
which can be sampled from using rejection sampling.
# 7.4 State-Dependent Importance Sampling
While most importance sampling approaches focus on the entire space of disturbance trajec- tories, a sequential decision making framework can be used to ï¬nd the optimal importance sampling policy q(x | s) for a simulation state s (Chryssanthacopoulos et al., 2010; Corso et al., 2020). For a Markovian system and simulator, the optimal importance sampling policy is given by
q(x | s) = p(x | s)Pfail(sâ²) Pfail(s) , (48)
where sâ² is deterministically reached after disturbance x is applied in state s. The probability of failure Pfail(s) can be estimated through the approximate solution of the Bellman equation
1 Pfail(s) = 0 a p(x | s)Pfail(sâ²) if s â Sfail if s /â Sfail, s â Sterm otherwise, (49)
# P
where Sterm is the set of terminal states that are not failure states. Local approximation dynamic programming and Monte Carlo policy evaluations are two successful approaches for solving eq. (49) (Corso et al., 2020).
# 8. Problem Decomposition Techniques
One of the most signiï¬cant challenges to overcome for the safety validation of autonomous systems is that algorithms often scale poorly to large disturbance spaces and state spaces. A common approach to dealing with scalability is the use of decomposition approaches to simplify a larger problem into more tractable subproblems. For a survey of decomposition approaches in MDPs see the survey of Daoui et al. (2010). Decomposition approaches that rely on the factorization of the transition function (Guestrin et al., 2003) are not applicable due to the black-box assumption, but techniques that generalize over the state and action spaces could be applied to safety validation. This section will present two approaches that have been used for decomposing the falsiï¬cation problem into more manageable subproblems to accelerate ï¬nding counterexamples.
# 8.1 State Space Decomposition
The approach presented by Corso et al. (2020) involves decomposing the simulator state and disturbance spaces into independent components, and ï¬nding failures for each com- ponent. Originally done in the context of multiple distinct actors in a simulated driving environment, this approach can be used with any simulation that has components that can be simulated individually. To separate the state space into M diï¬erent components, deï¬ne a decomposition operator D such that
{s(1), . . . , s(M )} = D(s) (50)
407
Corso, Moss, Koren, Lee, Kochenderfer
and the disturbance space X (i) associated with the state s(i) is smaller than the full dis- |X (i)| < |X|). For each subproblem, solve for a policy that ï¬nds turbance space (i.e. counterexamples
x(i) = Ï(i)(s(i)) (51)
and then combine the policies with a fusion function F such that
x = F (Ï(1), . . . , Ï(M ))(s). (52)
An approach for solving the subproblems that is amenable to policy fusion is approximate dynamic programming (see section 7.4). In that case, the subproblem policy is deï¬ned by computing the probability of failure for each state component P (i) fail. The fusion function can then apply simple arithmetic operations (like mean, max, or min) to arrive at a joint policy
ËPfail(s) = F (53)
1 1 (PAY), | 5) Prai(sâ)
# 1 m PA)
p(x | s)Pfail(sâ²) Pfail(s) x = , (54)
where sâ² is the deterministic state reached after applying disturbance x at state s.
If the fusion function is simple, then it may not capture the joint interactions between multiple disturbance components (Corso et al., 2020). To address this problem, a global correction factor can be learned from the full simulation. The estimate probability of failure is given by
Pfail(s) â ËPfail(s) + δPθ(s). The correction factor δPθ is trained using rollouts {s1, . . . , sN } from the full simulation and minimizing the diï¬erence between the estimate ËPθ(si) and the actual discounted return G(si) where
G(si) = 1{sN â Sfail} N t=i p(xt | stâ1) Ï(xt | stâ1) . (56)
# Q
This approach has been shown to increase the number of failures found in a complex driving environment with a large disturbance space (Corso et al., 2020).
# 8.2 Compositional Approach with Machine Learning Components
The approach of Dreossi et al. (2019) is applicable when the system contains a machine- learned component that performs classiï¬cation (such as a neural network based perception system). Knowing something about the structure of the system makes this approach gray- box, but due to the prevalence of ML components, it is still widely applicable.
The algorithm is performed with the following steps:
⢠The ML component of interest f (x) is partitioned from the rest of the system and environment. The ML component, which has a large disturbance space, is replaced with an idealized abstraction with a much smaller input space.
⢠The simulator with the idealized ML component is separated into two versions: 1) f +(x) where the ML component behaves as well as possible (i.e. classifying all inputs correctly), and 2) f â(x) where the ML component behaves poorly.
408
Algorithms for Black-Box Safety Validation
⢠For each version of the simulator, a traditional falsiï¬cation algorithm is used to par- tition the disturbance space into the regions that satisfy the speciï¬cation safe(x) and regions of failures fail(x). The notation safe(x)± indicates the safe disturbance set for f ±(x). Similar notation is used for the failure set.
⢠To ï¬nd counterexamples that were caused by the ML component, ï¬nd falsifying exam- ples in the set safe(x)+ \ safe(x)â. The set safe(x)â is removed because no failure is possible in this region even with the ML component functioning as poorly as possible. This part of the disturbance space is referred to as the region of interest.
⢠Finally, use a separate analyzer (possibly white-box) to identify the high-dimensional inputs that lead to failures (i.e. misclassiï¬cations) in the region of interest.
This approach has been able to ï¬nd counterexamples in a neural network perception system used by an autonomous vehicle (Dreossi et al., 2019). A similar approach by Julian et al. (2020) uses a white-box neural network analyzer combined with Monte Carlo tree search to ï¬nd failure trajectories of an image-based control system.
# 9. Applications
Autonomous cars and aircraft are two major application domains of black-box safety vali- dation methods. While there are a variety of scenarios within these application domains, a common underlying principle is that of miss distance as a reward heuristic. For most scenar- ios, it is possible to use some sort of distance metric to measure how close the system came to a failure during the scenario. This distance is then used to allow optimization to ï¬nd counterexamples. The autonomous vehicles and aircraft application domains are covered in more detail below.
# 9.1 Autonomous Driving
Within the ï¬eld of autonomous driving, there are multiple scenarios that are commonly used for testing:
Lane-Following. Lane-following is one of the most common examples in the autonomous vehicle ï¬eld, and has been used to test systems ranging from full-stack systems (OâKelly et al., 2018) to systems with ideal perception (Behzadan & Munir, 2019) to systems that are not fully autonomous like advanced driver-assistance systems (ADAS) (Koschi et al., 2019). The system under test tries to maintain a desired speed in the current lane and may (Tuncali & Fainekos, 2019) or may not (Kuutti et al., 2020) have the ability to change lanes. The goal is often to ï¬nd collisions, most commonly rear-end collisions. Scenario variations include both highway driving (Norden et al., 2019) and local road driving (Tuncali et al., 2019).
Intersection Scenarios. In an intersection scenario, the system under test approaches a stoplight (Tuncali et al., 2019), stop sign (Abeysirigoonawardena et al., 2019), crosswalk (Ko- ren et al., 2018), or other form of intersection and must proceed through without a failure. Failures can include collisions with pedestrians (Koren & Kochenderfer, 2020) or other vehi- cles (Tuncali et al., 2019). Failures may also include violations of traï¬c laws (Kress-Gazit &
409
Corso, Moss, Koren, Lee, Kochenderfer
Pappas, 2008) or other rule-sets, such as those designed to prevent at-fault collisions (Hek- matnejad et al., 2020).
Lane Change Scenarios. Lane change scenarios can involve the test vehicle initiating a lane change or reacting to one (Qin et al., 2019; D. Zhao et al., 2016). Existing work has considered ADAS and other driver aid systems (Z. Huang et al., 2018). Failure modes include rear-end collisions as well as side collisions caused by turning into an occupied lane.
Platooning Vehicles. Hu et al. (2000) present a platooning scenario, where a number of cars are following a lead car while keeping a speciï¬c trailing distance. The vehicles are subjected to various stochastic disturbances arising from road conditions, wind conditions, or the presence of human operators. The goal of the system is to minimize the time that platoon vehicles spend in âchasingâ mode, where they attempt to catch up to the vehicle ahead. The system fails when any of the vehicles gets too far from or too close to the leading vehicle.
Within these applications, researchers have developed domain speciï¬c techniques to im- prove performance. The most common approach is to use miss distance, the physical distance between the test vehicle and other vehicles in the simulation, as a reward heuristic to allow easier optimization (Koren & Kochenderfer, 2020). Miss distance is also the underlying prin- ciple for robust semantics of temporal formulas when applied to autonomous vehicles (Zhang et al., 2018). Some work has been done on identifying situations where a collision is immi- nent instead of the collision itself, sometimes called âunsafe statesâ (Koschi et al., 2019) or âboundary casesâ (Tuncali & Fainekos, 2019) since these regions may be easier to ï¬nd.
# 9.2 Autonomous Flying and Aircraft Collision Avoidance
Black-box validation techniques have also been applied to aircraft in multiple ways:
Flight Control Software. Delmas et al. (2019) present an application where the con- troller must keep an aircraft in steady ï¬ight in response to disturbances such as wind or pilot inputs. Failures include reduced ï¬ight quality, autopilot disengagements, and overshoots of expert-deï¬ned thresholds. A second application is presented by Ernst, Arcaini, et al. (2019), where an F-16 controller performs automatic maneuvers to avoid ground collisions. Valida- tion algorithms search for violations of a minimum altitude from various initial conditions. Julian et al. (2020) perform safety validation on a neural network controller with camera inputs for aircraft taxiing.
Collision Avoidance. There are several examples of applications to collision avoidance problems for aircraft. Lee et al. (2020) validate the next-generation aircraft collision avoid- ance system (ACAS X), which makes recommendations to pilots to avoid mid-air collisions. The system may or may not coordinate its recommendations with the other aircraft. A fail- ure occurs when the aircraft pass too close to one another, known as a near mid-air collision (NMAC). Esposito et al. (2004) validate a control system that guides a group of planes from an initial location to a ï¬nal destination in the presence of stochastic wind disturbances. A failure in this case is a collision between any two aircraft.
Flight Path-Planning. Systems are given a mission, which is a series of waypoints that the system must navigate (Moss et al., 2020; Tuncali et al., 2018), or a target location
410
Algorithms for Black-Box Safety Validation
and keep-out zones along the path (Lee et al., 2019). Falsifying trajectories may be diï¬erent mission parameters or disturbances during mission execution, such as wind and sensor noise. Failures occur when the system enters areas deï¬ned as oï¬-limits, collides with an obstacle, or produces a software error. X. Yang et al. (2020) validate scheduling and trajectory planning for urban air mobility and package delivery systems.
# 9.3 General Systems
Black-box safety validation has also been applied to systems that are not tied speciï¬c ap- plications such as hybrid systems, neural network controllers and planning algorithms.
Hybrid Systems. Algorithms for black-box safety validation have also been applied to a number of hybrid systems outside of autonomous driving and autonomous aircraft. Hoxha et al. (2015) present an automatic transmission system model, which is widely used as a benchmark problem in the literature. The system under test selects a gear based on throttle and brake inputs as well as state information such as current engine load and car speed. Failure events include violations of speed thresholds and changing gears too often. Jin et al. (2014) present another widely used benchmark problem from the automotive ï¬eld. The system is an abstract fuel controller for an automotive powertrain. The controller receives inputs such as fuel-ï¬ow measurements and throttle and must output a fuel command. Failures may be either steady-state, such as the violation of an air-to-fuel ratio, or transient, such as pulses that violate a settling-time constraint.
Non-automotive systems have also been considered as well. J. Kim et al. (2005) analyze a thermostat model under various environmental conditions and test whether the heater will be active for longer than a certain proportion of time. Schuler et al. (2017) present a simpliï¬ed model of a wind turbine, which attempts to generate as much power as possible based on the current wind speed. Failures are violations of safety criteria, which include violations of thresholds on tower base moment and rotor speed.
Controllers. Systems with neural network controllers have also become common case studies for validation techniques. Yaghoubi and Fainekos (2019) test a steam condenser controlled by a recurrent neural network (RNN). The system modulates steam ï¬ow rate based on energy balance and cooling water mass balance. A failure occurs when the pressure falls outside the acceptable range. Yaghoubi and Fainekos (2019) also present a generalized non-linear system controlled by a feed-forward neural network. A failure is deï¬ned as a constraint on the reference signal value. Ernst, Arcaini, et al. (2019) study a neural network controlled magnet levitation system. The controller takes a reference position as input and attempts to move the magnet to track the given reference position. A failure occurs if the magnet does not stabilize to a position close enough to the reference position within some time limit.
Planning Modules. Falsiï¬cation of planning modules is common among the autonomous vehicle and aircraft applications, but there are examples in other domains as well. J. Kim et al. (2005) present a hovercraft navigation application. The validation task tests whether the hovercraft can successfully navigate to some goal region while subjected to stochastic wind disturbances. Similarly, Zhang et al. (2018) validate a free-ï¬oating robot system that must navigate to a desired target location. Fehnker and IvanÄiÄ (2004) present a generalization
411
Corso, Moss, Koren, Lee, Kochenderfer
of this task, where an agent is moving in a discrete 2D environment, sometimes called a gridworld task. In their formulation, there are states that must be reached and states that must be avoided. A violation of either constraint is considered a failure.
# 10. Existing Tools
Implementations of various safety validation algorithms have been made available as tools for others to use. The existing tools range from open-source academic-based toolboxes to closed- source commercial software. A detailed survey that includes falsiï¬cation tools is provided in (Kapinski et al., 2016). Each of the tools described in this section create falsifying distur- bance trajectories to a system given a set of system requirements that should be satisï¬ed. Certain tools also perform most-likely failure analysis and failure probability estimation. Many of the existing tools interface with the MATLAB programming language/environment to stress industry standard Simulink models. The tools surveyed in this section focus on black or gray-box testing of cyber-physical systems and although some of these tools include additional functionality, this section focuses on features of the safety validation components. A brief overview of the existing safety validation tools will be discussed and their beneï¬ts and restrictions will be highlighted.
# 10.1 Academic Tools
Many of the existing safety validation tools are products of academic research and released as experimental prototypes. Although these tools are prototypes, several have matured enough to gain wide-spread usage and acceptance (Diwakaran et al., 2017; Dreossi et al., 2015; Tuncali et al., 2018; Zhang et al., 2019; Zutshi et al., 2014). Two particular falsiï¬cation tools have become benchmark standards in the ï¬eld: S-TaLiRo (G. Fainekos et al., 2019) and Breach (Donzé, 2010). Both are open-sourced MATLAB toolboxes for optimization- based falsiï¬cation. While most of the tools are open-sourced, two other tools referenced in the falsiï¬cation literature are not publicly available but still discussed. Although their respective papers indicate how to recreate their work, none of those tools have code or executables available online. The following collection is organized into optimization-based and reinforcement learning-based tools.
# 10.2 Optimization-based Tools
The tools in this section employ a standard optimization-based technique to search for counterexamples. Section 4 outline the approaches implemented by the following tools.
S-TaLiRo. S-TaLiRo (Systems Temporal Logic Robustness) (Y. Annapureddy et al., 2011; G. Fainekos et al., 2019) is a simulation-based MATLAB toolbox for temporal logic falsiï¬cation of non-linear hybrid systems. S-TaLiRo parameterizes the disturbance space to reformulate the falsiï¬cation problem as an optimization problem. S-TaLiRo instructs the user to specify system requirements in temporal logic formulas and then constructs the optimization cost function to minimize a global robustness metric. Various optimization techniques are included in the S-TaLiRo toolbox, such as simulated annealing (described in section 4.1), genetic algorithms (described in section 4.2), stochastic optimization with adap- tive restarts (described in section 4.3), the cross-entropy method (described in section 7.1),
412
Algorithms for Black-Box Safety Validation
and uniform random sampling. S-TaLiRo is designed to analyze arbitrary MATLAB func- tions or Simulink/Stateï¬ow models. S-TaLiRo is open-source and available under the GNU General Public License (GPL).1
Speciï¬c add-ons to S-TaLiRo have been implemented that extend the core falsiï¬ca- tion functionalities. These add-ons generally provide other solution methods or provide additional simulation environments that interface with S-TaLiRo.
Breach. Breach (Donzé, 2010; Dreossi et al., 2019) is a simulation-based MATLAB tool- box for falsiï¬cation of temporal logic speciï¬cations for hybrid dynamical systems, similar to S-TaLiRo. Breach uses optimization-based techniques including simulated annealing (described in section 4.1), genetic algorithms (described in section 4.2), globalized Nelder- Mead (Luersen & Le Riche, 2004), and CMA-ES (Hansen & Ostermeier, 1996). The user- deï¬ned system requirements are input using temporal logic formulas. These requirements, i.e. speciï¬cations, are used to construct a cost function to be minimized based on a ro- bustness metric. Breach is designed to test arbitrary MATLAB functions and Simulink models and includes a MATLAB graphical user interface (GUI) that gives the user access to the input parameter sets, temporal logic formulas, and trajectory visualizations. Breach is open-source and available under the BSD license.2
RRT-Rex. RRT-Rex (Rapidly-exploring Random Tree Robustness-guided Explorer) (Dreossi et al., 2015) is a MATLAB falsiï¬cation tool that focuses on coverage given a com- putational budget. A Simulink model and user-deï¬ned requirements written in temporal logic are taken as input. RRT path planning algorithms are used to search the disturbance space for falsifying cases, guided by a combined state space coverage metric and a robust- ness satisfaction metric. Section 5 discusses the RRT approach in detail. RRT-Rex is not currently publicly available.
# 10.3 Reinforcement Learning-Based Tools
As a direct replacement to optimization algorithms, reinforcement learning can be used as the central idea behind searching for falsifying trajectories. The following tools imple- ment reinforcement learning algorithms as solvers for the falsiï¬cation problem. Sections 6.1 and 6.2 describe the reinforcement learning algorithms implemented in the tools.
FalStar. FalStar (Zhang et al., 2018) is a prototype Scala tool for falsiï¬cation of cyber-physical systems that interfaces with MATLAB through a Java API. FalStar uses reinforcement learning combined with optimization techniques to generate counterexam- ples. Techniques include Monte Carlo tree search with stochastic optimization as described in section 6.1 and adaptive Las Vegas tree search (aLVTS) as described in section 5.3. Fal- Star requires a Simulink model as input and uses the above techniques to generate coun- terexamples to the temporal logic speciï¬cations. FalStar can also interface directly with the Breach toolbox to use the available solvers implemented in Breach. FalStar is open- source and available under the BSD license.3
1. https://sites.google.com/a/asu.edu/s-taliro 2. https://github.com/decyphir/breach 3. https://github.com/ERATOMMSD/falstar
413
Corso, Moss, Koren, Lee, Kochenderfer
falsify. falsify (Akazaki et al., 2018) is a prototype simulation-based falsiï¬cation tool that uses deep reinforcement learning. Common among the academic tools, falsify can interface directly with MATLAB functions and Simulink models. Implementing a robustness-guided approach, falsify deï¬nes the reward as a convex function of the robustness, as described in section 6. This robustness-guided reward function is used by two deep reinforcement learning algorithms implemented in falsify: asynchronous advantage actor critic (A3C) and double deep-Q network (double DQN), both described in section 6.2. The falsify tool is not currently publicly available.
AST Toolbox. The AST Toolbox (Adaptive Stress Testing Toolbox) (Koren et al., 2018) is a Python toolbox for safety validation, which includes falsiï¬cation and most-likely fail- ure analysis. The AST Toolbox uses reinforcement learning to ï¬nd the most likely failures of black-box systems. Two reinforcement learning techniques used as solvers are included: Monte Carlo tree search (described in section 6.1) and deep reinforcement learning (de- scribed in section 6.2). The AST Toolbox is built on top of two popular reinforcement learning packages, namely, OpenAI Gym (Brockman et al., 2016) and Garage.4 Building oï¬ of these packages gives the user access to widely used reinforcement learning benchmarking problems.
To test a system, users must provide the deï¬nitions for three basic interfacing functions. These interfacing functions allow the tool to interact with the black-box system or simulator. The interface deï¬nes how to initialize the system, how to step the system forward (returning indications of found failures or a real-valued measure of distance to a failure), and ï¬nally a means to determine if the system is in a terminal state. While the implementation of the interface is restricted to Python, the user can call out to existing executables or other languages from within Python. As an example, Python can interface with MATLAB through the MATLAB Engine API for Python.5 The AST Toolbox is open-source and available under the MIT license.6
Two related toolboxes, called AdaptiveStressTesting.jl (Lee et al., 2020) and POMDP- StressTesting.jl (Moss et al., 2020), follow a similar paradigm as the AST Toolbox but are implemented in the Julia programming language (Bezanson et al., 2017). Julia can inter- face directly with many other programming languages,7 and notably, Julia can interface with MATLAB through the MATLAB.jl package.8 AdaptiveStressTesting.jl9 and POMDP- StressTesting.jl10 are open-source and available under the Apache License Version 2.0 and the MIT license, respectively.
4. https://github.com/rlworkgroup/garage 5. https://www.mathworks.com/help/matlab/matlab-engine-for-python.html 6. https://github.com/sisl/AdaptiveStressTestingToolbox 7. https://github.com/JuliaInterop 8. https://github.com/JuliaInterop/MATLAB.jl 9. https://github.com/sisl/AdaptiveStressTesting.jl 10. https://github.com/sisl/POMDPStressTesting.jl
414
Algorithms for Black-Box Safety Validation
# 10.4 Commercial Tools
Certain techniques for safety validation of cyber-physical systems have become available as commercial toolboxes. This section brieï¬y describes these commercially available tools relating to black-box safety validation.11
Reactis. Reactis (Reactive Systems Inc., 2013) is a simulation-based MATLAB tool from Reactive Systems for falsiï¬cation of Simulink models. Reactis has three components: Re- actis Tester, Reactis Simulator, and Reactis Validator. The Reactis Tester controls the generation of falsifying trajectories. It uses a patented technique called guided simulation which uses proprietary algorithms and heuristics to generate trajectories to maximize cov- erage for falsiï¬cation. Then, the Reactis Simulator runs the system under test given the purposed falsifying trajectory. The last component, the Reactis Validator, uses proprietary techniques to search for violations of user-deï¬ned model speciï¬cations. Reactive Systems also has a version of Reactis for C (Reactive Systems Inc., 2011) that has analogous Reactis components for systems developed in the C programming language. Reactis is commercially available through Reactive Systems, Inc.12
TestWeaver. TestWeaver (Junghanns et al., 2008) is a simulation-based falsiï¬cation tool from Synopsys. TestWeaver uses proprietary search algorithms to generate falsifying trajec- tories while maximizing coverage. User-deï¬ned system requirements and worst-case quality indicators are used to guide the search. Extensive knowledge of the underlying system may be required to provide useful worst-case quality indicators, which may limit the application. TestWeaver can interface with other simulation frameworks to control the disturbance tra- jectories via libraries in Simulink, Modelica, Python, and C. TestWeaver is commercially available through Synopsys, Inc.13
TrustworthySearch API. TrustworthySearch API (Norden et al., 2019) is a risk-based falsiï¬cation and probability estimation framework from Trustworthy AI. TrustworthySearch API is general for black-box validation of systems and has applications in autonomous vehicle safety validation. The tool uses a proprietary sequential importance sampling and sequential Monte Carlo approach. A version of the probabilistic-based approach was shown using adaptive importance sampling techniques described in section 7. Importance sampling is a stand-in for traditional optimization algorithms used to search the disturbance space for falsiï¬cation. The use of a proposal distribution biased towards rare failure events ensures that these low probability events are sampled more frequently. Adaptive multilevel splitting (AMS), described in section 7.2, is implemented to estimate this biased distribution from data. Along with falsiï¬cation, the tool can also perform failure event probability estimation, unlike other commercially available products and most academic tools. To estimate the failure probability, the probability from the biased distribution is reweighted according to the likelihood from the original unbiased distribution. Although speciï¬c to autonomous vehicle safety validation, we include TrustworthySearch API to highlight recent advancements of
11. The authors are not aï¬liated with any of the companies. 12. https://www.reactive-systems.com/ 13. https://www.synopsys.com/veriï¬cation/virtual-prototyping/virtual-ecu/testweaver.html
415
Corso, Moss, Koren, Lee, Kochenderfer
black-box safety validation tools. TrustworthySearch API is commercially available through Trustworthy AI, Inc.14
# 10.5 Toolbox Competition
An academically-driven friendly competition for systems veriï¬cation, called ARCH-COMP, has been held annually since 2017 (Ernst, Arcaini, et al., 2019). One category within the competition is the falsiï¬cation of temporal logic speciï¬cations for cyber-physical systems. Falsiï¬cation researchers compare their tools against common benchmark problems and use the competition to track state-of-the-art falsiï¬cation tools.
As detailed in their 2019 report (Ernst, Arcaini, et al., 2019), S-TaLiRo, Breach, Fal- Star, and falsify participated in the most recent competition. Six benchmark problems from the literature were used to evaluate each tool. The tools were evaluated based on their falsiï¬cation rate and statistics on number of simulations required to ï¬nd a falsifying trajectory. The outcome of the 2019 competition showed that falsify had the most success, only requiring a single simulation (after training) in certain benchmark problems. A no- table emphasis on repeatability of results was made during this recent competition. For future competitions, we suggest a comparison metric for tool runtime to help assess relative computational timing complexities. Continuing to mature the competition as more falsiï¬ca- tion methods arise will help drive discussion around falsiï¬cation tool design decisions. The competitionâs benchmarks and results are available online.15
# 10.6 Tools Discussion
The available tools can be categorized based on which aspects of the safety validation pro- cess they implement, as described in section 2.3: falsiï¬cation, most-likely failure analysis, and failure probability estimation. Note that all of the available tools perform falsiï¬ca- tion as their core task. In the context of the safety validation problem, the academic tools S-TaLiRo, Breach, FalStar, RRT-Rex, and falsify are falsiï¬cation-only tools. As for commercial tools, Reactis and TestWeaver also fall into the falsiï¬cation-only category. TrustworthySearch API is the only commercial tool that also preforms failure probability estimation and the AST Toolbox is the only academic tool that preforms most-likely failure analysis. A full comparison is outlined in table 1.
As for solution techniques, the tools S-TaLiRo and Breach take a standard optimization- based approach, while RRT-Rex reformulates the falsiï¬cation problem to solve it using path planning algorithms. The AST Toolbox and falsify are based on reinforcement learning and FalStar combines reinforcement learning and global optimization techniques for further re- ï¬nement of the disturbance trajectory search. Common among all of the tools is an interface to MATLAB, with a particular emphasis on testing Simulink models. This is evident in the survey and can be attributed to an industrial emphasis on the use of MATLAB/Simulink models for prototyping. A common component speciï¬c to academic falsiï¬cation-only tools is the use of temporal logic to specify system requirementsâencoded in signal temporal logic (STL) or metric temporal logic (MTL). Although expressive, requiring the strict usage of a temporal logic to encode system requirements could also limit the applicability of these tools.
14. http://trustworthy.ai/ 15. https://gitlab.com/goranf/ARCH-COMP
416
Algorithms for Black-Box Safety Validation
Table 1: Black-box Safety Validation Tools
Safety Validation Problem Tool Falsiï¬cation Most-Likely Failure Probability Estimation Technique Source S-TaLiRo (Y. Annapureddy et al., 2011) â * Breach (Donzé, 2010) * RRT-Rex (Dreossi et al., 2015) â FalStar (Zhang et al., 2018) â * falsify (Akazaki et al., 2018) â * AST Toolbox (Koren et al., 2018) Reactis (Reactive Systems Inc., 2013) TestWeaver (Junghanns et al., 2008) TrustworthySearch API (Norden et al., 2019) X X X X X X X X X · · · · · X · · · · · · · · · · · X Optimization Optimization Path Planning Optimization, RL Reinforcement Learning Closed Reinforcement Learning Open Proprietary Proprietary Importance Sampling Open Open Closed Open Commercial Commercial Commercial
Competed in ARCH-COMP 2019 (Ernst, Arcaini, et al., 2019). â Accepts system speciï¬cation in temporal logic (STL or MTL).
With a common goal of ensuring that safety-critical systems are in fact safe, the availability of these tools allows users to provide feedback given their speciï¬c use-cases and experience. Continued tool and technique development is further encouraged by academically-driven competitions such as ARCH-COMP (Ernst, Arcaini, et al., 2019).
# 11. Conclusion
With the rapid increase of safety-critical autonomous systems operating with humans, it is important that we develop robust testing procedures that can ensure the safety of these systems. Due to the high level of system complexity, we generally need black-box validation strategies to ï¬nd failures of the autonomous system. This work described the problems of falsiï¬cation (where a single failure example is searched for), most-likely failure analysis (where the most likely failure is searched for), and failure probability estimation (where we seek a good estimate of the likelihood of failure).
With these goals deï¬ned, we outlined a wide array of algorithms that have been used to accomplish these tasks. Global optimization, path planning, and reinforcement learning algorithms have been used to ï¬nd falsifying examples, while importance sampling methods have been used to estimate the probability of failure even when it is close to zero. To address the problem of scalability, we described approaches for decomposing the safety validation problem into more manageable components. We gave a brief overview of the main applications for black-box safety validation including autonomous driving and ï¬ight. Finally, we provided an overview of the existing tools that can be used to tackle these validation tasks.
# References
Abbas, H., Fainekos, G., Sankaranarayanan, S., IvanÄiÄ, F., & Gupta, A. (2013). Probabilistic temporal logic falsiï¬cation of cyber-physical systems. ACM Transactions on Embedded Computing Systems (TECS), 12 (2s), 1â30.
417
Corso, Moss, Koren, Lee, Kochenderfer
Abbas, H., OâKelly, M., & Mangharam, R. (2017). Relaxed decidability and the robust semantics of metric temporal logic. ACM International Conference on Hybrid Systems: Computation and Control (HSCC), 217â225.
Abeysirigoonawardena, Y., Shkurti, F., & Dudek, G. (2019). Generating adversarial driving scenarios in high-ï¬delity simulators. IEEE International Conference on Robotics and Automation (ICRA), 8271â8277.
Adimoolam, A., Dang, T., Donzé, A., Kapinski, J., & Jin, X. (2017). Classiï¬cation and coverage-based falsiï¬cation for embedded control systems. International Conference on Computer Aided Veriï¬cation (CAV), 483â503.
Aerts, A., Minh, B. T., Mousavi, M. R., & Reniers, M. A. (2018). Temporal logic falsiï¬- cation of cyber-physical systems: An input-signal-space optimization approach. IEEE International Conference on Software Testing, Veriï¬cation and Validation Workshops (ICSTW), 214â223.
Agha, G., & Palmskog, K. (2018). A survey of statistical model checking. ACM Transactions on Modeling and Computer Simulation (TOMACS), 28 (1), 1â39.
Akazaki, T., Kumazawa, Y., & Hasuo, I. (2017). Causality-aided falsiï¬cation. Electronic Proceedings in Theoretical Computer Science, 257.
Akazaki, T., Liu, S., Yamagata, Y., Duan, Y., & Hao, J. (2018). Falsiï¬cation of cyber- physical systems using deep reinforcement learning. International Symposium on Formal Methods (FM), 456â465.
Alpern, B., & Schneider, F. B. (1987). Recognizing safety and liveness. Distributed Comput- ing, 2 (3), 117â126.
Alur, R. (2015). Principles of cyber-physical systems. MIT Press.
Annapureddy, Y., Liu, C., Fainekos, G., & Sankaranarayanan, S. (2011). S-TaLiRo: A Tool for Temporal Logic Falsiï¬cation for Hybrid Systems. International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 254â257.
Annapureddy, Y. S. R., & Fainekos, G. E. (2010). Ant colonies for temporal logic falsiï¬cation of hybrid systems. Annual Conference on IEEE Industrial Electronics Society (IECON), 91â96.
Balkan, A., Tabuada, P., Deshmukh, J. V., Jin, X., & Kapinski, J. (2017). Underminer: A framework for automatically identifying nonconverging behaviors in black-box system models. ACM Transactions on Embedded Computing Systems (TECS), 17 (1), 1â28.
Behzadan, V., & Munir, A. (2019). Adversarial reinforcement learning framework for bench- marking collision avoidance mechanisms in autonomous vehicles. IEEE Intelligent Trans- portation Systems Magazine.
Bezanson, J., Edelman, A., Karpinski, S., & Shah, V. B. (2017). Julia: A fresh approach to numerical computing. SIAM Review, 59 (1), 65â98.
Bock, H. G., & Plitt, K.-J. (1984). A multiple shooting algorithm for direct solution of optimal control problems. IFAC Proceedings Volumes, 17 (2), 1603â1608.
418
Algorithms for Black-Box Safety Validation
Botev, Z. I., & Kroese, D. P. (2008). An eï¬cient algorithm for rare-event probability estima- tion, combinatorial optimization, and counting. Methodology and Computing in Applied Probability, 10 (4), 471â505.
Branicky, M. S., Curtiss, M. M., Levine, J., & Morgan, S. (2006). Sampling-based plan- ning, control and veriï¬cation of hybrid systems. IEE Proceedings DâControl Theory and Applications, 153 (5), 575â590.
Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., & Zaremba, W. (2016). OpenAI Gym. arXiv e-prints, Article arXiv:1606.01540, arXiv:1606.01540.
Cérou, F., & Guyader, A. (2007). Adaptive multilevel splitting for rare event analysis. Stochastic Analysis and Applications, 25 (2), 417â443.
Chaslot, G. M. J.-B., Winands, M. H. M., Herik, H. J. V. D., Uiterwijk, J. W. H. M., & Bouzy, B. (2008). Progressive strategies for Monte Carlo tree search. New Mathematics and Natural Computation, 4 (03), 343â357.
Chryssanthacopoulos, J. P., Kochenderfer, M. J., & Williams, R. E. (2010). Improved Monte Carlo sampling for conï¬ict probability estimation. AIAA Non-Deterministic Approaches Conference.
Clarke, E. M., & Emerson, E. A. (1981). Design and synthesis of synchronization skeletons using branching time temporal logic. Workshop on Logic of Programs, 52â71.
Clarke, E. M., Henzinger, T. A., Veith, H., & Bloem, R. (2018). Handbook of model checking. Springer.
Corso, A., Du, P., Driggs-Campbell, K., & Kochenderfer, M. J. (2019). Adaptive stress testing with reward augmentation for autonomous vehicle validation. IEEE International Conference on Intelligent Transportation Systems (ITSC), 163â168.
Corso, A., & Kochenderfer, M. J. (2020). Interpretable safety validation for autonomous vehicles. IEEE International Conference on Intelligent Transportation Systems (ITSC).
Corso, A., Lee, R., & Kochenderfer, M. J. (2020). Scalable autonomous vehicle safety val- idation through dynamic programming and scene decomposition. IEEE International Conference on Intelligent Transportation Systems (ITSC).
Coulom, R. (2007). Computing âELO ratingsâ of move patterns in the game of go. ICGA Journal, 30 (4), 198â208.
Dang, T., Donzé, A., Maler, O., & Shalev, N. (2008). Sensitive state-space exploration. IEEE Conference on Decision and Control (CDC), 4049â4054.
Daoui, C., Abbad, M., & Tkiouat, M. (2010). Exact decomposition approaches for Markov decision processes: A survey. Advances in Operations Research, 2010, 1â19.
De Boer, P.-T., Kroese, D. P., Mannor, S., & Rubinstein, R. Y. (2005). A tutorial on the cross-entropy method. Annals of Operations Research, 134 (1), 19â67.
Delmas, R., Loquen, T., Boada-Bauxell, J., & Carton, M. (2019). An evaluation of Monte Carlo tree search for property falsiï¬cation on hybrid ï¬ight control laws. International Workshop on Numerical Software Veriï¬cation, 45â59.
419
Corso, Moss, Koren, Lee, Kochenderfer
Deshmukh, J., Horvat, M., Jin, X., Majumdar, R., & Prabhu, V. S. (2017). Testing cyber- physical systems through bayesian optimization. ACM Trans. Embed. Comput. Syst., 16 (5s).
Deshmukh, J., Jin, X., Kapinski, J., & Maler, O. (2015). Stochastic local search for falsiï¬- cation of hybrid systems. International Symposium on Automated Technology for Veri- ï¬cation and Analysis (ATVA), 500â517.
Diehl, M., Bock, H. G., Diedam, H., & Wieber, P.-B. (2006). Fast direct multiple shooting algorithms for optimal robot control. Fast motions in biomechanics and robotics (pp. 65â 93). Springer.
Dijkstra, E. W. (1959). A note on two problems in connexion with graphs. Numerische mathematik, 1 (1), 269â271.
Diwakaran, R. D., Sankaranarayanan, S., & Trivedi, A. (2017). Analyzing neighborhoods of falsifying traces in cyber-physical systems. International Conference on Cyber-Physical Systems (ICCPS), 109â119.
Donzé, A. (2010). Breach, a toolbox for veriï¬cation and parameter synthesis of hybrid sys- tems. International Conference on Computer Aided Veriï¬cation (CAV), 167â170.
Donzé, A., & Maler, O. (2010). Robust satisfaction of temporal logic over real-valued signals. International Conference on Formal Modeling and Analysis of Timed Systems (FOR- MATS), 92â106.
Dorigo, M., Maniezzo, V., & Colorni, A. (1996). Ant system: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 26 (1), 29â41.
Dreossi, T., Dang, T., Donzé, A., Kapinski, J., Jin, X., & Deshmukh, J. V. (2015). Eï¬cient guiding strategies for testing of temporal properties of hybrid systems. NASA Formal Methods Symposium (NFM), 127â142.
Dreossi, T., Donzé, A., & Seshia, S. A. (2019). Compositional falsiï¬cation of cyber-physical systems with machine learning components. Journal of Automated Reasoning, 63 (4), 1031â1053.
Ecoï¬et, A., Huizinga, J., Lehman, J., Stanley, K. O., & Clune, J. (2019). Go-Explore: A new approach for hard-exploration problems. arXiv e-prints, Article arXiv:1901.10995, arXiv:1901.10995.
Ernst, G., Arcaini, P., Donze, A., Fainekos, G., Mathesen, L., Pedrielli, G., Yaghoubi, S., Yamagata, Y., & Zhang, Z. (2019). Arch-comp 2019 category report: Falsiï¬cation. EPiC Series in Computing, 61, 129â140.
Ernst, G., Sedwards, S., Zhang, Z., & Hasuo, I. (2019). Fast falsiï¬cation of hybrid sys- tems using probabilistically adaptive input. International Conference on Quantitative Evaluation of Systems (QEST), 165â181.
Esposito, J. M., Kim, J., & Kumar, V. (2004). Adaptive RRTs for validating hybrid robotic control systems. Algorithmic foundations of robotics vi (pp. 107â121). Springer.
420
Algorithms for Black-Box Safety Validation
Fainekos, G., Hoxha, B., & Sankaranarayanan, S. (2019). Robustness of speciï¬cations and its applications to falsiï¬cation, parameter mining, and runtime monitoring with S-TaLiRo. International Conference on Runtime Veriï¬cation (RV), 27â47.
Fainekos, G. E., & Giannakoglou, K. C. (2003). Inverse design of airfoils based on a novel formulation of the ant colony optimization method. Inverse Problems in Engineering, 11 (1), 21â38.
Fainekos, G. E., & Pappas, G. J. (2009). Robustness of temporal logic speciï¬cations for continuous-time signals. Theoretical Computer Science, 410 (42), 4262â4291.
Federal Aviation Administration. (2019). FAA aerospace forecast ï¬scal years 2020â2040 (tech. rep.). Federal Aviation Administration.
Fehnker, A., & IvanÄiÄ, F. (2004). Benchmarks for hybrid systems veriï¬cation. ACM Inter- national Conference on Hybrid Systems: Computation and Control (HSCC), 326â341.
Fikes, R. E., & Nilsson, N. J. (1971). Strips: A new approach to the application of theorem proving to problem solving. Artiï¬cial intelligence, 2 (3-4), 189â208.
Fitting, M. (2012). First-order logic and automated theorem proving. Springer Science & Business Media.
Francès, G., RamÃrez, M., Lipovetzky, N., & Geï¬ner, H. (2017). Purely declarative action representations are overrated: Classical planning with simulators. International Joint Conference on Artiï¬cial Intelligence (IJCAI), 4294â4301.
Ghallab, M., Nau, D., & Traverso, P. (2004). Automated planning: Theory and practice. Elsevier.
Gu, S., Holly, E., Lillicrap, T., & Levine, S. (2017). Deep reinforcement learning for robotic manipulation with asynchronous oï¬-policy updates. IEEE International Conference on Robotics and Automation (ICRA), 3389â3396.
Guestrin, C., Koller, D., Parr, R., & Venkataraman, S. (2003). Eï¬cient solution algorithms for factored mdps. Journal of Artiï¬cial Intelligence Research, 19, 399â468.
Hahn, G. (1972). Sample sizes for Monte Carlo simulation. IEEE Transactions on Systems, Man, and Cybernetics.
Hansen, N., & Ostermeier, A. (1996). Adapting arbitrary normal mutation distributions in evolution strategies: The covariance matrix adaptation. IEEE International Conference on Evolutionary Computation, 312â317.
Harrison, M. T. (2012). Conservative hypothesis tests and conï¬dence intervals using impor- tance sampling. Biometrika, 99 (1), 57â69.
He, K., Gkioxari, G., Dollár, P., & Girshick, R. (2017). Mask R-CNN. International Con- ference on Computer Vision (ICCV), 2961â2969.
Hekmatnejad, M., Hoxha, B., & Fainekos, G. (2020). Search-based test-case generation by monitoring responsibility safety rules. IEEE International Conference on Intelligent Transportation Systems (ITSC), 1â8.
421
Corso, Moss, Koren, Lee, Kochenderfer
Hirshorn, S. R., Voss, L. D., & Bromley, L. K. (2017). Nasa systems engineering handbook. NASA Special Publication.
Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9 (8), 1735â1780.
Hoxha, B., Abbas, H., & Fainekos, G. (2015). Benchmarks for temporal logic requirements for automotive systems. International Workshop on Applied Veriï¬cation for Continuous and Hybrid Systems (ARCH), 25â30.
Hu, J., Lygeros, J., & Sastry, S. (2000). Towards a theory of stochastic hybrid systems. ACM International Conference on Hybrid Systems: Computation and Control (HSCC), 160â173.
Huang, Z., Guo, Y., Arief, M., Lam, H., & Zhao, D. (2018). A versatile approach to evaluating and testing automated vehicles based on kernel methods. American Control Conference (ACC), 4796â4802.
Huang, Z., Lam, H., LeBlanc, D. J., & Zhao, D. (2017). Accelerated evaluation of automated vehicles using piecewise mixture models. IEEE Transactions on Intelligent Transporta- tion Systems, 19 (9), 2845â2855.
Jin, X., Deshmukh, J. V., Kapinski, J., Ueda, K., & Butts, K. (2014). Powertrain control ver- iï¬cation benchmark. ACM International Conference on Hybrid Systems: Computation and Control (HSCC), 253â262.
Julian, K. D., Lee, R., & Kochenderfer, M. J. (2020). Validation of image-based neural network controllers through adaptive stress testing. IEEE International Conference on Intelligent Transportation Systems (ITSC), 1â7.
Junghanns, A., Mauss, J., & Tatar, M. (2008). TestWeaver: A tool for simulation-based test of mechatronic designs. International Modelica Conference.
Kahn, H., & Harris, T. E. (1951). Estimation of particle transmission by random sampling. National Bureau of Standards Applied Mathematics Series, 12, 27â30.
Kapinski, J., Deshmukh, J. V., Jin, X., Ito, H., & Butts, K. (2016). Simulation-based ap- proaches for veriï¬cation of embedded control systems: An overview of traditional and advanced modeling, testing, and veriï¬cation techniques. IEEE Control Systems Maga- zine, 36 (6), 45â64.
Karaman, S., & Frazzoli, E. (2011). Sampling-based algorithms for optimal motion planning. The International Journal of Robotics Research, 30 (7), 846â894.
Katoen, J.-P. (2016). The probabilistic model checking landscape. ACM/IEEE Symposium on Logic in Computer Science (LICS), 31â45.
Katz, G., Barrett, C., Dill, D. L., Julian, K., & Kochenderfer, M. J. (2017). Reluplex: An eï¬cient SMT solver for verifying deep neural networks. International Conference on Computer Aided Veriï¬cation (CAV), 97â117.
Kavraki, L. E., Svestka, P., Latombe, J.-C., & Overmars, M. H. (1996). Probabilistic roadmaps for path planning in high-dimensional conï¬guration spaces. IEEE Transac- tions on Robotics and Automation, 12 (4), 566â580.
422
Algorithms for Black-Box Safety Validation
Kim, J., Esposito, J. M., & Kumar, V. (2005). An RRT-based algorithm for testing and validating multi-robot controllers (tech. rep.). Moore School of Electrical Engineering GRASP Lab.
Kim, Y., & Kochenderfer, M. J. (2016). Improving aircraft collision risk estimation using the cross-entropy method. Journal of Air Transportation, 24 (2), 55â62.
Kochenderfer, M. J. (2015). Decision making under uncertainty: Theory and application. MIT Press.
Kochenderfer, M. J., Holland, J. E., & Chryssanthacopoulos, J. P. (2012). Next-generation airborne collision avoidance system. Lincoln Laboratory Journal, 19 (1), 17â33.
Kochenderfer, M. J., & Wheeler, T. A. (2019). Algorithms for optimization. MIT Press.
Koren, M., Alsaif, S., Lee, R., & Kochenderfer, M. J. (2018). Adaptive stress testing for autonomous vehicles. IEEE Intelligent Vehicles Symposium (IV), 1â7.
Koren, M., Corso, A., & Kochenderfer, M. J. (2019). The adaptive stress testing formulation. Workshop on Safe Autonomy, Robotics: Science and Systems.
Koren, M., & Kochenderfer, M. J. (2019). Eï¬cient autonomy validation in simulation with adaptive stress testing. IEEE International Conference on Intelligent Transportation Systems (ITSC), 4178â4183.
Koren, M., & Kochenderfer, M. J. (2020). Adaptive stress testing without domain heuristics using Go-Explore. IEEE International Conference on Intelligent Transportation Systems (ITSC).
Koschi, M., Pek, C., Maierhofer, S., & Althoï¬, M. (2019). Computationally eï¬cient safety falsiï¬cation of adaptive cruise control systems. IEEE International Conference on Intel- ligent Transportation Systems (ITSC), 2879â2886.
Koymans, R. (1990). Specifying real-time properties with metric temporal logic. Real-Time Systems, 2, 255â299.
Kress-Gazit, H., & Pappas, G. J. (2008). Automatically synthesizing a planning and control subsystem for the DARPA urban challenge. International Conference on Automation Science and Engineering (CASE), 766â771.
Kuutti, S., Fallah, S., & Bowden, R. (2020). Training adversarial agents to exploit weaknesses in deep control policies. IEEE International Conference on Robotics and Automation (ICRA), 108â114.
LaValle, S. M. (2006). Planning algorithms. Cambridge University Press.
Lavalle, S. M. (1998). Rapidly-exploring random trees: A new tool for path planning (tech. rep.). Iowa State University.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521 (7553), 436â444.
Lee, R., Mengshoel, O. J., Agogino, A. K., Giannakopoulou, D., & Kochenderfer, M. J. (2019). Adaptive stress testing of trajectory planning systems. AIAA Scitech Intelligent Systems Conference (IS).
423
Corso, Moss, Koren, Lee, Kochenderfer
Lee, R., Mengshoel, O. J., Saksena, A., Gardner, R., Genin, D., Silbermann, J., Owen, M., & Kochenderfer, M. J. (2020). Adaptive stress testing: Finding likely failure events with reinforcement learning. Journal of Artiï¬cial Intelligence Research, 69, 1165â1201.
Leung, K., Aréchiga, N., & Pavone, M. (2020). Back-propagation through signal temporal logic speciï¬cations: Infusing logical structure into gradient-based methods. Workshop on Algorithmic Foundations of Robotics.
Lewis, F. L., Vrabie, D., & Syrmos, V. L. (2012). Optimal control. John Wiley & Sons.
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., & Wierstra, D. (2016). Continuous control with deep reinforcement learning. International Confer- ence on Learning Representations.
Lipovetzky, N., & Geï¬ner, H. (2012). Width and serialization of classical planning problems. European Conference on Artiï¬cial Intelligence (ECAI), 540â545.
Lipovetzky, N., Ramirez, M., & Geï¬ner, H. (2015). Classical planning with simulators: Re- sults on the Atari video games. International Joint Conference on Artiï¬cial Intelligence (IJCAI), 1610â1616.
Luersen, M. A., & Le Riche, R. (2004). Globalized NelderâMead method for engineering optimization. Computers & Structures, 82 (23-26), 2251â2260.
Mathesen, L., Yaghoubi, S., Pedrielli, G., & Fainekos, G. (2019). Falsiï¬cation of cyber- physical systems with robustness uncertainty quantiï¬cation through stochastic opti- mization with adaptive restart. International Conference on Automation Science and Engineering (CASE), 991â997.
McDermott, D., Ghallab, M., Howe, A., Knoblock, C., Ram, A., Veloso, M., Weld, D., & Wilkins, D. (1998). PDDLâthe planning domain deï¬nition language (tech. rep.). Yale Center for Computational Vision and Control.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518 (7540), 529â533.
Mockus, J. (2012). Bayesian approach to global optimization: Theory and applications (Vol. 37). Springer Science & Business Media.
Moss, R. J., Lee, R., Visser, N., Hochwarth, J., Lopez, J. G., & Kochenderfer, M. J. (2020). Adaptive stress testing of trajectory predictions in ï¬ight management systems. Digital Avionics Systems Conference (DASC), 1â10.
Mullins, G. E., Stankiewicz, P. G., Hawthorne, R. C., & Gupta, S. K. (2018). Adaptive generation of challenging scenarios for testing and evaluation of autonomous vehicles. Journal of Systems and Software, 137, 197â215.
Nahhal, T., & Dang, T. (2007). Test coverage for continuous and hybrid systems. Interna- tional Conference on Computer Aided Veriï¬cation (CAV), 449â462.
Neufeld, J., Gyorgy, A., Szepesvari, C., & Schuurmans, D. (2014). Adaptive Monte Carlo via bandit allocation. International Conference on Machine Learning (ICML), 1944â1952.
424
Algorithms for Black-Box Safety Validation
Norden, J., OâKelly, M., & Sinha, A. (2019). Eï¬cient black-box assessment of autonomous vehicle safety. arXiv e-prints, Article arXiv:1912.03618, arXiv:1912.03618.
OâKelly, M., Sinha, A., Namkoong, H., Tedrake, R., & Duchi, J. C. (2018). Scalable end-to- end autonomous vehicle testing via rare-event simulation. Advances in Neural Informa- tion Processing Systems (NIPS), 9827â9838.
Pant, Y. V., Abbas, H., & Mangharam, R. (2017). Smooth operator: Control using the smooth robustness of temporal logic. IEEE Conference on Control Technology and Ap- plications (CCTA), 1235â1240.
Peled, D., Vardi, M. Y., & Yannakakis, M. (1999). Black box checking. Formal methods for protocol engineering and distributed systems (pp. 225â240). Springer.
Pillai, S., AmbruÅ, R., & Gaidon, A. (2019). Superdepth: Self-supervised, super-resolved monocular depth estimation. IEEE International Conference on Robotics and Automa- tion (ICRA), 9250â9256.
Plaku, E., Kavraki, L. E., & Vardi, M. Y. (2009). Hybrid systems: From veriï¬cation to fal- siï¬cation by combining motion planning and discrete search. Formal Methods in System Design, 34 (2), 157â182.
Platzer, A., & Quesel, J.-D. (2008). KeYmaera: A hybrid theorem prover for hybrid systems (system description). International Joint Conference on Automated Reasoning (IJCAR), 171â178.
Pnueli, A. (1977). The temporal logic of programs. Foundations of Computer Science, 1977, 46â57.
Qin, X., Aréchiga, N., Best, A., & Deshmukh, J. (2019). Automatic testing and falsi- ï¬cation with dynamically constrained reinforcement learning. arXiv e-prints, Arti- cle arXiv:1910.13645, arXiv:1910.13645.
Reactive Systems Inc. (2011). Finding bugs in C programs with Reactis for C (tech. rep. RSITR 2.5). Reactive Systems Inc.
Reactive Systems Inc. (2013). Testing and validation of Simulink models with Reactis (tech. rep. RSITR 1.11). Reactive Systems Inc.
Rubinstein, R. Y., & Kroese, D. P. (2013). The cross-entropy method: A uniï¬ed approach to combinatorial optimization, Monte Carlo simulation and machine learning. Springer Science & Business Media.
Russell, S., & Norvig, P. (2020). Artiï¬cial intelligence: A modern approach (4th ed.). Prentice Hall.
Sadowsky, J. S., & Bucklew, J. A. (1990). On large deviations theory and asymptotically eï¬cient Monte Carlo estimation. IEEE Transactions on Information Theory, 36 (3), 579â588.
Sankaranarayanan, S., & Fainekos, G. (2012). Falsiï¬cation of temporal properties of hy- brid systems using the cross-entropy method. ACM International Conference on Hybrid Systems: Computation and Control (HSCC), 125â134.
425
Corso, Moss, Koren, Lee, Kochenderfer
Schuler, S., Adegas, F. D., & Anta, A. (2017). Hybrid modelling of a wind turbine (bench- mark proposal). In G. Frehse & M. Althoï¬ (Eds.), International workshop on applied veriï¬cation for continuous and hybrid systems (arch) (pp. 18â26).
Schulman, J., Levine, S., Abbeel, P., Jordan, M., & Moritz, P. (2015). Trust region policy optimization. International Conference on Machine Learning (ICML), 1889â1897.
Schumann, J. M. (2001). Automated theorem proving in software engineering. Springer Sci- ence & Business Media.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrit- twieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529 (7587), 484â489.
Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. (2017). Mastering the game of go without human knowledge. Nature, 550 (7676), 354â359.
Silvetti, S., Policriti, A., & Bortolussi, L. (2017). An active learning approach to the fal- siï¬cation of black box cyber-physical systems. International Conference on Integrated Formal Methods (iFM), 3â17.
Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press.
Sutton, R. S., McAllester, D. A., Singh, S. P., & Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. Advances in Neural Information Processing Systems (NIPS), 1057â1063.
Suykens, J. A. K., & Vandewalle, J. (1999). Least squares support vector machine classiï¬ers. Neural Processing Letters, 9 (3), 293â300.
Thiémard, E. (2001). An algorithm to compute bounds for the star discrepancy. Journal of Complexity, 17 (4), 850â880.
Tuncali, C. E., & Fainekos, G. (2019). Rapidly-exploring random trees for testing automated vehicles. IEEE Intelligent Transportation Systems Conference (ITSC), 661â666.
Tuncali, C. E., Fainekos, G., Prokhorov, D., Ito, H., & Kapinski, J. (2019). Requirements- driven test generation for autonomous vehicles with machine learning components. IEEE Transactions on Intelligent Vehicles.
Tuncali, C. E., Hoxha, B., Ding, G., Fainekos, G., & Sankaranarayanan, S. (2018). Experience report: Application of falsiï¬cation methods on the uxas system. NASA Formal Methods Symposium (NFM), 452â459.
Uesato, J., Kumar, A., Szepesvári, C., Erez, T., Ruderman, A., Anderson, K., Dvijotham, K., Heess, N., & Kohli, P. (2019). Rigorous agent evaluation: An adversarial approach to uncover catastrophic failures. International Conference on Learning Representations.
U.S. Department of Transportation. (2018). Automated vehicles 3.0: Preparing for the future of transportation (tech. rep.). U.S. Department of Transportation.
Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., et al. (2019). Grandmaster level in starcraft II using multi-agent reinforcement learning. Nature, 575 (7782), 350â354.
426
Algorithms for Black-Box Safety Validation
Wang, Z., Hutter, F., Zoghi, M., Matheson, D., & de Feitas, N. (2016). Bayesian optimiza- tion in a billion dimensions via random embeddings. Journal of Artiï¬cial Intelligence Research, 55, 361â387.
Wicker, M., Huang, X., & Kwiatkowska, M. (2018). Feature-guided black-box safety test- ing of deep neural networks. International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 408â426.
Yaghoubi, S., & Fainekos, G. (2019). Gray-box adversarial testing for control systems with machine learning components. ACM International Conference on Hybrid Systems: Com- putation and Control (HSCC), 179â184.
Yang, H. (2013). Dynamic programming algorithm for computing temporal logic robustness (Masterâs thesis). Arizona State University.
Yang, X., Egorov, M., Evans, A., Munn, S., & Wei, P. (2020). Stress testing of UAS traï¬c management decision making systems. AIAA AVIATION Forum, 2868.
Yeh, D. (2018). Autonomous systems and the challenges in veriï¬cation, validation, and test. IEEE Design & Test, 35 (3), 89â97.
Zhang, Z., Ernst, G., Sedwards, S., Arcaini, P., & Hasuo, I. (2018). Two-layered falsiï¬cation of hybrid systems guided by Monte Carlo tree search. IEEE Transactions on Computer- Aided Design of Integrated Circuits and Systems, 37 (11), 2894â2905.
Zhang, Z., Hasuo, I., & Arcaini, P. (2019). Multi-armed bandits for Boolean connectives in hybrid system falsiï¬cation. In I. Dillig & S. Tasiran (Eds.), Computer aided veriï¬cation (cav) (pp. 401â420). Springer International Publishing.
Zhao, D., Lam, H., Peng, H., Bao, S., LeBlanc, D. J., Nobukawa, K., & Pan, C. S. (2016). Accelerated evaluation of automated vehicles safety in lane-change scenarios based on importance sampling techniques. IEEE Transactions on Intelligent Transportation Sys- tems, 18 (3), 595â607.
Zhao, Q., Krogh, B. H., & Hubbard, P. (2003). Generating test inputs for embedded control systems. IEEE Control Systems Magazine, 23 (4), 49â57.
Zou, X., Alexander, R., & McDermid, J. (2014). Safety validation of sense and avoid algo- rithms using simulation and evolutionary search. International Conference on Computer Safety, Reliability, and Security (SafeComp), 33â48.
Zutshi, A., Deshmukh, J. V., Sankaranarayanan, S., & Kapinski, J. (2014). Multiple shoot- ing, CEGAR-based falsiï¬cation for hybrid systems. International Conference on Embed- ded Software (ICESS), 1â10.
Zutshi, A., Sankaranarayanan, S., Deshmukh, J. V., & Kapinski, J. (2013). A trajectory splicing approach to concretizing counterexamples for hybrid systems. IEEE Conference on Decision and Control (CDC), 3918â3925.
427 | {
"id": "1910.13645"
} |
2005.02365 | SLEDGE: A Simple Yet Effective Baseline for COVID-19 Scientific Knowledge Search | With worldwide concerns surrounding the Severe Acute Respiratory Syndrome
Coronavirus 2 (SARS-CoV-2), there is a rapidly growing body of literature on
the virus. Clinicians, researchers, and policy-makers need a way to effectively
search these articles. In this work, we present a search system called SLEDGE,
which utilizes SciBERT to effectively re-rank articles. We train the model on a
general-domain answer ranking dataset, and transfer the relevance signals to
SARS-CoV-2 for evaluation. We observe SLEDGE's effectiveness as a strong
baseline on the TREC-COVID challenge (topping the learderboard with an nDCG@10
of 0.6844). Insights provided by a detailed analysis provide some potential
future directions to explore, including the importance of filtering by date and
the potential of neural methods that rely more heavily on count signals. We
release the code to facilitate future work on this critical task at
https://github.com/Georgetown-IR-Lab/covid-neural-ir | http://arxiv.org/pdf/2005.02365 | Sean MacAvaney, Arman Cohan, Nazli Goharian | cs.IR, cs.CL | null | null | cs.IR | 20200505 | 20200803 | 0 2 0 2
g u A 3 ] R I . s c [
3 v 5 6 3 2 0 . 5 0 0 2 : v i X r a
# SLEDGE: A Simple Yet Effective Baseline for COVID-19 Scientiï¬c Knowledge Search
# Sean MacAvaneyâ Arman Cohanâ¡ Nazli Goharianâ
â Information Retrieval Lab, Georgetown University, Washington DC â¡Allen Institute for AI, Seattle WA
# sean,nazli }
# @ir.cs.georgetown.edu, [email protected]
{sean,nazli}@ir.cs.georgetown.edu, [email protected]
{
# Abstract
With worldwide concerns surrounding the Severe Acute Respiratory Syndrome Coron- there is a rapidly avirus 2 (SARS-CoV-2), growing body of literature on the virus. Clin- icians, researchers, and policy-makers need a way to effectively search these articles. In this work, we present a search system called SLEDGE, which utilizes SciBERT to effec- tively re-rank articles. We train the model on a general-domain answer ranking dataset, and transfer the relevance signals to SARS-CoV- 2 for evaluation. We observe SLEDGEâs ef- fectiveness as a strong baseline on the TREC- COVID challenge (topping the learderboard with an nDCG@10 of 0.6844). Insights pro- vided by a detailed analysis provide some po- tential future directions to explore, including the importance of ï¬ltering by date and the po- tential of neural methods that rely more heav- ily on count signals. We release the code to facilitate future work on this critical task.1
search. Our baseline utilizes a combination of state-of-the-art techniques for neural information retrieval.
Recent work in neural information retrieval shows the effectiveness of pretrained language models in document ranking. (MacAvaney et al., 2019; Nogueira and Cho, 2019; Hofst¨atter et al., 2020; Dai and Callan, 2019b; Nogueira et al., 2020b). Building upon success of these models, SLEDGE is comprised of a re-ranker based on SciB- ERT (Beltagy et al., 2019), a pretrained language model optimized for scientiï¬c text. Since at the time of writing there is no available training data for COVID-19 related search, we additionally use a domain transfer approach by training SLEDGE on MS-MARCO (Campos et al., 2016), a general- domain passage ranking dataset, and apply it to COVID-19 literature search in zero-shot setting.
# Introduction
The emergence of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) prompted a worldwide research response. In the ï¬rst 100 days of 2020, over 5,000 research articles were published related to SARS-CoV-2 or COVID-19. Together with articles about similar viruses re- searched before 2020, the body of research exceeds 50,000 articles. This results in a considerable bur- den for those seeking information about various facets of the virus, including researchers, clinicians, and policy-makers.
We show that SLEDGE achieves strong results in the task of scientiï¬c literature search related to COVID-19. In particular, SLEDGE tops the leader- board in Round 1 of the TREC-COVID Information Retrieval shared task (Roberts et al., 2020),2 a new test bed for evaluating effectiveness of search meth- ods for COVID-19. We also provide an analysis into the hyperparameter tuning conducted, the ef- fect of various query and document ï¬elds, and pos- sible shortcomings of the approach. Insights from the analysis highlight the importance of a date ï¬lter for improving precision, and the possible beneï¬t of utilizing models that include count-based signals in future work. We hope that better natural language processing and search tools can contribute to the ï¬ght against the current global crisis.
In the interest of establishing a strong baseline for retrieving scientiï¬c literature related to COVID- 19, we introduce SLEDGE: a simple yet effective baseline for coronavirus Scientiï¬c knowLEDGE
1https://github.com/Georgetown-IR-Lab/ covid-neural-ir
2https://ir.nist.gov/covidSubmit/
# 2 Related Work
Retrieval of scientiï¬c literature has been long- studied (Lawrence et al., 1999; Lalmas and Tombros, 2007; Hersh and Voorhees, 2009; Lin, 2008; Medlar et al., 2016; Sorkhei et al., 2017; Huang et al., 2019). Most recent work for scientiï¬c literature retrieval has focused on tasks such as col- laborative ï¬ltering (Chen and Lee, 2018), citation recommendation (Nogueira et al., 2020a), and clin- ical decision support (Soldaini et al., 2017), rather than ad-hoc retrieval.
Pre-trained neural language models (such as BERT (Devlin et al., 2019)) have recently shown to be effective when ï¬ne-tuned for ad-hoc rank- ing. Nogueira and Cho (2019) demonstrate that these networks can be ï¬ne-tuned for passage rank- ing tasks. Others later observed effectiveness at document ranking tasks, showing that these models can handle natural-language questions better than prior approaches (Dai and Callan, 2019b) and that they can be incorporated into prior neutral ranking techniques (MacAvaney et al., 2019). Although computationally expensive, researches have shown that this can be mitigated to an extent by employ- ing more efï¬cient modeling choices (Hofst¨atter et al., 2020; MacAvaney et al., 2020c), caching in- termediate representations (Khattab and Zaharia, 2020; MacAvaney et al., 2020b; Gao et al., 2020), or by modifying the index with new terms or weights (Nogueira et al., 2019; Dai and Callan, 2019a; Nogueira, 2019). These models also fa- cilitate effective relevance signal transfer; Yilmaz et al. (2019) demonstrate that the relevance signals learned from BERT can easily transfer across col- lections (reducing the chance of overï¬tting a partic- ular collection). In this work, we utilize relevance signal transfer from an open-domain question an- swering dataset to the collection of COVID-19 sci- entiï¬c literature.
In terms of biomedical-related ranking, MacA- vaney et al. (2020a) observed the importance of using a domain-tuned language model (SciB- ERT (Beltagy et al., 2019)) when ranking in the biomedical domain (albeit working with clinical text rather than scientiï¬c literature). Some work already investigates document ranking and Ques- tion Answering (QA) about COVID-19. Zhang et al. (2020) chronicled their efforts of building and deploying a search engine for COVID-19 articles, utilizing a variety of available tools ranking tech- niques. In this work, we ï¬nd that our approach out-
performs this system in terms of ranking effective- ness. Tang et al. (2020) provide a QA dataset con- sisting of 124 COVID-19 question-answer pairs.
# 3 SLEDGE
This section describes the details of SLEDGE, our method for searching scientiï¬c literature related to COVID-19. We utilize a standard two-stage re- ranking pipeline for retrieving and ranking COVID- 19 articles. The articles are curated from the CORD19 dataset (Wang et al., 2020) and provided by the task organizers. The ï¬rst stage employs an inexpensive ranking model (namely, BM25) to generate a high-recall collection of candidate doc- uments. The second stage re-ranks the candidate documents using an expensive but high-precision SciBERT-based (Beltagy et al., 2019) neural rank- ing model.
# 3.1 First-Stage Retrieval
We ï¬rst index the document collection using stan- dard pre-processing methods: English stopword removal and Porter stemming. For the text, we use a concatenation of the title, abstract, and full- text paragraphs and fulltext headings. The fulltext gives more opportunities for the ï¬rst-stage ranker to match potentially relevant documents than the title alone would provide. When both the PDF and PubMed XML versions are available, we use the text extracted from the PubMed XML because it is generally cleaner. We then query the index for each topic with BM25. In this system, we used a ï¬xed re-ranking threshold of 500; thus only the top 500 BM25 results are retrieved. In our experiments, we found that there was little recall gained beyond 500.
# 3.2 Neural Re-Ranking
To best capture the domain-speciï¬c language re- lated to scientiï¬c text we use the SciBERT (Belt- agy et al., 2019) pretrained language model as the basis of a second-stage supervised re-ranker. This model is akin to the Vanilla BERT ranker from (MacAvaney et al., 2019), but utilizing the SciBERT model base (which is trained on scientiï¬c literature) instead. The query and document text are encoded sequentially, and relevance prediction is calculated based on the [CLS] tokenâs represen- tation (which was used for next sentence prediction during pre-training). Documents longer than the maximum length imposed by the positional embed-
dings are split into arbitrary equal-sized passages. We refer the reader to (MacAvaney et al., 2019) for more details about Vanilla BERT.
At the time of writing there is no training data available for the COVID-19 related search and col- lecting such data is expensive. To mitigate this challenge, we utilize a domain transfer approach and apply the learned model to the new domain in a zero-shot setting. This approach also has the ad- vantage of avoiding overï¬tting on the target dataset. Speciï¬cally, we train our model using the stan- dard training sequence of the MS-MARCO pas- sage ranking dataset (Campos et al., 2016). This dataset consists of over 800,000 query-document pairs in the general domain with a shallow label- ing scheme (typically fewer than two positive rel- evance labels per query; non-relevance assumed from unlabeled passages). During model training, we employ the following cross-entropy loss func- tion from Nogueira and Cho (2019):
L(q, d+, dâ) = â log(R(q, d+)) â log(R(q, dâ))
where q is the query, d + and d â are the relevant and non-relevant training documents, and R(q, d) is the relevance score.
# 4 Experimental setup
We now explore the ranking effectiveness of our ap- proach. We evaluate the performance of SLEDGE using the TREC-COVID Information Retrieval Challenge dataset (round 1) (Roberts et al., 2020). TREC-COVID uses the CORD-19 document col- lection (Wang et al., 2020) (2020-04-10 version, 51,045 articles), with a set of 30 topics related to COVID-19. These topics include natural queries such as: Coronavirus response to weather changes and Coronavirus social distancing impact. The top articles of participating systems (56 teams) were judged by expert assessors, who rated each arti- cle non-relevant (0), partially-relevant (1), or fully- relevant (2) to the topic. In total, 8,691 relevance judgments were collected, with 74% non-relevant, 13% partially relevant, and 14% fully-relevant.
Since the relevance judgments in this dataset are shallow (avg. 290 per query), we measure effective- ness of each system using normalized Discounted Cumulative Gain with a cutoff of 10 (nDCG@10), Precision at 5 of partially and fully-relevant doc- uments (P@5), and Precision at 5 of only fully relevant documents (P@5 (Rel.)). Both nDCG@10 and P@5 are ofï¬cial task metrics; we include the
P@5 ï¬ltered to only fully-relevance documents because it exposed some interesting trends in our analysis. Since not all submissions contributed to the judgment pool, we also report the percentage of the top 5 documents for each query that have relevance judgments (judged@5). These settings represent a high-precision evaluation; we leave it to future work to evaluate techniques for maximizing system recall, which may require special consider- ations (Grossman et al., 2015).
Our work utilizes a variety of existing open- source tools, including OpenNIR (MacAvaney, 2020), Anserini (Yang et al., 2017), and the Hug- gingFace Transformers library (Wolf et al., 2019). Our experiments were conducted with a Quadro 10â5. RTX 8000 GPU, and a learning rate of 2
Ã
Note on manual vs automatic runs TREC- COVID makes the distinction between manual and automatic runs. We adhere to the broad deï¬nition of manual runs, as speciï¬ed by the task guidelines: âAutomatic construction is when there is no human involvement of any sort in the query construction process; manual construction is everything else... If you make any change to your retrieval system based on the content of the TREC-COVID topics (say add words to a dictionary or modify a routine after looking at retrieved results), then your runs are manual runs.â3 In short, making any change to the system on the basis of observations of the query and/or results qualify as a manual run.
# 5 Results
In this section we discuss our results in two eval- uation settings. In the ï¬rst setting we apply light hyperparmeter tuning on the pipeline which still counts as a manual run as discussed in 4. In the § second setting we do not perform any tuning of any sort and thus this setting is an automatic run.
# 5.1 Ranking with light hyperparmeter tuning
Recall that the ï¬rst stage of SLEDGE is based on an initial BM25 ranker, topics in the TREC Covid dataset include 3 different ï¬elds: query, question and narrative, and the documents have title, abstract and full-text. Choices of the BM25 parameters and which ï¬elds to include in the pipeline can affect the ï¬nal performance. Therefore, in the ï¬rst setting, we lightly tune these hyperparmeters using min- imal human judgments on a subset of the topics.
3https://ir.nist.gov/covidSubmit/round1. html
System nDCG@10 P@5 P@5 (Rel.) judged@5 Human intervention SLEDGE (ours, ârun1â) BBGhelani2 xj4wang run1 UIUC DMG setrank ret OHSU RUN2 cu dbmi bm25 1 sheikh bm25 manual crowd1 CSIROmed RF dmis-rnd1-run3 0.6844 0.6689 0.6513 0.6082 * 0.5966 * 0.5887 * 0.5790 * 0.5571 * 0.5479 * 0.4649 0.7933 0.8200 0.8333 0.7133 0.7000 0.7200 0.7267 0.7067 * 0.6400 * 0.5867 0.6533 0.5600 0.5933 0.5333 * 0.5200 0.5667 * 0.5333 * 0.4933 * 0.5267 * 0.4733 100% Hyperparameter tuning on subset of queries 100% Human-in-loop active learning 100% Human-in-loop active learning 100% unspeciï¬ed 100% Query reformulation & hyperparameter tuning 96% Query reformulation 93% Query reformulation 93% Manual relevance feedback 86% Manual relevance feedback 100% Query reformulation
Table 1: Top results using any human intervention (manual runs). * indicates our system exhibits a statistically signiï¬cant improvement (paired t-test, p < 0.05).
System nDCG@10 P@5 P@5 (Rel.) judged@5 Methodology sab20.1.meta.docs SLEDGE (ours, ârun2â) IRIT marked base CSIROmedNIR* base.unipd.it udel fang run3 uogTrDPH QE UP-rrf5rnd1 BioinfoUA-emb UIowaS Run3 0.6080 0.6032 0.5880 0.5875 0.5720 0.5370 0.5338 0.5316 0.5298 0.5286 0.7800 0.6867 0.7200 0.6600 0.7267 0.6333 0.6400 0.6800 0.6333 0.6467 0.4867 0.5667 0.5400 0.5867 0.5200 0.4267 0.4667 0.4800 0.4733 0.4733 100% VSM, Multi-Index, Lnu.ltu weighting 88% BM25 + SciBERT 100% BM25+RM3 + BERT-base 76% CovidBert-nli + Inv. Index 95% Elastic search + boolean queries 98% F2EXP + CombSUM 100% Terrier + Query Exp. 100% unsupervised reciprocal rank fusion 100% BM25 + DeepRank traind on BioAsk 100% BM25 + ï¬ltering
Table 2: Top results without using any human intervention (automatic runs). No results exhibit a statistically signiï¬- cant difference compared to our system (paired t-test, p < 0.05). â*â indicates that some sort of manually speciï¬ed ï¬ltering was used which may contradict the deï¬nition of an automatic run by TREC (see note in Section 4).
Speciï¬cally, we use shallow relevance judgments from 15 out of 30 topics assessed by non-experts.4 Unlike manual runs that require human interven- tion for query reformulation, active learning, or relevance feedback, we expect our system to be able to generalize to unseen queries in the domain because we use manual relevance signals only for hyperparameter tuning. By tuning the hyperparme- ters of the initial retrieval method, the ï¬elds of the topic (query, question, narrative) and document text (title, abstract, full text), and a date ï¬lter, we found the following pipeline to be most effective based on our non-expert annotations (run tag: run1):
1. Initial retrieval using BM25 tuned for recall using a grid search (k1 = 3.9, b = 0.55), uti- lizing the keyword query ï¬eld over the full text of the article. Articles from before Jan- uary 1, 2020 are disregarded.
ï¬eld is scored over the articleâs title and ab- stract.
We report the performance of the top system from the top 10 teams (among manual runs) for TREC-COVID in Table 1. Since the utilization of humans-in-the-loop vary considerably, we also indicate for each run the reported human interven- tion. We ï¬nd that SLEDGE outperforms all the other manual runs in terms of nDCG@10 and P@5 (relevant only). Of the top 10 systems that report their technique for human intervention, ours is also the only one that relies on human judgments solely for hyperparameter tuning. This is particularly im- pressive because the next best systems (BBGhe- lani2 and xj4wang run1) involves human-in-the- loop active learning to rank documents based on the manual assessorâs relevance. In terms of sta- tistical signiï¬cance (paired t-test, p < 0.05), our approach is on par with these active learning runs, and better than most other submissions in terms of nDCG@10 and P@5 (relevant).
2. Re-ranking using a Vanilla SciBERT model trained on MS-MARCO. The topicâs question
4Topics 1, 2, 6, 8, 9, 11, 13, 17, 18, 20, 21, 24, 27, 29, 30. 849 judgments were made in total. We found that our non-expert annotations did not align well with the ofï¬cially released expert annotations §5.3.
# 5.2 Ranking without hyperparameter tuning
We now evaluate our system in an environment that does not utilize human intervention, hyperparam-
(a) (b) (c)
recall@100 a
a recall@100
recall@100
Figure 1: Comparison of grid search heatmaps for BM25 using the topicâs query over article full text with (a) our relevance judgments, (b) the full set of ofï¬cial judgments, and (c) the set of ofï¬cial relevance judgments ï¬ltered to only the topics we assessed. The x-axis sweeps b [0.1, 6.0], and each cell represents the recall@100.
eter tuning, or relevance judgements of any sort. This represents a full domain transfer setting. Our pipeline consists of (run tag: run2):
1. Initial retrieval using untuned BM25 (default parameters, k1 = 0.9, b = 0.4), utilizing the question text over the title and abstract of a article. (No date ï¬ltering.)
2. Re-ranking using a Vanilla SciBERT model trained on a medical-related subset of MS- MARCO training data. The topicâs question ï¬eld is scored over the articleâs title and ab- stract.
P@5 for fully-relevant articles and the difference between the result are not statistically signiï¬cant. Furthermore, due to the 88% and 76% judged@5 of SLEDGE and CSIROmedNIR, the actual P@5 scores for these systems may very well be higher. Curiously, however, other neural approaches that are generally high-performing (e.g., those used by Zhang et al. (2020)) did not rank in the top 10 runs. We do observe that other traditional ap- proaches, such as those that perform query ex- pansion (e.g., udel fang run3, and uogTrDPH QE) also perform competitively in the automatic setting.
# 5.3 Analysis
The purpose of leveraging the medical-related sub- set of MS-MARCO is to reduce the risk of domain shift. To produce this subset, we use the MedSyn lexicon (Yates and Goharian, 2013), which includes layperson terminology for various medical condi- tions. Only queries that contain terms from the lex- icon are considered in this dataset, leaving 78,895 of the original 808,531 training queries (9.7%).5 A list of the query IDs corresponding to this ï¬ltered set is available.6
We observe that our automatic SLEDGE run performs highly competitively among other auto- matic submissions to the TREC-COVID shared task. Although the highest-scoring system in terms of nDCG@10 utilizes a traditional method, we ob- serve that it falls short of neural (e.g., SLEDGE, IRIT marked base, CSIROmedNIR) in terms of
5Several common terms were manually excluded to increase the precision of the ï¬lter, such as gas, card, bing, died, map, and fall. This does not qualify as manual tuning because these decisions were made only in consideration of the MS- MARCO training queries, not any TREC-COVID topics. 6https://github.com/Georgetown-IR-Lab/ covid-neural-ir/med-msmarco-train.txt
Initial retrieval parameters We now evaluate the hyperparameter tuning process conducted. We ï¬rst test the following ï¬rst-stage ranking func- tions and tune for recall@100 using our judgments: [0, 1] by 0.05), BM25 (k1 RM3 query expansion (Jaleel et al., 2004) over de- fault BM25 parameters (feedback terms and feed- [1, 20] by 1), QL Sequential Depen- back docs dency Model (SDM (Metzler and Croft, 2005), term, ordered, and un-ordered weights by 0.05). Each of these models is tested using with the query or question topic ï¬eld, and over the article full text, or just the title and abstract. We ï¬nd that using BM25 with k1 = 3.9 and b = 0.55, the topicâs query ï¬eld, and the articleâs full text to yield the highest recall. We compare the heatmap of this setting using our judgments, the full set of ofï¬cial judgments, and the set of ofï¬cial judgments ï¬ltered to only the topics we judged in Figure 1. Although the precise values for the optimal parameter set- tings differ, the shapes are similar suggesting that the hyperparameter choices generalize.
First-Stage Re-Rank Filter Query Document Query Document 2020 nDCG@10 P@5 P@5(Rel.) judged@5 Question Full text Question â Title+abstract 0.7333 0.6142 0.5467 90% Query Full text Query Title+abstract 0.4190 0.5067 0.3867 10% Query Full text Question â Title+abstract 0.6244 = 0.7333 0.5667 94% Query Full text Narrative âTitle+abstract 0.6133 0.5089 0.4600 82% Question Full text Question â_Title+abstract v 0.7733 0.6774 0.6333 91% Query Full text Query Title+abstract v 0.5131 0.6267 0.4933 11% Query Full text Question â_Title+abstract v 0.6844 0.7933 0.6533 100% * Query Full text Narrative â Title+abstract v 0.4898 0.5867 0.4733 10%
Table 3: Performance of our system using various sources for the ï¬rst-stage query text, re-ranking query text, and date ï¬ltering. Our ofï¬cial submission is marked with *.
Topic ï¬elds and date ï¬ltering Important hyper- parmeters of our system include which topic ï¬eld (question, query, or narrative) to use in which stage, and whether to perform date ï¬ltering. We present a study of the effects of these parameters in Table 3. First, we observe that the ï¬ltering of articles to only those published after January 1, 2020 always improves the ranking effectiveness (as compared to models that retrieved from all articles). Indeed, we ï¬nd that only 19% of judged documents from prior to 2020 were considered relevant (with only 7% fully relevant). Meanwhile, 32% of judged doc- uments after 2020 were considered relevant (19% fully relevant). We note that although this ï¬lter seems to be effective, it will ultimately limit the recall of the system. This observation underscores the value of including a user-conï¬gurable date ï¬lter in COVID-19 search engines.
We also observe in Table 3 that both ï¬rst-stage ranking and re-ranking based on the question ï¬eld may be more effective than using the query ï¬eld for ï¬rst-stage ranking and the question for re-ranking. Considering that the nDCG@10 already outper- forms the performance of our ofï¬cial submission, and P@5 (fully relevant only) is not far behind with only 91% of the top documents judged, we can ex- pect that this is likely a better setting going forward. It also simpliï¬es the pipeline and reï¬ects a more re- alistic search environment in which the user simply enters a natural language question. However, this approach underperforms at identifying partially rel- evant documents, given by its much lower P@5. In an environment in which recall is important (such as systematic review), the hybrid query-question approach may be preferable. Interestingly, we ï¬nd that the narrative ï¬eld usually reduces ranking ef- fectiveness compared to the other settings. This may be due to a large distance between the natural- language questions seen during training and the
» 24 202 | 29 | 30 in] Q q 37457 | 36 | 47 3g g SaJt 40 | 40 | 121 0 1 2 our labels
Figure 2: Confusion matrix between our non-expert an- notations and the ofï¬cial expert TREC labels.
longer-form text seen at evaluation time.
Non-expert judgements We found that our non- expert relevance labels did not align well with the ofï¬cial labels; there was only a 60% agreement rate among the overlapping labels. In 18% of cases, our labels rated the document as more relevant than the ofï¬cial label; in 23% of cases ours was rated less relevant. A full confusion matrix is shown in Figure 2.
Despite the low agreement rates, the use of do- main transfer, and only leveraging the non-expert labels for hyperparameter tuning suggest that it would be difï¬cult to overï¬t to the test collection. We further investigate whether the subset of queries we evaluated gained a substantial advantage. To this end, we plot the difference in the evaluation metrics between our system and an untuned BM25 ranker in Figure 3. As demonstrated by the ï¬gure, there was no strong preference of our model to- wards queries that were annotated (marked with * and in blue). In fact, 9 of the 15 highest-performing queries were not in the annotated set (in terms of â nDCG@10). This suggests that our approach did not overï¬t to signals provided by the non-expert as- sessments, and that our trained ranker is generally applicable.
Failure cases Although our system generally out- performs BM25 ranking, it substantially underper-
1.0 0.54 âââââ os (tll A nDCG@10 -10 oo 4 se tT tt Tt tt i | | 23°15 14 7 20* 30* 17* 27* 18* 1.0 10 1* 24" 9* 11* 12 28 Query ID 4 29% 3 13* 6 2* 19 21* 22 2% 25 16 5 8 0.54 a oe a . ,2=== â0.5 4 OR 7b WiFi 2 2 2F a a 102 DT 6 Weir 4 2 9 5 55° Se 16 Query ID 1.0 Z 054 ââ S 0 eee | dL © 8 a â0.54 -10 237 «+14 24% 4 3 22 30% 27* 1* 20* 11* 18* 28 26 17* 13* 15 6* Query ID 16 25 9 21* 12 10 19 2* 5 29% 8
Figure 3: Difference in ranking effectiveness between our system and an untuned BM25 model by query for nDCG@10, P@5, and P@5 (fully relevant only). Queries in blue and marked with * were annotated by non- experts and used for hyperparameter tuning.
forms for Query 23 (coronavirus hypertension). When observing the failure cases, we found that the BM25 model successfully exploited term rep- etition to identify its top documents as relevant. Meanwhile, our system ranked documents with in- cidental mentions of hypertension highly. This sug- gests that more effective utilization of approaches that include a count-based component in the rank- ing score (such as TK (Hofst¨atter et al., 2020) or CEDR-KNRM (MacAvaney et al., 2019)) could yield improvements.
that recent articles (i.e., those published in 2020) tend to exhibit higher relevance, suggesting the im- portance of ï¬ltering by date for high-precision re- trieval. We also ï¬nd that our non-expert annotation phase helped converge on good hyperparameters, while not likely contributing to substantial overï¬t- ting to the test set. Finally, through failure case analysis, we ï¬nd that count-based approaches may be a good direction to explore in subsequent rounds of the shared task.
# Acknowledgments
# 6 Conclusion
In this work we present SLEDGE, a baseline for literature search related to COVID-19. SLEDGE is a two stage approach consisting of an initial BM25 ranker followed by SciBERT-based reranker, a do- main speciï¬c pretrained language model. SLEDGE is trained on the general domain MS-MARCO passage ranking dataset and evaluated on TREC COIVD-search benchmark in a zero-shot transfer setting. SLEDGE tops the leaderboard among the initial round submissions from 55 teams to TREC- COVID Search shared task, demonstrating its ef- fectiveness.
This work was partially supported by the ARCS Foundation.
# References
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientiï¬c text. In EMNLP, pages 3615â3620.
Daniel Fernando Campos, Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. ArXiv, abs/1611.09268.
Through our analysis we ï¬nd that the parameters for initial retrieval are fairly robust. We also ï¬nd
Tsung Teng Chen and Maria R. Lee. 2018. Research paper recommender systems on big scholarly data. In PKAW.
Zhuyun Dai and James P. Callan. 2019a. Context- aware sentence/passage term importance estimation for ï¬rst stage retrieval. ArXiv, abs/1910.10687.
Zhuyun Dai and Jamie Callan. 2019b. Deeper text un- derstanding for ir with contextual neural language modeling. Proceedings of the 42nd International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. ArXiv, abs/1810.04805.
Luyu Gao, Zhuyun Dai, and James P. Callan. 2020. EARL: Speedup transformer-based rankers with pre- computed representation. ArXiv, abs/2004.13313.
Maura R. Grossman, Gordon V. Cormack, and Adam track Roegiest. 2015. overview. In TREC. TREC 2016 total recall
William Hersh and Ellen Voorhees. 2009. TREC ge- nomics special issue overview.
Sebastian Hofst¨atter, Markus Zlabinger, and Allan Interpretable & time-budget- In Hanbury. 2020. constrained contextualization for re-ranking. ECAI.
Chien-yu Huang, Arlene Casey, Dorota GÅowacka, and Alan Medlar. 2019. Holes in the outline: Subject- dependent abstract quality and its implications for In Proceedings of the scientiï¬c literature search. 2019 Conference on Human Information Interaction and Retrieval, pages 289â293.
Nasreen Abdul Jaleel, James Allan, W. Bruce Croft, Fernando Diaz, Leah S. Larkey, Xiaoyan Li, Mark D. Smucker, and Courtney Wade. 2004. UMass at TREC 2004: Novelty and HARD. In TREC.
Omar Khattab and Matei Zaharia. 2020. ColBERT: Ef- ï¬cient and effective passage search via contextual- ized late interaction over bert. In SIGIR.
Mounia Lalmas and Anastasios Tombros. 2007. INEX 2002 - 2006: Understanding xml retrieval evaluation. In DELOS.
Steve Lawrence, Kurt D. Bollacker, and C. Lee Giles. 1999. Indexing and retrieval of scientiï¬c literature. In CIKM â99.
Is searching full text more effec- tive than searching abstracts? BMC Bioinformatics, 10:46 â 46.
Sean MacAvaney. 2020. OpenNIR: A complete neu- ral ad-hoc ranking pipeline. In Proceedings of the Thirteenth ACM International Conference on Web Search and Data Mining, pages 845â848.
Sean MacAvaney, Arman Cohan, Nazli Goharian, and Ross Filice. 2020a. Ranking signiï¬cant discrepan- cies in clinical reports. In ECIR.
Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, and Ophir Frieder. 2020b. Efï¬cient document re- ranking for transformers by precomputing term rep- resentations. In SIGIR.
Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, and Ophir Frieder. 2020c. Expansion via prediction of importance with contextualization. In SIGIR.
Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. CEDR: Contextualized em- beddings for document ranking. Proceedings of the 42nd International ACM SIGIR Conference on Re- search and Development in Information Retrieval.
Alan Medlar, Kalle Ilves, Ping Wang, Wray Buntine, and Dorota Glowacka. 2016. Pulp: A system for ex- ploratory search of scientiï¬c literature. In Proceed- ings of the 39th International ACM SIGIR confer- ence on Research and Development in Information Retrieval, pages 1133â1136.
Donald Metzler and W. Bruce Croft. 2005. A markov random ï¬eld model for term dependencies. In SIGIR â05.
Rodrigo Nogueira. 2019. From doc2query to docTTTTTquery.
Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with bert. ArXiv, abs/1901.04085.
Rodrigo Nogueira, Zhiying Jiang, Kyunghyun Cho, and Jimmy Lin. 2020a. Evaluating pretrained trans- former models for citation recommendation. In BIR@ECIR.
Rodrigo Nogueira, Zhiying Jiang, and Jimmy Lin. ranking with a pretrained arXiv preprint 2020b. Document sequence-to-sequence model. arXiv:2003.06713.
and Kyunghyun Cho. 2019. Document expansion by query prediction. ArXiv, abs/1904.08375.
Kirk Roberts, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, Kyle Lo, Ian Soboroff, Ellen Voorhees, Lucy Lu Wang, and William R Hersh. 2020. TREC-COVID: Rationale and Structure of an Information Retrieval Shared Task for COVID- 19. Journal of the American Medical Informatics Association. Ocaa091.
Luca Soldaini, Andrew Yates, and Nazli Goharian. 2017. Learning to reformulate long queries for clin- ical decision support. J. Assoc. Inf. Sci. Technol., 68:2602â2619.
Amin Sorkhei, Kalle Ilves, and Dorota Glowacka. 2017. Exploring scientiï¬c literature search through topic models. In Proceedings of the 2017 ACM Workshop on Exploratory Search and Interactive Data Analyt- ics, pages 65â68.
Raphael Tang, Rodrigo Nogueira, Edwin Zhang, Nikhil Gupta, Phuong Cam, Kyunghyun Cho, and Jimmy Lin. 2020. Rapidly bootstrapping a question answer- ing dataset for COVID-19. ArXiv, abs/2004.11339.
Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, William. Mer- rill, Paul Mooney, Dewey A. Murdick, Devvret Rishi, Jerry Sheehan, Zhihong Shen, Brandon Stil- son, Alex D. Wade, Kuansan Wang, Christopher Wil- helm, Boya Xie, Douglas M. Raymond, Daniel S. Weld, Oren Etzioni, and Sebastian Kohlmeier. 2020. CORD-19: The COVID-19 open research dataset. ArXiv, abs/2004.10706.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Râemi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingfaceâs trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.
Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. In SIGIR.
Andrew Yates and Nazli Goharian. 2013. ADRTrace: Detecting expected and unexpected adverse drug re- actions from user reviews on social media sites. In ECIR.
Zeynep Akkalyoncu Yilmaz, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Cross-domain mod- eling of sentence-level evidence for document re- trieval. In EMNLP/IJCNLP.
Edwin M. Zhang, Nikhil Gupta, Rodrigo Nogueira, Kyunghyun Cho, and Jimmy Lin. 2020. Rapidly deploying a neural search engine for the COVID- 19 open research dataset: Preliminary thoughts and lessons learned. ArXiv, abs/2004.05125. | {
"id": "2003.06713"
} |
2005.01795 | Generating SOAP Notes from Doctor-Patient Conversations Using Modular Summarization Techniques | Following each patient visit, physicians draft long semi-structured clinical
summaries called SOAP notes. While invaluable to clinicians and researchers,
creating digital SOAP notes is burdensome, contributing to physician burnout.
In this paper, we introduce the first complete pipelines to leverage deep
summarization models to generate these notes based on transcripts of
conversations between physicians and patients. After exploring a spectrum of
methods across the extractive-abstractive spectrum, we propose Cluster2Sent, an
algorithm that (i) extracts important utterances relevant to each summary
section; (ii) clusters together related utterances; and then (iii) generates
one summary sentence per cluster. Cluster2Sent outperforms its purely
abstractive counterpart by 8 ROUGE-1 points, and produces significantly more
factual and coherent sentences as assessed by expert human evaluators. For
reproducibility, we demonstrate similar benefits on the publicly available AMI
dataset. Our results speak to the benefits of structuring summaries into
sections and annotating supporting evidence when constructing summarization
corpora. | http://arxiv.org/pdf/2005.01795 | Kundan Krishna, Sopan Khosla, Jeffrey P. Bigham, Zachary C. Lipton | cs.CL, cs.AI, cs.LG, stat.ML | Published at ACL 2021 Main Conference | null | cs.CL | 20200504 | 20210602 | 1 2 0 2
n u J 2 ] L C . s c [
3 v 5 9 7 1 0 . 5 0 0 2 : v i X r a
# Generating SOAP Notes from Doctor-Patient Conversations Using Modular Summarization Techniques
Kundan Krishna, Sopan Khosla, Jeffrey Bigham, Zachary C. Lipton Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA {kundank,sopank,jbigham,zlipton}@andrew.cmu.edu
# Abstract
Following each patient visit, physicians draft long semi-structured clinical summaries called SOAP notes. While invaluable to clini- cians and researchers, creating digital SOAP notes is burdensome, contributing to physician burnout. In this paper, we introduce the ï¬rst complete pipelines to leverage deep summa- rization models to generate these notes based on transcripts of conversations between physi- cians and patients. After exploring a spectrum of methods across the extractive-abstractive spectrum, we propose CLUSTER2SENT, an al- gorithm that (i) extracts important utterances relevant to each summary section; (ii) clus- ters together related utterances; and then (iii) generates one summary sentence per cluster. CLUSTER2SENT outperforms its purely ab- stractive counterpart by 8 ROUGE-1 points, and produces signiï¬cantly more factual and co- herent sentences as assessed by expert human evaluators. For reproducibility, we demon- strate similar beneï¬ts on the publicly available AMI dataset. Our results speak to the beneï¬ts of structuring summaries into sections and an- notating supporting evidence when construct- ing summarization corpora.
# Introduction
In a parallel development, patients increasingly record their doctorâs visits, either in lieu of taking notes or to share with a family member. A budding line of research has sought to leverage transcripts of these clinical conversations both to provide in- sights to patients and to extract structured data to be entered into EHRs (Liu et al., 2019b; Schloss and Konam, 2020; Krishna et al., 2021).
In this paper, we introduce the ï¬rst end-to-end methods for generating whole SOAP notes based on clinical conversations. Our work builds on a unique corpus, developed in collaboration with Abridge AI, Inc.1), that consists of thousands of transcripts of recorded clinical conversations to- gether with associated SOAP notes drafted by a work force trained in the ofï¬cial style of SOAP note documentation. On one hand, this task is much harder than traditional summarization bench- marks, in part, because SOAP notes are longer (320 words on average) than summaries in popu- lar datasets like CNN/Dailymail (Nallapati et al., 2016), Newsroom (Grusky et al., 2018), and Sam- Sum (Gliwa et al., 2019) (55, 27, and 24 words on average). On the other hand, our dataset offers useful structure: (i) segmentation of each SOAP note into subsections; and (ii) a set of supporting utterances that provide evidence for each sentence in the SOAP note. Exploiting this structure, our methods outperform appropriate baselines.
Electronic health records (EHR) play a crucial role in patient care. However, populating them can take as much time as attending to patients (Sinsky et al., 2016) and constitutes a major cause of physician burnout (Kumar and Mezoff, 2020). In particular, doctors document patient encounters with SOAP notes, semi-structured written accounts containing four sections: (S)ubjective information reported by the patient; (O)bjective observations, e.g., lab re- sults; (A)ssessments made by the doctor (typically, the diagnosis); and a (P)lan for future care, includ- ing diagnostic tests, medications, and treatments. Sections can be subdivided into 15 subsections.
Our ï¬rst methodological contribution is to pro- pose a spectrum of methods, for decomposing sum- marizaton tasks into extractive and abstractive sub- tasks. Starting from a straightforward sequence-to- sequence model, our methods shift progressively more work from the abstractive to the extractive component: (i) CONV2NOTE: the extractive mod- ule does nothing, placing the full burden of summa- rization on an end-to-end abstractive module. (ii)
1http://abridge.com
Conversation (1) Extract Noteworthy for review of systems: (2) Cluster (3) Generate DR: So are you taking the Monteluekast regularly? PT: Yeah, one everyday like you sai. DR: Good. And is it helping? Do you have chest pains anymore? PT: No. No chest pains. DR:That's good. PT: Although | do still have some cough. DR: I see. And do you get, like, mucous with it or is it dry? PT: Umm no it's usually dry. No mucous. bo} IDR: Good. And is it helping? Do you Ihave chest pains anymore? IPT: No. No chest pains PT: Although I do still have some cough. IDR: | see. And do you get, like, mucous ith it or is it dry? PT: Umm no it's usually dry. No mucous. plchest pains anymore? PT: No. No chest pains DR: Good. And is it helping? Do you have SUBEJ Post viral respiratory allerg [Denies chest pain.|bonfirms dry cough. _-[Monteleukast] Fluticasone J PT: Although | do still have some cough. DR: | see. And do you get, like, mucous âpiwith it or is it dry? PT: Umm no it's usually dry. No mucous. Noteworthy for medications l>| DR: So are you taking the Monteluekast regularly? PT: Yeah, one everyday like you said. DR: So are you taking the Monteluekast > regularly? PT: Yeah, one everyday like you said. OBJECTIVE ASSESSMENT - Patient feeling better after taking inhaler. Still has some cough but no chest pain. PLAN - Followup in 1 week to assess condition and decide when to stop using the inhaler.
Figure 1: Workï¬ow of our best performing approach involving extraction and clustering of noteworthy conversa- tion utterances followed by abstractive summarization of each cluster (ï¬ctitious data)
EXT2NOTE: the extractive module selects all utter- ances that are noteworthy (i.e., likely to be marked as supporting utterances for at least one SOAP note sentence), and the decoder is conditioned only on these utterances; (iii) EXT2SEC: the extractive module extracts per-subsection noteworthy utter- ances and the decoder generates each subsection, conditioned only on the corresponding utterances; (iv) CLUSTER2SENT: the extractive module not only extracts per-subsection noteworthy utterances but clusters together those likely to support the same SOAP sentenceâhere, the decoder produces a single sentence at a time, each conditioned upon a single cluster of utterances and a token indicating the SOAP subsection. We see consistent beneï¬ts as we move from approach (i) through (iv).
Both to demonstrate the generality of our meth- ods and to provide a reproducible benchmark, we conduct parallel experiments on the (publicly avail- able) AMI corpus (Carletta, 2007)2 Like our med- ical conversations dataset, the AMI corpus ex- hibits section-structured summaries and contains annotations that link summary sentences to corre- sponding supporting utterances. Our experiments with AMI data show the same trends, favoring pipelines that demand more from the extractive component. These results speak to the wider use- fulness of our proposed approaches, EXT2SEC and CLUSTER2SENT, whenever section-structured summaries and annotated evidence utterances are available.
clustering heuristic leads to similar performance on SOAP note generation as we obtain when us- ing ground-truth clustersâeven though the ground truth noteworthy utterances are not always local- ized. Applied with predicted noteworthy utterances and clusters, this approach achieves the highest ROUGE scores and produces the most useful (fac- tual, coherent, and non-repetitive) sentences as rated by human experts. As an additional beneï¬t of this approach, due to the smaller lengths of the in- put and output sequences involved, we can feasibly train large transformer-based abstractive summa- rization models (e.g., T5), whose memory require- ments grow quadratically with sequence length. Additionally, our approach localizes the precise utterances upon which each SOAP note sentence depends, enabling physicians to verify the correct- ness of each sentence and potentially to improve the draft by highlighting the correct noteworthy utterances (versus revising the text directly). In summary, we contribute the following:
⢠The ï¬rst pipeline for drafting entire SOAP notes from doctor-patient conversations.
⢠A new collection of extractive-abstractive approaches for generating long section- segmented summaries of conversations, in- cluding new methods that leverage annota- tions attributing summary sentences to conver- sation utterances.
Our best performing model, CLUSTER2SENT (Figure 1), demands the most of the extractive module, requiring that it both select and group each subsectionâs noteworthy utterances. Interest- ingly, we observe that given oracle (per-subsection) noteworthy utterances, a simple proximity-based
⢠A rigorous quantitative evaluation of our pro- posed models and appropriate baselines for both the extractive and abstractive compo- nents, including sensitivity of the pipeline to simulated ASR errors.
2Our dataset: modular-summarization code and the AMI https://github.com/acmi-lab/ trained models for
⢠A detailed human study to evaluate the fac- tuality and quality of generated SOAP notes, and qualitative error analysis.
# 2 Related Work
Summarization is a well-studied problem in NLP (Nenkova et al., 2011). While early works fo- cused on simply extracting important content from a document (Erkan and Radev, 2004; Wong et al., 2008), later approaches attempted to paraphrase the content into new sentences (abstractive summa- rization) (Filippova, 2010; Berg-Kirkpatrick et al., 2011; Wang and Cardie, 2013). Following the de- velopment of neural sequence models (Sutskever et al., 2014), more research focuses on neural gen- eration of abstractive summaries (Nallapati et al., 2016; See et al., 2017; Celikyilmaz et al., 2018). While many papers summarize news articles, oth- ers summarize conversations, in business meetings (Wang and Cardie, 2013; Zhu et al., 2020), cus- tomer service (Liu et al., 2019a), and tourist infor- mation center (Yuan and Yu, 2019) contexts.
In the space of two-step extractive-abstractive summarization approaches, Subramanian et al. (2019) summarize scientiï¬c papers by ï¬rst extract- ing sentences from it and then abstractively summa- rizing them. Chen and Bansal (2018) extract impor- tant sentences from the input and then paraphrase each of them to generate the abstractive summary. While they assume that each summary sentence is supported by exactly one source sentence, in our medical conversations, many summary sentences synthesize content spread across multiple dialogue turns (e.g., a series of questions and answers).
Past work on abstractive summarization of med- ical conversations has focused on summarizing patient-nurse conversations with goals including capturing symptoms of interest (Liu et al., 2019c) and past medical history (Joshi et al., 2020). These tasks are respectively similar to generating the re- view of systems and past medical history subsec- tions of a SOAP note. In contrast, we aim to gen- erate a full-length SOAP note containing up to 15 subsections, and propose methods to address this challenge by extracting supporting context for smaller parts and generating them independently.
# 3 Dataset
We use two different datasets in this work. The pri- mary medical dataset, developed through a collab- oration with Abridge AI, consists of doctor-patient conversations with annotated SOAP notes. Addi- tionally, we evaluate our summarization methods on the AMI dataset (Carletta, 2007), comprised of business meeting transcripts and their summaries.
# 3.1 Medical dataset
Our work builds on a unique resource: a corpus con- sisting of thousands of recorded English-language clinical conversations, with associated SOAP notes created by a work force trained in SOAP note doc- umentation standards. Our dataset consists of tran- scripts from real-life patient-physician visits from which sensitive information such as names have been de-identiï¬ed. The full medical dataset con- sists of 6862 visits consisting of 2732 cardiologist visits, 2731 visits for family medicine, 989 inter- ventional cardiologist visits, and 410 internist visits. Owing to the sensitive nature of the data, we can- not share it publicly (an occupational hazard of research on machine learning for healthcare).
For each visit, our dataset contains a human- generated transcript of the conversation. The tran- script is segmented into utterances, each annotated with a timestamp and speaker ID. The average con- versation lasts 9.43 minutes and consists of around 1.5k words (Appendix Figure A1). Associated with each conversation, we have a human-drafted SOAP note created by trained, professional annota- tors. The annotators who created the SOAP notes worked in either clinical transcription, billing, or re- lated documentation-related departments, but were not necessarily professional medical scribes. The dataset is divided into train, validation and test splits of size 5770, 500 and 592, respectively.
Our annotated SOAP notes contain (up to) 15 subsections, each of which may contain multiple sentences. The subsections vary in length. The Allergies subsections is most often empty, while the Assessment subsection contains 5.16 sentences on average (Table 1). The average SOAP note con- tains 27.47 sentences. The different subsections also differ in the style of writing. The Medications subsection usually consists of bulleted names of medicines and their dosages, while the Assessment subsection typically contains full sentences. On av- erage, the fraction of novel (i.e., not present in the conversation) unigrams, bigrams, and trigrams, in each SOAP note are 24.09%, 67.79% and 85.22%, respectively.
Each SOAP note sentence is also annotated with utterances from the conversation which provide evi- dence for that sentence. A SOAP note sentence can have one or more supporting utterances. On aver- age, each SOAP sentence has 3.84 supporting utter- ances, but the mode is 1 (Appendix Figure A1). We refer to these utterances as noteworthy utterances
Subsection Mean length Family Medical History Past Surgical History Review of Systems Chief Complaint Miscellaneous Allergies Past Medical History Social History Medications 0.23 0.58 3.65 2.17 2.81 0.06 2.93 0.27 3.74 Immunizations Laboratory and Imaging Results 0.11 2.27 Assessment 5.16 Diagnostics and Appointments Prescriptions and Therapeutics 1.65 1.75 Healthcare Complaints 0.09
Table 1: Average number of sentences in different SOAP note subsections grouped by parent sections (Subjective, Objective, Assessment, Plan, Others resp.)
throughout this paper. Throughout this work, we deal with the 15 more granular subsections rather than the 4 coarse sections of SOAP notes, and thus for convenience, all further mentions of section technically denote a SOAP subsection.
# 3.2 AMI dataset
The AMI dataset is a collection of 138 business meetings, each with 4 participants with various roles (e.g., marketing expert, product manager, etc.). Each meeting transcript comes with an asso- ciated abstractive summary that is divided into four sectionsâabstract, decisions, actions, and prob- lems. Each conversation also has an associated extractive summary, and there are additional anno- tations linking the utterances in the extractive sum- mary to sentences in the abstractive summary. For any given sentence in the abstractive summary, we refer to the linked set of utterances in the extractive summary as its noteworthy utterances. We note that 7.9% of the abstractive summary sentences have no annotated noteworthy utterances. To simplify the analysis, we remove these sentences from sum- maries in the training, validation, and test splits.
# 4 Methods
We investigate the following four decompositions of the summarization problem into extractive and abstractive phases, ordered from abstraction-heavy
to extraction-heavy: CONV2NOTE takes an end- to-end approach, generating the entire SOAP note from the entire conversation in one shot. EXT2NOTE ï¬rst predicts all of the noteworthy ut- terances in the conversation (without regard to the associated section) and then generates the entire SOAP note in one shot from only those utterances. EXT2SEC extracts noteworthy utterances, while also predicting the section(s) for which they are rel- evant, and then generates each SOAP section sepa- rately using only that sectionâs predicted notewor- thy utterances. CLUSTER2SENT attempts to group together the set of noteworthy utterances associ- ated with each summary sentence. Here, we cluster separately among each set of section-speciï¬c note- worthy utterances and then generate each section one sentence at a time, conditioning each on the associated cluster of utterances.
Each of these pipelines leaves open many choices for speciï¬c models to employ for each sub- task. For the abstractive modules of CONV2NOTE and EXT2NOTE, we use a pointer-generator net- work. The abstractive modules of EXT2SEC and CLUSTER2SENT, which require conditioning on section are modeled using conditioned pointer- generator networks (described in Section 5), and ï¬ne-tuned T5 models which condition on the sec- tion being generated by means of prepending it to the input. T5 models could not be used in the CONV2NOTE and EXT2NOTE settings because their high memory requirement for long inputs could not be accommodated even with 48GB of GPU memory.
For noteworthy utterance extraction, we primar- ily use a hierarchical LSTM model and a BERT- LSTM model as described in the next section. All models are conï¬gured to have a scalar output for binary classiï¬cation in EXT2NOTE, whereas for EXT2SEC and CLUSTER2SENT, they have multi- label output separately predicting noteworthiness for each section. Note that the same utterance can be noteworthy with respect to multiple sections. We use the same trained utterance extraction mod- els for both EXT2SEC and CLUSTER2SENT.
For the clustering module in CLUSTER2SENT, we propose a heuristic that groups together any two supporting utterances that are close, meaning they have less than or equal to Ï utterances sep- arating them, where Ï is a hyperparameter. This process is iterated, with the clusters growing in size by merging with other singletons or clusters, until
every pair of close utterances have the same cluster membership. The value of Ï is tuned on the vali- dation set. Since each cluster necessarily produces one sentence in the SOAP note, having too many or too few clusters can make the SOAP note too long or too short, respectively. Therefore, for any given value of the hyper-parameter Ï and any given section, the prediction thresholds of the extractor are tuned on the validation set to produce approxi- mately the same number of clusters over the entire validation set as present in the ground truth for that section. Among ground truth clusters containing multiple noteworthy utterances, 82% are contigu- ous. In an experiment where the heuristic is used to cluster the oracle noteworthy utterances for each section, and summaries are subsequently generated via the abstractive modules from CLUSTER2SENT, ROUGE-1 and ROUGE-2 metrics deteriorate by less than 1 point as compared to oracle clusterings (Appendix Table A3), demonstrating our heuristicâs effectiveness.
# 5 Model Architectures
Pointer-Generator Network We use the pointer-generator network introduced by See et al. (2017) for CONV2NOTE and EXT2NOTE. The model is a bidirectional LSTM-based encoder- decoder model with attention. It employs a pointer mechanism to copy tokens directly from the input in addition to generating them by predicting gen- eration probabilities for the entire vocabulary. The model also computes the weights that govern copy- ing versus generating at each decoding timestep.
Section-conditioned Pointer-Generator Net- work We modify the pointer-generator network for algorithms EXT2SEC and CLUSTER2SENT, to condition on the (sub)section of the summary to be generated. The network uses a new lookup table to embed the section z into an embedding ez. The section embedding is concatenated to each input word embedding fed into the encoder. The section embedding is also appended to the inputs of the decoder LSTM in the same fashion.
T5 We use the recently released T5 model (Raf- fel et al., 2020) as an abstractive module. It is an encoder-decoder model, where both encoder and decoder consist of a stack of transformer layers. The T5 model is pre-trained on 5 tasks, includ- ing summarization, translation etc. We use the pre-trained T5 model parameters and ï¬ne-tune it
on our task dataset. For introducing the section- conditioning in EXT2SEC and CLUSTER2SENT, we simply add the name of the section being gener- ated to the beginning of the input.
Hierarchical LSTM classiï¬er(H-LSTM) In this model, we ï¬rst encode each utterance ui in- dependently by passing its tokens through a bidi- rectional LSTM and mean-pooling their encoded representations to get the utterance representation hi. We pass the sequence of utterance represen- tations {h1, h2, ..., hn} through another bidirec- tional LSTM to get new utterance representations which incorporate neighboring contexts. These are then passed through a sigmoid activated linear layer to predict each utteranceâs probability of notewor- thiness with respect to each section.
BERT-LSTM classiï¬er(B-LSTM) In this tokens in the utterance ui are passed model, through a BERT encoder to obtain their contextual- ized representations, which are mean-pooled to get the utterance representation hi. The subsequent ar- chitecture exactly mirrors hierarchical LSTM, and involves passing utterance representations through a bidirectional LSTM and linear layer to get out- put probabilities. BERT-LSTM is ï¬ne-tuned in an end-to-end manner.
# 6 Experiments
We ï¬rst establish two baselines. RANDOMNOTE randomly and uniformly samples a SOAP note from the training set and outputs it as the sum- mary for any input conversation. ORACLEEXT presents all the ground truth noteworthy utterances (evidence) from the conversation as the SOAP note without any abstractive summarization. Thus, the ORACLEEXT baseline has the advantage of con- taining all the desired information (e.g., names of medicines) from the conversation, but the disadvan- tage of not being expressed in the linguistic style of a SOAP note which leads to lower n-gram over- lap. The opposite is true for the RANDOMNOTE baseline. Both baselines give similar performance and are outperformed by the simple CONV2NOTE approach (Table 2).
We train the abstractive modules for the 4 ap- proaches described in Section 4 with the ground truth noteworthy utterances as inputs. To estimate an upper bound on the performance we can reason- ably hope to achieve by improving our noteworthy utterance extractors, we test our models with oracle
noteworthy utterances in the test set. All algorithms relying on oracle noteworthy utterances outperform CONV2NOTE, and exhibit a monotonic and signiï¬- cant rise in ROUGE scores as we move towards the extraction-heavy end of the spectrum (Table 3)3.
For predicting noteworthy utterances, we use two baselines: (i) logistic regression on TF-IDF utterance representations; and (ii) a model with a bidirectional LSTM to compute token-averaged utterance representations, followed by a linear clas- siï¬cation layer. These two models make the pre- dictions for each utterance independent of others. In contrast, we also use models which incorporate context from neighboring utterances: (a) a hierar- chical LSTM; and (b) a BERT-LSTM model as described in Section 5. The latter two methods perform much better (Table 5), demonstrating the beneï¬t of incorporating neighboring context, with BERT-LSTM performing the best (see Appendix Table A6 for section-wise performance).
Using predicted noteworthy utterances and clusters instead of oracle ones leads to a drop in ROUGE scores, but the performance of EXT2SEC and CLUSTER2SENT is still better than CONV2NOTE (Table 2). For the medical dataset, using a BERT-LSTM extractor leads to the best performance, with CLUSTER2SENT outperforming CONV2NOTE by about 8 points in ROUGE-1 (see Appendix Table A5 for section-wise performance). Interestingly, the T5-Small variant achieves similar performance to T5-Base, despite being only about a quarter of the latterâs size.
Performance on AMI dataset We see a sim- ilar trend in the ROUGE scores when applying these methods on the AMI dataset. One excep- tion is the poor performance of pointer-generator based EXT2NOTE, which excessively repeated sen- tences despite using a high coverage loss coef- ï¬cient. There is a larger gap between the per- formance of the T5-Small and T5-Base abstrac- tive models on this dataset. As an extractor, the performance of BERT-LSTM is again better than HLSTM (Table 5), but when used in tandem with the abstractive module, ROUGE scores achieved by the overall pipeline do not always follow the same order. We also observe that the clustering heuristic does not work as well on this dataset. Speciï¬cally, tuning the thresholds of the extractive model, while ï¬xing the clustering threshold Ï gave worse results on this dataset. Tuning the thresholds independent
3The character â-â represents GPU memory overï¬ow
of the clusters performed better. However, the best method still outperforms CONV2NOTE by about 11 ROUGE-1 points (Table 2).
Performance with ASR errors In the absence of human-generated transcripts of conversations, Automatic Speech Recognition (ASR) techniques can be used to transcribe the conversations for use by our models. To account for ASR errors, we ar- tiï¬cially added errors in transcripts of the medical dataset by randomly selecting some percentage of the words and replacing them with phonetically similar words using Reï¬nedSoundEx (Commons) (details in the Appendix). Models trained on clean dataset perform worse on a 10% corrupted test dataset (Table 4). Since ASR errors lead to re- placement of a correct word by only a small set of phonetically similar words, there is still some information indicating the original word that can be used by the models. When we train our mod- els on data corrupted at the 10% ASR error rate, our models recover much of the performance drop (Table 4). Notably when simulated ASR errors are dialed up to a 30% error rate, (both at train and test time) we see a smaller performance drop for CLUSTER2SENT as compared to CONV2NOTE.
# 7 Qualitative Analysis
The conditioned pointer-generator and T5 models used in CLUSTER2SENT learn to place information regarding different topics in appropriate sections. Hence, given a cluster of supporting utterances, the models can generate different summaries for multiple sections (Figure 2). For example, given the same supporting utterances discussing the pa- tientâs usage of lisinopril for low blood pressure, a model generates âlow blood pressureâ in the review of systems section, and âlisinoprilâ in medications section. We direct the reader to the appendix for examples of full-length generated SOAP notes.
Interestingly, when the abstractive model is given a cluster of utterances that are not relevant to the section being generated, the model sometimes outputs fabricated information relevant to that sec- tion such as saying the patient is a non-smoker in social history, or that the patient has taken a ï¬u shot in immunizations . Hence, the quality of pro- duced summaries heavily depends on the ability of the extractive step to classify the extracted utter- ances to the correct section. Another cause of false information is the usage of pronouns in clusters without a mention of the referred entity. In such
Method R-1 R-2 R-L R-1 R-2 R-L RANDOMNOTE ORACLEEXT 34.99 33.07 12.69 12.22 21.37 17.42 42.47 39.97 11.55 11.17 21.47 20.91 CONV2NOTE (PG) 49.56 25.68 32.87 39.62 13.16 23.95 EXT2NOTE (PG + HLSTM) EXT2NOTE (PG + BLSTM) EXT2NOTE (T5-Small + HLSTM) EXT2NOTE (T5-Small + BLSTM) 49.58 50.50 - - 24.91 25.4 - - 31.68 31.93 - - 21.28 21.71 40.48 40.36 7.06 6.83 13.82 13.73 15.96 15.69 24.64 24.13 EXT2SEC (PG + HLSTM) EXT2SEC (PG + BLSTM) EXT2SEC (T5-Small + HLSTM) EXT2SEC (T5-Small + BLSTM) 55.23 55.74 55.77 56.00 27.14 27.54 28.64 29.16 35.15 36.09 37.50 38.38 43.75 40.48 42.45 45.44 15.25 15.61 15.20 16.59 23.46 23.31 23.92 26.14 CLUSTER2SENT (PG + HLSTM) CLUSTER2SENT (PG + BLSTM) CLUSTER2SENT (T5-Small + HLSTM) CLUSTER2SENT (T5-Small + BLSTM) CLUSTER2SENT (T5-Base + HLSTM) CLUSTER2SENT (T5-Base + BLSTM) 55.46 55.60 56.88 57.14 57.27 57.51 27.41 27.68 28.63 29.11 29.10 29.56 35.81 36.29 36.78 37.43 37.38 38.06 46.19 42.31 45.10 42.38 50.52 45.91 16.64 15.92 15.06 15.36 17.56 17.70 24.29 23.51 23.52 23.9 24.89 25.24
Table 2: ROUGE scores achieved by different methods on the two datasets
Cluster of utterances Subsection âSummary-PG Summary-T5 DR That one thing that we can do to reduce risk with that cholesterol is 100 mg metoprolol. DR But | want you on two a day. Prescriptions and SEE RErats metoprolol 100 mg twice a day. metoprolol 100 mg twice aday. DR Um, the first thing | didnât get was that, um, are you, you 're on digoxin, right? PT Um-hum. Past Medical History history of heart disease. patient is on digoxin. Medications digoxin. digoxin. âAssessment the patient is on digoxin. patient is on digoxin. DR Uh, and have you had any more chest pain? PT I did, yeah, | do. Review of Systems confirms chest pain. confirms chest pain. DR Uh, and have you had any more chest pain? PT Not really. No. Review of Systems denies chest pain. denies chest pain. DR This one, this amlodipine that you are taking it's a good pill for high blood pressure. PT Okay DR But right now your blood pressure is a bit low. PT Um-hum DR So | will reduce it to half a pill per day, alright? Chief Complaint high blood pressure. Tow blood pressure. Review of Systems blood pressure is a bit low. blood pressure isa bit low. Past Medical History high blood pressure. Tow blood pressure. Prescriptions and Geert amlodipine half a pill a day. reduce amlodipine to half a pill per day. DR And nothing like that? PTI, and , of course , when you break something, like | fractured my leg , | don't think that whatever that feeling is ever goes away completely. Chief Complaint Teg swelling. fractured leg. Past Medical History leg pain. fractured leg. Medications patient is on leg. xarelto. oe patient had a flu shot in the Immunizations patient had past. immunizations. Diagnostics and the patient will undergo leg follow-up. Appointments surgery.
Figure 2: Noteworthy utterance clusters summarized in different ways for different sections by the abstractive summarization modules of CLUSTER2SENT (utterances were slightly obfuscated for privacy reasons)
situations, T5 models frequently replace the pro- noun with some arbitrary entity (e.g. âsheâ with âdaughterâ, compounds with âhaemoglobinâ, and medicines with âlisinoprilâ).
Occasionally, the abstractive module produces new inferred information that is not mentioned ex- plicitly in the conversation. In one instance, the model generated that the patient has a history of heart disease conditioned on a cluster that men-
tioned he/she takes digoxin, a popular medicine for heart disease. Similarly, the model can infer past medical history of âhigh cholesterolâ upon seeing pravastatin usage. Such inferences can also lead to incorrect summaries, e.g., when a doctor explained that a patient has leaky heart valves, a model added a sentence to the diagnostics and appointments sec- tion saying âcheck valvesâ.
CLUSTER2SENT summarizes localized regions
Medical dataset AMI corpus Method PG T5-Small PG T5-Small EXT2NOTE EXT2SEC CLUSTER2SENT 52.95 61.00 63.63 - 62.37 66.50 20.44 43.32 51.86 41.10 46.85 54.23
Table 3: ROUGE-1 achieved on test set when using the abstractive models with oracle noteworthy utterances and clusters (more results with oracle in the Appendix)
Method R-1 R-2 R-L Train on clean data + Test on data with 10% error rate CONV2NOTE(PG) CLUSTER2SENT(PG + BLS) CLUSTER2SENT(T5-Base+ BLS) 46.52 51.84 54.88 22.60 23.74 26.65 30.45 32.94 35.88 Train and test on data with 10% error rate CONV2NOTE(PG) CLUSTER2SENT(PG + BLS) CLUSTER2SENT(T5-Base+ BLS) 48.85 54.68 56.35 24.85 26.59 28.50 31.27 35.70 37.04 Train and test on data with 30% error rate CONV2NOTE(PG) CLUSTER2SENT(PG + BLS) CLUSTER2SENT(T5-Base+ BLS) 45.16 53.69 55.90 22.26 25.88 27.73 30.14 35.12 36.06
Table 4: Performance of models trained and tested on data with different simulated ASR error rates. BLS: BERT-LSTM
of the conversation independently, which may lead to contradictions in the SOAP note. In one visit, the patient was asked about chest pain twiceâonce in the beginning to get to know his/her current state, and once as a question about how he/she felt just before experiencing a fall in the past. This led to the model generating both that the patient denied chest pain as well as conï¬rmed chest pain, without clarifying that one statement was for the present and another for the past.
# 8 Human evaluation
We asked trained human annotators to evaluate gen- erated SOAP notes for 45 conversations. Every sen- tence in each SOAP note was labeled according to various quality dimensions such whether it was fac- tually correct, incoherent, irrelevant, redundant, or placed under an inappropriate section. The detailed statistics of annotations received for each quality dimension are provided in the Appendix. We also collected aggregate annotations for the comprehen- siveness of each SOAP note and the extent to which it verbatim copied the transcript on a 5-point Likert scale.
Human raters were presented with a web in-
Medical conversations AMI corpus Metric LR LS HLS BLS HLS BLS Accuracy 96.0 Ma-AUC 78.1 29.5 Ma-F1 87.3 Mi-AUC 31.2 Mi-F1 96.1 79.3 31.0 87.6 32.9 96.5 90.0 38.6 92.7 39.6 96.5 90.5 40.9 93.3 41.1 93.77 83.81 19.95 93.21 43.76 94.16 90.76 33.08 94.90 49.93
Table 5: Performance on multilabel classiï¬ca- tion of noteworthy utterances with logistic regres- sion(LR), LSTM(LS), Hierarchical-LSTM(HLS) and BERT-LSTM(BLS). Ma:macro-averaged. Mi:micro- averaged
terface showing the conversation, along with a search feature to help them in looking up desired information. The summaries gener- ated by three methods (CONV2NOTE(pointer- generator), CLUSTER2SENT(pointer-generator) and CLUSTER2SENT(T5-base)) were presented in random order to hide their identities. For each sentence, we asked for (i) Factual correctness of the sentence; (ii) If the statement is simply repeat- ing what has already been mentioned before; (iii) If the statement is clinically irrelevant; (iv) If the statement is incoherent (not understandable due to grammatical or semantic errors); and (v) If the state- mentâs topic does not match the section in which it is placed. In addition, we asked two separate ques- tions for rating the overall summary on a scale of 1-5 for its (i) comprehensiveness and (ii) extent of verbatim copying from conversation. The human evaluation of the SOAP notes was done by work- ers who had also participated in the creation of the dataset of SOAP notes. Hence, they had already been extensively trained in the task of SOAP note creation, which gave them appropriate knowledge to judge the SOAP notes.
To quantify the performance among different methods, we consider a scenario where each gener- ated SOAP note has to be post-edited by discard- ing undesirable sentences. For a generated SOAP note, we deï¬ne its yield as the fraction of its total sentences that are not discarded. The sentences that are retained are those that are both factually correct and were not labeled as either repetitive or incoherent. The human annotations show that both CLUSTER2SENT-based methods tested pro- duced a higher yield than the CONV2NOTE base- line (p< 0.02). T5-base performs better than condi- tioned pointer-generator as the abstractive module in CLUSTER2SENT setting, producing signiï¬cantly more yield (Table 6). T5 also produces fewer inco-
Medical conversations AMI corpus Metric C2N C2S-P C2S-T C2N C2S-P C2S-T Length %Yield Comp Copy 21.2 62.0 2.44 2.18 28.2 69.0 2.42 2.64 28.4 74.7 2.76 2.76 20.7 27.22 2.30 1.80 17.9 30.22 2.55 1.80 19.05 59.45 3.75 1.90
for Table with CONV2NOTE(C2N), (C2S-T). pointer-generator Comp:comprehensiveness, Copy:amount of copying. Length: number of sentences generated.
herent sentences (Appendix Table A4) likely due to its exposure to a large number of well-formed coherent sentences during pretraining.
We conducted an analogous human evaluation of summaries generated for all 20 conversations in the test set of the AMI corpus, and saw a similar trend in the expected yield for different methods. Notably, for the AMI corpus, CONV2NOTE pro- duced a very high proportion of redundant sen- tences (> 0.5) despite using the coverage loss, while the pointer-generator based CLUSTER2SENT produced a high proportion of incoherent sentences (Appendix Table A4).
# 9 Conclusion
This paper represents the ï¬rst attempt at generating full-length SOAP notes by summarizing transcripts of doctor-patient conversations. We proposed a spectrum of extractive-abstractive summarization methods that leverage: (i) section-structured form of the SOAP notes and (ii) linked conversation utterances associated with every SOAP note sen- tence. The proposed methods perform better than a fully abstractive approach and standard extractive- abstractive approaches that do not take advantage of these annotations. We demonstrate the wider applicability of proposed approaches by showing similar results on the public AMI corpus which has similar annotations and structure. Our work demon- strates the beneï¬ts of creating section-structured summaries (when feasible) and collecting evidence for each summary sentence when creating any new summarization dataset.
# Ethics Statement
The methods proposed in this work to generate SOAP notes involve neural models that sometimes generate factually incorrect text (Maynez et al., 2020). The detection and correction of such factual
errors in automatically generated summaries is an active area of research (Cao et al., 2018; Zhang et al., 2020; Dong et al., 2020). We emphasize that the methods are intended to be used with super- vision from a medical practitioner who can check for factual errors and edit the the generated SOAP note if needed. We have estimated the frequency of such factual errors (Appendix Table A4) and char- acterized multiple types of errors seen in generated SOAP notes in Section 7, for which the medical practitioners should remain vigilant. For example, there is a bias to incorrectly generate information that occur frequently in speciï¬c sections (e.g. âpa- tient took ï¬u shotâ), and to replace pronouns with frequently seen entities (such as âlisinoprilâ for ref- erences to medicine). All data used in this study was manually de-identiï¬ed before we accessed it. Deploying the proposed methods does not require long-term storage of conversations. After the corre- sponding SOAP notes are generated, conversations can be discarded. Hence, we do not anticipate any additional privacy risks from using the proposed methods.
# Acknowledgements
This work was funded by the Center for Machine Learning and Health in a joint venture between UPMC and Carnegie Mellon University. We grate- fully acknowledge support from Abridge AI, Inc. for creating the dataset of SOAP notes and provid- ing human resources for evaluation.
# References
Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies-Volume 1, pages 481â490. As- sociation for Computational Linguistics.
Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstrac- In Proceedings of the AAAI tive summarization. Conference on Artiï¬cial Intelligence, volume 32.
Jean Carletta. 2007. Unleashing the killer corpus: ex- periences in creating the multi-everything ami meet- ing corpus. Language Resources and Evaluation, 41(2):181â190.
Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for In Proceedings of the abstractive summarization. 2018 Conference of the North American Chapter of
the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 1662â1675.
Yen-Chun Chen and Mohit Bansal. 2018. Fast abstrac- tive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 675â686.
Apache Commons. Reï¬nedsoundex.
Yue Dong, Shuohang Wang, Zhe Gan, Yu Cheng, Jackie Chi Kit Cheung, and Jingjing Liu. 2020. Multi-fact correction in abstractive text summariza- In Proceedings of the 2020 Conference on tion. Empirical Methods in Natural Language Processing (EMNLP), pages 9320â9331.
G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. Journal of artiï¬cial intelligence re- search, 22:457â479.
Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceed- ings of the 23rd international conference on compu- tational linguistics, pages 322â330. Association for Computational Linguistics.
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. Samsum corpus: A human-annotated dialogue dataset for abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 70â79.
Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries arXiv preprint with diverse extractive strategies. arXiv:1804.11283.
Anirudh Joshi, Namit Katariya, Xavier Amatriain, and Anitha Kannan. 2020. Dr. summarize: Global sum- marization of medical dialogue by exploiting local structures. arXiv preprint arXiv:2009.08666.
Kundan Krishna, Amy Pavel, Benjamin Schloss, Jef- frey P Bigham, and Zachary C Lipton. 2021. Ex- tracting structured data from physician-patient con- versations by predicting noteworthy utterances. In Explainable AI in Healthcare and Medicine, pages 155â169. Springer.
Physician burnout at a childrenâs hospital: Incidence, interven- tions, and impact. Pediatric Quality & Safety, 5(5).
Chunyi Liu, Peng Wang, Jiang Xu, Zang Li, and Jieping Ye. 2019a. Automatic dialogue summary generation for customer service. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1957â 1965.
Zhengyuan Liu, Jia Hui Hazel Lim, Nur Farah Ain Suhaimi, Shao Chuen Tong, Sharon Ong, Angela Ng, Sheldon Lee Shao Guang, Michael Ross Mac- donald, Savitha Ramasamy, Pavitra Krishnaswamy, et al. 2019b. Fast prototyping a dialogue compre- hension system for nurse-patient conversations on symptom monitoring. In NAACL-HLT (2).
Zhengyuan Liu, Angela Ng, Sheldon Lee, Ai Ti Aw, and Nancy F Chen. 2019c. Topic-aware pointer- generator networks for summarizing spoken conver- sations. arXiv preprint arXiv:1910.01335.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factu- ality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906â1919.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, C¸ aËglar Gulc¸ehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence In Proceedings of The 20th rnns and beyond. SIGNLL Conference on Computational Natural Lan- guage Learning, pages 280â290.
Ani Nenkova, Kathleen McKeown, et al. 2011. Auto- matic summarization. Foundations and Trends® in Information Retrieval, 5(2â3):103â233.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the lim- its of transfer learning with a uniï¬ed text-to-text transformer. Journal of Machine Learning Research, 21(140):1â67.
Benjamin Schloss and Sandeep Konam. 2020. Towards an automated soap note: Classifying utterances from In Machine Learning for medical conversations. Healthcare Conference, pages 610â631. PMLR.
Abigail See, Peter J Liu, and Christopher D Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073â 1083.
Christine Sinsky, Lacey Colligan, Ling Li, Mirela Prgomet, Sam Reynolds, Lindsey Goeders, Johanna Westbrook, Michael Tutty, and George Blike. 2016. Allocation of physician time in ambulatory practice: a time and motion study in 4 specialties. Annals of internal medicine.
Sandeep Subramanian, Raymond Li, Jonathan Pi- lault, and Christopher Pal. 2019. On extrac- tive and abstractive neural document summarization with transformer language models. arXiv preprint arXiv:1909.03186.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104â3112.
Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, and Hang Li. 2016. Modeling coverage for neural machine translation. In ACL, pages 76â85.
Domain- independent abstract generation for focused meet- ing summarization. In Proceedings of the 51st An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1395â 1405.
Kam-Fai Wong, Mingli Wu, and Wenjie Li. 2008. Ex- tractive summarization using supervised and semi- supervised learning. In Proceedings of the 22nd in- ternational conference on computational linguistics (Coling 2008), pages 985â992.
Lin Yuan and Zhou Yu. 2019. Abstractive dialog sum- marization with semantic scaffolds. arXiv preprint arXiv:1910.00825.
Yuhao Zhang, Derek Merck, Emily Tsai, Christo- pher D. Manning, and Curtis Langlotz. 2020. Op- timizing the factual correctness of a summary: A study of summarizing radiology reports. In Proceed- ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5108â5120.
Chenguang Zhu, Ruochen Xu, Michael Zeng, and End-to-end abstractive arXiv preprint Xuedong Huang. 2020. summarization for meetings. arXiv:2004.02016.
# Appendix
# Decoder Results with Oracle extracts
We present additional quantitative results (Ta- ble A3), including (i) The ROUGE scores on the test set when using oracle noteworthy utter- ances with both oracle and predicted clusters (for CLUSTER2SENT models). (ii) Two ablations on EXT2SEC: ALLEXT2SEC uses binary classiï¬ca- tion to extract all noteworthy utterances (not per- section), and an abstractive decoder that condi- tions on the section; while EXT2SECNOCOND uses a multilabel classiï¬cation based extractor but does not use section-conditioning in the abstrac- tive module. Both methods mostly perform worse than EXT2SEC demonstrating the beneï¬t of us- ing both section-speciï¬c extraction and section- conditioning in abstractive decoder.
# Impact of copy mechanism
When we do not use copy mechanism in the pointer- generator model, we observed a drop in its per- formance in the CLUSTER2SENT setting with or- acle noteworthy noteworthy utterances and clus- ters(Table A1). Hence, we have used copy mecha- nism in all the pointer-generator models we train in this work.
# Impact of pretraining
When training a randomly initialized T5-Base model on the medical dataset, even in CLUS- TER2SENT setting with oracle clusters, it only got a ROUGE-1 around 40 (Table A2). This is over 16 points lower than what we get by starting with off-the-shelf pretrained T5 parameters, and is even
Copy mechanism R-1 R-2 R-L Present Absent 63.63 61.92 35.62 34.37 48.85 47.86
Table A1: Impact of copy mechanism in peformance of a pointer-generator model on medical dataset in CLUS- TER2SENT using oracle noteworthy utterance clusters
Model R-1 R-2 R-L Pretrained T5 Randomly initialized T5 66.45 40.07 39.01 20.95 52.46 32.42
Table A2: Impact of pretraining on performance of T5- Base model on medical dataset with CLUSTER2SENT using oracle noteworthy utterance clusters
worse than CONV2NOTE, highlighting the impor- tance of pretraining.
# Sample generated SOAP notes
Due to privacy concerns, we can not publish conver- sations from our dataset. Here, we present an obfus- cated conversation from our test dataset, modiï¬ed by changing sensitive content such as medicines, diseases, dosages (Figure A2). We also present the SOAP note generated by our best method, as well as the ground truth.
# Model implementation details
For the hierarchical LSTM classiï¬er, we have a word embedding size of 128 and both bidirectional LSTMs have a hidden size of 256. For BERT- LSTM, the BERT embeddings are initialized from bert-base-uncased (768 dimensions). LSTMs in either direction have a hidden-layer of size 512 and the entire model is optimized end-to-end with a learning-rate of 0.001. For BERT-LSTM, an input conversation is divided into chunks of 128 utter- ances. Due to GPU constraints, these chunks are processed one at a time. The pointer-generator models have a word embedding size of 128, and a hidden size of 256 for both the encoder and the decoder. The section embeddings used in section- conditioned pointer-generator network have 32 di- mensions. During training of all pointer-generator models, the model is ï¬rst trained without cover- age loss (Tu et al., 2016) to convergence, and then trained further with coverage loss added. We tried coverage loss coefï¬cients varying from 0.5 to 8.
The pointer-generator models were trained us- ing Adam optimizer before coverage and using SGD after adding coverage. We tried learning rates between 10â4 and 10â3 with Adam. The next word prediction accuracy was used as the validation criterion for early stopping while train-
300 60K 200 40K 100 20K 0 0 3K aK 0 5 10 b number of references fe) 1K 2K number of words
Figure A1: Histogram of number of words in a conver- sation and the number of evidence utterances per sum- mary sentence for the medical dataset
Medical dataset AMI corpus Method R-1 R-2 R-L R-1 R-2 R-L EXT2NOTE (PG) EXT2NOTE (T5-Small) ALLEXT2SEC (PG) ALLEXT2SEC (T5-Small) EXT2SECNOCOND (PG) EXT2SECNOCOND (T5-Small) EXT2SEC (PG) EXT2SEC (T5-Small) CLUSTER2SENT (PG) CLUSTER2SENT (T5-Small) CLUSTER2SENT (T5-Base) 52.95 - 50.74 - 56.10 58.69 61.00 62.37 63.63 66.50 66.45 27.6 - 24.33 - 32.05 34.92 33.64 36.39 35.62 38.41 39.01 32.87 - 32.18 - 43.23 47.24 45.2 49.11 48.85 51.73 52.46 21.23 41.10 40.20 41.68 42.51 48.14 43.30 46.85 51.86 54.23 57.42 6.71 14.12 13.71 15.43 15.71 18.49 16.56 18.19 21.86 22.90 24.45 14.95 25.03 22.52 24.72 23.79 28.23 24.83 28.74 31.84 34.54 35.70 CLUSTER2SENT (PG+clustering heuristic) CLUSTER2SENT (T5-Small+clustering heuristic) CLUSTER2SENT (T5-Base+clustering heuristic) 63.12 66.08 65.94 35.08 37.73 38.26 47.96 50.66 51.31 47.17 47.53 51.24 18.99 19.70 21.47 27.31 28.95 29.81
Table A3: ROUGE scores achieved by different abstractive decoders using oracle noteworthy utterances
ing abstractive modules, with the exception of coverage-augmented models that used a combina- tion of crossentropy and coverage loss. Micro- averaged AUC was used as the validation criterion for training of extractive modules.
We employ beam search with beam size 4 to decode outputs from both models. For the vanilla pointer-generator model used in CONV2NOTE and EXT2NOTE, we modiï¬ed the beam search proce- dure to make sure that all the SOAP note sections are generated in proper order. We start the beam search procedure by feeding the header of the ï¬rst section (chief complaint). Whenever the model pre- dicts a section header as the next word and it shows up in a beam, we check if it is the next section to be generated. If not, we replace it with the correct next sectionâs header. Any end-of-summary tokens generated before all the sections have been produced are also replaced similarly. Note that pro- ducing all sections simply means that the headers for each section have to be generated, and a section can be left empty by starting the next section imme- diately after generating the previous header. The decoding length for beam search is constrained to be between 5th and 95th percentile of the tar- get sequence length distribution, calculated on the training set.
words in the conversation and replacing them with phonetically similar words. To reduce the search space of possible candidates for each word, we use the suggest() function taken from the Pyenchant4 library that provides auto-correct suggestions for the input word. Each suggestion is then passed through the Reï¬ned SoundEx algorithm to ï¬nd the phonetic distance between the original and the suggested word. We use the pyphonetics5 package for a python implementation of this algorithm. For our ï¬nal candidate list, we choose words that are at phonetic distance of 1 from the original word. Finally, a candidate is chosen at random from this list to replace the original.
# More Experimental Details
We trained models on multiple Nvidia Quadro RTX 8000, RTX 2080Ti and V100 GPUs. The extractive modules were evaluated us- ing from scikit-learn 6 and quality of summaries were evaluated using ROUGE scores calculated with the pyrouge Python package 7 which is a wrapper around the ROUGE-1.5.5 Perl script.
# Simulating ASR Errors
We simulate ASR errors at any given percentage rate by randomly selecting the percentage of the
4https://pypi.org/project/pyenchant/ 5https://pypi.org/project/pyphonetics/ 6https://scikit-learn.org 7https://pypi.org/project/pyrouge
Medical conversations AMI corpus Count C2N C2S-P C2S-T C2N C2S-P C2S-T Total sentences Repetitive Incoherent True statements False statements Truthfulness undecided Irrelevant Under incorrect section 956 96 162 587 100 11 25 56 1268 127 158 848 116 19 34 42 1277 147 58 931 125 16 24 39 414 213 9 89 71 32 14 4 358 14 134 103 75 32 15 2 381 14 27 227 68 45 2 18
Table A4: Number of sentences produced by different methods that were judged to have different listed characteristics by human raters. C2N:CONV2NOTE, C2S-P:CLUSTER2SENT with pointer-generator, C2S- T:CLUSTER2SENT with T5-base. BERT-LSTM used for medical dataset, hierarchical-LSTM used for AMI corpus.
Subsection ROUGE-1 ROUGE-2 ROUGE-L N L chief complaint review of systems past medical history past surgical history family medical history social history medications allergies miscellaneous immunizations laboratory and imaging results assessment diagnostics and appointments prescriptions and therapeutics healthcare complaints 44.34 46.88 53.48 58.44 51.94 57.72 49.56 39.29 28.87 55.95 58.36 39.01 52.85 50.53 30.11 28.12 28.35 37.70 43.08 36.49 37.82 23.53 6.63 11.61 27.49 41.18 15.31 35.70 33.51 15.79 43.59 43.28 51.80 57.04 50.13 56.30 47.64 38.32 24.90 54.81 55.11 25.35 50.43 48.10 29.57 592 514 547 230 72 97 549 21 415 25 448 570 488 446 43 11.46 29.24 17.81 10.36 16.14 10.33 15.28 8.57 34.44 7.32 19.37 132.41 17.67 18.73 16.74
Table A5: Average ROUGE scores (from CLUSTER2SENT T5Base+BLSTM) for each section of SOAP note (N- number of test datapoints with the section populated, L-average number of words in ground truth)
Section Base rate(%) Precision Recall F1 Accuracy chief complaint review of systems past medical history past surgical history family medical history social history medications allergies miscellaneous immunizations laboratory and imaging results assessment diagnostics and appointments prescriptions and therapeutics healthcare complaints 3.12 5.10 3.41 0.99 0.31 0.53 4.45 0.16 3.71 0.05 2.46 14.19 2.10 3.11 0.25 34.71 51.35 36.00 33.80 52.31 59.81 51.66 30.86 24.06 63.64 50.00 38.09 55.60 41.28 20.47 33.93 51.82 36.52 34.50 45.25 54.87 49.13 12.44 16.17 64.62 55.15 42.01 40.16 38.43 21.90 34.31 51.58 36.26 34.14 48.53 57.23 50.36 17.73 19.34 64.12 52.45 39.96 46.63 39.81 21.17 95.95 95.04 95.63 98.68 99.70 99.56 95.69 99.82 95.00 99.96 97.54 82.08 98.07 96.39 99.60 AUC 86.81 93.12 88.00 93.74 99.23 95.41 92.02 89.46 80.05 97.63 93.84 76.89 94.22 92.40 85.93
Table A6: Performance of BERT-LSTM on extracting noteworthy utterances for various SOAP sections
Predicted relevant
# Conversation utterances
subsections (PT) (A) DR Okay, so, um, we are going to talk a little bit about being a Metformin candidate . (CC) (PMH) (A) DR Um, we have talked about your hemoglobin and the things , what are , so what are the things that, that keep you from, um, from | managing your anemia well ? DR, | know thereâs a lot of stuff that troubles you. (M) PT Snacking and stress eating. PT Eating late in the evenings instead of, um, at a reasonable time - DR Right. PT At night, late. (M) (A) PT Poor meal planning. (PMH) (LIR) (A) DR Right, and | think thatâs in the, we can all take a little note for but one of things that really got me worried because your last Hemoglobin was really low - PT Uh-huh. (LIR) (A) DR It was below , it was below 10, and we âve had this consistent pattern and you âve really , | mean, you really have given it an effort and | have to give it up to you that you 've been trying and , um, so we 're down to like just a couple of options and so | want to just kind of put them before you . (A) (PT) (Med) DR | 've got, I'm, | 'm considering once a day Metformin with you at some point . (A) DR Um, | do n't want to use that as a threat. (A) DR | do n't want to use it as like a, oh , you 've been a bad patient you deserve to be on Metformin . (A) (PT) DR Um, | do have one other option , um, but | want to counsel you that , that Metformin , even if, if we did, we do gotoit, itis nota punishment. (A) DR It is something to kind of get your baseline down to a regular, regular situation and you only have to do it once a day. (A) DR Um, and | know that one of the things that we have for anemics is their eating habits . (A) (PT) DR And, so , | am proposing as instead of using Metformin this time , um , that we use something called Lipitor for the , for the eating at nighttime . (A) DR Um, itâs supposed to reduce the incidence of having those nighttime cravings so that you can work , you can do your things , you can plan a little bit better . (A) DR It's, itâs originally for ADHD so some people actually feel a little bit more focused , um, and controlled but it also affects appetite centers and so itâs supposed to do it for the longer term as opposed to using like a fen phen, um, so, which is short term . DR So, um, |'m really hoping with your interest in it and with the coverage hopefully , | know , with your particular plan it should be covered and we can get a discount . (PT) DR Um, we do it once a day with your other medications , which are actually pretty minor . (DA) DR Um, and then we check you again in eight weeks . (DA) PT Okay. DR All right? (A) (DA) DR And, so what we do is we say, you know, it should be , we usually will do three months but then eight weeks we should see some difference from today . DR We should see some kind of improvement and then we can sort of celebrate that in and of itself, if thatâs okay with you. PT That sounds great. (DA) DR Cool, all right well we will plan to meet again in eight weeks . PT Okay. DR And, uh, and we âIl go from there . PT Okay. DR Cool, all right , cool .
# Chief Complaint: anemia Past Medical History: anemia Medications: metformin
.
Miscellaneous: patient has snacking and stress eating . poor meal planning . Laboratory and Imaging Results: last hemoglobin was low at 10.
Assessment: discussed about being a metformin candidate . discussed about hemoglobin and the things that keep patient from managing anemia well. discussed that patient 's last hemoglobin was really low , it was really low , it was really low, it was really low , it was really low , it was really low , and we have had this consistent pattern and you really have given it effort and we have had this. followup in 8 weeks .
# Diagnostics and Appointments: followup in 8 weeks .
Prescriptions and Therapeutics: the patient will be a metformin candidate. metformin once a day. coumadin twice a day with other medications, which are actually pretty minor.
Chief Complaint: follow-up. anemia. Past Medical History: anemia. Medications: metformin
Miscellaneous: patient is not following a correct diet plan (snacking and stress eating).
Laboratory and Imaging Results: hemoglobin was really low below 10.
Assessment: anemia. night time eating. discussed with the patient the importance of bringing up the hemoglobin to a considerable level and also discussed couple of other options. discussed the new medication called lipitor which will help the patient bringing up the hemoglobin and can take it once a day with other medications. discussed that the lipitor will reduce the nighttime cravings so that the patient can plan better (originally for ADHD to better focus). discussed with the patient that with the current insurance coverage , the patient may get a discount with lipitor.
Diagnostics and Appointments: advised to follow up in 8 weeks. Prescriptions and Therapeutics: metformin. lipitor.
Figure A2: Sample conversation (obfuscated) with SOAP note generated by the best method and the ground truth | {
"id": "1910.00825"
} |
2005.00724 | Obtaining Faithful Interpretations from Compositional Neural Networks | Neural module networks (NMNs) are a popular approach for modeling
compositionality: they achieve high accuracy when applied to problems in
language and vision, while reflecting the compositional structure of the
problem in the network architecture. However, prior work implicitly assumed
that the structure of the network modules, describing the abstract reasoning
process, provides a faithful explanation of the model's reasoning; that is,
that all modules perform their intended behaviour. In this work, we propose and
conduct a systematic evaluation of the intermediate outputs of NMNs on NLVR2
and DROP, two datasets which require composing multiple reasoning steps. We
find that the intermediate outputs differ from the expected output,
illustrating that the network structure does not provide a faithful explanation
of model behaviour. To remedy that, we train the model with auxiliary
supervision and propose particular choices for module architecture that yield
much better faithfulness, at a minimal cost to accuracy. | http://arxiv.org/pdf/2005.00724 | Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, Matt Gardner | cs.CL, cs.AI, cs.CV, cs.LG | ACL 2020; first three authors contributed equally | null | cs.CL | 20200502 | 20200908 | 0 2 0 2
p e S 8 ] L C . s c [
2 v 4 2 7 0 0 . 5 0 0 2 : v i X r a
# Obtaining Faithful Interpretations from Compositional Neural Networks
Sanjay Subramanianââ1 Ben Boginâ2 Nitish Guptaâ3
Tomer Wolfson1,2
# Sameer Singh4 1Allen Institute for AI
# Jonathan Berant1,2 Matt Gardner1 2Tel-Aviv University
âAllen Institute for AI Tel-Aviv University
3University of Pennsylvania 4University of California, Irvine {sanjays,mattg}@allenai.org, {ben.bogin,joberant}@cs.tau.ac.il, [email protected], [email protected], [email protected]
# Abstract
Neural module networks (NMNs) are a pop- ular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reï¬ect- ing the compositional structure of the prob- lem in the network architecture. However, prior work implicitly assumed that the struc- ture of the network modules, describing the abstract reasoning process, provides a faith- ful explanation of the modelâs reasoning; that is, that all modules perform their intended be- In this work, we propose and con- haviour. duct a systematic evaluation of the intermedi- ate outputs of NMNs on NLVR2 and DROP, two datasets which require composing multi- ple reasoning steps. We ï¬nd that the inter- mediate outputs diï¬er from the expected out- put, illustrating that the network structure does not provide a faithful explanation of model be- haviour. To remedy that, we train the model with auxiliary supervision and propose partic- ular choices for module architecture that yield much better faithfulness, at a minimal cost to accuracy.
âAll the dogs are black.â Basic-NMN Faithful-NMN ;â find [dogs] find[dogs] â filter [black] filter [black] â count count 14 1.6 equal | | False (57%) False (98%)
Figure 1: An example for a visual reasoning prob- lem where both the Basic and Faithful NMNs produce the correct answer. The Basic NMN, however, fails to give meaningful intermediate outputs for the find and filter modules, whereas our improved Faithful- NMN assigns correct probabilities in all cases. Boxes are green if probabilities are as expected, red otherwise.
# Introduction
Models that can read text and reason about it in a particular context (such as an image, a paragraph, or a table) have been recently gaining increased at- tention, leading to the creation of multiple datasets that require reasoning in both the visual and textual domain (Johnson et al., 2016; Suhr et al., 2017; Talmor and Berant, 2018; Yang et al., 2018; Suhr et al., 2019; Hudson and Manning, 2019; Dua et al., 2019). Consider the example in Figure 1 from NLVR2: a model must understand the composi- tional sentence in order to then ground dogs in the input, count those that are black and verify that the count of all dogs in the image is equal to the number of black dogs.
Both models that assume an intermediate struc- ture (Andreas et al., 2016; Jiang and Bansal, 2019) and models without such structure (Tan and Bansal, 2019; Hu et al., 2019; Min et al., 2019) have been proposed for these reasoning problems. While good performance can be obtained without a struc- tured representation, an advantage of structured approaches is that the reasoning process in such approaches is more interpretable. For example, a structured model can explicitly denote that there are two dogs in the image, but that one of them is not black. Such interpretability improves our sci- entiï¬c understanding, aids in model development, and improves overall trust in a model.
â Equal Contribution
two dogs are touching a food dish with their face Program Output equal rue | count 2 with-relation [is touching] [2, 5] relocate [face] [2, 5] find [dog] (1, 4] find [food dish] (3, 6] number [two] 2 Who threw the longest touchdown pass in the second half? In the first quarter, the Texans trailed early after QB Kerry Collins threw Program Output a 19-yard TD pass|1]to WR Nate Washington. Second quarter started Mn with kicker Neil Rackers made a 37-yard field goal, and the quarter relocate[who threw] âSchaub [4]! closed with kicker Rob Bironas hitting a 30-yard field goal. The Texans find-max-num [5] tried to cut the lead with QB Matt Schaub getting a evan TD pass |2 filter [the second half] (2,3, 5] to WR Andre Johnson, but the Titans would pull away with RB Javon find [touchdown pass] (1,2, 3, 5] Ringer throwing a 7-yard TD pass[3} The Texans tried to come back vere into the game in the fourth quarter, but only came away with Schaub | 4 throwing a 12-yard TD pass [5 |to WR Kevin Walter.
Figure 2: An example for a mapping of an utterance to a gold program and a perfect execution in a reasoning problem from NLVR2 (top) and DROP (bottom).
Neural module networks (NMNs; Andreas et al., 2016) parse an input utterance into an executable program composed of learnable modules that are designed to perform atomic reasoning tasks and can be composed to perform complex reasoning against an unstructured context. NMNs are appealing since their output is interpretable; they provide a logical meaning representation of the utterance and also the outputs of the intermediate steps (modules) to reach the ï¬nal answer. However, because module parameters are typically learned from end-task su- pervision only, it is possible that the program will not be a faithful explanation of the behaviour of the model (Ross et al., 2017; Wiegreï¬e and Pinter, 2019), i.e., the model will solve the task by execut- ing modules according to the program structure, but the modules will not perform the reasoning steps as intended. For example, in Figure 1, a basic NMN predicts the correct answer False, but incorrectly predicts the output of the find[dogs] operation. It does not correctly locate one of the dogs in the image because two of the reasoning steps (find and filter) are collapsed into one module (find). This behavior of the find module is not faithful to its intended reasoning operation; a human reading the program would expect find[dogs] to locate all dogs. Such unfaithful module behaviour yields an unfaithful explanation of the model behaviour.
Unfaithful behaviour of modules, such as multi- ple reasoning steps collapsing into one, are undesir- able in terms of interpretability; when a model fails to answer some question correctly, it is hard to tell which modules are the sources of error. While re- cent work (Hu et al., 2018; Jiang and Bansal, 2019)
has shown that one can obtain good performance when using NMNs, the accuracy of individual mod- ule outputs was mostly evaluated through qualita- tive analysis, rather than systematically evaluating the intermediate outputs of each module.
We provide three primary contributions regard- ing faithfulness in NMNs. First, we propose the concept of module-wise faithfulness â a system- atic evaluation of individual module performance in NMNs that judges whether they have learned their intended operations, and deï¬ne metrics to quantify this for both visual and textual reason- ing (§3). Empirically, we show on both NLVR2 (Suhr et al., 2019) and DROP (Dua et al., 2019) that training a NMN using end-task supervision, even using gold programs, does not yield module- wise faithfulness, i.e., the modules do not perform their intended reasoning task. Second, we provide strategies for improving module-wise faithfulness in NMNs (§4). Speciï¬cally, (a) we demonstrate how module architecture aï¬ects faithfulness (§4.1), (b) propose supervising module outputs with either a proxy task or heuristically generated data (§4.2), and (c) show that providing modules with uncon- texualized token representations improves faithful- ness (§4.3). Figure 1 shows an example where our approach (Faithful-NMN) results in expected mod- ule outputs as compared to the Basic-NMN. Last, we collect human-annotated intermediate outputs for 536 examples in NLVR2 and for 215 exam- ples in DROP to measure the module-wise faith- fulness of models, and publicly release them for future work. Our code and data are available at https://github.com/allenai/faithful-nmn.
# 2 Neural Module Networks
Overview Neural module networks (NMNs; An- dreas et al., 2016) are a class of models that map a natural language utterance into an exe- cutable program, composed of learnable modules that can be executed against a given context (im- ages, text, etc.), to produce the utteranceâs deno- tation (truth value in NLVR2, or a text answer in DROP). Modules are designed to solve atomic reasoning tasks and can be composed to perform complex reasoning. For example, in Figure 1, the utterance âAll the dogs are blackâ is mapped to the program equal(count(find[dogs]), count(filter[black](find[dogs]))). The find module is expected to ï¬nd all dogs in the image and the filter module is expected to out- put only the black ones from its input. Figure 2 shows two other example programs with the ex- pected output of each module in the program.
A NMN has two main components: (1) parser, which maps the utterance into an executable pro- gram; and (2) executor, which executes the pro- gram against the context to produce the denotation. In our setup, programs are always trees where each tree node is a module. In this work, we focus on the executor, and speciï¬cally the faithfulness of module execution. We examine NMNs for both text and images, and describe their modules next.
# 2.1 Modules for visual reasoning
In this task, given two images and a sentence that describes the images, the model should output True iï¬ the sentence correctly describes the im- ages. We base our model, the Visual-NMN, on LXMERT (Tan and Bansal, 2019), which takes as input the sentence x and raw pixels, uses Faster R-CNN (Ren et al., 2015) to propose a set of bounding boxes, B, that cover the objects in the image, and passes the tokens of x and the bounding boxes through a Transformer (Vaswani et al., 2017), encoding the interaction between both modali- ties. This produces a contextualized representation t â R|x|Ãh for each one of the tokens, and a repre- sentation v â R|B|Ãh for each one of the bounding boxes, for a given hidden dimension h.
We provide a full list of modules and their imple- mentation in Appendix A. Broadly, modules take as input representations of utterance tokens through an utterance attention mechanism (Hu et al., 2017), i.e., whenever the parser outputs a module, it also predicts a distribution over the utterance to-
kens (p1,..., Pjx), and the module takes as input bya piti, where t; is the hidden representation of token i. In addition, modules produce as output (and take as input) vectors p ⬠[0, 1741, indicating for each bounding box the probability that it should be output by the module (Mao et al., 2019). For ex- ample, in the program filter[black](find[dog]), the find module takes the word âdogâ (using ut- terance attention, which puts all probability mass on the word âdogâ), and outputs a probability vec- tor p ⬠[0, 1]9!, where ideally all bounding boxes corresponding to dogs have high probability. Then, the filter module takes p as input as well as the word âblackâ, and is meant to output high probabil- ities for bounding boxes with âblack dogsâ.
For the Visual-NMN we do not use a parser, but rely on a collected set of gold programs (including gold utterance attention), as described in §5. We will see that despite this advantageous setup, a basic NMN does not produce interpretable outputs.
# 2.2 Modules for textual reasoning
Our Text-NMN is used to answer questions in the DROP dataset and uses the modules as de- signed for DROP in prior work (Gupta et al., 2020) along with three new modules we deï¬ne in this work. The modules introduced in Gupta et al. (2020) and used as is in our Text-NMN are find, filter, relocate, count, find-num, find-date, find-max-num, find-min-num, num-compare and date-compare. All these mod- ules are probabilistic and produce, as output, a dis- tribution over the relevant support. For example, find outputs a distribution over the passage to- kens and find-num outputs a distribution over the numbers in the passage. We extend their model and introduce additional modules; addition and subtraction to add or subtract passage numbers, and extract-answer which directly predicts an answer span from the representations of passage to- kens without any explicit compositional reasoning. We use BERT-base (Devlin et al., 2019) to encode the input question and passage.
The Text-NMN does not have access to gold programs, and thus we implement a parser as an encoder-decoder model with attention similar to Krishnamurthy et al. (2017), which takes the ut- terance as input, and outputs a linearized abstract syntax tree of the predicted program.
# 3 Module-wise Faithfulness
Neural module networks (NMNs) facilitate inter- pretability of their predictions via the reasoning steps in the structured program and providing the outputs of those intermediate steps during execu- tion. For example, in Figure 2, all reasoning steps taken by both the Visual-NMN and Text-NMN can be discerned from the program and the interme- diate module outputs. However, because module parameters are learned from an end-task, there is no guarantee that the modules will learn to per- form their intended reasoning operation. In such a scenario, when modules do not perform their intended reasoning, the program is no longer a faithful explanation of the model behavior since it is not possible to reliably predict the outputs of the intermediate reasoning steps given the program. Work on NMNs thus far (Hu et al., 2018; Jiang and Bansal, 2019) has overlooked systematically evaluating faithfulness, performing only qualitative analysis of intermediate outputs.
We introduce the concept of module-wise faith- fulness aimed at evaluating whether each module has correctly learned its intended operation by judg- ing the correctness of its outputs in a trained NMN. For example, in Figure 2 (top), a model would be judged module-wise faithful if the outputs of all the modules, find, relocate, and with relation, are correct â i.e. similar to the outputs that a human would expect. We provide gold programs when evaluating faithfulness, to not conï¬ate faithfulness with parser accuracy.
# 3.1 Measuring faithfulness in Visual-NMN
Modules in Visual-NMN provide for each bound- ing box a probability for whether it should be a module output. To evaluate intermediate out- puts, we sampled examples from the develop- ment set, and annotated gold bounding boxes for each instance of find, filter, with-relation and relocate. The annotator draws the correct bounding-boxes for each module in the gold pro- gram, similar to the output in Figure 2 (top).
A module of a faithful model should assign high probability to bounding-boxes that are aligned with the annotated bounding boxes and low probabilities to other boxes. Since the annotated bounding boxes do not align perfectly with the modelâs bounding boxes, our evaluation must ï¬rst induce an align- ment. We consider two bounding boxes as âalignedâ if the intersection-over-union (IOU) between them
exceeds a pre-deï¬ned threshold T = 0.5. Note that it is possible for an annotated bounding box to be aligned with several proposed bounding boxes and vice versa. Next, we consider an annotated bounding box BA as âmatchedâ w.r.t a module out- put if BA is aligned with a proposed bounding box BP, and BP is assigned by the module a probability > 0.5. Similarly, we consider a proposed bounding box BP as âmatchedâ if BP is assigned by the mod- ule a probability > 0.5 and is aligned with some annotated bounding box BA.
We compute precision and recall for each mod- ule type (e.g. find) in a particular example by considering all instances of the module in that ex- ample. We deï¬ne precision as the ratio between the number of matched proposed bounding boxes and the number of proposed bounding boxes assigned a probability of more than 0.5. We deï¬ne recall as the ratio between the number of matched annotated bounding boxes and the total number of annotated bounding boxes.1 F1 is the harmonic mean of preci- sion and recall. Similarly, we compute an âoverallâ precision, recall, and F1 score for an example by considering all instances of all module types in that example. The ï¬nal score is an average over all examples. Please see Appendix B.2 for further discussion on this averaging.
# 3.2 Measuring faithfulness in Text-NMN
Each module in Text-NMN produces a distribu- tion over passage tokens (§2.2) which is a soft dis- tributed representation for the selected spans. To measure module-wise faithfulness in Text-NMN, we obtain annotations for the set of spans that should be output by each module in the gold pro- gram (as seen in Figure 2 (bottom)) Ideally, all modules (find, filter, etc.) should predict high probability for tokens that appear in the gold spans and zero probability for other tokens.
To measure a module outputâs correctness, we use a metric akin to cross-entropy loss to measure the deviation of the predicted module output pat from the gold spans S = [s1,..., sy]. Here each span s; = (¢/, t/) is annotated as the start and end tokens. Faithfulness of a module is measured by: IT=- yy 1 | log ae Pix . Lower cross-entropy corresponds to better faithfulness of a module.
1The numerators of the precision and the recall are diï¬er-
ent. Please see Appendix B.1 for an explanation.
# Improving Faithfulness in NMNs
Module-wise faithfulness is aï¬ected by various factors: the choice of modules and their implemen- tation (§ 4.1), use of auxiliary supervision (§ 4.2), and the use of contextual utterance embeddings (§ 4.3). We discuss ways of improving faithfulness of NMNs across these dimensions.
# 4.1 Choice of modules
Visual reasoning The count module always ap- pears in NLVR2 as one of the top-level modules (see Figures 1 and 2).2 We now discuss how its architecture aï¬ects faithfulness. Consider the program, count(filter[black](find[dogs])). Its gold denotation (correct count value) would provide minimal feedback using which the descen- dant modules in the program tree, such as filter and find, need to learn their intended behavior. However, if count is implemented as an expres- sive neural network, it might learn to perform tasks designated for find and filter, hurting faithful- ness. Thus, an architecture that allows counting, but also encourages descendant modules to learn their intended behaviour through backpropagation, is desirable. We discuss three possible count ar- chitectures, which take as input the bounding box probability vector p â [0, 1]|B| and the visual fea- tures v â R|B|Ãh. Layer-count module is motivated by the count ar- chitecture of Hu et al. (2017), which uses a linear projection from image attention, followed by a soft- max. This architecture explicitly uses the visual features, v, giving it greater expressivity compared to simpler methods. First we compute p · v, the weighted sum of the visual representations, based on their probabilities, and then output a scalar count using: FF1(LayerNorm(FF2(p · v)), where FF1 and FF2 are feed-forward networks, and the activation function of FF1 is ReLU in order to output positive numbers only.
As discussed, since this implementation has ac- cess to the visual features of the bounding boxes, it can learn to perform certain tasks itself, without providing proper feedback to descendant modules. We show in §5 this indeed hurts faithfulness. Sum-count module on the other extreme, ignores v, and simply computes the sum ye f p;. Being ?Top-level modules are Boolean quantifiers, such as number comparisons like equal (which require count) or exist. We implement exist using a call to count and greater-equal (see Appendix A), so count always occurs in the program.
parameter-less, this architecture provides direct feedback to descendant modules on how to change their output to produce better probabilities. How- ever, such a simple functional-form ignores the fact that bounding boxes are overlapping, which might lead to over-counting objects. In addition, we would want count to ignore boxes with low probability. For example, if filter predicts a 5% probability for 20 diï¬erent bounding boxes, we would not want the output of count to be 1.0. Graph-count module (Zhang et al., 2018) is a mid- dle ground between both approaches - the na¨ıve Sum-Count and the ï¬exible Layer-Count. Like Sum-Count, it does not use visual features, but learns to ignore overlapping and low-conï¬dence bounding boxes while introducing only a minimal number of parameters (less than 300). It does so by treating each bounding box as a node in a graph, and then learning to prune edges and clus- ter nodes based on the amount of overlap between their bounding boxes (see paper for further details). Because this is a light-weight implementation that does not access visual features, proper feedback from the module can propagate to its descendants, encouraging them to produce better predictions.
Textual reasoning In the context of Text-NMN (on DROP), we study the eï¬ect of several modules on interpretability.
First, we introduce an extract-answer mod- ule. This module bypasses all compositional rea- soning and directly predicts an answer from the input contextualized representations. This has po- tential to improve performance, in cases where a question describes reasoning that cannot be cap- tured by pre-deï¬ned modules, in which case the program can be the extract-answer module only. However, introducing extract-answer adversely aï¬ects interpretability and learning of other mod- ules, speciï¬cally in the absence of gold programs. First, extract-answer does not provide any in- terpretability. Second, whenever the parser pre- dicts the extract-answer module, the param- eters of the more interpretable modules are not trained. Moreover, the parameters of the encoder are trained to perform reasoning internally in a non- interpretable manner. We study the interpretability vs. performance trade-oï¬ by training Text-NMN with and without extract-answer. the
program find-max-num(find[touchdown]) that aims to ï¬nd the longest touchdown. find-max-num
should sort spans by their value and return the maximal one; if we remove find-max-num, the program would reduce to find[touchdown], and the find module would have to select the longest touchdown rather than all touchdowns, following the true denotation. More generally, omitting atomic reasoning modules pushes other modules to compensate and perform complex tasks that were not intended for them, hurting faithfulness. To study this, we train Text-NMN by removing sorting and comparison modules (e.g., find-max-num and num-compare), and evaluate how this aï¬ects module-wise interpretability.
# 4.2 Supervising module output
As explained, given end-task supervision only, modules may not act as intended, since their param- eters are only trained for minimizing the end-task loss. Thus, a straightforward way to improve in- terpretability is to train modules with additional atomic-task supervision.
Visual reasoning For Visual-NMN, we pre-train find and filter modules with explicit intermedi- ate supervision, obtained from the GQA balanced dataset (Hudson and Manning, 2019). Note that this supervision is used only during pre-training â we do not assume we have full-supervision for the actual task at hand. GQA questions are annotated by gold programs; we focus on âexistâ questions that use find and filter modules only, such as âAre there any red cars?â.
Given gold annotations from Visual Genome (Kr- ishna et al., 2017), we can compute a label for each of the bounding boxes proposed by Faster-RCNN. We label a proposed bounding box as âpositiveâ if its IOU with a gold bounding box is > 0.75, and ânegativeâ if it is < 0.25. We then train on GQA examples, minimizing both the usual denotation loss, as well as an auxiliary loss for each instance of find and filter, which is binary cross en- tropy for the labeled boxes. This loss rewards high probabilities for âpositiveâ bounding boxes and low probabilities for ânegativeâ ones.
Textual reasoning Prior work (Gupta et al., 2020) proposed heuristic methods to extract super- vision for the find-num and find-date modules in DROP. On top of the end-to-end objective, they use an auxiliary objective that encourages these modules to output the âgoldâ numbers and dates according to the heuristic supervision. They show that supervising intermediate module outputs helps
improve model performance. In this work, we eval- uate the eï¬ect of such supervision on the faithful- ness of both the supervised modules, as well as other modules that are trained jointly.
# 4.3 Decontextualized word representations
The goal of decomposing reasoning into multi- ples steps, each focusing on diï¬erent parts of the utterance, is at odds with the widespread use of contextualized representations such as BERT or LXMERT. While the utterance attention is meant to capture information only from tokens relevant for the moduleâs reasoning, contextu- alized token representations carry global infor- mation. For example, consider the program filter[red](find[car]) for the phrase red car. Even if find attends only to the token car, its rep- resentation might also express the attribute red, so find might learn to ï¬nd just red cars, rather than all cars, rendering the filter module useless, and harming faithfulness. To avoid such contextualiza- tion in Visual-NMN, we zero out the representa- tions of tokens that are unattended, thus the input to the module is computed (with LXMERT) from the remaining tokens only.
# 5 Experiments
We ï¬rst introduce the datasets used and the exper- imental setup for measuring faithfulness (§ 5.1). We demonstrate that training NMNs using end-task supervision only does not yield module-wise faith- fulness both for visual and textual reasoning. We then show that the methods from §4 are crucial for achieving faithfulness and how diï¬erent design choices aï¬ect it (§ 5.2). Finally, we qualitatively show examples of improved faithfulness and ana- lyze possible reasons for errors (§ 5.3).
# 5.1 Experimental setup
Please see Appendix C for further detail about the experimental setups.
Visual reasoning We automatically generate gold program annotations for 26, 311 training set examples and for 5, 772 development set examples from NLVR2. The input to this generation process is the set of crowdsourced question decomposi- tions from the Break dataset (Wolfson et al., 2020). See Appendix C.1 for details. For module-wise faithfulness evaluation, 536 examples from the de- velopment set were annotated with the gold output for each module by experts.
Model Performance (Accuracy) Overall Faithful. (â) Prec. Rec. F1 ï¬nd Module-wise Faithfulness F1(â) ï¬lter with-relation relocate LXMERT 71.7 Upper Bound NMN w/ Layer-count NMN w/ Sum-count NMN w/ Graph-count NMN w/ Graph-count + decont. NMN w/ Graph-count + pretraining NMN w/ Graph-count + decont. + pretraining 71.2 68.4 69.6 67.3 69.6 68.7 1 0.39 0.49 0.37 0.29 0.44 0.42 0.84 0.39 0.31 0.39 0.51 0.49 0.66 0.89 0.11 0.28 0.28 0.33 0.36 0.47 0.89 0.12 0.31 0.31 0.38 0.39 0.52 0.92 0.20 0.32 0.29 0.30 0.34 0.41 0.95 0.37 0.44 0.37 0.36 0.42 0.47 0.75 0.27 0.26 0.19 0.13 0.21 0.21
Table 1: Faithfulness and accuracy on NLVR2. âdecont.â refers to decontextualized word representations. Precision, recall, and F1 are averages across examples, and thus F1 is not the harmonic mean of the corresponding precision and recall.
Model Performance (F1 Score) Overall Faithful. (cross-entropyâ â) ï¬nd Module-wise Faithfulnessâ (â) ï¬lter relocate min-maxâ ï¬nd-argâ Text-NMN w/o prog-sup w/ extract-answer w/o extract-answer 63.5 60.8 9.5 6.9 13.3 8.1 9.5 7.3 3.5 1.3 2.6 1.7 9.9 8.5 Text-NMN w/ prog-sup no auxiliary sup w/o sorting & comparison w/ module-output-sup 65.3 63.8 65.7 11.2 8.4 6.5 13.7 9.6 7.6 16.9 11.1 10.7 1.5 1.6 1.3 2.2 1.3 1.2 13.0 10.6 7.6
Table 2: Faithfulness and performance scores for various NMNs on DROP. âlower is better. â min-max is average faithfulness of find-min-num and find-max-num; ï¬nd-arg of find-num and find-date.
Textual reasoning We train Text-NMN on DROP, which is augmented with program supervi- sion for 4, 000 training questions collected heuristi- cally as described in Gupta et al. (2020). The model is evaluated on the complete development set of DROP which does not contain any program super- vision. Module-wise faithfulness is measured on 215 manually-labeled questions from the develop- ment set, which are annotated with gold programs and module outputs (passage spans).
conditioned on the proposed bounding boxes.
We now compare the performance and faithful- ness scores of the diï¬erent components. When training our NMN with the most ï¬exible count module, (NMN w/ Layer-count), an accuracy of 71.2% is achieved, a slight drop compared to LXMERT but with low faithfulness scores. Using Sum-count drops about 3% of performance, but in- creases faithfulness. Using Graph-count increases accuracy while faithfulness remains similar.
# 5.2 Faithfulness evaluation
Visual reasoning Results are seen in Table 1. Accuracy for LXMERT, when trained and eval- uated on the same subset of data, is 71.7%; slightly higher than NMNs, but without providing evidence for the compositional structure of the problem.
For faithfulness, we measure an upper-bound on the faithfulness score. Recall that this score measures the similarity between module outputs and annotated outputs. Since module outputs are constrained by the bounding boxes proposed by Faster-RCNN (§2.1), while annotated boxes are not, perfect faithfulness could only be achieved by a model if there are suitable bounding boxes. Upper Bound shows the maximal faithfulness score
Next, we analyze the eï¬ect of decontextualized word representations (abbreviated âdecont.â) and pre-training. First, we observe that NMN w/ Graph- count + decont. increases faithfulness score to 0.33 F1 at the expense of accuracy, which drops to 67.3%. Pre-training (NMN w/ Graph-count + pre- training) achieves higher faithfulness scores with a higher accuracy of 69.6%. Combining the two achieves the best faithfulness (0.47 F1) with a min- imal accuracy drop. We perform a paired permuta- tion test to compare NMN w/ Graph-count + decont. + pretraining with NMN w/ Layer-count and ï¬nd that the diï¬erence in F1 is statistically signiï¬cant (p < 0.001). Please see Appendix D.1 for further details.
Textual reasoning As seen in Table 2, when trained on DROP using question-program super- vision, the model achieves 65.3 F1 performance and a faithfulness score of 11.2. When adding su- pervision for intermediate modules (§4.2), we ï¬nd that the module-wise faithfulness score improves to 6.5. Similar to Visual-NMN, this shows that su- pervising intermediate modules in a program leads to better faithfulness.
To analyze how choice of modules aï¬ects faith- fulness, we train without sorting and comparison modules (find-max-num, num-compare, etc.). We ï¬nd that while performance drops slightly, faith- fulness deteriorates signiï¬cantly to 8.4, showing that modules that perform atomic reasoning are crucial for faithfulness. When trained without pro- gram supervision, removing extract-answer im- proves faithfulness (9.5 â 6.9) but at the cost of performance (63.5 â 60.8 F1). This shows that such a black-box module encourages reasoning in an opaque manner, but can improve performance by overcoming the limitations of pre-deï¬ned mod- ules. All improvements in faithfulness are signif- icant as measured using paired permutation tests (p < 0.001).
Generalization A natural question is whether models that are more faithful also generalize better. We conducted a few experiments to see whether this is true for our models. For NLVR2, we per- formed (1) an experiment in which programs in training have length at most 7, and programs at test time have length greater than 7, (2) an exper- iment in which programs in training have at most 1 filter module and programs at test time have at least 2 filter modules, and (3) an experiment in which programs in training do not have both filter and with-relation modules in the same program, while each program in test has both mod- ules. We compared three of our models â NMN w/ Layer-count, NMN w/ Sum-count, and NMN w/ Graph-count + decont. + pretraining. We did not observe that faithful models generalize better (in fact, the most unfaithful model tended to achieve the best generalization).
To measure if faithful model behavior leads to better generalization in Text-NMN we conducted the following experiment. We selected the sub- set of data for which we have gold programs and split the data such that questions that require max- imum and greater-than operations are present in the training data while questions that require com-
puting minimum and less-than are in the test data. We train and test our model by providing gold- programs under two conditions, in the presence and absence of additional module supervision. We ï¬nd that providing auxiliary module supervision (that leads to better module faithfulness; see above) also greatly helps in model generalization (perfor- mance increases from 32.3 F1 â 78.3 F1).
# 5.3 Qualitative analysis
Model comparisons We analyze outputs of dif- ferent modules in Figure 3. Figures 3a, 3b show the output of find[llamas] when trained with con- textualized and decontextualized word representa- tions. With contextualized representations (3a), the find fails to select any of the llamas, presumably because it can observe the word eating, thus eï¬ec- tively searching for eating llamas, which are not in the image. Conversely, the decontextualized model correctly selects the boxes. Figure 3c shows that find outputs meaningless probabilities for most of the bounding boxes when trained with Layer-count, yet the count module produces the correct value (three). Figure 3d shows that find fails to pre- dict all relevant spans when trained without sorting modules in Text-NMN.
Error analysis We analyze cases where outputs were unfaithful. First, for visual reasoning, we no- tice that faithfulness scores are lower for long-tail objects. For example, for dogs, a frequent noun in NLVR2, the execution of find[dogs] yields an average faithfulness score of 0.71, while items such as roll of toilet paper, barbell and safety pin receive lower scores (0.22, 0.29 and 0.05 respectively; ex- ample for a failure case for safety pin in Fig. 3e). In addition, some objects are harder to annotate with a box (water, grass, ground) and therefore receive low scores. The issue of small objects can also explain the low scores of relocate. In the gold box annotations used for evaluation, the av- erage areas for find, filter, with-relation, and relocate (as a fraction of the total image area) are 0.19, 0.19, 0.15, and 0.07, respectively. Evidently, relocate is executed with small ob- jects that are harder to annotate (tongue, spots, top of ), and indeed the upper-bound and model scores for relocate are lowest among the module types.
# 6 Related Work
NMNs were originally introduced for visual ques- tion answering and applied to datasets with syn-
find [llamas] utt: âthe llamas in both images are eatingâ find [touchdown run] The Redskins obtained an early lead when RB Clinton Portis scored on a 3-yard TD run. St. Louis scored again when free safety Oshiomogho Atogwe scored a 75 yards touchdown. Washington regained the lead with ..... and a Clinton Portis . St. Louis would come back with a 49-yard field goal. (d) find [safety pin] utt: âat least one safety pin is not embellished.â
Figure 3: Comparison of module outputs between NMN versions: (a) Visual-NMN with contextualized representations, (b) Visual-NMN with decontextual- ized representations, (c) model using a parameter-rich count layer (Layer-Count), (d) Text-NMN trained with- out sorting module produces an incorrect find output (misses 2-yard rushing TD), and (e) Visual-NMN fail- ure case with a rare object (of w/ Graph-count + decont. + pretraining)
thetic language and images as well as VQA (Antol et al., 2015), whose questions require few reason- ing steps (Andreas et al., 2016; Hu et al., 2017, 2018). In such prior work, module-wise faithful- ness was mostly assessed via qualitative analysis of a few examples (Jiang and Bansal, 2019; Gupta et al., 2020). Hu et al. (2018) did an evaluation where humans rated the clarity of the reasoning process and also tested whether humans could de- tect model failures based on module outputs. In contrast, we quantitatively measure each moduleâs predicted output against the annotated gold outputs. A related systematic evaluation of interpretabil- ity in VQA was conducted by Trott et al. (2018). They evaluated the interpretability of their VQA counting model, where the interpretability score is given by the semantic similarity between the gold label for a bounding box and the relevant word(s) in
the question. However, they studied only counting questions, which were also far less compositional than those in NLVR2 and DROP.
Similar to the gold module output annotations that we provide and evaluate against, HotpotQA (Yang et al., 2018) and CoQA (Reddy et al., 2019) datasets include supporting facts or rationales for the answers to their questions, which can be used for both supervision and evaluation.
In concurrent work, Jacovi and Goldberg (2020) recommend studying faithfulness on a scale rather than as a binary concept. Our evaluation method can be viewed as one example of this approach.
# 7 Conclusion
We introduce the concept of module-wise faithful- ness, a systematic evaluation of faithfulness in neu- ral module networks (NMNs) for visual and textual reasoning. We show that na¨ıve training of NMNs does not produce faithful modules and propose sev- eral techniques to improve module-wise faithful- ness in NMNs. We show how our approach leads to much higher module-wise faithfulness at a low cost to performance. We encourage future work to judge model interpretability using the proposed evaluation and publicly published annotations, and explore techniques for improving faithfulness and interpretability in compositional models.
# Acknowledgements
We thank members of UCI NLP, TAU NLP, and the AllenNLP teams as well as Daniel Khashabi for comments on earlier drafts of this paper. We also thank the anonymous reviewers for their com- ments. This research was partially supported by The Yandex Initiative for Machine Learning, the European Research Council (ERC) under the Euro- pean Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800), funding by the ONR under Contract No. N00014-19-1- 2620, and by sponsorship from the LwLL DARPA program under Contract No. FA8750-19-2-0201. This work was completed in partial fulï¬llment for the Ph.D degree of Ben Bogin.
# References
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2016. Learning to compose neural net- In Proceedings of works for question answering. NAACL-HLT, pages 1545â1554.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual Question An- swering. In International Conference on Computer Vision (ICCV).
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171â4186.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requir- ing discrete reasoning over paragraphs. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368â2378.
Nitish Gupta, Kevin Lin, Dan Roth, Sameer Singh, and Matt Gardner. 2020. Neural Module Networks for In International Conference Reasoning over Text. on Learning Representations (ICLR).
Dan Hendrycks and Kevin Gimpel. 2016. Gaus- arXiv preprint sian error linear units (gelus). arXiv:1606.08415.
Minghao Hu, Yuxing Peng, Zhen Huang, and Dong- sheng Li. 2019. A multi-type multi-span network for reading comprehension that requires discrete rea- soning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 1596â1606, Hong Kong, China. Association for Computational Linguistics.
Ronghang Hu, Jacob Andreas, Trevor Darrell, and Kate Saenko. 2018. Explainable neural computation via stack neural module networks. In Proceedings of the European conference on computer vision (ECCV), pages 53â69.
Ronghang Hu, Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Kate Saenko. 2017. Learning to reason: End-to-end module networks for visual question answering. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 804â813.
Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reasoning
and compositional question answering. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6700â6709.
Alon Jacovi and Yoav Goldberg. 2020. Towards faith- fully interpretable nlp systems: How should we de- In Proceedings of ï¬ne and evaluate faithfulness? the 2020 Conference of the Association for Compu- tational Linguistics.
Yichen Jiang and Mohit Bansal. 2019. Self-assembling modular networks for interpretable multi-hop rea- soning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4473â4483, Hong Kong, China. Association for Computational Linguistics.
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross B. Girshick. 2016. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. 2017 IEEE Conference on Computer Vi- sion and Pattern Recognition (CVPR), pages 1988â 1997.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. 2017. Visual genome: Connecting language and vision using crowdsourced dense image anno- International Journal of Computer Vision, tations. 123(1):32â73.
Jayant Krishnamurthy, Pradeep Dasigi, and Matt Gard- ner. 2017. Neural semantic parsing with type con- straints for semi-structured tables. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 1516â1526.
Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. The neuro- Tenenbaum, and Jiajun Wu. 2019. Interpreting scenes, symbolic concept learner: words, and sentences from natural supervision. In International Conference on Learning Representa- tions.
Sewon Min, Eric Wallace, Sameer Singh, Matt Gard- ner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4249â4257, Florence, Italy. Asso- ciation for Computational Linguistics.
E.W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses: An Introduction. Wiley.
Siva Reddy, Danqi Chen, and Christopher D Manning. 2019. Coqa: A conversational question answering challenge. Transactions of the Association for Com- putational Linguistics, 7:249â266.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time object In Pro- detection with region proposal networks. ceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, NIPSâ15, pages 91â99, Cambridge, MA, USA. MIT Press.
Andrew Slavin Ross, Michael C. Hughes, and Finale Doshi-Velez. 2017. Right for the right reasons: Training diï¬erentiable models by constraining their explanations. In IJCAI.
Howard Seltman. 2018. Approximations for mean and variance of a ratio.
Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi. 2017. A corpus of natural language for visual rea- In Proceedings of the 55th Annual Meet- soning. ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 217â223.
Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in pho- tographs. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 6418â6428.
Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of NAACL-HLT, pages 641â651.
Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from trans- formers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 5099â5110, Hong Kong, China. Association for Computational Linguistics.
Alexander Trott, Caiming Xiong, and Richard Socher. 2018. Interpretable counting for visual question an- swering. In International Conference on Learning Representations.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998â6008. Curran Asso- ciates, Inc.
Dan Ventura. 2007. CS478 Paired Permutation Test Overview. Accessed April 29, 2020.
Sarah Wiegreï¬e and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 11â20, Hong Kong, China. Associ- ation for Computational Linguistics.
Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gard- ner, Yoav Goldberg, Daniel Deutch, and Jonathan Berant. 2020. Break it down: A question under- standing benchmark. Transactions of the Associa- tion for Computational Linguistics, 8:183â198.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2369â2380.
Alexander Yeh. 2000. More accurate tests for the sta- tistical signiï¬cance of result diï¬erences. In Pro- ceedings of the 18th conference on Computational linguistics-Volume 2, pages 947â953. Association for Computational Linguistics.
Yan Zhang, Jonathon Hare, and Adam Prgel-Bennett. 2018. Learning to count objects in natural images for visual question answering. In International Con- ference on Learning Representations.
# A Modules
We list all modules for Visual-NMN in Table 3.
For Text-NMN, as mentioned, we use all mod- ules are described in Gupta et al. (2020). In this work, we introduce the (a) addition and subtraction modules that take as input two dis- tributions over numbers mentioned in the passage and produce a distribution over all posssible addi- tion and subtraction values possible. The output distribution here is the expected distribution for the random variable Z = X +Y (for addition), and (b) extract-answer that produces two distributions over the passage tokens denoting the probabilities for the start and end of the answer span. This distri- bution is computed by mapping the passage token representations using a simple MLP and softmax operation.
# B Measuring Faithfulness in Visual-NMN
# B.1 Numerators of Precision and Recall
As stated in Section 3.1, for a given module type and a given example, precision is deï¬ned as the number of matched proposed bounding boxes di- vided by the number of proposed bounding boxes to which the module assigns a probability more than 0.5. Recall is deï¬ned as the number of matched annotated bounding boxes divided by the number of annotated bounding boxes. Therefore, the nu- merators of the precision and the recall need not be equal. In short, the reason for the discrepancy is that there is no one-to-one alignment between annotated and proposed bounding boxes. To fur- ther illustrate why we chose not to have a common numerator, we will consider two sensible choices for this shared numerator and explain the issues with them.
One choice for the common numerator is the number of matched proposed bounding boxes. If we were to keep the denominator of the recall the same, then the recall would be deï¬ned as the num- ber of matched proposed bounding boxes divided by the number of annotated bounding boxes. Con- sider an example in which there is a single anno- tated bounding box that is aligned with ï¬ve pro- posed bounding boxes. When this deï¬nition of recall is applied to this example, the numerator would exceed the denominator. Another choice would be to set the denominator to be the number of proposed bounding boxes that are aligned with
some annotated bounding box. In the example, this approach would penalize a module that gives high probability to only one of the ï¬ve aligned proposed bounding boxes. However, it is not clear that a module giving high probability to all ï¬ve proposed boxes is more faithful than a module giving high probability to only one bounding box (e.g. perhaps one proposed box has a much higher IOU with the annotated box than the other proposed boxes). Hence, this choice for the numerator does not make sense.
Another choice for the common numerator is the number of matched annotated bounding boxes. If we were to keep the denominator of the precision the same, then the precision would be deï¬ned as the number of matched annotated bounding boxes divided by the number of proposed bounding boxes to which the module assigns probability more than 0.5. Note that since a single proposed bounding box can align with multiple annotated bounding boxes, it is possible for the numerator to exceed the denominator.
Thus, these two choices for a common numerator have issues, and we avoid these issues by deï¬ning the numerators of precision and recall separately.
# B.2 Averaging Faithfulness Scores
The method described in Section 3.1 computes a precision, recall, and F1 score for each example for every module type occurring in that example. The faithfulness scores reported in Table 1 are averages across examples. We also considered two other ways of aggregating scores across examples: 1. Cumulative P/R/F1: For each module type, we compute a single cumulative precision and re- call across all examples. We then compute the dataset-wide F1 score as the harmonic mean of the precision and the recall. The results using this method are in Table 4. There are some diï¬erences between these results and those in Table 1, e.g. in these results, NMN w/ Graph-count + decont. + pretraining has the highest faithfulness score for every module type, including relocate.
2. Average over module occurrences: For each module type, for each occurrence of the mod- ule we compute a precision and recall and compute F1 as the harmonic mean of preci- sion and recall. Then for each module type, we compute the overall precision as the aver- age precision across module occurrences and
similarly compute the overall recall and F1. Note that a module can occur multiple times in a single program and that each image is considered a separate occurrence. The results using this method are in Table 5. Again, there are some diï¬erences between these results and those in Table 1, e.g. NMN w/ Sum-count has a slightly higher score for with-relation than NMN w/ Graph-count + decont. + pre- training.
With both of these alternative score aggregation methods, we still obtained p < 0.001 in our signiï¬- cance tests.
We also noticed qualitatively that the metric can penalize modules that assign high probability to proposed bounding boxes that have a relatively high IOU that does not quite pass the IOU threshold of 0.5. In such cases, while it may not make sense to give the model credit in its recall score, it also may not make sense to penalize the model in its precision score. Consequently, we also performed an evaluation in which for the precision calculation we set a separate ânegativeâ IOU threshold of 10â8 (eï¬ectively 0) and only penalized modules for high probabilities assigned to proposed boxes whose IOU is below this threshold. The results computed with example-wise averaging are provided in Table 6.
# C Details about Experiments
Visual Reasoning We use the published pre- trained weights and the same training conï¬gura- tion of LXMERT (Tan and Bansal, 2019), with 36 bounding boxes proposed per image. Due to memory constraints, we restrict training data to examples having a gold program with at most 13 modules.
# C.1 Program Annotations
We generated program annotations for NLVR2 by automatically canonicalizing its question decompo- sitions in the Break dataset (Wolfson et al., 2020). Decompositions were originally annotated by Ama- zon Mechanical Turk workers. For each utterance, the workers were asked to produce the correct de- composition and an utterance attention for each operator (module), whenever relevant.
Limitations of Program Annotations Though our annotations for gold programs in NLVR2 are largely correct, we ï¬nd that there are some ex- amples for which the programs are unnecessarily
Program exist in_right_image with-relation [with] filter [brown] find [dog] filter[exposed] relocate [tongue] filter [brown] find [dog]
Figure 4: An example of a gold program for NLVR2 that is unnecessarily complicated.
complicated. For instance, for the sentence âthe right image contains a brown dog with its tongue extended.â the gold program is shown in Figure 4. This program could be simpliï¬ed by replacing the with-relation with the second argument of with-relation. Programs like this make learn- ing more diï¬cult for the NMNs since they use modules (in this case, with-relation) in degen- erate ways. There are also several sentences that are beyond the scope of our language, e.g. compar- isons such as âthe right image shows exactly two virtually identical triï¬e desserts.â
# D Signiï¬cance tests
# D.1 Visual Reasoning
We perform a paired permutation test to test the hypothesis H0: NMN w/ Graph-count + decont. + pretraining has the same inherent faithfulness as NMN w/ Layer-count. We follow the procedure described by Ventura (2007), which is similar to tests described by Yeh (2000) and Noreen (1989). Speciï¬cally, we perform Ntotal = 100, 000 trials in which we do the following. For every exam- ple, with probability 1/2 we swap the F1 scores obtained by the two models for that example. Then we check whether the diï¬erence in the aggregated F1 scores for the two models is at least as ex- treme as the original diï¬erence in the aggregated F1 scores of the two models. The p-value is given by Nexceed/Ntotal, where Nexceed is the number of trials in which the new diï¬erence is at least as extreme as the original diï¬erence.
Module Output Implementation find[ qa] Wr (Lev) + br filter[ga](p) p© (Wi (Lx; v]) + by) with-relation|[qar](p1, p2) max(p2)p1 © MLP([x; v1; v2}) project[gar](p) max(p)find(gai) © MLP([W2; v1; v2]) count(p) number()\(p), o°) exist(p) greater-equal(p, 1) greater-equal (a: N,b:N) less-equal (a: N,b: N) equal(a: N,b: N) less(a:N,b:N) greater(a:N,b:N) and(a : B,b: B) or(a: B,b: B) number(m : F,v : F) sum(a : N,b: N) difference(a: N,b: N) division(a: N,b:N) intersect(p1, p2) discard(p, p2) in-left-image(p) in-right-image(p) in-at-least-one-image in-each-image in-one-other-image greater(a, b) + equal(a, b) less(a, b) + equal(a, b) DK Prla = kJ Prib = A Do Pria = KI Prib > A] Do Pria = AI Prib < KJ a*b at+b-a*b Normal(mean = m, var = v) number (dmean + Dmeans Qvar + Pyar) number (dmean â Dmean> Qvar + Pyar) 2 number ( @ 4 Poartmean yeu Soar 4, a )) mean mean \ mean âmean Dmean b; Pi: p2 max(p1 â p2,0) p s.t. probabilities for right image are 0 p s.t. probabilities for left image are 0 macro (see caption) macro (see caption) macro (see caption) DHWOWvvsvsvydv iZAAAywmWnawnwnnwsosAyAsrvss$sd
Table 3: Implementations of modules for NLVR2 NMN. First five contain parameters, the rest are deterministic. The implementation of count shown here is the Sum-count version; please see Section 4 for a description of other count module varieties and a discussion of their differences. âBâ denotes the Boolean type, which is a probability value ([0..1]). âNâ denotes the Number type which is a probability distribution. K = 72 is the maximum count value supported by our model. To obtain probabilities, we first convert each Normal random variable X to a categorical distribution over {0, 1, ..., K} by setting Pr[X = k] = ®(k + 0.5) â O(k - 0.5) if k ⬠{1,2,...,K - 1}. We set Pr[X = 0] = (0.5) and Pr[X = K] = 1 â ®(K â 0.5). Here ®(-) denotes the cumulative distribution function of the Normal distribution. W,, W2 are weight vectors with shapes 2h x 1 and h x 1, respectively. Here h = 768 is the size of LXMERTâs representations. b; is a scalar weight. MLP denotes a two-layer neural network with a GeLU activation (Hendrycks and Gimpel, 2016) between layers. x denotes a question representation, and v; denotes encodings of objects in the image. x and v; have shape h x |$|, where |$| is the number of proposals. Pp denotes a vector of probabilities for each proposal and has shape | x |S|. © and [;] represent elementwise multiplication and matrix concatenation, respectively. The expressions for the mean and variance in the division module are based on the approximations in Seltman (2018). The macros execute a given program on the two input images. in-at-least-one-image macro returns true iff the program returns true when executed on at least one of the images. in-each-image returns true iff the program returns true when executed on both of the images. in-one-other-image takes two programs and returns true iff one program return true on left image and second program returns true on right image, or vice-versa.
Model Performance (Accuracy) Overall Faithful.(â) Prec. Rec. F1 ï¬nd Module-wise Faithfulness(â) ï¬lter with-relation LXMERT 71.7 Upper Bound NMN w/ Layer-count NMN w/ Sum-count NMN w/ Graph-count NMN w/ Graph-count + decont. NMN w/ Graph-count + pretraining NMN w/ Graph-count + decont. + pretraining 71.2 68.4 69.6 67.3 69.6 68.7 1 0.069 0.25 0.20 0.21 0.28 0.34 0.63 0.29 0.18 0.22 0.29 0.31 0.43 0.77 0.11 0.21 0.21 0.24 0.30 0.38 0.78 0.13 0.23 0.24 0.28 0.34 0.43 0.79 0.09 0.20 0.19 0.22 0.27 0.34 0.73 0.07 0.16 0.17 0.19 0.25 0.29 relocate 0.71 0.05 0.05 0.04 0.04 0.09 0.11
Table 4: Faithfulness scores on NLVR2 using the cumulative precision/recall/F1 evaluation.
Model Performance (Accuracy) Overall Faithful.(â) Prec. Rec. F1 ï¬nd Module-wise Faithfulness(â) ï¬lter with-relation LXMERT 71.7 Upper Bound NMN w/ Layer-count NMN w/ Sum-count NMN w/ Graph-count NMN w/ Graph-count + decont. NMN w/ Graph-count + pretraining NMN w/ Graph-count + decont. + pretraining 71.2 68.4 69.6 67.3 69.6 68.7 1 0.67 0.70 0.55 0.47 0.58 0.58 0.91 0.64 0.59 0.64 0.70 0.70 0.79 0.92 0.39 0.48 0.43 0.45 0.47 0.55 0.90 0.21 0.38 0.36 0.42 0.42 0.54 0.95 0.50 0.53 0.47 0.47 0.49 0.55 0.96 0.61 0.63 0.54 0.55 0.58 0.62 relocate 0.82 0.50 0.49 0.41 0.33 0.41 0.43
Table 5: Faithfulness scores on NLVR2 using the average over module occurrences evaluation.
Model Performance (Accuracy) Overall Faithful.(â) Prec. Rec. F1 ï¬nd Module-wise Faithfulness(â) ï¬lter with-relation LXMERT 71.7 Upper Bound NMN w/ Layer-count NMN w/ Sum-count NMN w/ Graph-count NMN w/ Graph-count + decont. NMN w/ Graph-count + pretraining NMN w/ Graph-count + decont. + pretraining 71.2 68.4 69.6 67.3 69.6 68.7 1 0.59 0.79 0.68 0.62 0.70 0.71 0.8377 0.39 0.31 0.39 0.51 0.49 0.66 0.89 0.25 0.34 0.38 0.47 0.47 0.62 0.89 0.31 0.38 0.43 0.53 0.52 0.68 0.92 0.28 0.36 0.36 0.39 0.41 0.50 0.95 0.45 0.48 0.44 0.43 0.51 0.55 relocate 0.75 0.30 0.28 0.22 0.16 0.27 0.31
Table 6: Faithfulness scores on NLVR2 using a negative IOU threshold of 10â8 and example-wise averaging. | {
"id": "1606.08415"
} |
2005.00700 | UnifiedQA: Crossing Format Boundaries With a Single QA System | Question answering (QA) tasks have been posed using a variety of formats,
such as extractive span selection, multiple choice, etc. This has led to
format-specialized models, and even to an implicit division in the QA
community. We argue that such boundaries are artificial and perhaps
unnecessary, given the reasoning abilities we seek to teach are not governed by
the format. As evidence, we use the latest advances in language modeling to
build a single pre-trained QA model, UnifiedQA, that performs surprisingly well
across 17 QA datasets spanning 4 diverse formats. UnifiedQA performs on par
with 9 different models that were trained on individual datasets themselves.
Even when faced with 12 unseen datasets of observed formats, UnifiedQA performs
surprisingly well, showing strong generalization from its out-of-format
training data. Finally, simply fine-tuning this pre-trained QA model into
specialized models results in a new state of the art on 6 datasets,
establishing UnifiedQA as a strong starting point for building QA systems. | http://arxiv.org/pdf/2005.00700 | Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, Hannaneh Hajishirzi | cs.CL, cs.AI | EMNLP 2020 (Findings) | null | cs.CL | 20200502 | 20201007 | 0 2 0 2
t c O 7 ] L C . s c [
3 v 0 0 7 0 0 . 5 0 0 2 : v i X r a
EMNLP-Findingsâ20
# UNIFIEDQA: Crossing Format Boundaries with a Single QA System
# Daniel Khashabi1 Sewon Min2 Tushar Khot1 Ashish Sabharwal1 Oyvind Tafjord1 Peter Clark1 Hannaneh Hajishirzi1,2
# 1Allen Institute for AI, Seattle, U.S.A. 2University of Washington, Seattle, U.S.A.
# Abstract
Question answering (QA) tasks have been posed using a variety of formats, such as ex- tractive span selection, multiple choice, etc. This has led to format-specialized models, and even to an implicit division in the QA commu- nity. We argue that such boundaries are artiï¬- cial and perhaps unnecessary, given the reason- ing abilities we seek to teach are not governed by the format. As evidence, we use the latest advances in language modeling to build a sin- gle pre-trained QA model, UNIFIEDQA, that performs well across 20 QA datasets spanning 4 diverse formats. UNIFIEDQA performs on par with 8 different models that were trained on individual datasets themselves. Even when faced with 12 unseen datasets of observed for- mats, UNIFIEDQA performs surprisingly well, showing strong generalization from its out-of- format training data. Finally, ï¬ne-tuning this pre-trained QA model into specialized mod- els results in a new state of the art on 10 fac- toid and commonsense QA datasets, establish- ing UNIFIEDQA as a strong starting point for building QA systems.1
Extractive [SQUAD] Question: At what speed did the turbine operate? Context: (Nikola_Tesla) On his 50th birthday in 1906, Tesla demonstrated his 200 horsepower (150 kilowatts) 16,000 rpm bladeless turbine. ... Gold answer: 16,000 rpm Abstractive [NarrativeQA] Question: What does a drink from narcissus's spring cause the drinker to do? Context: Mercury has awakened Echo, who weeps for Narcissus, and states that a drink from Narcissus's spring causes the drinkers to "Grow dotingly enamored of themselves." ... Gold answer: fall in love with themselves Multiple-Choice [ARC-challenge] Question: What does photosynthesis produce that helps plants grow? Candidate Answers: (A) water (B) oxygen (C) protein (D) sugar Gold answer: sugar Yes/No [BoolQ] Question: Was America the first country to have a president? Context: (President) The first usage of the word president to denote the highest official in a government was during the Commonwealth of England ... Gold answer: no
Figure 1: Four formats (color-coded throughout the paper) commonly used for posing questions and an- swering them: Extractive (EX), Abstractive (AB), Multiple-Choice (MC), and Yes/No (YN). Sample dataset names are shown in square brackets. We study generalization and transfer across these formats.
1
# 1 Introduction
Question answering is a common tool for assessing how well can computers understand language and reason with it. To this end, the NLP community has introduced several distinct datasets, with four popular QA formats illustrated in Fig. 1. For in- stance, some datasets expect the answer to be âyesâ or ânoâ, or a unique answer span in the associated paragraph (as opposed to multiple or no spans). These differences have motivated their study in silos, often encoding QA format into the model ar- chitecture itself. Efforts to exploit multiple datasets remain largely restricted to a single format. For example, Clark et al. (2019c) limit consideration to
multiple-choice datasets, while Talmor and Berant (2019) focus their generalization study on extrac- tive span prediction models. To the best of our knowledge, no single QA system targets, not to mention excels at, all of these formats.
This raises the question: Can QA models learn linguistic reasoning abilities that generalize across formats? Our intuition is simple: while question format and relevant knowledge may vary across QA datasets, the underlying linguistic understand- ing and reasoning abilities are largely common. A multiple-choice model may, therefore, beneï¬t from training on an extractive answers dataset. Building upon this intuition, we present a single pre-trained QA system, named UNIFIEDQA, that exploits in- formation across 4 different QA formats to achieve strong performance across 20 different factoid and
# 1 https://github.com/allenai/unifiedqa
Datasets Format Extractive QA (EX) Abstractive QA (AB) âHas paragraphs? v v v v v v v âHas explicit candidate ans? # of explicit candidates Para contains ans as substring? v v v Has idk questions? v SQuADII SQuAD2 NewsQA Quoref ROPES NarQA DROP NatQA RACE MCTest OBQA ARC QASC CQA WG PIQA SIQA BoolQ NP-BoolQ MultiRC Multiple-choice QA (MC) Yes/NO QA (YN) v v v v v v v A A an A 4 4 4[4{s{s|2{2]|3
Figure 2: Properties of various QA datasets included in this study: 5 extractive (EX), 3 abstractive (AB), 9 multiple- choice (MC), and 3 yes/no (YN). âidkâ denotes âI donât knowâ or unanswerable questions. BoolQ represents both the original dataset and its contrast-sets extension BoolQ-CS; similarly for ROPES, Quoref, and DROP.
commonsense QA datasets listed in Fig. 2.
sirable but also quantitatively beneï¬cial.
In this work, we advocate for a unifying view of QA formats by building a format-agnostic QA system. Our work leverages recent progress in text-to-text pre-trained neural models, speciï¬cally T5 (Raffel et al., 2020) and BART (Lewis et al., 2020), but with a strong focus on differing QA formats. This paradigm allows unifying many NLP models, which formerly had task-speciï¬c designs, into a single text-to-text framework. Previous work uses textual preï¬xes to explicitly deï¬ne the task associated with each input instance (Raffel et al., 2020; Radford et al., 2019b); often such attempts to build a single model for multiple NLP tasks underperform the standard pre-training plus ï¬ne- tuning setup (a model per task) (Raffel et al., 2020). Our work narrows down the scope to tasks that stay within the boundaries of QA, demonstrating that a uniï¬ed text-to-text paradigm can, in fact, be successful across different QA tasks and formats. We develop a single pre-trained QA model by train- ing text-to-text models on a set of seed QA datasets of multiple formats, taking natural text as input, without using format-speciï¬c preï¬xes. Our experi- ments show that UNIFIEDQA can be applied as-is to different QA tasks, generalizes well to other unseen datasets (zero-shot), and with further ï¬ne- tuning achieves state-of-the-art results on many QA tasks including commonsense and factual datasets.
Contributions. This work advocates for a uni- ï¬ed view of different QA formats, and for build- ing format-agnostic QA systems. To support this view, we present UNIFIEDQA, a single pre-trained QA system that works well on and generalizes to datasets with different formats (§6.2), while per- forming on par with state-of-the-art dedicated sys- tems tailored to each dataset (§6.1). Additionally, ï¬ne-tuning UNIFIEDQA into specialized systems sets a new state of the art for 10 datasets (§6.3), establishing it as a powerful starting point for QA research. Our ï¬ndings demonstrate that crossing QA format boundaries is not only qualitatively de-
# 2 Related Work
Several QA efforts have studied generalization across datasets of a single format. For instance, in MultiQA, Talmor and Berant (2019) study gen- eralization and transfer, but only across extractive span selection datasets. Further, while they show strong leave-one-out style results, they ï¬nd a sin- gle system performs substantially worse than one tuned to each dataset. In ORB, Dua et al. (2019a) propose a multi-dataset evaluation benchmark span- ning extractive and abstractive formats. However, that study is limited to an evaluation of systems, falling short of addressing how to build such gener- alized models. The MRQA shared task (Fisch et al., 2019) focuses on span-prediction datasets. Unlike all these efforts, our goal is to investigate transfer and generalization across different QA formats, as well as to build a single system that does this well. Exploiting commonality across machine learn- ing tasks has a rich history studied under transfer learning (Caruana, 1997; Clark et al., 2019b). Mc- Cann et al. (2018) and Keskar et al. (2019) study transfer among various NLP tasks by casting them into a single QA formatâan elegant transfer learn- ing approach but orthogonal to the goal of this work. As noted earlier, Raffel et al. (2020) investi- gate the transfer between several diverse NLP tasks (machine translation, summarization, etc). Their key contribution is a text-to-text framework, and a powerful model called T5, that makes it easier to mix multiple tasks by encoding both inputs and outputs as text. They rely on textual preï¬xes to ex- plicitly deï¬ne the task corresponding to each input instance. While we build upon their framework, we narrow our focus to variations of QA. This allows us to achieve strong results while avoiding reliance on any format-speciï¬c preï¬xes. Our models learn to infer the format of each input question based on its content (e.g., whether the phrasing of the ques- tion demands a yes/no answer). Moreover, we are able to demonstrate generalization across QA tasks,
which prior work failed to achieve presumably due to its focus on too broad a set of NLP tasks.
# 3 UNIFIEDQA: Multi-format Training
Suppose we would like to train a unified QA model that can operate over k formats F\, Fo,..., fF. For each format F;, suppose we have ¢; datasets sets Dh... Di, where Dj = (Ti, E\) includes both training and evaluation examples. In some cases, the training set T} may be empty or we may want to ignore it in order to treat Dj as an âun- seenâ, evaluation-only dataset and assess a modelâs generalization to it.
We use the text-to-text paradigm to convert each training question q in format Fi into a plain-text input representation enci(q). This conversion uses a natural encoding process that will be described shortly (§3.1) for four common QA formats, and is easily extensible to other formats as well. We fol- low a simple approach of creating a mixed training pool consisting of all available training instances:
6 fH U U {enci(a) | ge T;} 1j=1
Training batches are drawn from this pooled data, ËT , by including each q â T i j with a probability pro- portional 1/|T i j |. Each batch thus, on average, con- tains the same number of instances from each train- ing set, regardless of its size. Similar treatments of task mixing have also been adopted by Arivazha- gan et al. (2019) and Raffel et al. (2020). As our experiments will show, our multi-format mixing ap- proach works well. It clearly highlights the value of training on out-of-format data and conï¬rms our in- tuition that there are strong ties across QA formats in terms of the underlying reasoning abilities.2
Our uniï¬ed question-answering system is based on the recent text-to-text frameworks, particularly, T5 (Raffel et al., 2020) and BART (Lewis et al., 2020). We ï¬rst deï¬ne a unifying encoding of the instances across various formats (§3.1). We then introduce UNIFIEDQA (§3.2) that is a QA system trained on datasets in multiple formats, indicating new state-of-the-art results on 10 datasets and gen- eralization to unseen datasets.
2A more sophisticated teaching curriculum (Sachan and Xing, 2016) or approaches such as model distillation and teacher annealing (Clark et al., 2019b) are likely to further improve the performance of the resulting uniï¬ed model, bol- stering the strength of our advocacy for a uniï¬ed view of all QA formats. We leave their exploration to future work.
# 3.1 Text-to-Text Encoding
We convert each of our target datasets into a text- in/text-out format (Raffel et al., 2020; Lewis et al., 2020; Radford et al., 2019b). The question always comes ï¬rst, followed by some additional informa- tion (context paragraph or candidate answers, or both). We use â
â separators between different parts of the input. This ensures having a human- like encoding while not making it overly-speciï¬c to a certain format.
Our uniï¬ed model incorporates the following four common question-answering formats. Speciï¬c datasets within them are deferred to Section 4.1. Extractive (EX) questions Q include a context paragraph C (typically a paragraph) and require models to extract the answer as a substring from the context. In some datasets, âunanswerableâ can sometimes be the correct response.
Abstractive (AB) questions Q require models to produce answers that are often not mere substrings of the provided context paragraph C.
Multiple-choice (MC) questions Q come with a set of candidate answers {Ai}, of which generally exactly one is correct. In some cases, they also include a context paragraph C.
Yes/No (YN) questions Q expect a âyesâ or ânoâ answer as the response and may include a context paragraph C.
Table 1 provides examples of the natural input and output encoding for each of these formats, where both input and output representations are raw text. There is no explicit information regard- ing a question being an MC question or having exactly four candidate answers. Speciï¬cally, MC questions without any context paragraph are en- coded as question
(A) c1 (B) c2 . . . where c1, c1, . . . are the set of candidate answers (see the example from ARC dataset). If the question in- cludes a context paragraph, it is appended after the candidate answers: question
(A) c1 (B) c2 . . .
paragraph, as shown in the example from the MCTest dataset. Questions in the other three formats (EX, AB, and YN) are encoded simply as question
paragraph.
To re-emphasize, unlike prior work (Raffel et al., 2020), we do not specify any task-, dataset-, or format-speciï¬c preï¬xes in the input representa- tion. Whether the answer should be extracted or abstracted, and whether from the provided context paragraph or candidate answers (or the fact that
Dataset SQuAD 1.1 At what speed did the turbine operate?
(Nikola_Tesla) On his 5@th birthday in 1906, Tesla demonstrated his 20@ horsepower (150 kilowatts) 16,008 rpm bladeless turbine. ... EX Input Output 16,000 rpm Dataset NarrativeQA What does a drink from narcissus's spring cause the drinker to do?
Mercury has awakened Echo, who AB Input weeps for Narcissus, and states that a drink from Narcissus's spring causes the drinkers to ~~ Grow dotingly enamored of themselves.'' ... Output fall in love with themselves Dataset ARC-challenge What does photosynthesis produce that helps plants grow?
(A) water (B) oxygen (C) protein (D) sugar Input Output sugar mic Dataset MCTest Who was Billy?
(A) The skinny kid (B) A teacher (C) A little kid (D) The big kid
Billy was like a king on the school yard. A king without a queen. He was the biggest kid in our grade, so he made all the rules during recess. ... Output The big kid Dataset BoolQ Was America the first country to have a president?
(President) The first usage of the word president to denote the highest official in a government was during the Commonwealth of England ... Output no Input YN Input
Table 1: Example text-to-text encoding of instances.
these even are candidate answers) is expected to be inferred by the system.
# 3.2 UNIFIEDQA: The Pre-Trained Model
The speciï¬c pre-trained QA model we provide and use in all our experiments is trained on represen- tative datasets for each of the 4 formats discussed earlier. We empirically chose the following 8 seed datasets for training UNIFIEDQA,3 based on their effectiveness in our pilot study (details deferred to Section 5) assessing which datasets are most valuable for out-of-format training:
EX: SQuAD 1.1, SQuAD 2.0 ⢠AB: NarrativeQA ⢠MC: RACE, ARC, OBQA, MCTest ⢠YN: BoolQ
One can easily use other combinations of for- mats and datsets to create variants of our UNI- FIEDQA model, or extend it as future datasets be- come available or new formats are introduced.
Unless otherwise noted, we use the largest avail- able T5 model (11B parameters) as the starting point for training our model and call the system UNIFIEDQA. We also report results of training our system with BARTlarge, referred to as UNI- FIEDQABART (see §6.3). Details on the parameters of the models used are deferred to Appendix A.2.
3Future references to âseed datasetâ point to the QA datasets used in this section.
Similar to pre-trained language models, the result- ing pre-trained QA model can be used as a starting point for ï¬ne-tuning on other QA datasets.
# 4 Formats and Datasets
# 4.1 Datasets
We evaluate UNIFIEDQA on 20 existing datasets that target different formats as well as various com- plex linguistic phenomena. Fig. 2 summarizes key properties of our datasets (whether it comes with a paragraph or answer candidates, whether the paragraph explicitly contains the answer, etc). Most importantly, they are grouped into several for- mats/categories as described below. Table 2 gives certain statistics of these datasets. We next pro- vide a summary enumerating these datasets, with additional details deferred to Appendix A.1.
Extractive QA (EX). Among the datasets in this popular format, we adopt SQuAD 1.1 (Rajpurkar et al., 2016), SQuAD 2 (Rajpurkar et al., 2018), NewsQA (Trischler et al., 2017), Quoref (Dasigi et al., 2019), ROPES (Lin et al., 2019).
Abstractive QA (AB). The datasets used from this format are: NarrativeQA/NarQA (Kociský et al., 2018), the open-domain version of Natu- ralQuestions/NatQA (Kwiatkowski et al., 2019), and DROP (Dua et al., 2019b).
Multiple-choice QA (MC). We the use following MC datasets: MCTest (Richard- son et al., 2013), RACE (Lai et al., 2017), OpenBookQA/OBQA (Mihaylov et al., 2018), ARC (Clark et al., 2018, 2016), QASC (Khot et al., 2019), CommonsenseQA/CQA (Talmor et al., 2019), PIQA (Bisk et al., 2020), SIQA (Sap et al., 2019), and Winogrande (Sakaguchi et al., 2020). Several of the MC datasets do not come with accompanying paragraphs (such as ARC, QASC, OBQA). For most of this the work, we keep the questions as is with no additional retrieval (unless otherwise mentioned). One other variability among these datasets is their number of candidate answers. While many datasets have four candidates (see Fig. 2), others have more. Later (in §6.2) we will see that our approach generalizes to datasets with different numbers of candidates, even if such questions have not been seen during training.
Yes/No QA (YN). The YN datasets we use are BoolQ (Clark et al., 2019a) and a
Dataset Train set size Eval. set size Best published 95% CI (%) Input length SQuAD 1.1 SQuAD 2.0 NewsQA Quoref Quoref-CS ROPES ROPES-CS 87k 130k 76k 22k - 10k - 10k 11k 4k 2k 700 1.4k 974 95.6 91.2 66.8 86.1 55.4 61.1 32.5 0.4 0.5 1.4 1.5 3.6 2.5 3.0 136.2 139.9 606.6 352.7 324.1 169.1 182.7 3.0 2.6 4.0 1.7 2.2 1.4 1.3 NarQA NatQA DROP DROP-CS 65k 79k 77k - 21k 3.6k 9k 947 58.9 42.2 89.1 54.2 0.7 1.6 0.6 3.2 563.6 607.0 189.1 206.0 6.2 2.2 1.6 2.1 RACE OBQA MCTest ARC (easy) ARC (chal.) CQA WG PIQA SIQA 87k 4k 1.4k 2k 1k 9.7k 40.3k 16.1k 33.4k 4k 501 320 2k 1k 1.2k 1.7k 3k 2.2k 89.5 80.0 86.5 80.0 67.8 79.1 67.5 79.4 78.0 0.9 3.3 3.4 1.7 2.9 2.2 2.2 1.4 1.7 317.9 28.7 245.4 39.4 47.4 26.8 25.2 49.6 37.3 6.9 3.6 4.0 3.7 5.0 1.5 3.0 20.2 4.7 BoolQ BoolQ-CS NP-BoolQ MultiRC 9k - 10k - 3k 461 3k 312 91.0 71.1 78.4 91.7 1.0 4.0 1.4 2.6 105.1 108.9 106.2 293.3 1.0 1.0 1.0 1.0
Table 2: Dataset Statistics. CQA, OBQA, WG, and NarQA refer to CommonsenseQA, OpenBookQA, Winogrande, and NarrativeQA, respectively. The CI column shows the upper 95% conï¬dence interval for the evaluation set as a percentage, based on the Wil- son test around the mean score listed as a percentage in the best known performance column. Input and output representation lengths are measured in the number of tokens and averaged across the dataset.
naturally-perturbed version of this dataset, BoolQ- NP (Khashabi et al., 2020), and the binary (yes/no) subset of MultiRC (Khashabi et al., 2018).
Contrast-sets. Additionally, we use contrast- sets (Gardner et al., 2020) for several of our datasets (denoted with âCSâ): BoolQ-CS, ROPES- CS, Quoref-CS, DROP-CS. These evaluation sets are expert-generated perturbations that deviate from the patterns common in the original dataset.
# 4.2 Evaluation Metrics for Textual Output
We evaluate each dataset using the metric used most often for it in prior work. For the EX format, itâs the F1 score of the extracted span relative to the gold label. For the AB format, we use ROUGE-L metric (Lin et al., 2006; Min et al., 2019; Nishida et al., 2019). For NatQA we use the exact-match metric, following Min et al. (2020). For the MC format, we match the generated text with the closest answer candidate based token overlap and compute the accuracy. For the YN format, we follow Clark et al. (2019a) to measure if the generated output matches the correct âyesâ or ânoâ label. In rare cases where the output is longer than one word (e.g., âyes it isâ), we check if it contains the correct label but
# not the incorrect one.4
# 5 Pilot Study: Can Out-of-Format Training Help?
We ï¬rst answer the question: Is the broad idea of beneï¬ting from out-of-format training even viable? For instance, is our intuition correct that an MC dataset can, in practice, beneï¬t from training on an EX dataset? Before discussing our main exper- imental results, we brieï¬y report on a pilot study that assesses the following basic question: Given a training set T i 1 (the anchor dataset) of QA for- mat Fi, is there an out-of-format training set T j 1 1 ⪠T j of format Fj such that training jointly on T i 1 improves performance relative to training only on T i 1? To this end, we evaluate both on the match- ing evaluation set Ei 1 as well as on âunseenâ data Ei
2, Ei The results are summarized in Table 3. The two rows in each individual table correspond to training on T i 1 ⪠X, where X is an out-of-format dataset corresponding to T j 1 above. The columns represent various evaluation sets of format Fi. For each column, âX = . . .â at the very bottom indicates the out-of-format dataset X that was the most helpful in improving perfor- mance on the evaluation set in that column.5 Consider the case of the anchor set T i
1 being BoolQ and the evaluation set being NP-BoolQ, both of format YN. Here, including out-of-format training data X=SQuAD2 boosts performance from 51% to as much as 59%. The gain may be less in other cases, but across all anchor and evalu- ation datasets, we generally observe that there is at least one out-of-format training set whose inclusion improves performance.
This pilot study thus provides a proof of concept that out-of-format training can indeed help a QA model in nearly every case. Of course, this study only shows the existence of such an out-of-format dataset, rather than provide a single uniï¬ed model. Nevertheless, it helps identify representative train- ing sets from each format that were most helpful. As alluded to earlier, we used this empirical data to guide which training sets to include when building UNIFIEDQA in Section 3.2.
The experimental results from this case study are summarized in the aggregated plot shown in
4The evaluation code is available at the URL in Footnote 1. 5Appendix A.5 reports extended results, including the per- formance with various choices of X.
Trained on | - Evaluated on + SQuADI1 SQuAD2 NewsQA Quoref Quoref-CS SQuADI1 85.9 42.8 517 28.2 28.11 SQuADI1 +X 85.8 42.8 52.1 29.4 29.84 Best X BoolQ OBQA OBQA NarQA OBQA Trained on | - Evaluatedonâ> BoolQ MultiRC NP-BoolQ BoolQ-CS BoolQ 76.4 64.1 51.3 53.4 BoolQ + X 78.9 66.0 59.4 61.0 Best X SQuAD2 OBQA SQuAD2 NarQA Trained on | - Evaluatedonâ> RACE OBQA ARC-chal| MCTest RACE 558 26.6 28.0 62.5 RACE +X 59.1 32.2 284 69.4 Best X SQUADII NarQA NewsQA SQuADI1 Trained on | - Evaluatedonâ> â NarQA DROP â_DROP-CS NarQA 51.5 10.2 Hl NarQA +X 53.0 14.4 14.6 Best X SQuAD2 SQuAD2 â SQuAD2
Table 3: Pilot study showing that out-of-format training can help improve performance. Each table compares training on just the anchor dataset (e.g., BoolQ in the top-left table) with training also on an out-of-format dataset denoted âXâ. Evaluation is on the anchor dataset as well as unseen datasets of that format. The last row identiï¬es the out-of-format dataset that helped most on each evaluation dataset. All results are based on the âsmallâ size T5 model. Color denotes QA format (see Table 2).
model. Color denotes QA format (see Table 2). NQA SQuADI.1 ROPES | NewsQA. SQUAD 2.0 B Quore j SQUAD 1.1 SQuAD 2.0 © ARC (easy) NewsQA = ARC (chal) I CQA MCTest 4 â OBQA Re @ aasc NQA 1 Race SEE 0019 ete es BoolQ-CS RACE ââââ ee Multize opga © â Bool â NP-BoolQ Figure 3: Bipartite graph showing the value of various datasets. The datasets on the left were used for training and on the right for evaluation. The wider the edge from a dataset ¢ (on the left) to a dataset r (on the right), the higher the contribution of adding the out-of-format edge weight, the overall the left also depicts out-of-format datasets. is the most helpful tiple formats. Similarly (SQuAD11.1, SQUAD relatively more helpful. erally appear to help, dataset, doesnât help ful dataset in the mix yes/no questions. In a similar vein, right hand side, the out-of-format datasets. datasets, all four formats 6 Experimental We now discuss our uating UNIFIEDQA on
edge weight, the overall width of a dataset @ on the left also depicts how much it contributes to out-of-format datasets. E.g., NQA (NarrativeQA) is the most helpful dataset and even helps mul- tiple formats. Similarly our extractive datasets (SQuAD11.1, SQUAD 2, and NewsQA) are also relatively more helpful. While large datasets gen- erally appear to help, RACE, another large-scale dataset, doesnât help that much. The least help- ful dataset in the mix is BoolQ which focuses on yes/no questions.
In a similar vein, the wider the dataset on the right hand side, the more it can be beneï¬t from out-of-format datasets. Among these beneï¬ciary datasets, all four formats are equally represented.
# 6 Experimental Results
We now discuss our main experimental results, eval- uating UNIFIEDQA on seed datasets (used for train- ing the system) as well as unseen datasets.
# dataset
# to the training set of questions in râs format.
# 6.1 UNIFIEDQA vs. 8 Dedicated Models
Fig. 3. In this bipartite graph, the datasets used for training are on the left hand side and the evaluation datasets are on the right hand side. The weight of each edge w(@,7°) indicates the contribution of a dataset £ when used for training jointly with an an- chor dataset d, when evaluated on r (d and r have the same format.) Specifically,
Is UNIFIEDQA, a single pre-trained multi-format QA system, as good as dedicated systems trained for individual datasets? We emphasize that the an- swer to this question is not as simple as it may seem, since earlier works have observed that a sys- tem addressing multiple tasks often underperforms a focused system (Raffel et al., 2020).
[s(e Ud; r) - S(d; r)| ;
w(é,r) = avga [s(e Ud; r) - S(d; r)| ;
where S(d, r) is the score achieved on r after train- ing on d. Since we only focus on gains from out-of- format training, we drop the edges that are negative or between two datasets of the same format.
As expected, there are strong connections be- tween the AB and EX datasets in Fig. 3 since their deï¬nitions are quite similar. Apart from the
Fig. 4 summarizes the results of the relevant ex- periment. The gray bars belong to UNIFIEDQA (a single system for multiple datasets of different formats). The colored bars are different T5-based systems tailored to individual datasets (a different system for each dataset). The results show that UNIFIEDQA performs almost as good as individ- ual T5 models targeted to each dataset. In some cases UNIFIEDQA performs even better than the
Common senseQA 62.8 59.0 735.9 20.8 76.2 79.1 NewsQA Quoref Quoref-CS ROPES ROPES-CS DROP DROP-CS QASC NP-BoolQ BoolQ-CS MuiRE Avg 20.6 12.8 72 38.1 27.2 39.9 284 | 45.8 2.6 5.7 97 42.3 79.1 78.6 91.7 | 24.2 81.3 80.4 59.9 | 60.7 78.4 TA RoBERTa UnifiedQA [EX] UnifiedQA [AB] UnifiedQA [MC] UnifiedQA [YN] 58.7 58.0 48.5 0.6 58.9 64.7 68.2 67.9 17 63.5 53.3 57.6 58.0 14 55.3 55.4 XLNet 4B4 48.1 61.0 0.0 67.0 61d 29.4 4.7 444 07 45.5 32.5 24.6 30.7 28.9 04 32.5 89.1 ROBERTa RoBERTa ALBERT MTMSN 24.2 36.8 37.2 0.1 40.1 54.2 55.3 54.1 67.9 14.8 68.5 85.2 KF+SIR+2Step reeLB-RoBERT RoBERTa
Table 4: Generalization to unseen datasets: Multi-format training (UNIFIEDQA) often outperforms models trained the same way but solely on other in-format datasets (e.g., UNIFIEDQA [EX], which is trained on all extractive train- ing sets of UNIFIEDQA. When averaged across all evaluation datasets (last column), UNIFIEDQA shows strong generalization performance across all formats. Notably, the âPrevious bestâ models (last row) were trained on the target datasetâs training data, but are even then outperformed by Uniï¬edQA (which has never seen these datasets during training) on the YN tasks.
8 Dedicated Models = UnifiedQA
Figure 4: UNIFIEDQA is on-par with, and often out- performs, 9 different equally-sized T5-based systems tailored to individual datasets. The ï¬gure contains sep- arate models for each of the two subsets of the ARC and Regents datasets.
single-dataset experts (e.g., on OBQA or NQA). On average (last column) UNIFIEDQA clearly out- performs the ensemble of dataset/format-speciï¬c systems. UNIFIEDQA thus offers ï¬exibility across multiple QA formats while compromising almost nothing compared to dataset-speciï¬c experts.
# 6.2 Generalization to Unseen Datasets
We now explore whether UNIFIEDQA generalizes well to other, unseen datasets. Table 4 summarizes the results of experiments where we evaluate var- ious models on datasets that are not used to train them. It compares UNIFIEDQA (training on mul- tiple formats) with training on various datasets of a single format (e.g., UNIFIEDQA [EX], built by training the model on only extractive datasets).
The ï¬rst few rows of the table show T5 models trained for individual formats, followed by UNI- FIEDQA. For completeness, we include the high- est previous scores for each dataset; one must be careful when reading these numbers as the best previous numbers follow the fully super- vised protocol (for NewsQA (Zhang et al., 2020),
Quoref (Segal et al., 2019), DROP (Lan et al., 2019), ROPES (Lin et al., 2019), QASC (Khot et al., 2019), CommonsenseQA (Zhu et al., 2020) and x-CS datasets (Gardner et al., 2020).)
We make three key observations: (1) On average (last column), UNIFIEDQA shows much stronger generalization across a wide range of datasets. (2) on 9 (out of 12) datasets, UNIFIEDQA shows a better generalization than any single-format ex- pert. For example, while the system is trained on multiple-choice questions with 4 candidate an- swers, it works quite well on datasets with more than 4 candidate answers (QASC and Common- senseQA have has 8 and 5 candidate answers per question, respectively). (3) Single-format experts are better at generalization only when the source and target datasets are very similar (for instance SQuAD and Quoref).
# 6.3 State-of-the-Art via Simple Fine-tuning
Fine-tuning of pre-trained language models has become the standard paradigm for building dataset- speciï¬c stat-of-the-art systems (Devlin et al., 2019; Liu et al., 2019). The question we address here is: when it comes to QA, is there a value in using UNIFIEDQA as a starting point for ï¬ne-tuning, as opposed to a vanilla language model that has not seen other QA datasets before?
To address this question, we ï¬ne-tune each of UNIFIEDQA, T5, and BART on several datasets by selecting the best check point on the dev set, and evaluating on the test set. Table 5 summarizes the results of the experiments. The table shows two variants: UNIFIEDQAT5 and UNIFIEDQABART. All results are based on the 11B version of T5.
The columns indicate the evaluation on the test set corresponding to the data that was used for training. For each dataset, the ï¬rst line of the table
OBQA * on ARC-easy * âa ARC-chal * ee QASC on) RoBERTa KF+SIR RoBERTa RoBERTs (Zhu et RoBERTa FreeLB- RoBERTa s2tep (wta (Clark et al.,2019c) (Mitra et al., 2020) (Clark et al.,2019c) al, 2020) (Clark et al.,2019c) (Zhu et al., 2020) etal. 2020) 738.7 80.0 69.9 80.0 55.9 678 - 85.2 678 66.2 64.1 79.6 36.6 40.4 50.0 753 63.8 70.0 68.0 82.7 52.1 55.0 53.2 78.2 84.2 84.2 83.8 90.0 65.4 69.7 77.0 88.5 86.0 87.2 86.4 92.0 75.0 78.5 78.5 89.6 RACE * ComQA WG PIQA SIQA ROPES NatQ (w/ IR) ALBERT FreeLB- RoBERTa RoBERTa RoBERTa RoBERTa RoBERTa DPR+BART (Lan etal.,2019) (Zhuetal.,2020) (Sakaguchi et al.,2019) (Bisk et al., 2019) (Mitra et al, 2020) (Linetal., 2019) (Min et al.,2020) 89.5 72.2 61.5 79.4 78.0 611 42.2 78.8 62.5 62.4 714 74.0 60.5 42.1 79.4 64.0 63.6 719 B2 60.0 44.5 87.1 78.1 84.9 88.9 81.4 74 49.3 89.4 79.1 85.7 89.5 81.4 75.2 49.3
Table 5: Fine-tuning UNIFIEDQA (last row) results in new state-of-the-art performance on 11 datasets. Further, it consistently improves upon ï¬ne-tuned T5 (2nd last row) by a margin ranging from 1% for CommonsenseQA (CQA) to as much as 13% for ARC-challenge. â(w/ IR)â denotes relevant information is retrieved and appended as context sentences in the input encoding. Datasets marked with * are used in UNIFIEDQAâs original training.
SQuADI1 93.4 SQuAD2 89.6 NarQA RACE OBQA ARC-easy ARC-hard MCTest BoolQ âAvg A 652 873 86.0 85.7 756 95.0 90.2 | 85.4
Table 6: The results of a leave-one-out ablation. The ï¬rst row indicates the performance of UNIFIEDQA on each dataset it was trained on. The rest of the rows exclude one dataset at a time. The rows are sorted based on the last column: the dataset with biggest contribution appear ï¬rst. The red highlights indicate the top 3 performance drops for each column.
reports the best previously published work. For several MC datasets that do not come with evi- dence paragraphs, we include two variants: one where we use them as-is and another that uses para- graphs fetched via an Information Retrieval (IR) system as additional evidence, indicated with âw/ IRâ tags. We use the same IR sentences as used by the baselines: Aristo corpus for ARC and OBQA datasets (Clark et al., 2019c), and 2-step IR for QASC (Khot et al., 2019). For NatQA, follow- ing (Min et al., 2020), we use the DPR retrieval engine (Karpukhin et al., 2020) to augment each question with additional paragraphs.
We see that ï¬ne-tuning on UNIFIEDQA con- sistently dominates ï¬ne-tuning on T5 and BART, respectively. It also dominates the best previous Intuitively, since UNI- scores on the datasets.
FIEDQA has seen different formats, it should be positioned to achieve higher scores after a little ï¬ne-tuning, compared to ï¬ne-tuning on a vanilla T5 or BART model. This could be especially ef- fective when a user has limited training data for a target QA task (also shown in Appendix A.6.) This also highlights that the effectiveness of cross- format training is not limited only to T5, but is rather a general trend for text-to-text architectures.
# 6.4 Ablation: Training Set Contributions
We now perform a leave-one-out experiment to better understand the contribution of each seed dataset to UNIFIEDQA. We take the system from §3.2 and assess how strong the model is when indi- vidual seed training datasets are dropped from the union. The result of this experiment is summarized
in Table 6. It compares the performance of full UNIFIEDQA (the ï¬rst row) with ablated variants that exclude one seed dataset at a time. The rows are sorted based on the last column: datasets with higher contributions appear ï¬rst.
Looking at ï¬rst few rows of the table, BoolQ, SQuAD 2.0, OBQA, NarQA are the top four con- tributing datasets, each with a different format. SQuAD 1.1 has the least importance, presumably because it is mostly covered by SQuAD 2.0.
This study suggests that in order to build an ef- fective uniï¬ed QA system, it sufï¬ces to have a relatively small set of datasets as long as the set includes representatives from each format.
# 7 Discussion
The key motivation for this work is the observa- tion that nearly all prior efforts on QA research were limited to the boundaries deï¬ned by narrow formats. A format-speciï¬c design would not gen- eralize across QA datasets with slightly different deï¬nitions (e.g., a model built for SQuAD would not work for RACE). Additionally, such a design would prevent us from beneï¬ting from the labeled data available in other formats. We challenge this view by advocating for approaches that combine seemingly different datasets. We believe that devel- oping QA systems targeted to a speciï¬c format is a conceptual barrier for progress in the ï¬eld.
Factors affecting generalization. Format is not the only factor affecting generalization across datasets. We additionally studied the value of other factors including dataset size and domain (vocabu- lary, topic, and style) in improving generalization. We observed that larger datasets often help with generalization, but not always (§5); e.g., RACE or OBQA show similar beneï¬ts (Fig. 3), even though RACE is much larger than OBQA. We observed a similar phenomenon with domain: similar domains help with transfer, but that is not always the case. For example, while BoolQ questions, similar to SQuAD, are accompanied with Wiki paragraphs, they barely beneï¬t each other. Overall, the factors affecting generalization are not well-understood, leaving room for future investigations.
Unifying QA formats and text-to-text models. While UNIFIEDQA is built based using existing text-to-text models (Radford et al., 2019a; Raf- fel et al., 2020), we emphasize that the choice of tasks for multi-task learning plays a crucial role
in achieving successful results. Previous studies (Raffel et al., 2020) did not observe gains when mixing tasks that are very different. The key intu- ition is that a more coherent choice of tasks is more likely to succeed. Further, focusing on a coherent space of QA tasks/formats allows us to simplify the input by not requiring âpreï¬xesâ to explicitly deï¬ne tasks/formats.
# 8 Conclusion
The question-answering community has fruitfully explored the design of strong models, but while staying within the boundaries of individual QA for- mats. We argued that such boundaries are artiï¬cial and can even limit the performance of systems, be- cause the desired reasoning abilities being taught and probed are not tied to speciï¬c formats. Train- ing data in one format should, in principle, help QA systems perform better even on questions in another format.
With this intuition in mind, we presented UNI- FIEDQA, a single pre-trained QA system based on the text-to-text paradigm, seeking to bring uni- ï¬cation across four common QA formats. We showed that even with its simple multi-format train- ing methodology, UNIFIEDQA achieves perfor- mance on par with 8 dataset-speciï¬c expert models (§6.1), while also generalizing well to many unseen datasets of seen formats (§6.2). At the same time, we demonstrated that UNIFIEDQA is a strong start- ing point for building QA systems: it can achieve state-of-the-art performance by simply ï¬ne-tuning on target datasets (6.3).
We hope this effort will inspire a future line of work in the QA and NLP communities, moving towards more general and broader system designs. We leave extensions of UNIFIEDQA to other for- mats such as to direct-answer questions (Roberts et al., 2020) as a promising avenue for future work.
# Acknowledgments
The authors would like to thank Collin Raffel, Adam Roberts, and Nicholas Lourie for their help with the T5 framework and for providing feed- back on an earlier version of this work. The au- thors would like to acknowledge grants by ONR N00014-18-1-2826 and DARPA N66001-19-2-403, and gifts from the Sloan Foundation and the Allen Institute for AI. Moreover, the authors would like to thank members of the Allen Institute for AI, UW-NLP, and the H2Lab at the University of Wash-
ington for their valuable feedback and comments. TPU machines for conducting experiments were provided by Google.
# References
Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and chal- lenges. In NAACL.
Pratyay Banerjee and Chitta Baral. 2020. Knowl- edge fusion and semantic knowledge ranking for open domain question answering. arXiv preprint arXiv:2004.03101.
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jian- feng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. In AAAI.
Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41â75.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019a. Boolq: Exploring the surprising In NAACL- difï¬culty of natural yes/no questions. HLT.
Kevin Clark, Minh-Thang Luong, Urvashi Khandel- wal, Christopher D Manning, and Quoc Le. 2019b. BAM! Born-again multi-task networks for natural language understanding. In ACL, pages 5931â5937.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question an- swering? Try ARC, the AI2 reasoning challenge. ArXiv, abs/1803.05457.
Peter Clark, Oren Etzioni, Tushar Khot, Bha- vana Dalvi Mishra, Kyle Richardson, Ashish Sab- harwal, Carissa Schoenick, Oyvind Tafjord, Niket Tandon, Sumithra Bhakthavatsalam, et al. 2019c. From âFâ to âAâ on the NY Regents science ex- ams: An overview of the Aristo project. ArXiv, abs/1909.01958.
Peter Clark, Oren Etzioni, Tushar Khot, Ashish Sab- harwal, Oyvind Tafjord, Peter Turney, and Daniel Khashabi. 2016. Combining retrieval, statistics, and inference to answer elementary science questions. In AAAI.
Pradeep Dasigi, Nelson F. Liu, Ana Maraso- vi´c, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In EMNLP/IJCNLP.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL.
Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Matt Gardner, and Sameer Singh. 2019a. Compre- hensive multi-dataset evaluation of reading compre- hension. In 2nd Workshop on Machine Reading for Question Answering.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019b. DROP: A reading comprehension benchmark requir- ing discrete reasoning over paragraphs. In NAACL.
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eu- nsol Choi, and Danqi Chen. 2019. MRQA 2019 shared task: Evaluating generalization in reading comprehension. In 2nd Workshop on Machine Read- ing for Question Answering, at EMNLP.
Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Evaluating modelsâ local decision et al. 2020. boundaries via contrast sets. In EMNLP - Findings.
Vladimir Karpukhin, Barlas OËguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In EMNLP.
Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Unifying question an- swering, text classiï¬cation, and regression via span extraction. arXiv preprint arXiv:1904.09286.
Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In NAACL- HLT.
Daniel Khashabi, Tushar Khot, and Ashish Sabharwal. 2020. More bang for your buck: Natural perturba- tion for robust question answering. In EMNLP.
Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2019. QASC: A dataset for question answering via sentence compo- sition. In AAAI.
Tomás Kociský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The narrativeqa reading comprehension challenge. TACL, 6:317â328.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- ï¬eld, Michael Collins, Ankur P. Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Jacob De- vlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question an- swering research. TACL, 7:453â466.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard H. Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. In EMNLP.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- ing of language representations. In ICLR.
Hector J. Levesque, Ernest Davis, and Leora Morgen- stern. 2011. The winograd schema challenge. In KR.
Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In ACL.
Chin-Yew Lin, Guihong Cao, Jianfeng Gao, and Jian- Yun Nie. 2006. An information-theoretic approach to automatic evaluation of summaries. In NAACL.
Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gard- ner. 2019. Reasoning over paragraph effects in situ- In 2nd Workshop on Machine Reading for ations. Question Answering, at EMNLP.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language de- cathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question answer- ing. In EMNLP.
Sewon Min, Danqi Chen, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. A discrete hard EM ap- proach for weakly supervised question answering. In EMNLP/IJCNLP.
Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. AmbigQA: Answering am- biguous open-domain questions. In EMNLP.
Arindam Mitra, Pratyay Banerjee, Kuntal Kumar Pal, Swaroop Mishra, and Chitta Baral. 2020. How addi- tional knowledge can improve natural language com- monsense question answering. arXiv: Computation and Language.
Kosuke Nishida, Kyosuke Nishida, Masaaki Nagata, Atsushi Otsuka, Itsumi Saito, Hisako Asano, and Junji Tomita. 2019. Answering while summarizing: Multi-task learning for multi-hop qa with evidence extraction. In ACL.
Haoruo Peng, Daniel Khashabi, and Dan Roth. 2015. In NAACL, Solving hard coreference problems. pages 809â819.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019a. Language models are unsupervised multitask learners.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019b. Lan- guage models are unsupervised multitask learners. OpenAI Blog, 1(8):9.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a uniï¬ed text-to-text trans- former. JMLR.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- tions for squad. In ACL.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP.
Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP.
Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the param- eters of a language model? In EMNLP.
Mrinmaya Sachan and Eric Xing. 2016. Easy questions ï¬rst? a case study on curriculum learning for ques- tion answering. In ACL, pages 453â463.
Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhaga- vatula, and Yejin Choi. 2020. WINOGRANDE: an adversarial winograd schema challenge at scale. In AAAI.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social iqa: Com- monsense reasoning about social interactions. In EMNLP-IJCNLP, pages 4453â4463.
Elad Segal, Avia Efrat, Mor Shoham, Amir Globerson, and Jonathan Berant. 2019. A simple and effec- tive model for answering multi-span questions. In EMNLP.
Alon Talmor and Jonathan Berant. 2019. Multiqa: An empirical investigation of generalization and transfer in reading comprehension. In ACL.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A ques- tion answering challenge targeting commonsense knowledge. In NAACL-HLT.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. Newsqa: A machine compre- hension dataset. In Rep4NLP@ACL.
Zhuosheng Zhang, Junjie Yang, and Hai Zhao. 2020. Retrospective reader for machine reading compre- hension. ArXiv, abs/2001.09694.
Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom Gold- stein, and Jingjing Liu. 2020. Freelb: Enhanced ad- versarial training for natural language understanding. In ICLR.
# A Appendices
# A.1 Datasets: Details
We evaluate our UNIFIEDQA on 19 existing datasets that target various formats, as well as various complex linguistic phenomena. Table 2 shows different properties for our datasets (whether it comes with a paragraph, whether the paragraph explicitly contains the answer, whether there are candidate-answers as part of the input, etc.) Most importantly, they are grouped into several formats/categories described below. Table 2 gives summary statistics of these datasets.
Extractive QA (EX). All the datasets in this format require models to extract the answer to a given question as a substring from a context paragraph. SQuAD 1.1 (Rajpurkar et al., 2016) contains questions about Wikipedia paragraphs. A later version of this dataset, SQuAD 2 (Rajpurkar et al., 2018), includes unanswerable questions which empirically makes the task much harder. For our evaluation, we use the development sets of SQuAD 1.1 and SQuAD 2. NewsQA (Trischler et al., 2017) dataset focuses on paraphrased questions with predicate-argument structure understanding collected from news articles from CNN/DailyMail articles. Quoref (Dasigi et al., 2019) contains questions that require coreference resolution in Wikipedia articles and can even have disjoint spans as answers. ROPES (Lin et al., 2019) centers around situation understanding, where the model must under the causes and effects implicit in the given situation.
Abstractive QA (AB). All the datasets in this format require models to produce answers that are often not mere substrings of the given context paragraph. NarrativeQA (Kociský et al., 2018) focuses on understanding various events that happen in a given movie plot, based on summaries of their movie adaptations from various web resources. Many of the answers do not have high overlap with the context. DROP (Dua et al., 2019b) contains questions that involve rudimentary mathematical skills (such as counting, addition, subtraction, maximum, minimum, etc.) and questions query multiple parts of the paragraph. The answer can be either a number or a date that can be inferred from the paragraph, or several spans from the context paragraph. Finally, we use an open-domain version of NaturalQuestions (Kwiatkowski et al., 2019) where the paragraph that was used for creating the question is eliminated, and only the questions with short answers up to ï¬ve tokens are taken. Instead, we follow (Min et al., 2020) to use a DPR retrieval (Karpukhin et al., 2020) engine to augment each question with an additional context paragraph. We call this dataset NatQA.
Multiple-choice QA (MC). All the datasets in this format contain questions that come with candidate answers. MCTest (Richardson et al., 2013) contains questions about simple, ï¬ctional stories. RACE (Lai et al., 2017) is a challenging set of English comprehension multiple choice exams given in Chinese middle and high schools. OpenBookQA (Mihaylov et al., 2018), ARC (Clark et al., 2018, 2016), QASC (Khot et al., 2019) are different MC tests focusing on elementary/high school-style science exams. We use several othern datasets that are often framed as commonsense reasoning benchmarks: CommonsenseQA (Talmor et al., 2019) is geared towards activity/concept questions, PIQA (Bisk et al., 2020) addresses physical interaction reasoning, SIQA (Sap et al., 2019) contains question that require social reasoning (motivations, reactions, event orders) and ï¬nally Winogrande (Sakaguchi et al., 2020) which a benchmark for hard pronoun resolution problems (Levesque et al., 2011; Peng et al., 2015).
Other than MCTest and RACE, the rest of the datasets do not come with accompanying paragraphs. On such datasets, occasionally a retrieval system is used to supplement each question with a relevant retrieved context paragraph. For most of this the work, we keep the questions as is with no additional retrieval (unless otherwise mentioned), except in §6.3 where we use IR to get numbers comparable to earlier work. One other variability among these datasets is their number of candidate answers. While many datasets have four candidates (see Figure 2), others have more. Later, in §6.2 we will see that our approach generalizes to datasets with different number of candidates, even if itâs not seen during training.
Yes/No QA (YN). All the datasets in this format contain questions that could be responded with yes/no answers. One can think of these as multiple-choice questions with 2 candidates; however, theyâre usually treated differently. Several examples we use are BoolQ (Clark et al., 2019a) and a version of this dataset
with natural-perturbations, BoolQ-NP (Khashabi et al., 2020), the subset of MultiRC (Khashabi et al., 2018) that have binary(yes/no) answers.
Contrast-sets. Additionally, we use contrast-sets (Gardner et al., 2020) for several of our datasets (denoted with âCSâ): BoolQ-CS, ROPES-CS, Quoref-CS, DROP-CS. These evaluation sets are expert- generated perturbations that deviate from the patterns common in the original dataset.
# A.2 Details on the experiments:
Below is several details on the experiments:
⢠Models: we use two text-to-text frameworks: T5 and BART.
⢠Model sizes: Most of the experiments are done on T5(11B) which has 11 billion parameters. We also report experiments with BART (large) with 440 million parameters.
⢠Input/output size: For all experiments, we use token-limits of size 512 and 100 for inputs and outputs sequences, respectively.
⢠# of iterations for pretraining on the seed datasets (§3): All models are trained for 100k steps on the seed datasets.
⢠Learning rates: we use 1e-3 and 1e-5, for T5 and BART, following the original works on each framework.
⢠Batch sizes: We use batches of 8 and 120, for the T5 (11B) and BART models, respectively.
⢠Infrastructure: In the experiments, we use v3-8 TPUs for T5 models, and eight 32GB GPUs for BART models.
⢠Time spent to build UNIFIEDQA: pretraining UNIFIEDQA approximately takes about 36 and 55 hours, on T5(11B) and BART models, respectively.
⢠Finetuning on datasets (§6.3): the only hyperparameter we iterated over is the training steps. Each model was ï¬ne-tuned for 60k steps and checkpoints were saved every 2k steps. The model with the highest score on the dev set is our selected model.
# A.3 UNIFIEDQA: Different Sizes
For completeness weâre also showing the scores of UNIFIEDQA of different sizes on each dataset. For these systems each row is a single system.
# SQuADI1
# SQuAD2 NewsQA Quoref Quoref-CS
# ROPES
# ROPES-CS
# NarQA
# DROP
# DROP-CS
# BoolQ
# MultiRC NP-BoolQ BoolQ-CS
794 676 Sil 256 276 310 329 53.7 146 172 771 469 59.4 58.1 88.2 781 54.2 40.0385 33.9 8A S87 19.7 23.7 82S OB 66.3 619 914 859 485 45.521 47.7 37.9 0.8 24.6 30.7 = 86.1 54.2 72.6 73.0 93.2 874 59.6 60.4 S47 4B. 3.1 633 285 339 893 62.6 78.4 771.0 934 89.6 589 63.5 55.3 67.0 «4565.2 32.5 40.9 90.2 59.9 813 80.4
RACE OBQA Cana) ARC-easy ay ARC-chal eine MCTest QASC oR) CQA 56.0 50.4 35.4 42.9 59.5 35.9 35.8 80.0 191 379 328 70.3 59.0 38.4 53.0 69.4 42.4 44.2 86.9 258 50.8 45.0 781 684 54.6 65.9 14 54.4 54.8 90.0 43.3 62.6 60.9 83.2 808 63.2 78.7 86.2 66.7 64.8 95.0 62.2 766 713 87.3 86.0 71.2 85.7 89.2 75.6 747 95.0 685 80.1 76.2
Table 7: UNIFIEDQA of different sizes on our datasets.
# A.4 Comparison with the Dedicated Models: extended results
Here we summarize an extension of the results in §6.1. Table 8 summarizes the results of the relevant experiment. In the top portion of the table we have evaluations of T5 model ï¬ne-tuned for individual datasets, followed by UNIFIEDQA. As it can be observed from the table, UNIFIEDQA performs almost as good as the best single dataset experts. In some cases UNIFIEDQA performs even better than than the single-dataset experts (e.g., on OBQA or NQA.) On average (last column) UNIFIEDQA is doing much better dataset/format-speciï¬c systems. In conclusion, UNIFIEDQA offers ï¬exibility across multiple QA formats while compromising almost nothing compared to dataset-speciï¬c experts.
NewsQA Quoref Quoref-CS DROP DROP-CS ROPES ROPES-CS QASC â COMMONSED Np. BooIQ BoolQ-CS Muli Avg seQA 625 715 610 315 370 62.0 39.9 64.5 70.4 16 0.0 24 (42.0 557 547 46.0 203.20. 29.4 23.9 393 52.6 22.2 18.2 95 327 49.9 70.7 56.6 = 29.2 36.5 Tl 48.2 64.1 BA 25 45 33 426 93 20.7 143 17 94 20.6 34 52.2 674 02 0.1 O1 173 0.6 7 14 04 0.1 0.0 07 148 20.8 79.1 78.6 91.7 24.2 58.0 68.2 57.6 30.7368 BI 417 54.1 59.0 27.2 39.9 28.4 45.8 389 63.5 55.3325 AOL 67.0 45.5 68.5 76.2 813 80.4 59.9 | 60.7 668 705 554 89.1 34.2 61.1 32.5 85.2 79.1 184 TA Retro Reader XLNet_-XLNet_ ALBERT MTMSN--ROBERTa âRoBERTa âKF+SIR42Step FreeLB-RoBERTa RoBERTa â RoBERTa
Table 8: UNIFIEDQA is on-par with systems tailored to individual datasets (the diagonal cells vs the last row) while functioning across a wide range of datasets (the last column).
# A.5 Pairwise Mixing: extended results
Here we summarize an extension of the results in §5. The question addressed here is whether there is value in mixing datasets with different formats. We evaluated this by adding one dataset of a different format to four different datasets (one for each format). The results are summarized in Table 9. The goal of each sub-table is to measure the within-format generalization one can gain via out-of-format training. Each sub-table has an anchor dataset, indicated in the ï¬rst column. For example in the ï¬rst table the anchor dataset is SQuAD. Rows of the table: Each table combines datasets of other formats with the anchor dataset (e.g., SQuAD + RACE, etc). The columns of the sub-tables contain evaluations on the dataset with the same format as the anchor dataset. For example, on the ï¬rst table, the evaluation is done on SQuAD 1.1/2.0, NewsQA, Quoref which have the same format as SQuaD 1.1, the anchor dataset. The results show that one can achieve gains for question-answering in a certain format by incorporating resources in other formats. In the ï¬rst two sub-tables, we see that NarQA (AB) and OBQA (MC) help a SQuAD models generalize better to other EX datasets. In the third table where the anchor dataset is NQA
(AB), EX datasets help a NQA model generalize better to other AB datasets. In the 4th/5th subtable, EX and AB datasets help a RACE/OBQA (MC) models generalize better to other MC datasets. Similarly, in the ï¬nal sub-table, MC dataset helps improve the scores on a YN datasets.
SS tence rato ~~ eee ee om ae ce SQuADII 85.9 42.8 S17 28.2 28.11 414 SQuADI1 + RACE 85.6 42.6 S17 26.6 27.43 46.8 SQuADII SQuAD11 + OBQA 85.7 42.8 52.1 207 29.84 476 SQUADII + BoolQ 85.8 42.7 52.1 207 29.42 475 SQuaADII + NarQA. 85.6 42.7 313 29.4 26.56 47.1 SQuAD2 16.5 10.7 46.0 17.7 22.04 46.6 SQuAD2 + RACE 16.5 10.6 47.9 18.6 20.40 468 SQuabD2 SQuAD2 + OBQA 16.7 70.8 48.4 16.9 19.80 46.5 SQuAD2 + BoolQ 15.9 72.0 45.4 16.3 20.35 46.0 SQuaAD?2 + NarQA 72.5 70.9 473 20.0 23.39 468 NarQA DROP â_ DROP-CS_ââs ROPES -~âROPES-CS. oo NarQA 315 10.2 Wl 22.8 15.3 22.2 NarQA + SQuADI1 92.7 14.1 14.6 30.5 33.2 29.0 NarQa + SQuAD2 53.0 14.4 14.6 31.3 33.2 29.3 NQA NarQa + NewsQA 52.5 10.4 12.3 16.6 15.6 21.5 NarQA +RACE 52.0 10.7 13.5 20.0 179 22.8 NarQA + OBQA 518 10.1 113 15.4 17.0 21.1 NarQA +BoolQ 518 10.2 109 20.7 10.9 20.9 Samana RACE OBQA ARC-easy ARC-hard MCTest QASC CQA oe RACE 558 266 318 28.0 625 179 283 35.8 RACE + SQuADI1 59.1 28.0 324 28.1 694 23.5 36.1 30.5 RACE RACE + NewsQA 575 28.0 31.6 28.4 650 199 32.1 315 RACE + BoolQ 574. 268 318 27.9 63.1 18.0 29.6 36.4 RACE +NarQ 55.7 32.2 30.6 28.4 60.9 179 28.1 36.3 OBQA 288 S18 26.1 34.8 33169 173 28.4 OBQA + SQUADII 29.6 516 27.2 33.3 463 9.5 23.3 315 GZ OBQA + SQuAD2 295 53.2 27.2 33.5 46.6 9.3, 23.1 31.8 OBQA + NewsQA 30.7 49.4 26.1 32.3 37889 22.9 29.7 OBQA + BoolQ 25.0 504 26.0 34.3 2200-74 18.3 26.9 OBQA + NarQA 29.7 528 25.6 33.0 49.1 8.9 19.1 31.2 BoolQ MultiRC ââ-NP-BoolQ_â_BoolQ-CS. Sse BoolQ 76.36 64.10 51.33 53.37 613 BoolQ + SQuAD11 78.41 51.28 54,33 58.36 60.6 BoolQ + SQuAD2 78.93 56.89 59.38 58.06 63.3 BoolQ BoolQ +NewsQA 7161 34.17 55.46 59.82 618 BoolQ + RACE 75.69 61.22 54.59 56.89 62.1 BoolQ + OBQA 76.42 66.03 52.03 37.77 63.1
# BoolQ + NarQA
78.90
59.02
55.33
61.00
63.6
Table 9: Pairwise mixing of formats: mixing with QA of datasets with different formats helps.
# A.6 Extended Results of Fine-tuning on Winogrande
Here we provide extended result for the Winogrande dataset. The results are summarized in Table 10. The table include results of ï¬ne-tuning UNIFIEDQAT5 and UNIFIEDQABART, as well as ï¬ne-tuning of the vanilla language models, T5 and BART. As it can be observed, on this dataset, ï¬ne-tuning UNIFIEDQA gives stronger results when the size of the training data is limited. With respect to the overall metric AUC, UNIFIEDQA has a slight edge over ï¬ne-tuning the vanilla language models.
Acc. (XS) Acc. (S) Ace. (M) Ace. (L) Ace. (XL) AUC RoBERTa 55.4 62.4 66.7 74.2 78.2 67.5 54.2 57.8 59.7 68.9 72.0 62.4 56.0 59.5 61.6 68.6 73.3 63.6 75.6 79.8 86.4 90.3 90.2 84.8 78.8 83.4 86.9 88.5 89.4 85.7
Table 10: Extended results on the Winogrande dataset | {
"id": "1806.08730"
} |
2005.00200 | HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training | We present HERO, a novel framework for large-scale video+language
omni-representation learning. HERO encodes multimodal inputs in a hierarchical
structure, where local context of a video frame is captured by a Cross-modal
Transformer via multimodal fusion, and global video context is captured by a
Temporal Transformer. In addition to standard Masked Language Modeling (MLM)
and Masked Frame Modeling (MFM) objectives, we design two new pre-training
tasks: (i) Video-Subtitle Matching (VSM), where the model predicts both global
and local temporal alignment; and (ii) Frame Order Modeling (FOM), where the
model predicts the right order of shuffled video frames. HERO is jointly
trained on HowTo100M and large-scale TV datasets to gain deep understanding of
complex social dynamics with multi-character interactions. Comprehensive
experiments demonstrate that HERO achieves new state of the art on multiple
benchmarks over Text-based Video/Video-moment Retrieval, Video Question
Answering (QA), Video-and-language Inference and Video Captioning tasks across
different domains. We also introduce two new challenging benchmarks How2QA and
How2R for Video QA and Retrieval, collected from diverse video content over
multimodalities. | http://arxiv.org/pdf/2005.00200 | Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, Jingjing Liu | cs.CV, cs.CL, cs.LG | Accepted by EMNLP 2020 | null | cs.CV | 20200501 | 20200929 | 0 2 0 2
p e S 9 2 ] V C . s c [ 2 v 0 0 2 0 0 . 5 0 0 2 : v i X r a
# HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training
Linjie Liâ, Yen-Chun Chenâ, Yu Cheng, Zhe Gan, Licheng Yu, Jingjing Liu Microsoft Dynamics 365 AI Research {lindsey.li, yen-chun.chen, yu.cheng, zhe.gan, licheng.yu, jingjl}@microsoft.com
# Abstract
framework We present HERO, omni- for large-scale HERO encodes representation learning. multimodal inputs in a hierarchical structure, where local context of a video frame is captured by a Cross-modal Transformer via multimodal fusion, and global video context is captured by a Temporal Transformer. In addi- tion to standard Masked Language Modeling (MLM) and Masked Frame Modeling (MFM) objectives, we design two new pre-training (i) Video-Subtitle Matching (VSM), tasks: where the model predicts both global and local temporal alignment; and (ii) Frame Order Modeling (FOM), where the model predicts the right order of shufï¬ed video frames. HERO is jointly trained on HowTo100M and large-scale TV datasets to gain deep under- standing of complex social dynamics with multi-character interactions. Comprehensive experiments demonstrate that HERO achieves new state of the art on multiple benchmarks over Text-based Video/Video-moment Re- trieval, Video Question Answering (QA), Video-and-language and Video Captioning tasks across different domains. We also introduce two new challenging benchmarks How2QA and How2R for Video QA and Retrieval, collected from diverse video content over multimodalities.1
Bansal, 2019), UNITER (Chen et al., 2020b), VL- BERT (Su et al., 2020) and Unicoder-VL (Li et al., 2020a). However, most large-scale pre-trained models are tailored for static images, not dynamic videos. VideoBERT (Sun et al., 2019b) is the ï¬rst to apply BERT to learn joint embedding for video- text pairs. But since only discrete tokens are used to represent video frames, rich video frame features are not fully utilized. To remedy this, CBT (Sun et al., 2019a) proposes to use a contrastive loss, but mainly for video representation learning alone, with text input only considered as side information. UniViLM (Luo et al., 2020) takes a step further and considers both understanding and generation tasks. Several constraints inherently limit the success of existing models. (i) Most model designs are di- rect adaptation of BERT, taking simple concatena- tion of subtitle sentences and visual frames as input, while losing the temporal alignment between video and text modalities. (ii) Pre-training tasks are di- rectly borrowed from image+text pre-training meth- ods, without exploiting the sequential nature of videos. (iii) Compared to diverse image domains investigated in existing work, video datasets used in current models are restricted to cooking or narrated instructional videos (Miech et al., 2019), exclud- ing video sources that contain dynamic scenes and complex social interactions.
# Introduction
Inspired by BERT (Devlin et al., 2019), large- scale multimodal pre-training has prevailed in the realm of vision-and-language research (Lu et al., 2019; Tan and Bansal, 2019; Chen et al., 2020b). There are many early players in the area, including ViLBERT (Lu et al., 2019), LXMERT (Tan and
To tackle these challenges, we present a new video-and-language large-scale pre-training frame- work - HERO (Hierarchical EncodeR for Omni- representation learning). As illustrated in Figure 1, HERO takes as input a sequence of video clip frames and their accompanying subtitle sentences.2 Instead of adopting a ï¬at BERT-like encoder, HERO encodes multimodal inputs in a hierarchical fash- ion, with (i) a Cross-modal Transformer to fuse a subtitle sentence and its accompanying local video
â Equal contribution. 1Code and new datasets will be released at https://
github.com/linjieli222/HERO.
2ASR can be applied when subtitles are unavailable.
frames, followed by (ii) a Temporal Transformer to obtain a sequentially contextualized embedding for each video frame, using all the surrounding frames as global context. The proposed hierarchical model ï¬rst absorbs visual and textual local context on frame level, which is then transferred to a global video-level temporal context. Experiments show that this novel model design achieves better perfor- mance than a ï¬at BERT-like architecture.
Four pre-training tasks are designed for HERO: (i) Masked Language Modeling (MLM); (ii) Masked Frame Modeling (MFM); (iii) Video- Subtitle Matching (VSM); and (iv) Frame Order Modeling (FOM). Compared to prior work, the key novelty is VSM and FOM, which encourage ex- plicit temporal alignment between multimodalities as well as full-scale exploitation of the sequential nature of video input. In VSM, the model consid- ers not only global alignment (predicting whether a subtitle matches the input video clip), but also local temporal alignment (retrieving the moment where the subtitle should be localized in the video clip). In FOM, we randomly select and shufï¬e a subset of video frames, and the model is trained to restore their original order. Extensive ablation studies demonstrate that both VSM and FOM play a critical role in video+language pre-training.
To empower the model with richer knowledge beyond instructional videos used in prior work, we jointly train HERO with both HowTo100M (nar- rated instructional videos) (Miech et al., 2019) and a large-scale TV dataset (containing TV episodes spanning across different genres) (Lei et al., 2018, 2020a,b; Liu et al., 2020). Compared to factual de- scriptions in HowTo100M, the TV dataset contains more complex plots that require comprehensive in- terpretation of human emotions, social dynamics and causal relations of events, making it a valuable supplement to HowTo100M and a closer approxi- mation to real-life scenarios.
Existing pre-trained models are evaluated on YouCook2 (Zhou et al., 2018a) and MSR-VTT (Xu et al., 2016a) datasets. YouCook2 focuses on cook- ing videos only, and the captions in MSR-VTT are very simple. To evaluate our model on more chal- lenging benchmarks, we collect two new datasets on video-moment retrieval and question answer- ing, How2R and How2QA. In addition, we evaluate HERO on popular retrieval and QA tasks such as TVR (Lei et al., 2020b) and TVQA (Lei et al., 2018), where HERO outperforms existing models
by a large margin. We further demonstrate the generalizability of our model by adapting it to (i) diverse downstream tasks: video-and-language in- ference and video captioning tasks, achieving new state of the art on VIOLIN (Liu et al., 2020) and TVC (Lei et al., 2020b) benchmarks; (ii) differ- ent video types: single-channel videos (video-only) and multi-channel videos (video + subtitle), report- ing superior performance over existing state of the art on DiDeMo (Anne Hendricks et al., 2017a) and MSR-VTT.
Our main contributions are summarized as (i) We present HERO, a hierarchical follows. Transformer-based model for video+language rep- (ii) We propose new pre- resentation learning. training tasks VSM and FOM, which complement MLM and MRM objectives by better capturing tem- poral alignment between multimodalities in both global and local contexts. (iii) Different from previ- ous work that mainly relies on HowTo100M, we in- clude additional video datasets for pre-training, en- couraging the model to learn from richer and more divserse visual content. (iv) We collect two new datasets based on HowTo100M for video-moment retrieval/QA, and will release the new benchmarks to foster future study. HERO achieves new state of the art across all the evaluated tasks.
# 2 Related Work
Since the birth of BERT (Devlin et al., 2019), there has been continuing advancement in language model pre-training, such as XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019), ALBERT (Lan et al., 2020), UniLM (Dong et al., 2019), and T5 (Raffel et al., 2019), which epitomizes the su- perb power of large-scale pre-training. Satellited around BERT, there is parallel growing interest in model compression (Sun et al., 2019c; Shen et al., 2020) and extension to generation tasks (Chen et al., 2020a; Wang and Cho, 2019).
Branching out from language processing to mul- timodal, subsequent studies also emerge in vi- sion+language space. Prominent work includes ViLBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019), VL-BERT (Su et al., 2020), Unicoder-VL (Li et al., 2020a), B2T2 (Alberti et al., 2019), UNITER (Chen et al., 2020b) and VILLA (Gan et al., 2020). A detailed review can be found in Appendix A.7.
Contrast to the boom in image+text area, pre- training for video+language is still in its infancy.
So far, VideoBERT (Sun et al., 2019b), CBT (Sun et al., 2019a), MIL-NCE (Miech et al., 2020), Act- BERT (Zhu and Yang, 2020) and UniViLM (Luo et al., 2020) are the only existing work exploring this space, covering downstream tasks from text- based video retrieval (Zhou et al., 2018a; Xu et al., 2016b) and video question answering (Maharaj et al., 2017; Lei et al., 2020a) to video caption- ing (Zhou et al., 2018b).
In this paper, we aim to propel video+language omni-representation learning in four dimensions: (i) better model architecture design; (ii) better pre- training task design; (iii) diversiï¬cation of training corpora; and (iv) new high-quality benchmarks for downstream evaluation.
# 3 Hierarchical Video+Language Encoder
In this section, we explain the proposed HERO architecture and the four pre-training tasks in detail.
# 3.1 Model Architecture
Model architecture of HERO is illustrated in Fig- ure 1, which takes the frames of a video clip and the textual tokens of subtitle sentences as inputs. They are fed into a Video Embedder and a Text Embedder to extract initial representations. HERO computes contextualized video embeddings in a hierarchical procedure. First, local textual con- text of each visual frame is captured by a Cross- modal Transformer, computing the contextualized multi-modal embeddings between a subtitle sen- tence and its associated visual frames. The encoded frame embeddings of the whole video clip are then fed into Temporal Transformer to learn the global video context and obtain the ï¬nal contextualized video embeddings.
Input Embedder We denote visual frames of a video clip as v = {vi}Nv i=1 and its subtitle as s = {si}Ns i=1 (Nv is the number of visual frames in a video clip and Ns is the number of sentences in each subtitle). For Text Embedder, we follow Liu et al. (2019) and tokenize a subtitle sentence si into a sequence of WordPieces (Wu et al., 2016), i.e., wsi = {wj j=1 (L is the number of tokens in si). The ï¬nal representation for each sub-word token is obtained via summing up its token em- bedding and position embedding, followed by a layer normalization (LN) layer. For Video Em- bedder, we ï¬rst use ResNet (He et al., 2016) pre- trained on ImageNet (Deng et al., 2009) and Slow- Fast (Feichtenhofer et al., 2019) pre-trained on Ki-
netics (Kay et al., 2017) to extract 2D and 3D visual features for each video frame. These features are concatenated as visual features and fed through a fully-connected (FC) layer to be projected into the same lower-dimensional space as token em- beddings. Since video frames are sequential, their position embeddings can be calculated in the same way as in the Text Embedder. The ï¬nal embedding of a frame is obtained by summing up FC outputs and position embeddings and then passing through an LN layer. After Input Embedder, token and 3 are denoted frame embeddings for wsi and vsi si â RKÃd (d is the si â RLÃd and Vemb as Wemb hidden size).
Cross-modal Transformer To utilize the inher- ent alignment between subtitles and video frames, for each subtitle sentence si, we ï¬rst learn contex- tualized embeddings between the corresponding tokens wsi and its associated visual frames vsi through cross-modal attention. Inspired by the re- cent success (Chen et al., 2020b; Lu et al., 2019) of using Transformer (Vaswani et al., 2017) for multimodal fusion, we also use a multi-layer Trans- former here. The outputs from Cross-modal Trans- former is a sequence of contextualized embeddings for each subtitle token and each video frame:
Vcross si , Wcross si = fcross(Vemb si , Wemb si ) , (1)
where fcross(·, ·) denotes the Cross-modal Trans- â RKÃd and Wcross former, Vcross
Temporal Transformer After collecting all the }Ns visual frame embeddings Vcross = {Vcross i=1 â RNvÃd from the output of Cross-modal Trans- former, we use another Transformer as tempo- ral attention to learn contextualized video embed- dings from the global context of a video clip. To avoid losing positional information, we use residual connection (He et al., 2016) to add back Vemb â RNvÃd. The ï¬nal contextualized video embeddings are calculated as:
Vtemp = ftemp(Vemb + Vcross) , (2)
where ftemp(·) denotes the Temporal Transformer, and Vtemp â RNvÃd. Compared to ï¬at BERT-like encoder, which directly concatenates all textual tokens and visual frames as inputs, the proposed
j=1 denotes the set of visual frames paired with subtitle sentence si, based on their timestamps. Refer to Appendix A.4 for details.
MFM (Masked) Frame Features Cross-Modal Transformer 00:00:02 -> 00:00:04 âThat's why you won't go out with her again? Word Embed. VSM Frame Features Cross-Modal Transformer 00:00:34 -> 00:00:36 MLM Frame Features Cross-Modal Transformer (Masked) Word Embed. 00:00:66 -> 00:00:68 (Joey:) Joey doesn't MASK] food! Query Encoder Temporal (Subtitle as Query) Transformer - Thank God you're here. Listen to this. share 016345287 Frame Order Modeling => MFM Flow [==> VSM Flow [=> MLM Flow => FOMFiow = Shuffle Frames siné) Cosine Similarity [5 Frame Features [EES Word Features Rs Masked Frame Features VSM Frame Features shuffled Frame Features [15 Masked Word Features
Figure 1: HERO Architecture (best viewed in color), consisting of Cross-Modal Transformer and Temporal Trans- former, learned via four pre-training tasks hierarchically. Initial frame features are obtained by SlowFast and ResNet feature extractors, and word embeddings are learned via an embedding layer initialized from RoBERTa.
model effectively utilizes the temporal alignment between subtitle sentences and video frames for multimodal fusion in a more ï¬ne-grained manner. In the experiments, we show that our model design far outperforms a ï¬at BERT-like baseline.
# 3.2 Pre-training Tasks
We introduce four tasks for pre-training. During training, we sample one task per mini-batch to pre- vent different tasks from corrupting each othersâ in- put. As shown in Figure 1, MFM and MLM are in analogy to BERT (Devlin et al., 2019). Word mask- ing is realized by replacing a word with special token [MASK], and frame masking by replacing a frame feature vector with zeros. Following Chen et al. (2020b), we only mask one modality each time while keeping the other modality intact. VSM is designed to learn both local alignment (between visual frames and a subtitle sentence) and global alignment (between a video clip and a sequence of subtitle sentences). FOM is designed to model sequential characteristics of video, by learning the original order of randomly reordered frames.
# m â NM .4
In MLM, we randomly mask out input words with a probability of 15%, and replace the masked tokens wm si with special tokens [MASK].5 The goal is to predict these masked words based on the ob- servation of their surrounding words w\m si and the visual frames aligned with the sentence vsi, by minimizing the negative log-likelihood:
LMLM(θ) = âED log Pθ(wm si |w\m si , vsi) , (3)
where θ denotes trainable parameters. Each pair (wsi, vsi) is sampled from the training set D.
# 3.2.2 Masked Frame Modeling
Similar to MLM, we also sample frames and mask their visual features with a probability of 15%. However, the difference is that MLM is performed on a local context (i.e., the output of Cross-modal Transformer), while MFM is performed on a global context (i.e., the output of Temporal Transformer). The model is trained to reconstruct masked frames vm, given the remaining frames v\m and all the subtitle sentences s. The visual features of masked
# 3.2.1 Masked Language Modeling
The inputs for MLM include: (i) sub-word to- kens from the i-th subtitle sentence wsi; (ii) visual frames vsi aligned with wsi; and (iii) mask indices
4N is a natural number, M is the number of masked tokens, and m is the set of masked indices.
5Following BERT, we decompose the 15% randomly masked-out words into 10% random words, 10% unchanged, and 80% [MASK].
frames are replaced by zeros. Unlike textual to- kens that are represented as discrete labels, visual features are high-dimensional and continuous, thus cannot be supervised via class likelihood. Instead, we propose two variants for MFM, which share the same objective base:
LMFM(θ) = EDfθ(vm|v\m, s) . (4)
Masked Frame Feature Regression (MFFR) MFFR learns to regress the output on each masked frame vi to its visual features. Specifically, we apply an FC layer to convert the output frame rep- resentations into a vector ho(v@ ) of the same dimension as the input visual feature rv). Then we apply L2 regression between the two: fol vml¥\ms8) = 22) [Vho(vin) ~ (vm |:
Masked Frame Modeling with Noise Con- trastive Estimation (MNCE) _ Instead of directly regressing the real values of masked visual features, we use the softmax version of Noise Contrastive Estimation (NCE) loss (Jozefowicz et al., 2016), which is widely adopted in self-supervised repre- sentation learning (Sun et al., 2019a; Hjelm et al., 2019; Oord et al., 2018). NCE loss encourages the model to identify the correct frame (given the context) compared to a set of negative distractors. Similar to MFFR, we feed the output of the masked frames vi into an FC layer to project them into a vector go(vee ). Moreover, we randomly sam- ple frames from the output of unmasked frames as negative distractors Vneg = fv Qalv a E V\m}, which are also transformed through the same FC layer as go(v ee). The final objec- tive minimizes the NCE loss: f(Vm|V\m;8) = yom log NCE(go(v!2))|-90(Vneg))-
3.2.3 Video-Subtitle Matching The inputs to VSM are: (i) a sampled query sq from all subtitle sentences; (ii) the whole video clip v; and (iii) the remaining subtitle sentences s\q for the video clip. We expect the model to learn: (i) local alignment - the start and end index yst, yed â {1, ..., Nv}, indicating the span of visual frames aligned with the query;6 and (ii) global alignment - to which video the sampled query is matched.
6 Timestamps are used to perform local alignment, which are either included with video (e.g., TV) or generated by ASR (e.g., HowTo100M). Refer to Appendix A.4 for details.
In VSM, we follow XML (Lei et al., 2020b) to compute the matching scores between the query and visual frames at both local and global levels. Speciï¬cally, we extract the output of Temporal Transformer as the ï¬nal visual frame representa- tion Vtemp â RNvÃd. The query is fed into Cross- modal Transformer to compute its textual represen- tations Wcross ). Based on this, we use a query encoder (Lei et al., 2020b), consisting of a self-attention layer, two linear lay- ers and an LN layer, to obtain the ï¬nal query vector q â Rd from Wcross
# sq
Local Alignment The local query-video match- ing score is computed using dot product:
Slocal(sq, v) = Vtempq â RNv . (5)
Two trainable 1D convolution ï¬lters are applied to the scores, followed by a softmax layer, to generate two probability vectors pst, ped â RNv , represent- ing the probabilities of every position being the start and end of the ground-truth span. During train- ing, we sample 15% subtitle sentences as queries for each video, and use the cross-entropy loss to predict the start and end index for local alignment:
Llocal = âED log(pst[yst]) + log(ped[yed]) ,
where p[y] denotes indexing the y-th element of the vector p.
Note that, XML computes the query-video matching score for each modality separately, and the ï¬nal matching score is the sum of the two scores. In our HERO model, multimodal fusion is performed in a much earlier stage.
Global Alignment The global matching score is computed by max-pooling the cosine similarities between each frame and the query:
viemp q Sglobal (Sq; V = mas (rim ie) . (6) stobal( Sa» ¥) verry al
We use a combined hinge loss Lh (Yu et al., 2018a) over positive and negative query-video pairs. For each positive pair (sq, v), we replace v or sq with one other sample from in the same mini-batch to construct two sets of negative examples: (sq, Ëv) and (Ësq, v). The training loss is speciï¬ed as:
Lh(Spos, Sneg) = max(0, δ + Sneg â Spos) , Lglobal = âED[Lh(Sglobal(sq, v), Sglobal(Ësq, v)) + Lh(Sglobal(sq, v), Sglobal(sq, Ëv))] , (7)
where δ is the margin hyper-parameter. The ï¬nal loss LVSM = λ1Llocal + λ2Lglobal, where λ1 and λ2 are hyper-parameters balancing the two terms.
3.2.4 Frame Order Modeling The inputs for FOM are: (i) all subtitle sentences s; (ii) visual frames v; and (iii) the reorder indices i=1 â NR.7 We randomly select 15% r = {ri}R of the frames to be shufï¬ed, and the goal is to reconstruct their original timestamps, denoted as t = {ti}R i=1, where ti â {1, ..., Nv}. We formulate FOM as a classiï¬cation problem, where t is the ground-truth labels of the reordered frames.
Speciï¬cally, reordering happens after the mul- timodal fusion of subtitle and visual frames. The reordered features are fed into Temporal Trans- former to produce reordered visual frame embed- dings Vtemp . These embeddings are transformed through an FC layer, followed by a softmax layer to produce a probability matrix P â RNvÃNv , where each column pi â RNv represents the scores of Nv timestamp classes that the i-th timestamp be- longs to. The ï¬nal objective is to minimize the the negative log-likelihood (cross-entropy loss):
pr logPiri,ti]. (8) Lrom = â
# 4 Experiments
In this section, we describe comprehensive ex- periments on downstream tasks and provide ab- lation studies for in-depth analysis of different pre- training settings.
To validate the effectiveness of HERO, we evalu- ate on a wide variety of downstream tasks, includ- ing Text-based Video/ Video-moment Retrieval, Video Question Answering, Video-and-language Inference, and Video Captioning. We consider 6 existing benchmarks: TVR (Lei et al., 2020b), TVQA (Lei et al., 2018), VIOLIN (Liu et al., 2020), TVC (Lei et al., 2020b), DiDeMo (Anne Hendricks et al., 2017a), and MSR-VTT (Xu et al., 2016b). Detailed descriptions and evaluation metrics on each task can be found in Appendix A.6.
# 4.1 Pre-training Datasets
Our pre-training dataset is composed of 7.6M video clips with their accompanying subtitles from TV and HowTo100M datasets. We exclude all the videos that appear in the downstream tasks to avoid contamination in evaluation.
7R is the number of reordered frames, and r is the set of reorder indices.
TV Dataset (Lei et al., 2018) was built on 6 popular TV shows across 3 genres: medical dramas, sitcoms and crime shows. It contains 21,793 video clips from 925 episodes. Each video clip is 60-90 seconds long, covering long-range scenes with complex character interactions and so- cial/professional activities. Dialogue for each video clip is also provided.
HowTo100M Dataset (Miech et al., 2019) was collected from YouTube, mostly instructional videos. It contains 1.22 million videos, with ac- tivities falling into 12 categories (e.g., Food & En- tertaining, Home & Garden, Hobbies & Crafts). Each video is associated with a narration as sub- titles that are either written manually or from an Automatic Speech Recognition (ASR) system. The average duration of videos in HowTo100M is 6.5 minutes. We cut the videos into 60-second clips to make them consistent with the TV dataset, and exclude videos in non-English languages. These pre-processing steps result in a subset of 7.56M video clips, accompanied with English subtitles.
# 4.2 New Benchmarks
Existing benchmarks are mostly built on videos from either a single domain or a single modality. In order to evaluate on diverse video content that re- ï¬ects multimodality challenges, we introduce two new datasets as additional benchmarks: How2R for text-based video-moment retrieval, and How2QA for video question answering.
How2R Amazon Mechanical Turk (AMT) is used to collect annotations on HowTo100M videos. Figure 6a in Appendix shows the interface for an- notation. We randomly sample 30k 60-second clips from 9,421 videos and present each clip to the turk- ers, who are asked to select a video segment con- taining a single, self-contained scene. After this segment selection step, another group of workers are asked to write descriptions for each displayed segment. Narrations are not provided to the work- ers to ensure that their written queries are based on visual content only. These ï¬nal video segments are 10-20 seconds long on average, and the length of queries ranges from 8 to 20 words.
From this process, we have collected 51,390 queries for 24k 60-second clips from 9,371 videos in HowTo100M, on average 2-3 queries per clip. We split the video clips and its associated queries into 80% train, 10% val and 10% test.
Pre-training Tasks TVR TVQA How2R How2QA Acc. 71.25 71.99 72.54 72.75 72.75 73.34 74.80 R@1 R@10 R@100 14.45 9.08 2.06 14.98 9.27 2.15 15.97 9.85 2.36 18.77 10.41 2.78 18.05 10.12 2.73 12.90 20.85 3.54 21.06 3.85 12.73 Acc. 69.79 70.13 70.85 71.36 71.36 73.68 73.81
# Pre-training Data
R@1 R@10 R@100 17.52 10.66 2.92 1 MLM 17.52 10.92 3.13 2 MLM + MNCE 17.43 10.27 3.09 3 MLM + MNCE + FOM 22.82 14.69 4 MLM + MNCE + FOM + VSM 4.44 22.37 14.29 5 MLM + MNCE + FOM + VSM + MFFR 4.44 21.63 13.23 3.81 6 MLM + MNCE + FOM + VSM Howto100M 24.55 16.26 5.13 TV + HowTo100M 7 MLM + MNCE + FOM + VSM
# TV
Table 1: Evaluation on pre-training tasks and datasets. Dark and light grey colors highlight the top and second best results across all the tasks trained with TV Dataset. The best results are in bold.
How2QA To collect another dataset for video QA task, we present the same set of selected video clips to another group of AMT workers for multi- choice QA annotation. Each worker is assigned with one video segment and asked to write one question with four answer candidates (one correct and three distractors). Similarly, narrations are hidden from the workers to ensure the collected QA pairs are not biased by subtitles.
We observe that human-written negative answers suffer from serious bias (i.e., models can learn to predict correctly without absorbing any informa- tion from the video or subtitles). To mitigate this, we use adversarial matching (Zellers et al., 2019) to replace one of the three written negative answers by a correct answer from another question that is most relevant to the current one. Similar to TVQA, we also provide the start and end points for the relevant moment for each question. After ï¬ltering low-quality annotations, the ï¬nal dataset contains 44,007 QA pairs for 22k 60-second clips selected from 9035 videos. We split the data into 80% train, 10% val and 10% test sets. More details about data collection can be found in Appendix A.9.
# 4.3 Ablation Study
We analyze the effectiveness of model design, espe- cially different combinations of pre-training tasks and datasets, through extensive ablation studies.
Optimal Setting of Pre-training Tasks To search for the optimal setting of pre-training tasks, we conduct a series of extensive ablation studies to test each setting, using video-moment retrieval and QA downstream tasks as evaluation. Table 1 sum- marizes ablation results on TVR, TVQA, How2R and How2QA under different pre-training settings. Models are trained on TV dataset only for com- putational efï¬ciency. Compared to using MLM only (L1 in Table 1), adding MNCE (L2) shows improvement on all downstream tasks. The best performance is achieved by MLM + MNCE + FOM
# + VSM (L4).
Effect of FOM and VSM When MLM, MNCE and FOM are jointly trained (L3), there is a large performance gain on TVQA, and signiï¬cant im- provement on How2R and How2QA. Comparable results are achieved on TVR. This indicates that FOM, which models sequential characteristics of video frames, can effectively beneï¬t downstream tasks that rely on temporal reasoning (such as QA tasks).
We observe signiï¬cant performance lift by adding VSM (L4), and the local and global align- ments between subtitle and visual frames learned through VSM are especially effective on TVR and How2R. Adding additional MFFR (L5) reaches slightly worse results. Our observation is that MFFR is competing with (instead of complimen- tary to) MNCE during pre-training, which renders the effect of MFFR negligible.
Effect of Pre-training Datasets We study the effect of pre-training datasets by comparing TV dataset with HowTo100M. In this study, we ï¬rst pre-train our model on HowTo100M dataset (L6). We observe a performance drop on TVR, while a performance boost on TVQA, How2R and How2QA, compared to the model trained on TV dataset (L4). Our hypothesis is that text-based video-moment retrieval is more sensitive to video domains. Although HowTo100M dataset contains much more videos, the model still beneï¬ts more from being exposed to similar TV videos during pre-training.
Hierarchical Design vs. Flat Architecture To validate the effectiveness of our model design, we compare HERO with two baselines (with and with- out pre-training): (i) Hierarchical Transformer (H- TRM) baseline, constructed by simply replacing the Cross-modal Transformer with a RoBERTa model
Method \Task TVR How2R TVQA How2QA VIOLIN TVC SOTA Baseline HERO R@1 R@10 R@100 R@1 R@10 R@100 13.27 3.25 21.06 6.21 13.41 19.34 30.52 36.66 2.06 3.85 8.96 12.73 Acc. 70.23 73.61 Acc. - 73.81 Acc. 67.84 68.59 Bleu Rouge-L Meteor Cider 45.38 10.87 49.98 12.35 32.81 34.16 16.91 17.64
(a) Results on multi-channel (video+subtitle) tasks: TVR12, How2R, TVQA, How2QA, VIOLIN and TVC.
Method \Task DiDeMo DiDeMo w/ ASR MSR-VTT MSR-VTT w/ ASR SOTA Baseline HERO R@1 R@10 R@100 R@1 R@10 R@100 R@1 R@5 R@10 R@1 R@5 R@10 1.59 2.14 6.71 11.43 25.44 36.09 - 3.01 - 14.87 - 47.26 14.90 16.80 40.20 43.40 52.80 57.70 - 20.50 - 47.60 - 60.90
(b) Results on DiDeMo and MSR-VTT with video-only inputs (single-channel), compared with ASR-augmented inputs (multi- channel).
Table 3: Results on the test set of six downstream tasks, compared to task-speciï¬c state-of-the-art (SOTA) mod- els: XML (Lei et al., 2020b) for TVR, How2R and DiDeMo, HowTo100M (Miech et al., 2019) for MSR-VTT, STAGE (Lei et al., 2020a) for TVQA (inapplicable to How2QA due to region-level features), Multi-stream (Liu et al., 2020) for VIOLIN, and MMT (Lei et al., 2020b) for TVC.
and encoding subtitles only;8 (ii) Flat BERT-like encoder (F-TRM).9
For this ablation experiment, we use TVR and TVQA as evaluation tasks. Results are summarized in Table 2: (i) Without pre-training, F-TRM is much worse than HERO on both tasks. This is due to H-TRM and HEROâs explicit exploitation of the temporal alignment between two modalities of videos. (ii) Pre-training lifts HERO performance by a large margin, but not much for F-TRM or H- TRM. This indicates that cross-modal interactions and temporal alignments learned by HERO through pre-training can provide better representations for downstream tasks.
Pre-training Model TVR TVQA SOTA R@1 R@10 R@100 15.97 2.76 9.08 Acc. 70.50 No10 Yes F-TRM H-TRM HERO F-TRM11 H-TRM HERO 1.99 2.97 2.98 2.69 3.12 4.44 7.76 10.65 10.65 9.21 11.08 14.69 13.26 18.68 18.25 15.98 18.42 22.82 31.80 70.09 70.65 49.12 70.03 72.75
Table 2: Ablation study on model design, comparing HERO to a ï¬at BERT-like encoder (F-TRM) baseline, a Hierarchical Transformer (H-TRM) baseline, and task- speciï¬c SOTA models on TVR and TVQA val set.
Key Conclusions The main observations from these extensive ablation studies are summarized as follows:
HERO vs. SOTA with and w/o Pre-training We compare HERO with task-specifc state of the art (SOTA) models, including XML (Lei et al., 2020b) for TVR and STAGE (Lei et al., 2020a) for TVQA. As shown in Table 2, our model consistently out- performs SOTA models on both tasks, with or with- out pre-training. Note that for TVQA, STAGE is trained with additional supervision on spatial grounding with region-level features for each frame. Without additional supervisions, HERO is able to achieve better performance.
⢠The optimal pre-training setting is MLM + MNCE + FOM + VSM, when trained on HowTo100M dataset and TV dataset.
⢠FOM effectively helps downstream tasks that rely on temporal reasoning (e.g., video QA tasks).
⢠VSM encourages frame-subtitle alignment, which is especially effective for video- moment retrieval tasks.
8The inputs to Temporal Transformer in H-TRM are the summation of initial frame embedding and max-pooled subti- tle embeddings from RoBERTa.
9F-TRM takes as input a single sequence by concatenating the embeddings of visual frames and all subtitle sentences, and encodes them through one multi-layer Transformer.
10Model parameters are initialized with RoBERTa weights following Lei et al. (2020b).
11F-TRM is pre-trained with MLM+MNCE. VSM and FOM cannot be directly applied.
⢠The hierarchical design in HERO explicitly aligns subtitles and frames, while a ï¬at model architecture can only learn this alignment through implicit attention.
⢠HERO consistently outperforms SOTA with and without pre-training, which further demonstrates the effectiveness of HERO model design.
# 4.4 Results on Downstream Tasks
Table 3 reports HERO results on the test splits of all downstream tasks. HERO is pre-trained on both TV and HowTo100M datasets, with the opti- mal pre-training setting: MLM + MNCE + FOM + VSM. We compare HERO with task-speciï¬c SOTA models on each downstream task, includ- ing: XML (Lei et al., 2020b) for TVR, Didemo and How2R; HowTo100M (Miech et al., 2019) for MSR-VTT; STAGE (Lei et al., 2020a) for TVQA; Multi-stream (Liu et al., 2020) for VIOLIN; and MMT (Lei et al., 2020b) for TVC. Note that we cannot directly apply STAGE to How2QA, as it was speciï¬cally designed to leverage region-level features. Our HERO model achieves new state of the art across all benchmarks.
Results on Multi-channel Tasks Table 3a shows results on downstream tasks consisting of multi-channel videos (video + subtitle). On TVR R@1, HERO results nearly double those from XML.12 Further, without leveraging ï¬ne-grained region-level features, HERO outperforms baseline models by +3.28% on TVQA and +0.75% on VI- OLIN. When evaluated on TVC, video and subti- tles are encoded by HERO, then fed into a 2-layer Transformer decoder to generate captions. Even though no pre-training was applied to the decoder, HERO surpasses SOTA baseline across all metrics, especially +4.60% on Cider. In addition, HERO establishes a strong baseline for new benchmarks How2R and How2QA.
Results on Single-channel Tasks Table 3b text-based results on DiDeMo for presents video-moment retrieval task and MSR-VTT for text-based video retrieval task. On DiDeMo, HERO surpasses XML by +0.55/+4.72/+10.65 on R@1/10/100, without leveraging Temporal End- point Feature used in XML. On MSRVTT, HERO outperforms existing video pre-training model (HowTo100M) by +1.9/+3.2/+4.9 on R@1/5/10.
To evaluate in multi-channel setting, we also ï¬ne- tuned HERO on MSR-VTT and DiDeMo using both video channel and extracted subtitle channel (with ASR tools). When augmenting DiDeMo/MSR- VTT with ASR inputs, HERO performance is fur- ther improved. Although our model design focuses on truly multimodal videos (video+subtitle input), these results demonstrate HEROs superior general-
12To be consistent with TVR leaderboard, results are re- ported on tIoU>0.7 without nms.
izability to different video types (multi- and single- channel). More results and analysis are provided in Appendix A.1.
# 5 Conclusion
In this paper, we present a hierarchical encoder for video+language omni-representation pre-training. Our HERO model presents a hierarchical archi- tecture, consisting of Cross-modal Transformer and Temporal Transformer for multi-modal fusion. Novel pre-training tasks are proposed to capture temporal alignment both locally and globally. Pre- trained on two large-scale video datasets, HERO ex- ceeds state of the art by a signiï¬cant margin when transferred to multiple video-and-language tasks. Two new datasets on text-based video-moment re- trieval and video QA are introduced to serve as additional benchmarks for downstream evaluation. We consider extension of our model to other video- and-language tasks as future work, as well as de- veloping more well-designed pre-training tasks.
# References
Chris Alberti, Jeffrey Ling, Michael Collins, and David Reitter. 2019. Fusion of detected objects in text for visual question answering. In EMNLP.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR.
Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017a. Localizing moments in video with natural language. In ICCV.
Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017b. Localizing moments in video with natural language. In CVPR.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large anno- tated corpus for learning natural language inference. In EMNLP.
Yen-Chun Chen, Zhe Gan, Yu Cheng, Jingzhou Liu, and Jingjing Liu. 2020a. Distilling the knowledge of bert for text generation. In ACL.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020b. Uniter: Universal image-text representation learning. In ECCV.
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In CVPR.
Michael Denkowski and Alon Lavie. 2014. Meteor universal: Language speciï¬c translation evaluation for any target language. In Proceedings of the ninth workshop on statistical machine translation.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xi- aodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Uniï¬ed language model pre-training for natural language understand- ing and generation. In NeurIPS.
Victor Escorcia, Mattia Soldan, Josef Sivic, Bernard Ghanem, and Bryan Russell. 2019. Temporal local- ization of moments in video collections with natural language. arXiv preprint arXiv:1907.12763.
Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. 2019. Slowfast networks for video recognition. In ICCV.
Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-scale ad- versarial training for vision-and-language represen- tation learning. arXiv preprint arXiv:2006.06195.
Zhe Gan, Chuang Gan, Xiaodong He, Yunchen Pu, Kenneth Tran, Jianfeng Gao, Lawrence Carin, and Li Deng. 2017. Semantic compositional networks for visual captioning. In CVPR.
Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Neva- tia. 2017. Tall: Temporal activity localization via language query. In CVPR.
Sergio Guadarrama, Niveda Krishnamoorthy, Girish Malkarnenkar, Subhashini Venugopalan, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2013. Youtube2text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition. In ICCV.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In CVPR.
R Devon Hjelm, Alex Fedorov, Samuel Lavoie- Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. 2019. Learning deep representations by mutual information estimation and maximization. In ICLR.
Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. 2020. Pixel-bert: Aligning im- age pixels with text by deep multi-modal transform- ers. arXiv preprint arXiv:2004.00849.
Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. 2017. Tgif-qa: Toward spatio- temporal reasoning in visual question answering. In CVPR.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Exploring arXiv preprint Shazeer, and Yonghui Wu. 2016. the limits of language modeling. arXiv:1602.02410.
Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Sudheendra Vijaya- narasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, et al. 2017. The kinetics human action video dataset. arXiv preprint arXiv:1705.06950.
Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of bert. In EMNLP.
Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In ICCV.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learn- ing of language representations. In ICLR.
Jie Lei, Licheng Yu, Mohit Bansal, and Tamara Berg. 2018. Tvqa: Localized, compositional video ques- tion answering. In EMNLP.
Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. 2020a. Tvqa+: Spatio-temporal grounding for video question answering. In ACL.
Jie Lei, Licheng Yu, Tamara L Berg, and Mohit Bansal. 2020b. Tvr: A large-scale dataset for video-subtitle moment retrieval. In ECCV.
Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2020a. Unicoder-vl: A universal en- coder for vision and language by cross-modal pre- training. In AAAI.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and lan- guage. arXiv preprint arXiv:1908.03557.
Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020b. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In ACL.
Jingzhou Liu, Wenhu Chen, Yu Cheng, Zhe Gan, Licheng Yu, Yiming Yang, and Jingjing Liu. 2020. Violin: A large-scale dataset for video-and-language inference. In CVPR.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In ICLR.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. In NeurIPS.
Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2020. 12-in-1: Multi-task vision and language representation learning. In CVPR.
Huaishao Luo, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Xilin Chen, and Ming Zhou. 2020. Univilm: A uniï¬ed video and language pre-training model for multimodal understanding and generation. arXiv preprint arXiv:2002.06353.
Tegan Maharaj, Nicolas Ballas, Anna Rohrbach, Aaron Courville, and Christopher Pal. 2017. A dataset and exploration of models for understanding video data through ï¬ll-in-the-blank question-answering. In CVPR.
Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. 2020. End-to-end learning of visual representations from uncurated instructional videos. In CVPR.
Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. 2019. Howto100m: Learning a text-video embed- ding by watching hundred million narrated video clips. In ICCV.
Vishvak Murahari, Dhruv Batra, Devi Parikh, and Ab- hishek Das. 2020. Large-scale pretraining for visual dialog: A simple state-of-the-art baseline. In ECCV.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive pre- dictive coding. arXiv preprint arXiv:1807.03748.
Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018. Scaling neural machine trans- lation. arXiv preprint arXiv:1806.00187.
Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, and Yong Rui. 2016. Jointly modeling embedding and translation to bridge video and language. In CVPR.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In ACL.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a uniï¬ed text-to-text trans- former. arXiv preprint arXiv:1910.10683.
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-bert: Hessian based ultra low pre- cision quantization of bert. In AAAI.
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. Vl-bert: Pre- training of generic visual-linguistic representations. In ICLR.
Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. 2019a. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743.
Chen Sun, Austin Myers, Carl Vondrick, Kevin Mur- phy, and Cordelia Schmid. 2019b. Videobert: A joint model for video and language representation learning. In ICCV.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019c. Patient knowledge distillation for bert model com- pression. In EMNLP.
Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from trans- formers. In EMNLP.
Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun, and Sanja Fidler. 2016. Movieqa: Understanding stories in movies through question-answering. In CVPR.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image de- scription evaluation. In CVPR.
Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence-video to text. In ICCV.
Alex Wang and Kyunghyun Cho. 2019. Bert has a mouth, and it must speak: Bert as a markov arXiv preprint random ï¬eld language model. arXiv:1902.04094.
Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan- Fang Wang, and William Yang Wang. 2019. Vatex: A large-scale, high-quality multilingual dataset for video-and-language research. In ICCV.
Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Googleâs neural machine translation system: Bridging the gap between hu- arXiv preprint man and machine translation. arXiv:1609.08144.
Qiaolin Xia, Haoyang Huang, Nan Duan, Dongdong Zhang, Lei Ji, Zhifang Sui, Edward Cui, Taroon Bharti, and Ming Zhou. 2020. Xgpt: Cross-modal generative pre-training for image captioning. arXiv preprint arXiv:2003.01473.
Ning Xie, Farley Lai, Derek Doran, and Asim Ka- dav. 2019. Visual entailment: A novel task for ï¬ne-grained image understanding. arXiv preprint arXiv:1901.06706.
Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016a. Msr- vtt: A large video description dataset for bridging video and language. In CVPR.
Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016b. Msr- vtt: A large video description dataset for bridging video and language. CVPR.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In NeurIPS.
Licheng Yu, Zhe Lin, Xiaohui Shen, Jimei Yang, Xin Lu, Mohit Bansal, and Tamara L Berg. 2018a. Mat- tnet: Modular attention network for referring expres- sion comprehension. In CVPR.
Youngjae Yu, Jongseok Kim, and Gunhee Kim. 2018b. A joint sequence fusion model for video question an- swering and retrieval. In ECCV.
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. From recognition to cognition: Visual commonsense reasoning. In CVPR.
Luowei Zhou, Yannis Kalantidis, Xinlei Chen, Jason J Corso, and Marcus Rohrbach. 2019. Grounded video description. In CVPR.
Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J Corso, and Jianfeng Gao. 2020. Uni- ï¬ed vision-language pre-training for image caption- ing and vqa. In AAAI.
Luowei Zhou, Chenliang Xu, and Jason J Corso. 2018a. Towards automatic learning of procedures from web instructional videos. In AAAI.
Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. 2018b. End-to-end dense video captioning with masked transformer. In CVPR.
Linchao Zhu and Yi Yang. 2020. Actbert: Learning global-local video-text representations. In CVPR.
# A Appendix
# A.1 Additional Experiments
For further analysis, Table 4 provides comparison between HERO and task-speciï¬c SOTA models on the validation splits of each downstream task.13 For fair comparison, we re-run XML (Lei et al., 2020b) and MMT (Lei et al., 2020b) experiments using our visual frame features, which achieve slightly better performance than the reported results in Lei et al. (2020b). Note that we cannot directly apply our frame-level visual features to STAGE (Lei et al., 2020a) and Multi-stream (Liu et al., 2020), which require region-level features for each video frame. Overall, HERO achieves state-of-the-art results on all downstream tasks. Our model consistently outperforms XML on both TVR and How2R, with or without pre-training. Table 5 also provides de- tailed results on TVR and How2R in three dif- ferent evaluation settings from Lei et al. (2020b): (i) Video Retrieval, (ii) Moment Retrieval, and (iii) Video-moment Retrieval. For both TVR and How2R, pre-training signiï¬cantly lifts model per- formance in all three settings. Following Chen et al. (2020b); Lu et al. (2019), we assess the embed- dings learned in pre-training before any ï¬ne-tuning occurs. On How2R, HERO without ï¬ne-tuning achieves (2.11, 9.09, 14.83) for (R1, R10, R100). While the performance is signiï¬cantly lower than the ï¬ne-tuned model (-1.62 for R1), it performs reasonably well without seeing any How2R query, indicating that HERO has learned to align videos and subtitles (pseudo-query) during pre-training.
Note that for TVQA, STAGE is trained with ad- ditional supervision on spatial grounding, which requires region-level features for each frame of the video. Without additional supervision on spa- tial grounding or ï¬ne-grained region-level features, HERO is able to achieve better performance than STAGE on TVQA dataset. We also observe that pre-training signiï¬cantly boosts the performance of HERO across TVR, How2R and TVQA tasks.
On How2QA, since STAGE was speciï¬cally de- signed to leverage region-level features, we cannot directly apply STAGE. Thus, we only compare HERO performance w/o and with pre-training. Re- sults exhibit consistent patterns observed on other downstream tasks: pre-training achieves better per- formance than w/o pre-training.
13For VIOLIN, we report results on test set for fair com- parison, since no validation results are reported in Liu et al. (2020).
Pre-training greatly lifts HERO performance on VIOLIN by approximately +2.9%. However, HERO, without pre-training, presents worse per- formance than the SOTA baseline. Unlike Multi- stream, which leverages ï¬ne-grained region-level features, our results are reported on global frame- level features. Therefore, it may be difï¬cult for HERO to capture the inconsistency between hy- pothesis and video content. For example, changes of hypotheses about region-level attributes (color, shape, and etc.) may result in different conclusions. Extending HERO for region-level video representa- tions could be an interesting future direction.
HERO is also extensible to generation task: multi-modal video captioning. Our results on TVC show that HERO with pre-training surpasses MMT by a large margin. Although pre-training is only applied to the encoder, it signiï¬cantly improves HERO performance on TVC across all metrics. When no pre-training is applied, HERO is slightly inferior to the SOTA baseline. Our hypothesis is that TVC has short video context (with video length of 9-second on average) but our model is designed for long video representation learning (TVR/TVQA with video length of 76-second on average). How to design pre-training tasks for MMT on TVC or including decoder pre-training for HERO are left for future works.
# A.2 Qualitative Analysis
Visualization of VSM One way to understand how HERO aligns subtitles with video frames is to visualize the Video-Subtitle Matching pre-training task. We provide some examples of the top-1 moment predictions for VSM on both TV and HowTo100M corpora. As shown in Figure 2, the predicted moments (red) largely overlap with the ground truth moments (green) with minor differ- ences. In Figure 2a, we human could probably identify the moment by the speaker information and the visual clue of characterâs emotion. For Figure 2b, objects (rubber bands) might be the key matching clue. The success of HERO to correctly match the moments might be a positive signal that its pre-training captures those human-identiï¬ed pat- terns, hence leads to its strong video understand- ing capability. However, more thorough analysis, both quantitative and qualitative, is needed to inter- pret what video-language pre-trained models have learned, which we leave to future works.
Method \Task TVR How2R TVQA How2QA VIOLIN TVC R@1 R@10 R@100 R@1 R@10 R@100 13.45 2.62 13.27 2.76 8.45 9.08 14.86 15.97 1.97 2.06 8.32 8.96 2.98 10.65 18.42 2.17 9.38 15.65 5.13 16.26 24.55 3.85 12.73 21.06 Acc. 70.50 - 70.65 74.80 Acc. - - 71.36 73.81 Acc. 67.84 - 65.72 68.59 Bleu Rouge-L Meteor Cider 44.39 10.53 45.86 10.90 32.35 32.68 16.61 16.83 10.75 32.72 16.42 43.62 12.25 34.10 17.54 50.46
Table 4: Results on the validation set of six multi-channel video downstream tasks, compared to task-speciï¬c SOTA models: XML (Lei et al., 2020b) for TVR and How2R, STAGE (Lei et al., 2020a) for TVQA (inapplicable to How2QA due to region-level features), Multi-stream (Liu et al., 2020) for VIOLIN, and MMT (Lei et al., 2020b) for TVC. â indicates re-implementation of the model using our visual frame features.
Downstream Task Pre-training Video Ret. Moment Ret.18 Video Moment Ret.18 TVR How2R No10 Yes No10 Yes R@1 R@10 R@100 R@1 R@10 R@100 R@1 R@10 R@100 18.25 9.59 19.44 24.55 10.38 30.11 15.17 12.73 11.15 20.75 15.69 14.73 52.43 62.69 39.78 47.69 84.94 87.78 59.62 68.37 3.76 4.02 4.94 6.48 61.77 62.93 67.90 70.38 2.98 5.13 2.21 3.78 10.65 16.26 9.52 12.96
Table 5: Detailed results on TVR and How2R val set, including the main-task (Video Moment Retrieval) and two sub-tasks (Video Retrieval and Moment Retrieval).
Attention Pattern Visualization Following Ko- valeva et al. (2019) and Chen et al. (2020b), we analyze observable patterns in the attention maps of HERO. Figure 3 provides visualization examples of the attention maps learned by the Cross-modal Transformer. For completeness, we brieï¬y discuss each pattern here:
# A.3 Downstream Adaptation
The pre-trained model can be readily adapted to downstream video+language tasks through end-to- end ï¬netuning. Below, we describe the detailed adaptation approach to four downstream tasks: (i) text-based video moment retrieval, (ii) video ques- tion answering, (iii) video-and-language inference and (iv) multimodal video captioning.
⢠Vertical: Attention to a speciï¬c frame.
⢠Diagonal: Locally-focused attention to the token/frame itself or preceding/following to- kens/frames.
⢠Vertical + Diagonal: Mixture of Vertical and Diagonal.
⢠Block: Intra-modality attention, i.e., textual self-attention or visual self-attention.
Text-based Video-moment Retrieval The input video clip with accompanying subtitles is encoded by HERO as illustrated in Figure 4. The input query is encoded by the query encoder from the VSM pre-training task. We follow the same procedure as in VSM to compute query-video matching scores both locally (frame-level, for moment retrieval) and globally (clip-level, for video retrieval). The model is ï¬netuned end-to-end using loss LVSM. Similarly, we let the margin δ = 0.1 and set λ1 = 0.01 and λ2 = 8 in the loss term LVSM.
⢠Heterogeneous: Diverse attentions that cannot be categorized and highly dependent on actual input.
⢠Reversed Block: Cross-modality attention, i.e., text-to-frame and frame-to-text attention.
Note that we observe patterns slightly different from Chen et al. (2020b): Vertical patterns (Fig- ure 3a) are usually over a speciï¬c frame instead of special tokens ([CLS] or [SEP]). We leave more sophisticated attention analysis/probing to future works.
Video Question Answering For Video QA, we consider the multiple-choice setting. As illustrated in Figure 5, for each answer candidate, the cor- responding QA pair is appended to each of the subtitle sentences and fed into the Cross-modal Transformer to perform early fusion with local tex- tual context. In addition, these QA pairs are also appended to the input of Temporal Transformer to be fused with global video context. We use a sim- ple attention layer to compute the weighted-sum- across-time of the QA-aware frame representations
(a) TV Dataset.
One, you just need one. .», $0 | will keep her right here, so you'll see. You'll need a loom peg. t 1 So | will get my four rubber bands and put them on a peg. | Like on the same peg, all four of them.
(b) HowTo100M Dataset.
Figure 2: Visualization of top-1 moment predictions by HERO model for Video-Subtitle Matching on: (a) TV Dataset; and (b) HowTo100M Dataset. Text inside the dashed boxes is the accompany subtitles, with sampled subtitle query highlighted in blue. Groundtruth is highlighted with the green bar under the video frames. Predicted moments are bounded with boxes in red. Best viewed in color.
from the Temporal Transformer output.
also include the span prediction loss:
These ï¬nal QA-aware global representations are then fed through an MLP and softmax layer to obtain the probability score p(i) ans of all the answers for question i. The training objective is
# âae
Lans (9) > log pt). [ui]
i=1 where yi is the index of the ground-truth answer for question i. When supervision is available,14 we
Lspan = -SN yi log ps) [yi"] + log pS fy"). (10)
(10) where p(i) ed are the prediction scores of the start and end position, obtained by applying weighted-sum-across-answers attention to the Tem- poral Transformer output followed by two MLPs i , yed and a softmax layer. yst i are the indices of the
14Some existing Video QA tasks require localizing âframes
of interestâ for the question, e.g., TVQA+ (Lei et al., 2020a).
frames tokens frames tokens -o2 -00
(a) Vertical (b) Diagonal (c) Vertical + Diagonal (d) Block (e) Heterogeneous (f) Reversed Block
os 068 oa 02
ESE.
- B 0.2 : : Lao
[| | | Os = a os : a a : oz i: i BT
_ ml a ;°* r oh 02 a . a 68 â! a -
Figure 3: Visualization of the attention maps learned by Cross-modal Transformers of HERO model.
Frame Features 00:00:34 > 00:00:36 = Thank God you're here, Listen to ths. - What? Transformer Word Embed. Shared Cross-Modal Transformer (00:00:66 -> 00:00:68 Moment (Woey:) Joey doesn't Retrieval Video Retrieval Other video clips Transformer Other video clips Temporal Cosine Transformer Query: Query Joey's dating policy Encoder never shares food! Temporal Transformer => Flow Collect Frames iN@) Cosine Similarity (7 Frame Features [EG Target Frame Features [EEN] Word Features
Figure 4: HERO model adapted to downstream task: Text-based Video Moment Retrieval.
ground-truth start and end positions for question i. The ï¬nal loss LQA = Lans + λLspan, where λ is the hyper-parameter that balance the above two terms. Empirically, we found that λ = 0.5 yields the best model performance.
Video-and-Language to Video QA, each natural language hypothesis (or query) is appended to each of the subtitle sentences and also to the input of Temporal Transformer. A simple attention pooling layer is added to HERO to obtain the ï¬nal query-aware global representations.
Video-and-language inference task can be re- garded as a binary classiï¬cation problem. We su- pervise the training using cross-entropy loss.
Multimodal Video Captioning With a simple addition of a Transformer decoder (Vaswani et al., 2017), we can extend HERO for multimodal video captioning. We feed the whole subtitle-aligned video clip into HERO and obtain the subtitle-fused video representation for each frame. Next, frame representations are grouped by the âmoment of in- terestâ using the time interval provided in the cap-
Frame Features Cross-Modal 00:00:02 --> 00:00:04 That's why you won't go out with her again? Transformer Word Embed. Shared Shared Frame Features Cross-Modal Transformer 00:00:66 ~> 00:00:68 (Joey:) Joey doesn't share fod Embed. Collect Frames Temporal Transformer zalep sly jnoge uelduioo Keor pip AYN :O QA-aware Video Representations S014 £90 400) BUS :V Frame Features [ER Word Features
Figure 5: HERO model adapted to downstream task: Video Question Answering.
tion annotation. The decoder-to-encoder attention is applied on the representations of the correspond- ing video moment and the decoder is trained with conventional left-to-right language modeling cross- entropy loss together with the HERO encoder end- to-end. To make the comparison to MMT (Lei et al., 2020b) as fair as possible, we use shallow Trans- former decoder (2-layer) with 768 hidden size. We do not use self-critical RL or its variants to optimize test metrics. Following MMT, greedy decoding is used at inference.
frames. Speciï¬cally, for each subtitle sentence si, we pair it with a sequence of visual frames whose timestamps overlap with the subtitle timestamp, and denote these visual frames as vsi = {vj j=1 (K is the number of overlapping frames with si). In the case that multiple sentences overlap with the same visual frame, we always pair the frame with the one with maximal temporal Intersection over Union (tIoU) to avoid duplication. It is possible that a subtitle sentence is not paired with any visual frame, and in this case, we concatenate it to the neighboring sentences to avoid information loss.
Single-channel Tasks Although HERO is de- signed for multi-channel videos (video+subtitle), we can easily extend it to single-channel video (video-only) tasks by adding an empty-string subti- tle input and pair it with the whole frame sequence. For DiDeMo, we follow the same procedure as in VSM to compute both frame-level (for moment retrieval) and clip-level (for video retrieval) query- video matching scores. For MSR-VTT, a text-based video retrieval task, only clip-level scores are com- puted.
# A.4 Frames/Subtitles Pre-processing
Given a pair of video clip and its associated sub- title, we first extract a sequence of visual frames v= {uj}m, at a fixed frame rate (V, is the num- ber of visual frames in a video clip). The subtitle is parsed into sentences s = {sj} Ns, (N, is the number of sentences in each subtitle). Note that Ny, 4 Ns in most cases, since a subtitle sentence may last for several visual frames. We then align the subtitle sentences temporally with the visual
# Implementation Details
We extract 2304-dimensional Slowfast (Feichten- hofer et al., 2019) features at a ï¬xed frame rate (TV: 2/3 frame per second, HowTo100M: 1/2 frame per second). and 2048-dimensional ResNet-101 (He et al., 2016) features at doubled frame rate and max- pooled to get a clip-level feature. The ï¬nal frame features is concatenation of the two features with dimension 4352. The model dimensions are set to (L=6, H=768, A=12) for Cross-Modal Trans- former and (L=3, H=768, A=12) for Temporal Transformer, where L is the number of stacked Transformer blocks; H stands for hidden activa- tion dimension and A is the number of attention heads. For pre-training task VSM, we let the mar- gin δ = 0.1 and set λ1 = 0.01 and λ2 = 8 in the loss term LVSM.
Our models are implemented based on Py- Torch (Paszke et al., 2017).15 To speed up training,
15https://pytorch.org/
we use Nvidia Apex16 for mixed precision train- ing. Gradient accumulation (Ott et al., 2018) is applied to reduce multi-GPU communication over- heads. All pre-training experiments are run on Nvidia V100 GPUs (32GB VRAM; NVLink con- nection). We use AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of 3e â 5 and weight decay of 0.01 to pre-train our model. The best pre-trained model is trained on 16 V100 GPUs for about 3 weeks. Finetuning experiments are implemented on the same hardware or Titan RTX GPUs (24GB VRAM) with AdamW optimizer but different learning rates.
# A.6 Downstream Tasks
TVR (Lei et al., 2020b) is the ï¬rst to introduce text-based video-moment Retrieval task for multi- channel videos (video+subtitle): given a natural language query, a model is required to not only retrieve the most relevant video clip from the video corpus, but also localize the relevant moment in the retrieved video clip. TVR is built upon the TV dataset, split into 80% train, 10% val, 5% test- public and 5% test-private. On average, 5 queries were collected for each video clip. Among them, 74.2% of queries are related to video only, 9.1% to text only, and 16.6% to both video and text.
TVQA (Lei et al., 2018) was introduced along with the TV dataset. Given a video clip and the accom- panying subtitles, the goal is to answer a multiple- choice question about the video. Each video clip has 7 questions, with 5 answers per question. The start/end points of relevant moments are provided for each question.17
VIOLIN (Liu et al., 2020) is a new Video-and- Language Inference task. Given a video clip with aligned subtitles as premise, a model needs to infer whether a natural language hypothesis is entailed or contradicted by the given video clip. It consists of 95.3K video-hypothesis pairs from 15.9K video clips, split into 80% train, 10% val and 10% test.
TVC (Lei et al., 2020b) is a multimodal Video Captioning dataset extended from TVR, contain- ing 262K descriptions paired with 108K video mo- ments.17 Note that it differs from traditional video captioning tasks in that models are allowed to uti- lize subtitle texts as input.
DiDeMo (Anne Hendricks et al., 2017a) is de-
16https://github.com/NVIDIA/apex 17Train, val and test video splits are the same as TVR.
signed for text-based video-moment retrieval on single-channel videos (video-only). It consists of 10.6K unedited video from Flickr with 41.2K sen- tences aligned to unique moments in the video. The dataset is split into 80% train, 10% val and 10% test. Note that moment start and end points are aligned to ï¬ve-second intervals and the maximum annotated video length is 30 seconds.
MSR-VTT (Xu et al., 2016b), for text-based video retrieval on single-channel videos (video-only), in- cludes YouTube videos collected from 257 popu- lar video queries from 20 categories (e.g. music, sports, movie, etc.). It contains 200K unique video clip-caption pairs. We follow the same setup in Yu et al. (2018b) to evaluate our model on MSR-VTT.
Evaluation Metrics Text-based Video-moment Retrieval can be decomposed into two sub-tasks: (i) Video Retrieval: retrieve the most relevant video clip described by the query; (ii) Moment Retrieval: localize the correct moment from the most relevant video clip. A model prediction is correct if: (i) its predicted video matches the ground-truth (in Video Retrieval); and (ii) its predicted span has high over- lap with the ground-truth (in Moment Retrieval). Average recall at K (R@K) over all queries is used as the evaluation metric for TVR, How2R, Didemo and MSR-VTT. For TVR, How2R and Didemo, temporal Intersection over Union (tIoU) is used to measure the overlap between the predicted span and the ground-truth span.18
TVQA and How2QA include 3 sub-tasks: QA on the grounded clip, question-driven moment lo- calization, and QA on the full video clip. We only consider QA on the full video clip, as it is the most challenging setting among the three. Video clips in VIOLIN are constrained to a single, self-contained scene, hence no additional grounding annotation is provided. Accuracy is used to measure model performance on TVQA, How2QA and VIOLIN.
TVC performance is measured by standard captioning metrics, inlcuding BLEU@4 (Pap- ineni et al., 2002), METEOR (Denkowski and Lavie, 2014), ROUGE-L (Lin, 2004), and CIDEr- D (Vedantam et al., 2015).
18During evaluation, the average recalls are calculated with tIoU>0.7. we apply non-maximal suppression (nms) with threshold 0.5 to TVR and How2R predictions following Lei et al. (2020b).
# A.7 Vision+Language Pre-training Overview
Very recently, multimodal pre-training has gained increasing attention, especially in the image+text area. Pioneering works such as ViLBERT (Lu et al., 2019) and LXMERT (Tan and Bansal, 2019) pro- pose to encode image and text modalities by two separate Transformers, with a third Transformer for later multimodal fusion. Compared to this two- stream architecture, VL-BERT (Su et al., 2020), Unicoder-VL (Li et al., 2020a), B2T2 (Alberti et al., 2019), VisualBERT (Li et al., 2019), and UNITER (Chen et al., 2020b) advocate single- stream architecture, where image and text signals are fused together in early stage. In VLP (Zhou et al., 2020) and XGPT (Xia et al., 2020), image captioning is considered as additional downstream application, so is visual dialog in Murahari et al. (2020). More recently, ViLBERT is enhanced by multi-task learning (Lu et al., 2020), Oscar (Li et al., 2020b) enhances pre-training with image tags, and Pixel-BERT (Huang et al., 2020) pro- poses to align image pixels (instead of bottom-up features (Anderson et al., 2018)) with text. Through these pre-training efforts, tremendous progress has been made for vision-and-language representation learning.
# A.8 Video+Language Tasks Overview
Text-based Video-moment retrieval is one of the most popular video+language tasks currently stud- ied. Anne Hendricks et al. (2017b) and Gao et al. (2017) introduce the task of Single Video Mo- ment Retrieval (SVMR), which aims at retrieving a moment from a single video via a natural lan- guage query. Escorcia et al. (2019) extends SVMR to Video Corpus Moment Retrieval (VCMR), ex- tending searching pool from single video to large video corpus. TVR (Lei et al., 2020b) deï¬nes a new task, Video-Subtitle Corpus Moment Re- trieval, which provides temporally aligned subtitle sentences along with the videos as inputs. For this new task, XML (Lei et al., 2020b) is proposed to compute similarity scores between the query and each modality separately (visual frames, subtitles) and then sum them together for ï¬nal prediction.
Another popular task is Video Question Answer- ing (QA), which aims to predict answers to natu- ral language questions given a video as context. Most previous work focuses on QA pairs from one modality only. For example, MovieFIB (Ma- haraj et al., 2017) focuses on visual concepts,
Please Drag the startlend handle below to cut a single-scene interval: The green span is the interval you cut. 00:30 00:43 Caption: 8 to 20 words
(a) User interface for query annotation. Each worker is provided with a video clip and required to select a single- scene clip from the video, then write a query in the text box.
Your Question: 5 to 25 words Correct answer v Wrong answer 1 x Wrong answer 2 x Wrong answer 3 xâ
(b) User interface for question/answer annotation. Each worker is provided with a segmented clip and required to write a question with four answers in the text boxes.
Figure 6: Data collection interface: (a) How2R; and (b) How2QA.
MovieQA (Tapaswi et al., 2016) is based on text summaries, and TGIF-QA(Jang et al., 2017) de- pends on predeï¬ned templates for question gen- eration on short GIFs. TVQA (Lei et al., 2018) designed a more realistic multimodal setting: col- lecting human-written QA pairs along with their associated video segments by providing the an-
044 ° @ 1 Percentage of Video Clips ° ° = iy L L 0-5 5-10 10-15 15-20 20-25 25-30 >30 Segment Length (s)
Figure 7: Distribution of video segment length.
notators with both video clips and accompanying subtitles. Later on, Lei et al. (2020a) augmented TVQA with frame-level bounding box annotations for spatial-temporal video QA, and introduced the STAGE framework to jointly localize moments, ground objects, and answer questions.
Inspired by natural language inference (Bow- man et al., 2015; Williams et al., 2018) and vi- sual entailment (Xie et al., 2019), Liu et al. (2020) recently proposed Video-and-Language Inference task along with VIOLIN dataset, which requires a model to draw inference on whether a written statement entails or contradicts a given video clip. This new task is challenging to solve, as a thorough interpretation of both visual and textual clues from videos is required to achieve in-depth understand- ing and inference for a complex video scenario.
There are also recent studies on video caption- ing (Venugopalan et al., 2015; Pan et al., 2016; Gan et al., 2017; Zhou et al., 2018b, 2019), popu- lar benchmarks including Youtube2Text (Guadar- rama et al., 2013), MSR-VTT (Xu et al., 2016a), YouCook2 (Zhou et al., 2018a), ActivityNet Cap- tions (Krishna et al., 2017) and VATEX (Wang et al., 2019). Unlike previous work mostly focusing on captions describing the visual content, a unique TVC (Lei et al., 2020b) dataset was released with captions that also describe dialogues/subtitles.
# A.9 How2R and How2QA Benchmarks
Data Collection Interface Figure 6a and 6b present the interfaces used for collecting How2R and How2QA. For How2R, the annotator is asked to ï¬rst select a video segment from the presented video clip using the sliding bar, and then enter a description about the selected video segment in the
0.204 o184 o16 4 @ 0.144 2 $ 0.124 6 â6 0.104 $ & 0084 = 8 0064 5 a 0.04 4 0.02 4 0.004 8 9 10 11 12 13 14 15 16 17 18 19 20 Query Length (#words)
Figure 8: How2R query length distribution.
0.16 4 0.144 » 0124 2 g % 0.104 3 S % 0.084 o 5 # 0.084 = 5 8 G 0.044 a 0.02 4 0.00 4 6] 7 86 1011 12919 14.15 16/17. 18.19 20:21 # Words in Questions
Figure 9: How2QA question length distribution.
0.184 0.16 4 0.144 0.124 0.104 0.08 4 0.06 4 Percentage of Answers 0.04 4 0.02 4 0.00 + a aaa diag Gage, eget aaa aaa aaa Y # Words in Answers blige Jaa ee et ee
Figure 10: How2QA answer length distribution.
text box (shown at the bottom of Figure 6a). For How2QA, we reuse the selected video segments collected for How2R. The annotators are asked to write a question, a correct answer and 3 wrong answers in the ï¬ve text boxes shown in Figure 6b.
# Video Segment Length Distribution The length
How much 7% How many 8% What 36% How 10% When 17% Why 20%
Figure 11: Distribution of questions categorized by their leading words in How2QA.
distribution of selected video segments is presented in Figure 7. The length of video segments varies from 5 to more than 30 seconds. The majority of them have length less than 15 seconds.
How2R Query Length Distribution Figure 8 shows the length (in number of words) distribu- tion of collected queries in How2R. The length of queries is diverse, ranging from 8 to 20.
How2QA Question and Answer Distribution Figure 9 and Figure 10 show the length (in number of words) distribution of collected questions and an- swers in How2QA. Questions are relatively longer, with more than 10 words on average. Answers are relatively shorter, most of which have less than 7 words.
In addition, we analyze the types of collected question by plotting the distribution of their leading words in Figure 11. In total, we collected questions in 7 different types. Majority of them starts with âwhatâ, âwhyâ and âwhenâ. | {
"id": "1807.03748"
} |
2005.00570 | When Ensembling Smaller Models is More Efficient than Single Large Models | Ensembling is a simple and popular technique for boosting evaluation
performance by training multiple models (e.g., with different initializations)
and aggregating their predictions. This approach is commonly reserved for the
largest models, as it is commonly held that increasing the model size provides
a more substantial reduction in error than ensembling smaller models. However,
we show results from experiments on CIFAR-10 and ImageNet that ensembles can
outperform single models with both higher accuracy and requiring fewer total
FLOPs to compute, even when those individual models' weights and
hyperparameters are highly optimized. Furthermore, this gap in improvement
widens as models become large. This presents an interesting observation that
output diversity in ensembling can often be more efficient than training larger
models, especially when the models approach the size of what their dataset can
foster. Instead of using the common practice of tuning a single large model,
one can use ensembles as a more flexible trade-off between a model's inference
speed and accuracy. This also potentially eases hardware design, e.g., an
easier way to parallelize the model across multiple workers for real-time or
distributed inference. | http://arxiv.org/pdf/2005.00570 | Dan Kondratyuk, Mingxing Tan, Matthew Brown, Boqing Gong | cs.LG, cs.CV, stat.ML | null | null | cs.LG | 20200501 | 20200501 | 2020:
0 2 0 2
y a M 1
]
# [cs.LG]
# G L . s c [
1 v 0 7 5 0 0 . 5 0 0 2 : v i X r a
# When Ensembling Smaller Models is More Efï¬cient than Single Large Models
# Dan Kondratyuk, Mingxing Tan, Matthew Brown, Boqing Gong Google AI {dankondratyuk,tanmingxing,mtbr,bgong}@google.com
# Abstract
Ensembling is a simple and popular technique for boost- ing evaluation performance by training multiple models (e.g., with different initializations) and aggregating their predictions. This approach is commonly reserved for the largest models, as it is commonly held that increasing the model size provides a more substantial reduction in error than ensembling smaller models. However, we show re- sults from experiments on CIFAR-10 and ImageNet that en- sembles can outperform single models with both higher ac- curacy and requiring fewer total FLOPs to compute, even when those individual modelsâ weights and hyperparame- ters are highly optimized. Furthermore, this gap in im- provement widens as models become large. This presents an interesting observation that output diversity in ensembling can often be more efï¬cient than training larger models, es- pecially when the models approach the size of what their dataset can foster. Instead of using the common practice of tuning a single large model, one can use ensembles as a more ï¬exible trade-off between a modelâs inference speed and accuracy. This also potentially eases hardware design, e.g., an easier way to parallelize the model across multiple workers for real-time or distributed inference.
For ensembles with more than two models, accuracy can increase further, but with diminishing returns. As such, this technique is typically used in the ï¬nal stages of model tun- ing on the largest available model architectures to slightly increase the best evaluation metrics. However, this method can be regarded as impractical for production use-cases that are under latency and size constraints, as it greatly increases computational cost for a modest reduction in error.
One may expect that increasing the number of parame- ters in a single network should result in higher evaluation performance than an ensemble of the same number of pa- rameters or FLOPs, at least for models that do not overï¬t too heavily. After all, the ensemble network will have less connectivity than the corresponding single network. But we show cases where there is evidence to the contrary.
In this paper, we show that we can consistently ï¬nd av- eraged ensembles of networks with fewer FLOPs and yet higher accuracy than single models with the same underly- ing architecture. This is true even for families of networks that are highly optimized in terms of its accuracy to FLOPs ratio. We also show how this gap widens as the number of parameters and FLOPs increase. We demonstrate this trend with a family of ResNets on CIFAR-10 [13] and Efï¬cient- Nets on ImageNet [12].
# 1. Introduction
Neural network ensembles are a popular technique to boost the performance of a modelâs metrics with minimal effort. The most common approach in current literature in- volves training a neural architecture on the same dataset with different random initializations and averaging their output activations [4]. This is known as ensemble averag- ing, or a simple type of committee machine. For instance, on image classiï¬cation on the ImageNet dataset, one can typi- cally expect a 1-2% top-1 accuracy improvement when en- sembling two models this way, as demonstrated by AlexNet [6]. Evidence suggests averaging ensembles works because each model will make some errors independent of one an- other due to the high variance inherent in neural networks with millions of parameters [3, 9, 2].
The results of this ï¬nding imply that a large model, es- pecially a model that is so large and begins to overï¬t to a dataset, can be replaced with an ensemble of a smaller version of the model for both higher accuracy and fewer FLOPs. This can result in faster training and inference with minimal changes to an existing model architecture. More- over, as an added beneï¬t, the individual models in the en- semble can be distributed to multiple workers which can speed up inference even more and potentially ease the de- sign of specialized hardware.
Lastly, we experiment with this ï¬nding by varying the ar- chitectures of the models in ensemble averaging using neu- ral architecture search to study if it can learn more diverse information associated with each model architecture. Our experiments show that, surprisingly, we are unable to im- prove over the baseline approach of duplicating the same architecture in the ensemble in this manner. Several factors
4321
could be attributed to this, including the choice of search space, architectural features, and reward function. With this in mind, either more advanced methods are necessary to provide gains based on architecture, or it is the case that ï¬nding optimal single models would be more suitable for reducing errors and FLOPs than searching for different ar- chitectures in one ensemble.
# 2. Approaches and Experiments
For our experiments, we train and evaluate convolutional neural networks for image classiï¬cation at various model sizes and ensemble them. When ensembling, we train the same model architecture independently with random initial- izations, produce softmax predictions from each model, and calculate a geometric mean1 µ across the model predictions. For n models, we ensemble them by
µ = (y1y2 . . . yn) 1 n (1)
where the multiplication is element-wise for each prediction vector yi.
We split our evaluation into two main experiments and a third follow-up experiment.
# 2.1. Image Classiï¬cation on CIFAR 10
For the ï¬rst experiment, we train wide residual networks on the CIFAR-10 dataset [13, 5]. We train and evaluate the Wide ResNets at various width and depth scales to examine the relationship between classiï¬cation accuracy and FLOPs and compare them with the ensembled versions of each of those models. We train 8 models for each scale and en- semble them as described. We select a depth parameter of n = 16, increase the model width scales k â {1, 2, 4, 8}, and provide the corresponding FLOPs on images with a 32x32 resolution. We use a standard training setup for each model as outlined in [13].
Note that we use smaller models than typically used (e.g., Wide ResNet 28-10) to show that our ï¬ndings can work on smaller models that are less prone to overï¬tting.
# 2.2. Image Classiï¬cation on ImageNet
To further show that the ensemble behavior as described can scale to larger datasets and more sophisticated models, we apply a similar experiment using Efï¬cientNets on Ima- geNet [12, 10]. Efï¬cientNet provides a family of models us- ing compound scaling on the network width, network depth, and image resolution, producing models from b0 to b7. We adopt the ï¬rst ï¬ve of these for our experiments, training and ensembling up to three of the same model architecture on ImageNet and evaluating on the validation set. We use the
1Since the softmax applies a transformation in log-space, a geomet- ric mean respects the relationship. We notice slightly improved ensemble accuracy when compared to an arithmetic mean.
4322
original training code and hyperparameters as provided by [12] for each model size with no additional modiï¬cations.
# 3. Results
In this section, we plot the relationship between accuracy and FLOPs for each ensembled model. In cases of single models that are not ensembled, we plot the median accu- racy. We observe that the standard deviation of the evalua- tion accuracy of each model architecture size never exceeds 0.1%, so we exclude it from the results for readability. For models that are ensembled, we vary the number of n trained models and choose the models randomly.
For the ï¬rst experiment on CIFAR-10, Figure 1 plots a comparison of Wide ResNets with a depth parameter of nd = 16 and width scales k â {1, 2, 4, 8}. For clarity in presentation, we show a smaller subset of all the net- works we trained. For each network (e.g., âwide restnet 16-8â, which stands for the depth parameter of nd = 16 and the width scale of k = 8), we vary the number of mod- els n â {1, 2, · · · , 8} in an ensemble and label it alongside the curve.
96 95 3 4 5 6 78 2 3 4 1 5 6 78 2 3 4 5 6 78 2 ) % ( 94 1 y c a r u c c A 93 2 3 1 4 5 6 78 92 91 1 wide resnet 16-1 wide resnet 16-2 wide resnet 16-4 wide resnet 16-8 90 108 109 1010
# FLOPs
Figure 1. Test accuracy vs. model FLOPs (log-scale) when en- sembling models trained on CIFAR-10. Each curve indicates the ensembles of increasing widths for a Wide ResNet nd-k with a depth of n = 16. We show the number of models in each ensem- ble next to each point.
In the second experiment on ImageNet, Figure 2 plots a comparison of Efï¬cientNets b0 to b5. Notably, we re-train all models using the current ofï¬cial Efï¬cientNet code2, but unlike the original paper that uses AutoAugment, here we do not use any specialized augmentation like AutoAugment or RandAugment to better observe the effects of overï¬tting.
# 4. Discussion
We draw the following observations from Figures 1 and 2 and particularly highlight the intriguing trade-off between
# 2The
# EfficientNet
# Efï¬cientNet
# code
# can
be
# found
# at
# https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet
84 83 2 3 2 3 3 82 2 1 ) % 81 1 ( y c a r u c c A 80 79 78 77 1 1 2 1 3 2 2 3 3 1 efficientnet-b0 efficientnet-b1 efficientnet-b2 efficientnet-b3 efficientnet-b4 efficientnet-b5 76 109 1010 FLOPs
Figure 2. Validation accuracy vs. model FLOPs (log-scale) when ensembling models trained on ImageNet. Each curve indicates the ensembles of increasing sizes for a given Efï¬cientNet. We show the number of models in each ensemble next to each point.
accuracy and FLOPs thanks to the ensembling.
First, we can see with no surprise that across the board, as the number of FLOPs increase for a single model, so too does the accuracy. This is also true of the ensembles which essentially multiply the base FLOPs by n for n models.
What is more interesting is that the results show that there can be cases where ensembles of models with fewer collective FLOPs can achieve higher accuracy than a single larger model. This is indicated by points that are above and to the left of other points. For instance, an ensem- ble of eight Wide ResNet 16-2 models achieves the same accuracy of 95% as a much wider Wide ResNet 16-8 at a fraction of the FLOPs (80M vs. 150M). An added beneï¬t is that ensembles can easily be distributed to multiple workers to speed up computation even more.
Increasing the number of models in an ensemble will eventually be hit with diminishing returns, resulting a crossover point where an ensemble of the next largest model provides a better trade-off in terms of accuracy to FLOPs. In CIFAR-10, we observe the optimal ensemble size would be 2-4 models before the accuracy improvement slows down. Finally, an interesting trend is that for smaller mod- els, we can see that ensembling them has a more difï¬- cult time improving over larger single models. But as the models become larger, becoming increasingly likely to be over-parameterized and overï¬t to the dataset, we can see how ensembling provides a bigger accuracy boost over even larger models. For instance, the ensembles of Efï¬cientNet- b0 do not come close to reaching the same accuracy to FLOPs trade-off as Efï¬cientNet-b1. However, as the mod- els become increasingly large, we see that the ensemble of two Efï¬cientNet-b3 models achieves higher accuracy with fewer FLOPs over Efï¬cientNet-b4, hence a better trade-off than Efï¬cientNet-b4 provides.
Despite Efï¬cientNetâs scaling ability producing highly optimized models, we can still see gaps in performance
4323
where ensembles can perform better under the same number of total FLOPs, especially as the model size grows from b3 onwards. In other words, ensembling offers an alternative and more effective scaling method than the compound scaling in Efï¬cientNet when some application scenarios permit the ensembling.
# 5. Neural Architecture Search (NAS) for Di- verse Ensembles
Having noted the observations above, we hypothesize that ensembles can be improved further by varying the ar- chitectures of each model in an ensemble rather than du- plicating the same architecture. The idea is that different architectures will naturally provide alternative features and therefore may enhance ensemble diversity. This should, in turn, provide improved accuracy at no increase to the num- ber of FLOPs.
# 5.1. NAS Experiment Setup
To test this hypothesis, we adopt the same NAS frame- work as MnasNet [11]. We use a search space predicting model depth, width, and convolution type. We also augment the search space to include varying input resolution scales r â {112, 168, 196, 224}. As a result, each model provides m = 50 hyperparameters to search. Additionally, we ex- pand this to a joint search space to search for an ensemble of models by multiplying the search space n times, one for each model, for a total of nm hyperparameters. Each model is trained individually and ensembled as described in earlier experiments.
We alter the reward function to be penalized by not the total latency of the ensemble, but the maximum latency of all of the models in an ensemble and simulate this latency on a Pixel Phone 1. Assuming that each model can be run in parallel on separate workers, this would require the search to optimize the largest model in the ensemble at any given point to reduce the likelihood of producing ensembles where one model is large and the rest are anemic. Lastly, we train each searched model for 10 epochs before evaluating the accuracy, which is part of the reward, on a held-out set.
# 5.2. NAS Results
We show the Pareto curves of the ensemble accuracy with respect to model maximum latency across ensembles of size one, two, and three in Figure 3. This plot demon- strates the inherent trade-off between model accuracy and computation speed, with the best models being in the outer edge of the point cloud.
Results show that one-, two-, and three-model ensem- bles are surprisingly close to one another. The skyline two- model ensembles tend to beat out single models, but only by 1% at best. Skyline three-model ensembles show nearly
70 65 60 ) 55 % ( y c a r u c c A 50 45 40 35 1 Model 2 Models 3 Models 1 Model (Median) 2 Models (Median) 3 Models (Median) 30 0 20 40 60 Max Image Latency (ms) 80 100 120
Figure 3. The resulting Pareto curves when searching for architec- tures across different ensemble sizes (trained for 10 epochs). We indicate the median ensemble accuracy and max latency of each ensemble size as stars in the plot.
identical performance to single models. We see that the median model accuracy does increase as the ensemble size grows, but at the cost of increased maximum latency.
Out of the searched diverse models, we pick the most promising candidates for a target latency. When trained to convergence, we ï¬nd that two-model and three-model en- sembles perform just as well as single models (assuming roughly equal max image latency). Somehow frustratingly, we ï¬nd that simply duplicating the best single model for a given latency target and ensembling them together provides the best improvement in accuracy.
This experiment presents evidence towards a conclusion that ensembles beneï¬t the most from choosing the most ac- curate models and not models that are architecturally di- verse, at least under our current NAS context. For a ï¬xed computational budget, this corresponds to using the best model architecture across the ensemble. We of course cau- tion that we only have tested this with a simple NAS setup on a single large image classiï¬cation dataset. This could change with a noisier and smaller dataset, or with more stringent constraints on model losses, regularization, or ar- chitectural mechanisms.
# 6. Related Work
Model ensembling has a long history with many different proposed techniques. Most works in this area come before advancements in deep learning were popularized. For in- stance, [7] deï¬ne different subsets of the training data and use cross-validation to divide data into different groups. [1] developed bagging, where a different training set is given for different models to promote diversiï¬ed feature learning. And [8] is one of the earliest attempts at constructing en-
4324
sembles with different models by changing the number of hidden nodes in each network.
# 7. Conclusion
We have demonstrated how averaging ensembles can re- sult in higher accuracy with fewer FLOPs than popular sin- gle models on image classiï¬cation. This provides an in- teresting insight that smaller models can stand to provide great beneï¬t without sacriï¬cing on the accuracy to efï¬- ciency trade-offs of larger models. We advocate further in- spections into the trade-off of ensembling especially for the applications where distributed inference is plausible.
# References
[1] Leo Breiman. Bagging predictors. Machine learning, 24(2):123â140, 1996.
[2] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
[3] Lars Kai Hansen and Peter Salamon. Neural network en- sembles. IEEE transactions on pattern analysis and machine intelligence, 12(10):993â1001, 1990.
[4] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distill- arXiv preprint ing the knowledge in a neural network. arXiv:1503.02531, 2015.
[5] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[6] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural net- In Advances in neural information processing sys- works. tems, pages 1097â1105, 2012.
[7] Anders Krogh and Jesper Vedelsby. Neural network en- sembles, cross validation, and active learning. In Advances in neural information processing systems, pages 231â238, 1995.
[8] Derek Partridge and William B Yates. Engineering multiver- sion neural-net systems. Neural computation, 8(4):869â893, 1996.
[9] Michael P Perrone and Leon N Cooper. When networks dis- agree: Ensemble methods for hybrid neural networks. Tech- nical report, BROWN UNIV PROVIDENCE RI INST FOR BRAIN AND NEURAL SYSTEMS, 1992.
[10] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San- jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211â252, 2015.
[11] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnas- net: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2820â2828, 2019.
[12] Mingxing Tan and Quoc V Le. Efï¬cientnet: Rethinking ICML model scaling for convolutional neural networks. 2019. arXiv preprint arXiv:1905.11946, 2019.
[13] Sergey Zagoruyko and Nikos Komodakis. Wide residual net- works. arXiv preprint arXiv:1605.07146, 2016.
4325 | {
"id": "1503.02531"
} |
2005.00456 | USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation | The lack of meaningful automatic evaluation metrics for dialog has impeded
open-domain dialog research. Standard language generation metrics have been
shown to be ineffective for evaluating dialog models. To this end, this paper
presents USR, an UnSupervised and Reference-free evaluation metric for dialog.
USR is a reference-free metric that trains unsupervised models to measure
several desirable qualities of dialog. USR is shown to strongly correlate with
human judgment on both Topical-Chat (turn-level: 0.42, system-level: 1.0) and
PersonaChat (turn-level: 0.48 and system-level: 1.0). USR additionally produces
interpretable measures for several desirable properties of dialog. | http://arxiv.org/pdf/2005.00456 | Shikib Mehri, Maxine Eskenazi | cs.CL, cs.LG | Accepted to ACL 2020 as long paper | null | cs.CL | 20200501 | 20200501 | 0 2 0 2
y a M 1 ] L C . s c [
1 v 6 5 4 0 0 . 5 0 0 2 : v i X r a
# USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation
Shikib Mehri and Maxine Eskenazi Dialog Research Center, Language Technologies Institute Carnegie Mellon University, USA {amehri,max}@cs.cmu.edu
# Abstract
The lack of meaningful automatic evaluation metrics for dialog has impeded open-domain dialog research. Standard language genera- tion metrics have been shown to be ineffec- tive for evaluating dialog models. To this end, this paper presents USR, an UnSupervised and Reference-free evaluation metric for dialog. USR is a reference-free metric that trains un- supervised models to measure several desir- able qualities of dialog. USR is shown to strongly correlate with human judgment on both Topical-Chat (turn-level: 0.42, system- level: 1.0) and PersonaChat (turn-level: 0.48 and system-level: 1.0). USR additionally pro- duces interpretable measures for several desir- able properties of dialog.
# Introduction
The lack of meaningful automatic evaluation met- rics is a signiï¬cant impediment for open-domain dialog generation research. Standard language gen- eration metrics have been shown to be ineffec- tive for dialog evaluation (Deriu et al., 2019; Liu et al., 2016). Without well-accepted, meaningful automatic metrics, open-domain dialog researchers have come to rely on human evaluation. Due to its time- and cost-intensive nature, human eval- uation is typically only used for the ï¬nal dialog model. As such, during development dialog sys- tems are generally optimized for poorly-correlated automatic metrics (e.g., F-1, BLEU, PPL) which can result in sub-par human evaluation scores (Di- nan et al., 2019). To facilitate development of open- domain dialog models with meaningful automatic metrics, this paper presents the UnSupervised and Reference free (USR) evaluation metric for dialog. Standard automatic metrics for evaluating dialog generation (e.g., BLEU, F-1, METEOR, ROUGE) have several shortcomings that make them unsuit- able for dialog evaluation: (1) The one-to-many
nature of dialog (Zhao et al., 2017) makes word- overlap metrics ineffective for scoring valid system output that deviates from the ground-truth response (Liu et al., 2016; Gupta et al., 2019). (2) Human evaluation of dialog typically measures multiple properties (e.g., appropriate, interesting, consis- tent). Automatic metrics on the other hand, con- dense the multi-faceted nature of dialog quality to a single uninterpretable metric. (3) There are many deï¬nitions of what a good dialog is and, as such, it is not feasible to construct a âone size ï¬ts allâ metric. Depending on the task and the data, the desired qualities of a dialog system may differ (Walker et al., 1997; Deriu et al., 2019).
USR is a reference-free metric that consists of several interpretable sub-metrics which are com- bined in a conï¬gurable manner. Rather than relying on a ground-truth reference response, unsupervised models are trained to measure desired qualities of dialog (e.g., interesting, natural). As such, USR (1) alleviates the one-to-many issue of standard metrics, (2) produces interpretable measures for desirable properties of dialog, and (3) provides a conï¬gurable mechanism for combining several sub- metrics into an overall quality score.
To evaluate the performance of USR, human quality annotations were collected for models trained on the Topical-Chat (Gopalakrishnan et al., 2019) and the PersonaChat corpora (Zhang et al., 2018). USR is shown to strongly correlate with human judgment on both Topical-Chat (turn-level Spearman: 0.42, system-level Spearman: 1.0) and PersonaChat (turn-level Spearman: 0.48 and system-level Spearman: 1.0). The strong corre- lation with human judgment across two datasets and a variety of model types shows that USR is a valuable tool for the dialog community. Further, since USR does not require any explicit supervi- sion, it has the potential to generalize to several dialog tasks and datasets.
The contributions of this paper as as follows: (1) a strongly-correlated, unsupervised and reference free metric is proposed for evaluating open-domain dialog systems, (2) a thorough human quality an- notation is carried out and is released1 to facilitate future benchmarking of dialog evaluation metrics.
# 2 Related Work
Standard automatic metrics for language generation correlate poorly with human judgement of dialog (Liu et al., 2016; Lowe et al., 2017; Gupta et al., 2019). For example, the F-1 score can be gamed by outputting the most frequent words, regardless of the context (Dinan et al., 2019).
The poor performance of present metrics is largely due to the one-to-many nature of dialog (Zhao et al., 2017). To avoid comparing to a single reference response, several authors have proposed using multiple reference responses. Multiple ref- erence responses can be obtained with retrieval models (Galley et al., 2015; Sordoni et al., 2015) or through data collection (Gupta et al., 2019). These multi-reference metrics show improvement in per- formance, but it is infeasible to thoroughly cover the space of potential responses. As such, this pa- per addresses the one-to-many issue of dialog by presenting a reference-free metric.
Lowe et al. (2017) train ADEM to produce a quality score conditioned on the dialog context, the reference response and the generated response. Venkatesh et al. (2018) present a framework for evaluation of Alexa prize conversations, which at- tains moderate correlation with user ratings. Both of these methods are trained on explicit quality an- notations. In contrast, USR requires no explicit supervision and will more easily generalize to new datasets and tasks.
Li et al. (2017) proposes a reference-free dia- log evaluator which is trained to discriminate be- tween human and generated responses. This work is similar to USR in that it evaluates the quality of a response without a reference or quality anno- tation training data. Using the evaluation model as a reward during reinforcement learning exhib- ited strong performance. However, correlation with human judgement was not evaluated. Intuitively, it appears insufï¬cient to rely on a discriminator as a meaningful evaluation of dialog since this as- sumes that all human responses are perfect and all generated responses are imperfect.
# 1http://shikib.com/usr
# 3 Human Quality Annotation
To evaluate the correlation of automatic metrics with human judgment, human quality annotation was carried out across two open-domain dialog corpora. Generated responses were obtained from several models described in Section 3.3. For each dialog context, an additional human response was also written. Human annotation was then carried out on sixty dialog contexts, with six responses per context for Topical-Chat (four system outputs, one newly-annotated human output, one original ground-truth response) and ï¬ve for PersonaChat (one less system output). Each response was given six different scores: Understandable (0-1), Natu- ral (1-3), Maintains Context (1-3), Interesting (1- 3), Uses Knowledge (0-1), Overall Quality (1-5). Three annotators labeled each response.
The task instructions were very detailed in order to minimize subjectivity in the quality annotations. For example, individuals may differ in their def- inition of Interesting (e.g., some individuals ï¬nd football interesting, others do not). Thus, the in- structions contained a clear, albeit somewhat rigid deï¬nition, of Interesting. The instructions for Over- all Quality annotation, however, were less rigid and therefore those annotations contain some amount of annotator-speciï¬c subjectivity.
The data collection and experiments with Per- sonaChat were carried out to assess the general- ity of the USR metric. As such, the annotation questions used were speciï¬cally tailored to Topical- Chat, but are still suitable for PersonaChat.
# 3.1 Topical-Chat Dataset
The Topical-Chat dataset (Gopalakrishnan et al., 2019) is a large collection of human-human knowledge-grounded open-domain conversations that consists of 11,319 dialogs and 248,014 utter- ances. Following the same experimental setup as Gopalakrishnan et al. (2019), heuristics are em- ployed to identify the most relevant fact for each response. As such, the task is to produce a response conditioned on both a dialog context and a fact.
# 3.2 PersonaChat Dataset
The PersonaChat dataset (Zhang et al., 2018) is a corpus of human-human persona-conditioned conversations that consists of 10,907 dialogs and 162,064 utterances. Each worker is asked to con- dition their responses on a persona, which we con- sider to be analogous to the facts in the Topical-
Argmax Decoding * Response 1 Nucleus Sampling (0.3) weep Ld) Dialog Transformer Context Nucleus Sampling (0.5) Response 3 Nucleus Sampling (0.7) [>| Response 4 Ld Response 5 Ground-Truth Response
Figure 1: On the Topical-Chat corpus, six responses are obtained for each dialog context. Four use the trained Transformer model with different decoding strategies. One is a new human-generated response. One is the original ground-truth. A similar setup was employed for PersonaChat, albeit with different models.
Chat corpus.
# 3.3 Models
# 3.3.1 Topical-Chat Models
A Transformer (Vaswani et al., 2017) is trained to produce the response, r, conditioned on dialog con- text, c, and fact, f . The input to the transformer is the concatenation of c and f , similar to Gopalakr- ishnan et al. (2019). The transformer consists of 6 layers, a hidden size of 512, randomly-initialized word embeddings of size 300, a dropout rate of 0.1 and it is trained for 50 epochs.
A single Transformer model is trained, which matches reported by Gopalakrishnan et al. (2019). Different decoding strategies are used to obtain four different outputs from this model. In addition to standard argmax sampling, nucleus sampling (Holtzman et al., 2019) is used at three different rates: p = {0.3, 0.5, 0.7}. The outputs from these four decoding strategies are listed with the original ground-truth utterance and a new human-generated response, for a total of six responses for each context, as shown in Figure 1.
# 3.3.2 PersonaChat Models
Three models were used to generate system out- puts: a sequence-to-sequence model (Seq2Seq), an LSTM language model (LM) and a Key-Value Proï¬le Memory Network (KV-MemNN). We use the pre-trained models provided in ParlAI2 for the ConvAI2 competition (Dinan et al., 2019).
2https://github.com/facebookresearch/ ParlAI/tree/master/projects/convai2
A fourth open-source model was also used to produce output for quality annotation, however it was ultimately excluded from the released dataset and experiments due to possible data leakage.
# 3.4 Annotation
Quality annotation was performed by six dialog researchers. Using a crowdsourcing platform, such as Amazon Mechanical Turk (AMT), would have allowed for more efï¬cient and scalable annotation. However, crowdsourcing was not used because (1) the annotation instructions are lengthy, (2) a pre- liminary annotation pass was carried out, followed by a group discussion, (3) having many annota- tions from a few annotators allows examination of annotator-speciï¬c subjectivity.
Annotators were provided with a set of instruc- tions (Appendix A). A small preliminary annota- tion pass was carried out, with each individual an- notating 5 dialog contexts (for a total of 30 re- sponses). The inter-annotator agreement was com- puted for each of the questions. The instructions were reï¬ned after the preliminary pass and a discus- sion meeting (e.g., Maintains Context was changed to be a 3-point rating instead of a 2-point rating). After the instructions were modiï¬ed, the full anno- tation pass was carried out.
Each response was rated according to the qual- ities mentioned at the beginning of this section. Instructions for each of qualities are summarized below:
⢠Understandable (0 - 1): Is the response under- standable given the previous context?
⢠Natural (1 - 3): Does the response seem to be something that a person would naturally say?
⢠Maintains Context (1 - 3): Does the response serve as a valid continuation of the preceding conversation?
⢠Interesting (1 - 3): Is the response dull or interesting?
⢠Uses Knowledge (0 - 1): Given the fact that the response is conditioned on, how well does the response use that fact?
⢠Overall Quality (1 - 5): Given your answers above, what is your overall impression of the quality of this utterance?
Metric Spearman Pearson Topical-Chat Understandable Natural Maintains Context Interesting Uses Knowledge Overall Quality 0.5102 0.4871 0.5599 0.5811 0.7090 0.7183 0.5102 0.4864 0.5575 0.5754 0.7090 0.7096 PersonaChat Understandable Natural Maintains Context Interesting Uses Knowledge Overall Quality 0.2984 0.4842 0.6125 0.4318 0.8115 0.6577 0.2984 0.4716 0.6130 0.4288 0.8115 0.6603
Inter-annotator agreement for all the met- Table 1: rics. For all the correlations presented in this table, p < 0.01.
The instructions contained detailed descriptions and examples of what constitutes a response in each category (e.g., what makes a response score 2 on Maintains Context). These instructions were written to minimize subjectivity in the annotations, which results in clear, agreed upon deï¬nitions.
For Topical-Chat, the full annotation consisted of 60 dialog contexts randomly sampled from the frequent test set, for a total of 360 responses scored on six different qualities. For PersonaChat, 60 dialog contexts were sampled from the ConvAI2 validation set, with a total of 300 responses scored on six different qualities. Each response was la- beled by three different annotators. Annotators were randomly assigned to each dialog context.
# 3.5 Analysis
Inter-annotator agreements for the different ratings across both datasets are presented in Table 1. The correlation between each pair of annotations is com- puted and the average correlation over all the pairs is reported. Correlation is used instead of Cohenâs Kappa in order to better account for the ordinal nature of the ratings (i.e., 4 should correlate bet- ter with 5 than 1), and to maintain consistency with the evaluation of the automatic metrics. Most inter-annotator correlations are above 0.4, which indicates moderate to strong agreement. The low agreement for Understandable on PersonaChat is likely a consequence of the simple language in the dataset. Most responses are understandable,
except for those requiring background knowledge (e.g., that âcodâ is an acronym for âCall of Dutyâ). Since the annotators have differing background knowledge, the few occasions where they fail to understand an utterance will differ, hence the lower agreement. The agreement for Overall Quality is relatively high (0.71 for Topical-Chat and 0.66 for PersonaChat) which suggests that any ambiguity in the speciï¬c dialog qualities is mitigated when the annotator is asked for an overall impression.
Table 2 presents the scores for the different sys- tems on each of the six qualities. Across both datasets and all qualities, the new human generated response strongly outperforms all other response types, even the original ground truth. This may be because the new human generated response was written with this quality annotation in mind, and as such is optimized for turn-level evaluation. On the other hand, the workers who produced the orig- inal ground-truth response, were more concerned with the quality of the overall dialog than with the quality of each individual response.
On the Topical-Chat corpus, argmax decoding has a moderately higher performance over the nu- cleus sampling (Holtzman et al., 2019) methods. This should not be taken as an indication that argmax decoding is the superior method, since the hyperparameters (e.g., temperature) were not tuned for nucleus sampling. It should be noted that the objective was not to train and evaluate the best performing models, but instead to produce re- sponses of varying qualities and obtain accurate human judgements of these responses.
A regression was trained to map from the ï¬ve ratings to the overall score in order to analyze the re- lationship between them. For better interpretability of the regression weights, the scores were normal- ized (using z-score) before training the regression. For better interpretability, a softmax was computed over the weights. Since individuals may differ in their deï¬nition of a good response, a speciï¬c re- gression is trained for each of the ï¬ve annotators who labeled responses for the Topical-Chat corpus. Figure 2 displays the weights attributed to each of the ï¬ve qualities by each of the annotators.
Annotators attributed different weights to the speciï¬c features. For example, A3 emphasized nat- uralness while A2 paid more attention to whether a response was grounded on knowledge. Despite the differences across annotators, a good response was generally expected to be natural, maintain con-
System Und (0-1) Nat (1-3) MCtx (1-3) Int (1-3) UK (0-1) OQ (1-5) Original Ground-Truth Argmax Decoding Nucleus Sampling (0.3) Nucleus Sampling (0.5) Nucleus Sampling (0.7) New Human Generated 0.95 0.60 0.51 0.48 0.52 0.99 Topical-Chat 2.72 2.08 2.02 1.92 2.01 2.92 2.72 2.13 1.90 1.93 1.87 2.93 2.64 1.94 1.82 1.72 1.80 2.90 0.72 0.47 0.42 0.34 0.37 0.96 4.25 2.76 2.40 2.29 2.39 4.80 PersonaChat Original Ground-Truth Language Model LSTM Seq2Seq KV-MemNN New Human Generated 0.99 0.97 0.92 0.93 1.00 2.89 2.63 2.64 2.70 2.97 2.82 2.02 2.49 2.18 2.88 2.67 2.24 2.29 2.56 2.87 0.56 0.08 0.47 0.17 0.96 4.36 2.98 3.47 3.25 4.80
Table 2: Average scores for the six different responses, on the six quality: Understandable, Natural, Maintains Context, Interesting, Uses Knowledge and Overall Quality.
0.0045 Understandable Natural Maintains Context Interesting Uses Knowledge
4.1 describes several existing metrics that were studied. Section 4.2 presents USR, a novel unsu- pervised and reference-free metric.
# 4.1 Baseline Metrics
Several existing and easily-applicable metrics for dialog evaluation are compared. the list of available metrics is not exhaustive. Only the most commonly used and the most accessible are addressed.
Figure 2: Weight attributed to each of the ï¬ve speciï¬c metrics by each annotator, when labeling Overall Qual- ity. Lighter colors signify more weight.
text, and be interesting. These annotator-speciï¬c weights demonstrate that individuals deï¬ne good dialog differently. Future work could explore per- sonalized dialog evaluation wherein the evaluation metric is tailored to a speciï¬c individual.
A potential criticism of this quality annotation could be that certain dialog qualities are missing. To address concerns about the completeness of the set of ï¬ve qualities, a regression can be trained to produce the overall score conditioned on the quality ratings. The Spearman correlation between the predicted score and the original overall score is 0.9654, which signiï¬es that the set of qualities is thorough and contains enough information to reï¬ect the overall quality of the response.
# 4 Automatic Metrics
F-1 score computes the word-overlap between the generated response and the ground-truth, by taking the harmonic mean of the precision and re- call. It is one of the four metrics used by the cre- ators of the Topical-Chat dataset (Gopalakrishnan et al., 2019), along with perplexity and unique uni- gram/bigram counts. Dinan et al. (2019) described a simple adversarial example that attains a high F-1 score on PersonaChat. We produce a similar example for the Topical-Chat dataset and ï¬nd that always outputting a concatenation of the ten most common tokens in the dataset (â. i the , that a to it is ofâ) attains an F-1 score of 25.6 which is a +3.6 improvement over the Transformer presented by Gopalakrishnan et al. (2019).
BLEU (Papineni et al., 2002) is a well-known word overlap metric that computes n-gram preci- sion between the generated sequence and the refer- ence. Because precision favors shorter sentences, BLEU also adds a brevity penalty that punishes shorter sentences. BLEU has been found to corre- late poorly with human judgment (Liu et al., 2016; Lowe et al., 2017; Gupta et al., 2019).
This section describes the automatic metrics ex- plored for evaluating generated responses. Section
METEOR (Denkowski and Lavie, 2014) was
designed as an improvement on BLEU using a har- monic mean of precision and recall, as well as stemming and synonyms.
ROUGE-L (Lin, 2004) identiï¬es the longest common subsequence between the generated and reference sequence to better account for sentence- level structure when computing word overlap.
Greedy Matching (Rus and Lintean, 2012) is an embedding-based metric that greedily matches each word in the generated sequence to a reference word based on the cosine similarity of their embed- dings. The ï¬nal score is then an average over all the words in the generated sequence.
Embedding Average (Wieting et al., 2015) computes a sentence embedding for both the gen- erated sequence and the ground-truth response by taking an average of word embeddings. The score is then a cosine similarity of the average embedding for both the generated and reference sequence.
Vector Extrema (Forgues et al., 2014) follows a similar setup to Embedding Average, where the score is the cosine similarity between sentence em- beddings. Rather than taking an average over word embeddings, this method identiï¬es the maximum value for each dimension of the word embedding. Taking the maximum is motivated by the idea that common words will be de-emphasized as they will be closer to the origin. Vector Extrema has been shown to perform better on dialog tasks than other metrics (Gupta et al., 2019; Liu et al., 2016).
Skip-Thought (Kiros et al., 2015) uses a recur- rent neural network to produce a sentence-level em- bedding for the generated and reference sequences. A cosine similarity is then computed between the two embeddings. The implementation provided by Sharma et al. (2017) is used.
BERTScore (Zhang et al., 2019) uses a pre- trained BERT (Devlin et al., 2018) model to greed- ily match each word in a reference response with one word in the generated sequence. By doing so, it computes the recall of the generated sequence. BERTScore was shown to have strong system-level and segment-level correlation with human judg- ment on several machine translation and captioning tasks. However, although it is a more sophisticated metric, it still compares word similarity between a reference and a generated sequence. While this method may work well for tasks where there is a limited space of outputs for each input (e.g., cap- tioning, translation), it is ineffective at dealing with the one-to-many nature of dialog.
# 4.2 Proposed Metric
This section proposes describes the USR metric, an unsupervised, reference-free evaluation metric for dialog. USR leverages pre-trained language models, speciï¬cally RoBERTa (Liu et al., 2019), to measure properties of dialog. USR is designed to be reference-free because there is no one right answer due to the inherent one-to-many nature of dialog (Zhao et al., 2017).
Several sub-metrics were developed for the dif- ferent qualities of dialog (e.g., Natural, Interesting, Uses Knowledge). While USR measures the over- all quality of a response, its sub-metrics assess spe- ciï¬c dialog qualities and therefore facilitate better understanding of a modelâs performance.
4.2.1 Masked Language Modelling Metric The masked language modelling (MLM) metric uses a ï¬ne-tuned RoBERTa (Liu et al., 2019) model to estimate the likelihood of a response. RoBERTa is pre-trained on a massive amount of English data and ï¬ne-tuned on the corpus being evaluated (either Topical-Chat or PersonaChat), making it capable of identifying unnatural and incorrect responses. The likelihood estimated by the ï¬ne-tuned RoBERTa model is used as an automatic metric for evaluating the understandability and naturalness of responses. The RoBERTa-base model (Liu et al., 2019) is ï¬ne-tuned on the training set of the Topical-Chat corpus (Gopalakrishnan et al., 2019) using the im- plementation open-sourced by Wolf et al. (2019a). The language model is ï¬ne-tuned on only the dia- log, without any of the facts, for a single epoch.
RoBERTa uses both past and future context to predict a probability distribution for a masked word. The input sequence to MLM is a concatenation of a dialog context, c, and a response, r. One word at a time, each word in r is masked and its log likelihood is computed. Given the masked log- likelihood for the 7-th word of r as 1;, the value of the metric is then computed to be â yl" 1;. Figure 3 visualizes this process.
4.2.2 Dialog Retrieval Metrics Recent research has highlighted the complemen- tary nature of dialog retrieval and generation with respect to multi-tasking (Wolf et al., 2019b) and pre-training (Mehri et al., 2019). Because of this complimentary nature, using dialog retrieval (DR) for evaluating generative models is an intuitive choice, especially for metrics like Maintains Con- text and Uses Knowledge.
mm = ee eee (ow } (Core) (Gon) (Cee) (Ge) (oo) Ga) (soe (tow } Care} (ven ) Coos) (a) (Fa) (Cox) Cees) Gen) SS ons how }{ are ]{ you }{ cos] i am | [mask } {eos ok | oss (tow } (are) (eu) Coos) ) (em) (Cox) (Rae) Gee) ose MLM Score: -1.02
Figure 3: Visualization of the masked language mod- elling (MLM) metric. Context words are in grey; re- sponse words are in red. The red words are masked, and RoBERTa must predict the likelihood of their true value (shown in green).
The ï¬ne-tuned RoBERTa model described in Section 4.2.1 is further ï¬ne-tuned for the retrieval task. This task is set up in the same manner as the Ubuntu dialog corpus (Lowe et al., 2015). The model is trained given a context x, a response r, and a binary label y indicating whether r is the true response or randomly sampled. The context x may consist of the dialog history and the fact, denoted c, or just the fact, denoted f . Two different versions of the dialog retrieval (DR) metric are trained, with different values of x. The DR metric score is deï¬ned to be the probability P (y = 1| x, r) a given DR metric model produces.
Though the DR metric is trained for the task of retrieval, this is done in an unsupervised manner. The retrieval task is an unsupervised task since it requires no additional labels during training (e.g., explicit quality annotations).
The DR metric is appropriate for Maintains Con- text, Interesting and Uses Knowledge. If a retrieval model predicts that a generated response is con- textually relevant to a dialog context, it indicates that the response Maintains Context. Likewise, if a retrieval model predicts that the response r is con- textually relevant to fact f , it signiï¬es that r most likely Uses Knowledge.
Interesting is the measure of whether the re- sponse is dull/generic or if it provides some in- teresting/engaging information. The DR metric is trained to distinguish between a ground-truth re- sponse (y = 1) and a randomly sampled response (y = 0). Generic responses are applicable to many contexts, and will often appear as both ground- truth responses and randomly sampled responses. As such, the model will likely learn to assign a low probability distribution to these generic responses and will often output P (y = 1| r, x) = 0.5. As
such, generic responses will generally be scored lower than other contextually relevant, interesting responses. The DR metrics will learn to favor re- sponses that are unique to a given context x, rather than being applicable to many different contexts.
4.2.3 The USR Metric Given meaningful automatic metrics for each of the ï¬ve dialog qualities, USR combines the scores into an overall measure that correlates well with Overall Quality ratings.
In Section 3.5, a regression model was trained to reproduce the overall score from each of the speciï¬c quality scores. The predictions of this re- gression model attained a 0.9654 Spearman correla- tion with the original scores. This same regression is used by USR on top of the automatic metrics presented in Sections 4.2.1 and 4.2.2.
USR combines its sub-metrics into one measure of overall quality. This combination is conï¬gurable, adaptable to different datasets or tasks. For ex- ample, if a speciï¬c application prefers natural re- sponses over interesting ones, the weights of the regression model can be adjusted. Analysis demon- strated that individuals used different weights when producing the overall score (Figure 2). USR might be able to be personalized for speciï¬c individuals by adjusting the weights of the regression model.
# 5 Results
This section evaluates all of the automatic met- rics described in Section 4, by comparing them to human judgement. The best sub-metrics for each dialog quality are used as input for the regres- sion model of the USR metric. While the best per- forming sub-metrics are not consistent across the two datasets, the USR metric nonetheless exhibits strong results. The annotations for the original ground-truth are not used for evaluation, in order to accurately compare referenced and reference-free metrics.
Table 3 shows turn-level correlations of the best automatic metrics for each dialog quality on Topical-Chat. USR is shown to strongly outper- form both word-overlap and embedding-based met- rics across all of the dialog qualities. Interestingly, the best non-USR metric is consistently either ME- TEOR or BERTScore â possibly because both methods are adept at comparing synonyms during evaluation. For some dialog qualities, the overall USR metric outperforms the best sub-metric. For example, USR does better for Maintains Context
Metric Spearman Pearson Understandable 0.2502 0.3268 0.3152 BERTScore (base) USR - MLM USR 0.2611 0.3264 0.2932 Natural BERTScore (base) USR - MLM USR 0.2094 0.3254 0.3037 Maintains Context 0.3018 0.3650 0.3769 METEOR USR - DR (x = c) USR 0.2260 0.3370 0.2763 0.2495 0.3391 0.4160 Interesting BERTScore (base) USR - DR (x = c) USR 0.4121 0.4877 0.4645 Uses Knowledge 0.3909 0.4468 0.3353 METEOR USR - DR (x = f) USR 0.3901 0.3533 0.4555 0.3328 0.2220 0.3175
Table 3: Turn-level correlations on Topical-Chat. We (1) best non-USR metric, (2) best USR sub- show: metric and (3) USR metric. All measures in this table are statistically signiï¬cant to p < 0.01.
than USR-DR. This is likely because the informa- tion from the other sub-metrics (e.g., Uses Knowl- edge) is valuable and effectively leveraged by USR. Table 4 reports the turn-level correlations of the best automatic metrics for each dialog quality on the PersonaChat corpus. Across all dialog quali- ties, USR strongly outperforms the word-overlap and embedding-based metrics. Conversations in PersonaChat generally consist of individuals com- municating facts from their own persona in a rele- vant and coherent manner. As such, when models trained on PersonaChat produce subpar outputs, it is generally because the outputs either (1) do not effectively use the persona or (2) are not rele- vant/coherent to the dialog context. This explains why the correlations are signiï¬cantly higher for Maintains Context and Uses Knowledge. As a con- sequence of PersonaChatâs strong dependency on both the dialog context and the persona, USR-DR (x = c) which uses both the dialog context and the persona to perform dialog retrieval, generally out- performs all other metrics.
Table 5 shows turn-level correlation with the Overall Quality ratings on Topical-Chat, for all of
Metric Spearman Pearson Understandable 0.0685 0.1186 0.1324 BERTScore (base) USR - MLM USR 0.0672 0.1313 0.1241 Natural VectorExtrema USR - DR (x = c) USR 0.1375 0.2291 0.2430 Maintains Context 0.2564 0.5625 0.5280 METEOR USR - DR (x = c) USR 0.1458 0.1733 0.1862 0.2500 0.6021 0.6065 Interesting BERTScore (base) USR - DR (x = c) USR 0.0491 0.2634 0.0171 Uses Knowledge 0.1719 0.6309 0.3177 METEOR USR - DR (x = c) USR 0.0325 0.0606 0.0315 0.1678 0.4508 0.4027
Table 4: Turn-level correlations on Persona-Chat. We (1) best non-USR metric, (2) best USR sub- show: metric and (3) USR metric. All values with p > 0.05 are italicized.
Metric Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.1645 0.2728 0.2862 0.2569 0.2160 0.3365 0.2745 0.1690 0.2876 0.3012 0.3006 0.2956 0.3908 0.2870 Embedding Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.1712 0.1803 0.2032 0.1040 0.3229 0.2982 0.1943 0.2038 0.2091 0.1181 0.3540 0.3252 Reference Free Metrics USR - MLM USR - DR (x = c) USR - DR (x = f) USR 0.3086 0.3245 0.1419 0.4192 0.3345 0.4068 0.3221 0.4220
Table 5: Turn-level correlations between all automatic metrics and the Overall Quality ratings for the Topical- Chat corpus. All values with p > 0.05 are italicized.
Metric Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.1422 0.0434 0.1122 0.1202 0.1353 0.2527 0.0659 0.1241 0.0469 0.0943 0.0924 0.0899 0.2713 0.0385 Embedding Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.0916 0.1182 0.1570 -0.0393 0.1690 0.1518 0.0625 0.1428 0.1410 -0.0452 0.1526 0.1211 Reference Free Metrics USR-MLM USR-DR (x = f) USR-DR (x = c) USR 0.0795 -0.0495 0.4814 0.4693 0.0788 -0.0454 0.6087 0.4115
Table 6: Turn-level correlations between all automatic metrics and the Overall Quality ratings for the Per- sonaChat corpus. All values with p > 0.05 are itali- cized.
the automatic metrics. USR shows a strong im- provement over all other methods. This strong performance can be attributed to two factors: (1) the ability of MLM and DR to accurately quan- tify qualities of a generated response without a reference response, and (2) the ability of USR to effectively combine MLM and DR into a better correlated overall metric.
USR shows a similar improvement over all other metrics on PersonaChat, as shown in Table 6. How- ever, DR (x = c) outperforms USR despite the fact that four out of the ï¬ve sub-metrics input into the USR regression are DR (x = c). This result is prob- ably due to PersonaChatâs strong dependancy on both dialog context and persona, both of which DR (x = c) explicitly leverages.
We compute the system-level correlation be- tween all automatic metrics and the Overall Quality ratings. USR signiï¬cantly (p < 0.01) outperforms all other metrics with a Spearman correlation of 1.0 on both datasets and Pearson correlations of 0.92 (Topical-Chat) and 0.82 (PersonaChat). The full set of system-level correlations can be found in the appendix.
These results demonstrate USRâs effectiveness. It strongly outperforms other metrics on both turn- level and system-level correlations. Gopalakrish- nan et al. (2019) use the F-1 score as their pri- mary automatic evaluation metric when presenting Topical-Chat. The results demonstrate a signiï¬cant difference between USR and the F-1 score, suggest- ing that USR is a better metric for the Topical-Chat corpus.
# 6 Discussion
USR achieves statistically signiï¬cant correlations with human judgement. The results hold across two datasets, Topical-Chat (Gopalakrishnan et al., 2019) and PersonaChat (Zhang et al., 2018).
USR is conï¬gurable. Notably it is composed of several speciï¬c dialog quality sub-metrics. These sub-metrics are combined in a conï¬gurable manner, using a regression. For other tasks, datasets or even users, this regression can be adjusted, allowing qualities to be removed or re-weighted. Additional sub-metrics could be added.
USR should be used alongside human evalua- tion. USR was created to facilitate development and tuning of dialog models. As such, USR can be used for model selection and hyperparameter tuning. USR should not be used to claim superior performance over another method.
USR may not work with non-generative mod- els, which were not addressed here. Responses produced by a model that is too similar to the evalu- ation models (e.g., to DR) are a particular concern.
# 7 Conclusions
This paper presents USR, an UnSupervised and Reference-free evaluation metric for dialog. To ad- dress the shortcomings of standard metrics for lan- guage generation, USR (1) is reference-free, (2) is composed of multiple sub-metrics that evaluate spe- ciï¬c qualities of dialog, (3) has a deï¬nition of good dialog that is conï¬gurable. Thus the metric may be adapted to different tasks and datasets. USR is shown to strongly correlate with human judgment on Topical-Chat (turn-level: 0.42, system-level: 1.0) and PersonaChat (turn-level: 0.48, system- level: 1.0).
# 8 Acknowledgements
We thank the following individuals for their help with annotation: Evgeniia Razumovskaia, Felix Labelle, Mckenna Brown and Yulan Feng.
# References
Michael Denkowski and Alon Lavie. 2014. Meteor uni- versal: Language speciï¬c translation evaluation for In Proceedings of the Ninth any target language. Workshop on Statistical Machine Translation, pages 376â380.
Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, and Survey on evaluation Mark Cieliebak. 2019. arXiv preprint methods for dialogue systems. arXiv:1905.04071.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
Emily Dinan, Varvara Logacheva, Valentin Malykh, Jack Urbanek, Alexander Miller, Kurt Shuster, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan The second conversational Lowe, et al. 2019. arXiv preprint intelligence challenge (convai2). arXiv:1902.00098.
Jean-Marie LarchevËeque, and R´eal Tremblay. 2014. Boot- strapping dialog systems with word embeddings. In Nips, modern machine learning and natural language processing workshop, volume 2.
Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Mar- garet Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Pro- cessing (Volume 2: Short Papers), pages 445â450, Beijing, China. Association for Computational Lin- guistics.
Karthik Gopalakrishnan, Behnam Hedayatnia, Qin- lang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, Dilek Hakkani-T¨ur, and Amazon Alexa AI. 2019. Topical-chat: To- wards knowledge-grounded open-domain conversa- tions. Proc. Interspeech 2019, pages 1891â1895.
Prakhar Gupta, Shikib Mehri, Tiancheng Zhao, Amy Pavel, Maxine Eskenazi, and Jeffrey P Bigham. Investigating evaluation of open-domain di- 2019. alogue systems with human generated multiple ref- erences. arXiv preprint arXiv:1907.10568.
Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degener- ation. arXiv preprint arXiv:1904.09751.
Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems, pages 3294â3302.
Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversar- ial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547.
Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. Text Summarization Branches Out.
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation met- rics for dialogue response generation. arXiv preprint arXiv:1603.08023.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Ryan Lowe, Michael Noseworthy, Iulian Vlad Ser- ban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic Tur- ing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1116â1126, Vancouver, Canada. Association for Computational Linguistics.
Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dia- logue systems. arXiv preprint arXiv:1506.08909.
Shikib Mehri, Evgeniia Razumovsakaia, Tiancheng Zhao, and Maxine Eskenazi. 2019. Pretraining methods for dialog context representation learning. arXiv preprint arXiv:1906.00414.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- In Proceedings of uation of machine translation. 40th Annual Meeting of the Association for Compu- tational Linguistics, pages 311â318. Association for Computational Linguistics.
Vasile Rus and Mihai Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 157â 162. Association for Computational Linguistics.
Shikhar Sharma, Layla El Asri, Hannes Schulz, and Jeremie Zumer. 2017. Relevance of unsuper- vised metrics in task-oriented dialogue for evalu- ating natural language generation. arXiv preprint arXiv:1706.09799.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and William B. Dolan. 2015. A neural network approach to context- sensitive generation of conversational responses. In HLT-NAACL.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000â6010.
Anu Venkatesh, Chandra Khatri, Ashwin Ram, Fenfei Guo, Raefer Gabriel, Ashish Nagar, Rohit Prasad, Ming Cheng, Behnam Hedayatnia, Angeliki Met- allinou, et al. 2018. On evaluating and compar- ing open domain dialog systems. arXiv preprint arXiv:1801.03625.
Marilyn A Walker, Diane J Litman, Candace A Kamm, and Alicia Abella. 1997. Paradise: A framework for evaluating spoken dialogue agents. arXiv preprint cmp-lg/9704004.
John Wieting, Mohit Bansal, Kevin Gimpel, and Towards universal para- arXiv preprint Karen Livescu. 2015. phrastic sentence embeddings. arXiv:1511.08198.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Râemi Louf, Morgan Funtow- icz, and Jamie Brew. 2019a. Huggingfaceâs trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.
Thomas Wolf, Victor Sanh, Julien Chaumond, and Transfertransfo: A learning approach for neural network arXiv preprint
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Eval- arXiv preprint uating text generation with bert. arXiv:1904.09675.
Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoen- coders. arXiv preprint arXiv:1703.10960.
# A Annotation Instructions
Tables 6, 7 and 8 show the annotation instructions used for human quality annotation. These instruc- tions and examples are verbatim what was shown to the annotators.
# B Metric Evaluation
Table 3 in the main paper showed turn-level cor- relations for each speciï¬c quality. Due to space limitations, the table only included results for only the best correlated metrics. The full results are shown in Tables 9 - 21.
# C Code and Data Release
The code for the metrics can be found at https:// github.com/shikib/usr and the human quality annotations can be found at http://shikib.com/ usr. The human quality annotations will allow benchmarking of additional metrics.
Annotation Instructions You will be given a conversation between two individuals. You will then be given several potential responses for the next turn in the conversation. These responses all concern an interesting fact, which will be provided as well. Your task is to rate each of the responses on several metrics. The response for one metric should not inï¬uence the other metrics. For example, if a response is not understandable or has grammatical errors â you should try to ignore this when considering whether it maintains context or if it is interesting. Please make sure you read and understand these instructions carefully. Feel free to ask if you require clariï¬cation. Please keep this document open while reviewing, and refer to it as needed. Understandable (0-1)
Is the response understandable in the context of the history? (Not if its on topic, but for example if it uses pronouns they should make sense) ⢠A score of 0 (no) means that the response is difï¬cult to understand. You do not know what the person is trying to say. â i did nât know that . i love to watch the movie inception , it âs also the ï¬rst racing movie to be a woman haha . i guess the movie was originally titled â inception â awesome movie ! â Context: in my religion , there is no star . how about you Response: yeah it was back in 1975 . ⢠A score of 1 (yes) means that the response is understandable. You know what the person is trying to say. â my favorite role would have to be quarterback . it is such an interesting role . â that is true . i think lebron is the highest paid celebrity , i wonder if he will be in the space jam sequel . Natural (1-3) Is the response naturally written? ⢠A score of 1 (bad) means that the response is unnatural. â Context: A: wow . do you believe in stars of the zodiac ? what is your star ? B: in my religion , there is no star . how about you Response: yeah , it was back in 1975 . â i think he is , he is a great teacher and he also taught ellie kemper , she is a great teacher ⢠A score of 2 (ok) means the response is strange, but not entirely unnatural. â Context: A: wow . do you believe in stars of the zodiac ? what is your star ? B: in my religion , there is no star . how about you Response: i read it sometimes for the fun of it .
⢠A score of 3 (good) means that the response is natural.
â i think it âs funny that the soviet union sent a spacecraft to venus .
Table 7: Annotation instructions (part 1 of 3).
# Maintains Context (1-3)
# Interesting (1-3)
# Annotation Instructions (ctd.)
Does the response serve as a valid continuation of the conversation history?
⢠A score of 1 (no) means that the response drastically changes topic or ignores the conver- sation history.
â Context: A: wow . do you believe in stars of the zodiac ? what is your star ? B: in my religion , there is no star . how about you Response: i think it âs funny that the soviet union sent a spacecraft to venus .
⢠A score of 2 (somewhat) means the response refers to the conversation history in a limited capacity (e.g., in a generic way) and shifts the conversation topic.
â Context: i do like some drama stuff , yeah he was awesome in that . Response: yeah . do you like jon hamm ?
â Context: i believe that ! he would have played longer i âm sure if he did the granny style approach to shooting freethrows ! Response: i agree . did you know that space jam is the highest grossing basketball movie of all time ?
⢠A score of 3 (yes) means the response is on topic and strongly acknowledges the conversa- tion history.
â Context: B: wow , that âs great . especially because more than of nba players go broke 5 years after retirement . A: i believe that ! he would have played longer i âm sure if he did the granny style approach to shooting freethrows ! Response: a lot of players can make money by starring in movies . did you know space jam is the highest grossing movie of all time ? maybe one of the broke retired players can be in the sequel !
â Context: B: you like drama ? patrick stewart teaches classes now . i loved him in star trek A: i do like some drama stuff , yeah he was awesome in that . Response: jonn hamm was also a drama teacher . he taught erin from the ofï¬ce
Is the response dull/interesting?
A score of 1 (dull) means that the response is generic and dull.
â thats true . i agree .
⢠A score of 2 (somewhat interesting) means the response is somewhat interesting and could engage you in the conversation (e.g., an opinion, thought)
â my favorite role would have to be quarterback . it is such an interesting role . â i love tom brady . i love tom brady .
⢠A score of 3 (interesting) means the response is very interesting or presents an interesting fact
â i agree . did you know that space jam is the highest grossing basketball movie of all time ?
â a lot of players can make money by starring in movies . did you know space jam is the highest grossing movie of all time ? maybe one of the broke retired players can be in the sequel !
Table 8: Annotation instructions (part 2 of 3)
Uses Knowledge (0-1) Overall Quality (1- 3) Annotation Instructions (ctd.) Given the interesting fact that the response is conditioned on, how well does the response use the fact? ⢠A score of 0 (no) means the response does not mention or refer to the fact at all ⢠A score of 1 (yes) means the response uses the fact well Given your answers above, what is your overall impression of this utterance? ⢠A score of 1 (very bad). A completely invalid response. It would be difï¬cult to recover the conversation after this. ⢠A score of 2 (bad). Valid response, but otherwise poor in quality. ⢠A score of 3 (neutral) means this response is neither good nor bad. This response has no negative qualities, but no positive ones either. ⢠A score of 4 (good) means this is a good response, but falls short of being perfect because of a key ï¬aw. ⢠A score of 5 (very good) means this response is good and does not have any strong ï¬aws.
Table 9: Annotation instructions (part 3 of 3)
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.1645 0.2728 0.2862 0.2569 0.2160 0.3365 0.2745 0.1690 0.2876 0.3012 0.3007 0.2956 0.3908 0.2870 0.6000 0.7000 0.9000 0.9000 0.9000 0.9000 0.9000 0.6120 0.8334 0.8201 0.9033 0.8740 0.9435 0.8143 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.1712 0.1803 0.2032 0.1040 0.3229 0.2982 0.1943 0.2038 0.2091 0.1181 0.3540 0.3252 0.8000 0.7000 0.8000 0.5000 0.9000 0.9000 0.5610 0.9166 0.5838 0.5142 0.9100 0.8536 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.3086 0.3245 0.1419 0.4192 0.3345 0.4068 0.3221 0.4220 0.9000 0.7000 0.9000 1.0000 0.4732 0.9182 0.8519 0.9276
Table 10: Correlations of all the metrics with Overall Quality ratings on Topical-Chat. All values with p ⥠0.05 are italicized.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.1422 0.0434 0.1122 0.1202 0.1353 0.2527 0.0659 0.1241 0.0469 0.0943 0.0924 0.0899 0.2713 0.0385 1.0000 0.6000 0.4000 0.4000 0.8000 0.8000 0.0000 0.9956 0.2599 0.6816 0.6668 0.8413 0.9065 0.1710 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.0916 0.1182 0.1570 -0.0393 0.1690 0.1518 0.0625 0.1428 0.1410 -0.0452 0.1526 0.1211 0.8000 0.8000 0.6000 -0.2000 0.8000 0.0000 0.3808 0.8628 0.4349 0.2599 0.5173 0.2410 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.0795 0.4814 -0.0495 0.4693 0.0788 0.6087 -0.0454 0.4115 -0.4000 1.0000 -0.2108 1.0000 -0.2842 0.8202 -0.0178 0.8084
Table 11: Correlations of all the metrics with Overall Quality ratings on PersonaChat. All values with p ⥠0.05 are italicized.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.0425 0.1794 0.2360 0.2099 0.2010 0.2452 0.2069 0.0620 0.1522 0.2081 0.2137 0.2175 0.2246 0.1632 0.8000 0.6000 0.7000 0.7000 0.7000 0.7000 0.7000 0.6481 0.8360 0.8262 0.9018 0.8663 0.9424 0.8208 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.0839 0.0509 0.1561 0.0810 0.2611 0.2556 0.0868 0.0961 0.1321 0.0706 0.2502 0.2263 0.6000 0.6000 0.6000 0.2000 0.7000 0.7000 0.5664 0.9204 0.6113 0.4725 0.9118 0.8577 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.3264 0.1500 0.0881 0.2932 0.3268 0.2213 0.1967 0.3152 0.7000 0.9000 0.7000 0.9000 0.4666 0.9337 0.8420 0.9329
Table 12: Correlations of all the metrics with the Understandable ratings on Topical-Chat. All values with p ⥠0.05 are italicized. The USR-MLM metric has poor system-level correlations, however the USR metric leverages predictions from the other sub-metrics to improve this.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L -0.0340 0.0123 0.0854 0.0412 0.0537 0.0820 0.0346 -0.0550 -0.0196 0.0221 0.0249 0.0279 0.0431 0.0076 1.0000 0.6000 0.4000 0.4000 0.8000 0.8000 0.0000 0.9956 0.2599 0.6816 0.6668 0.8413 0.9065 0.1710 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.0594 0.0573 0.1097 -0.0338 0.0676 0.0380 0.0710 0.0835 0.1113 -0.0297 0.0685 0.0086 0.8000 0.8000 0.6000 -0.2000 0.8000 0.0000 0.3808 0.8628 0.4349 0.2599 0.5173 0.2410 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.1313 0.0728 -0.0390 0.0997 0.1186 0.1446 -0.0433 0.1337 -0.4000 1.0000 -0.2108 1.0000 -0.2842 0.8202 -0.0178 0.8084
Table 13: Correlations of all the metrics with Understandable ratings on PersonaChat. All values with p ⥠0.05 are italicized.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.0301 0.1606 0.1959 0.1896 0.1799 0.2121 0.1760 0.0398 0.1334 0.1648 0.1745 0.1748 0.1906 0.1457 0.6000 0.7000 0.9000 0.9000 0.9000 0.9000 0.9000 0.5605 0.7976 0.7888 0.8979 0.8973 0.9297 0.7902 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.0534 0.0477 0.1009 0.0959 0.2164 0.2260 0.0483 0.0970 0.0761 0.0858 0.2088 0.2094 0.8000 0.7000 0.8000 0.5000 0.9000 0.9000 0.5271 0.8875 0.5363 0.5313 0.9024 0.8319 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.3370 0.1325 0.0313 0.2763 0.3254 0.2148 0.1611 0.3037 0.9000 0.7000 0.9000 1.0000 0.4485 0.9222 0.8808 0.9220
Table 14: Correlations of all the metrics with the Natural ratings on Topical-Chat. All values with p ⥠0.05 are ital- icized. The USR-MLM metric has poor system-level correlations, however the USR metric leverages predictions from the other sub-metrics to improve this.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.0815 -0.0072 0.0838 0.0823 0.1081 0.0989 0.0096 0.0717 -0.0216 0.0344 0.0457 0.0499 0.0950 -0.0087 1.0000 0.6000 0.4000 0.4000 0.8000 0.8000 0.0000 0.9956 0.2599 0.6816 0.6668 0.8413 0.9065 0.1710 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.1029 0.1413 0.1458 -0.0355 0.0606 0.0494 0.0665 0.1152 0.1375 -0.0365 0.0585 0.0477 0.8000 0.8000 0.6000 -0.2000 0.8000 0.0000 0.3808 0.8628 0.4349 0.2599 0.5173 0.2410 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.0999 0.1733 -0.0033 0.1862 0.1119 0.2291 0.0642 0.2430 -0.4000 1.0000 -0.2108 1.0000 -0.2842 0.8202 -0.0178 0.8084
Table 15: Correlations of all the metrics with the Natural ratings on PersonaChat. All values with p ⥠0.05 are italicized.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.1290 0.2097 0.2087 0.1736 0.1307 0.2495 0.1928 0.1199 0.2228 0.2353 0.2377 0.2345 0.3018 0.2031 0.6000 1.0000 0.9000 0.9000 0.5000 0.9000 0.9000 0.6483 0.8754 0.8555 0.9090 0.8464 0.9573 0.8410 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.1036 0.1197 0.1839 0.0326 0.2432 0.2140 0.1249 0.1511 0.1840 0.0568 0.2642 0.2328 0.8000 1.0000 0.8000 0.6000 0.9000 0.9000 0.6078 0.9460 0.6275 0.5237 0.9160 0.8779 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.3099 0.3391 0.0594 0.4160 0.3243 0.3650 0.1836 0.3769 0.9000 0.3000 0.5000 0.7000 0.5190 0.8899 0.8188 0.9270
Table 16: Correlations of all the metrics with the Maintains Context ratings on Topical-Chat. All values with p ⥠0.05 are italicized. Several referenced metrics perform strongly on the system-level correlations, however USR strongly outperforms all other metrics on the turn-level correlations.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.1073 0.0713 0.0949 0.1270 0.1467 0.2500 0.1135 0.0747 0.0799 0.1372 0.1461 0.1508 0.2564 0.0910 1.0000 0.6000 0.4000 0.4000 0.8000 0.8000 0.0000 0.9956 0.2599 0.6816 0.6668 0.8413 0.9065 0.1710 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.1503 0.1010 0.2288 0.0243 0.1770 0.1877 0.1631 0.1660 0.2631 0.0139 0.1686 0.1569 0.8000 0.8000 0.6000 -0.2000 0.8000 0.0000 0.3808 0.8628 0.4349 0.2599 0.5173 0.2410 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.1805 0.6021 -0.0198 0.6065 0.2067 0.5625 -0.0164 0.5280 -0.4000 1.0000 -0.2108 1.0000 -0.2842 0.8202 -0.0178 0.8084
Table 17: Correlations of all the metrics with the Maintains Context ratings on PersonaChat. All values with p ⥠0.05 are italicized. Several referenced metrics perform strongly on the system-level correlations, however USR strongly outperforms all other metrics on the turn-level correlations.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.2523 0.3144 0.3184 0.2782 0.2322 0.3668 0.2946 0.2565 0.3343 0.3323 0.3247 0.3156 0.4391 0.2995 0.6000 0.7000 0.9000 0.9000 0.9000 0.9000 0.9000 0.5944 0.8197 0.8099 0.9047 0.8883 0.9398 0.8084 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.1989 0.1940 0.2101 0.1139 0.3512 0.3167 0.2111 0.2161 0.2050 0.1356 0.3725 0.3349 0.8000 0.7000 0.8000 0.5000 0.9000 0.9000 0.5512 0.9056 0.5694 0.5187 0.9108 0.8480 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.3189 0.3533 0.2006 0.4555 0.3337 0.4877 0.4110 0.4645 0.9000 0.7000 0.9000 1.0000 0.4663 0.9233 0.8685 0.9297
Table 18: Correlations of all the metrics with the Interesting ratings on Topical-Chat. All values with p ⥠0.05 are italicized.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.0473 -0.1081 -0.1048 -0.1247 -0.1359 -0.0458 -0.1456 0.0132 -0.0922 -0.1010 -0.1151 -0.1242 0.0116 -0.1354 1.0000 0.6000 0.4000 0.4000 0.8000 0.8000 0.0000 0.9956 0.2599 0.6816 0.6668 0.8413 0.9065 0.1710 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) -0.1778 -0.0141 -0.1883 -0.0882 0.0325 -0.0418 -0.2080 -0.0177 -0.1746 -0.0916 0.0491 -0.0245 0.8000 0.8000 0.6000 -0.2000 0.8000 0.0000 0.3808 0.8628 0.4349 0.2599 0.5173 0.2410 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR -0.1045 0.0606 -0.0022 0.0315 -0.1007 0.2634 -0.0039 0.0171 -0.4000 1.0000 -0.2108 1.0000 -0.2842 0.8202 -0.0178 0.8084
Table 19: Correlations of all the metrics with the Interesting ratings on PersonaChat. All values with p ⥠0.05 are italicized.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.1495 0.2888 0.2819 0.2442 0.2126 0.3328 0.3099 0.1485 0.3033 0.3066 0.3106 0.3096 0.3909 0.3273 0.6000 0.7000 0.9000 0.9000 0.9000 0.9000 0.9000 0.5970 0.8357 0.8309 0.9259 0.9084 0.9534 0.8333 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.2327 0.1812 0.2294 0.0986 0.2847 0.2909 0.2306 0.1827 0.2111 0.1145 0.2947 0.3167 0.8000 0.7000 0.8000 0.5000 0.9000 0.9000 0.5874 0.9151 0.5917 0.5397 0.9308 0.8703 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR 0.2195 0.2285 0.2220 0.3175 0.2261 0.4179 0.4468 0.3353 0.9000 0.7000 0.9000 1.0000 0.5070 0.9155 0.8884 0.9469
Table 20: Correlations of all the metrics with the Uses Knowledge ratings on Topical-Chat. All values with p ⥠0.05 are italicized.
Metric Name Turn-Level Correlation System-Level Correlation Pearson Spearman Spearman Pearson Word-Overlap Metrics F-1 BLEU-1 BLEU-2 BLEU-3 BLEU-4 METEOR ROUGE-L 0.0869 0.0737 0.1083 0.0999 0.0698 0.1678 0.0710 0.1056 0.0729 0.0722 0.0594 0.0528 0.1719 0.0632 1.0000 0.6000 0.4000 0.4000 0.8000 0.8000 0.0000 0.9956 0.2599 0.6816 0.6668 0.8413 0.9065 0.1710 Embedding-Based Metrics Greedy Matching Embedding Average Vector Extrema Skip-Thought BERTScore (base) BERTScore (large) 0.0382 0.0402 0.0564 -0.0686 0.0719 0.0271 0.0057 0.0618 -0.0008 -0.0609 0.0465 0.0094 0.8000 0.8000 0.6000 -0.2000 0.8000 0.0000 0.3808 0.8628 0.4349 0.2599 0.5173 0.2410 Reference Free Metrics USR-MLM USR-DR (x = c) USR-DR (x = f) USR -0.0782 0.4508 -0.0927 0.4027 -0.0756 0.6309 -0.0903 0.3177 -0.4000 1.0000 -0.2108 1.0000 -0.2842 0.8202 -0.0178 0.8084
Table 21: Correlations of all the metrics with the Uses Knowledge ratings on PersonaChat. All values with p ⥠0.05 are italicized. | {
"id": "1902.00098"
} |
2005.00614 | Multi-Dimensional Gender Bias Classification | Machine learning models are trained to find patterns in data. NLP models can
inadvertently learn socially undesirable patterns when training on gender
biased text. In this work, we propose a general framework that decomposes
gender bias in text along several pragmatic and semantic dimensions: bias from
the gender of the person being spoken about, bias from the gender of the person
being spoken to, and bias from the gender of the speaker. Using this
fine-grained framework, we automatically annotate eight large scale datasets
with gender information. In addition, we collect a novel, crowdsourced
evaluation benchmark of utterance-level gender rewrites. Distinguishing between
gender bias along multiple dimensions is important, as it enables us to train
finer-grained gender bias classifiers. We show our classifiers prove valuable
for a variety of important applications, such as controlling for gender bias in
generative models, detecting gender bias in arbitrary text, and shed light on
offensive language in terms of genderedness. | http://arxiv.org/pdf/2005.00614 | Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, Adina Williams | cs.CL | null | null | cs.CL | 20200501 | 20200501 | 0 2 0 2
y a M 1 ] L C . s c [
1 v 4 1 6 0 0 . 5 0 0 2 : v i X r a
# Multi-Dimensional Gender Bias Classiï¬cation
Emily Dinanâ, Angela Fanââ , Ledell Wu, Jason Weston, Douwe Kiela, Adina Williams Facebook AI Research â Laboratoire Lorrain dâInformatique et Applications (LORIA)
# Abstract
Machine learning models are trained to ï¬nd patterns in data. NLP models can inadver- tently learn socially undesirable patterns when In this work, training on gender biased text. we propose a general framework that decom- poses gender bias in text along several prag- matic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender of the person being spoken to, and bias from the gender of the speaker. Using this ï¬ne-grained framework, we auto- matically annotate eight large scale datasets with gender information. In addition, we col- lect a novel, crowdsourced evaluation bench- mark of utterance-level gender rewrites. Dis- tinguishing between gender bias along multi- ple dimensions is important, as it enables us to train ï¬ner-grained gender bias classiï¬ers. We show our classiï¬ers prove valuable for a vari- ety of important applications, such as control- ling for gender bias in generative models, de- tecting gender bias in arbitrary text, and shed light on offensive language in terms of gen- deredness.
# Introduction
When | first started dating him, my husband had aiman bun just like yours! J Lo speaking about speaking as speaking to
Figure 1: Framework for Gender Bias in Dialogue. We propose a framework separating gendered language based on who you are speaking ABOUT, speaking TO, and speaking AS.
gender biases can affect downstream applicationsâ sometimes even leading to poor user experiencesâ understanding and mitigating gender bias is an im- portant step towards making NLP tools and models safer, more equitable, and more fair. We provide a ï¬ner-grained framework for this purpose, analyze the presence of gender bias in models and data, and empower others by releasing tools that can be employed to address these issues for numerous text-based use-cases.
Language is a social behavior, and as such, it is a primary means by which people communicate, express their identities, and socially categorize themselves and others. Such social information is present in the words we write and, consequently, in the text we use to train our NLP models. In particular, models often can unwittingly learn neg- ative associations about protected groups present in their training data and propagate them. In partic- ular, NLP models often learn biases against others based on their gender (Bolukbasi et al., 2016; Hovy and Spruit, 2016; Caliskan et al., 2017; Rudinger et al., 2017; Garg et al., 2018; Gonen and Gold- berg, 2019; Dinan et al., 2019a). Since unwanted
While many works have explored methods for removing gender bias from text (Bolukbasi et al., 2016; Emami et al., 2019; Maudslay et al., 2019; Dinan et al., 2019a; Kaneko and Bollegala, 2019; Zmigrod et al., 2019; Ravfogel et al., 2020), no extant work on classifying gender or removing gen- der bias has incorporated facts about how humans collaboratively and socially construct our language and identities. We propose a pragmatic and se- mantic framework for measuring bias along three dimensions that builds on knowledge of the con- versational and performative aspects of gender, as illustrated in Figure 1. Recognizing these dimen- sions is important, because gender along each di- mension can affect text differently, for example,
# âJoint ï¬rst authors.
by modifying word choice or imposing different preferences in how we construct sentences.
Decomposing gender into separate dimensions also allows for better identiï¬cation of gender bias, which subsequently enables us to train a suite of classiï¬ers for detecting different kinds of gender bias in text. We train several classiï¬ers on freely available data that we annotate with gender infor- mation along our dimensions. We also collect a new crowdsourced dataset (MDGENDER) for bet- ter evaluation of gender classiï¬er performance. The classiï¬ers we train have a wide variety of poten- tial applications. We evaluate them on three: con- trolling the genderedness of generated text, detect- ing gendered text, and examining the relationship between gender bias and offensive language. In addition, we expect them to be useful in future for many text applications such as detecting gen- der imbalance in newly created training corpora or model-generated text.
In this work, we make four main contribu- tions: we propose a multi-dimensional framework (ABOUT, AS, TO) for measuring and mitigating gen- der bias in language and NLP models, we introduce an evaluation dataset for performing gender iden- tiï¬cation that contains utterances re-written from the perspective of a speciï¬c gender along all three dimensions, we train a suite of classiï¬ers capable of labeling gender in both a single and multitask set up, and ï¬nally we illustrate our classiï¬ersâ utility for several downstream applications. All datasets, annotations, and classiï¬ers will be released pub- licly to facilitate further research into the important problem of gender bias in language.
# 2 Related Work
Gender affects myriad aspects of NLP, including corpora, tasks, algorithms, and systems (Chang et al., 2019; Costa-juss`a, 2019; Sun et al., 2019). For example, statistical gender biases are ram- pant in word embeddings (Jurgens et al., 2012; Bolukbasi et al., 2016; Caliskan et al., 2017; Garg et al., 2018; Zhao et al., 2018b; Basta et al., 2019; Chaloner and Maldonado, 2019; Du et al., 2019; Gonen and Goldberg, 2019; Kaneko and Bollegala, 2019; Zhao et al., 2019)âeven multilingual ones (Gonen et al., 2019; Zhou et al., 2019)âand af- fect a wide range of downstream tasks including coreference resolution (Zhao et al., 2018a; Cao and Daum´e, 2019; Emami et al., 2019), part-of-speech and dependency parsing (Garimella et al., 2019),
unigram language modeling (Qian et al., 2019), ap- propriate turn-taking classiï¬cation (Lepp, 2019), relation extraction (Gaut et al., 2019), identiï¬cation of offensive content (Shariï¬rad and Matwin, 2019; Shariï¬rad et al., 2019), and machine translation (Stanovsky et al., 2019). Furthermore, translations are judged as having been produced by older and more male speakers than the original was (Hovy et al., 2020).
For dialogue text particularly, gender biases in training corpora have been found to be ampliï¬ed in machine learning models (Lee et al., 2019; Dinan et al., 2019a; Liu et al., 2019). While many of the works cited above propose methods of mitigating the unwanted effects of gender on text, Maudslay et al. (2019); Zmigrod et al. (2019); Dinan et al. (2019a) in particular rely on counterfactual data to alter the training distribution to offset gender-based statistical imbalances (see §4.1 for more discussion of training set imbalances). Also relevant is Kang et al. (2019, PASTEL), which introduces a paral- lel style corpus and shows gains on style-transfer across binary genders. In this work, we provide a clean new way to understand gender bias that extends to the dialogue use-case by independently investigating the contribution of author gender to data created by humans.
Most relevant to this work, Sap et al. (2019b) proposes a framework for modeling pragmatic as- pects of many social biases in text, such as intent to offend, for guiding discovery of new instances of social bias. These works focus on complemen- tary aspects of a larger goalânamely, making NLP safe and inclusive for everyoneâbut they differ in several ways. Here, we treat statistical gender bias in human or model generated text speciï¬cally, allotting it the focused and nuanced attention that such a complicated phenomenon deserves. Sap et al. (2019b) takes a different perspective, and aims to characterize the broader landscape of nega- tive stereotypes in social media text, an approach which can make parallels apparent across differ- ent types of socially harmful content. Moreover, they consider different pragmatic dimensions than they target negatively stereotyped com- we do: monsense implications in arguably innocuous state- ments, whereas we investigate pragmatic dimen- sions that straightforwardly map to conversational roles (i.e., topics, addressees, and authors of con- tent). As such, we believe the two frameworks to be fully compatible.
Also relevant is the intersectionality of gender identity, i.e., when gender non-additively interacts with other identity characteristics. Negative gen- der stereotyping is known to be weakened or re- inforced by the presence of other social factors, such as dialect (Tatman, 2017), class (Degaetano- Ortlieb, 2018) and race (Crenshaw, 1989). These differences have been found to affect gender classi- ï¬cation in images (Buolamwini and Gebru, 2018), and also in sentences encoders (May et al., 2019). We acknowledge that these are crucial considera- tions, but set them aside for follow-up work.
# 3 Dimensions of Gender Bias
Gender inï¬ltrates language differently depending on the conversational role played by the people using that language (see Figure 1). We propose a framework for decomposing gender bias into three separate dimensions: bias when speaking ABOUT someone, bias when speaking TO someone, and bias from speaking AS someone. In this section, we ï¬rst deï¬ne bias and gender, and then motivate and describe our three dimensions.
# 3.1 Deï¬nitions of Bias and Gender
Bias. In an ideal world, we would expect little dif- ference between texts describing men, women, and people with other gender identities, aside from the use of explicitly gendered words, like pronouns or names. A machine learning model, then, would be unable to pick up on statistical differences among gender labels (i.e., gender bias), because such dif- ferences would not exist. Unfortunately, we know this is not the case. For example, Table 1 pro- vides examples of adjectives, adverbs, and verbs that are more common in Wikipedia biographies of people of certain genders. This list was gen- erated by counting all verbs, adjectives, and ad- verbs (using a part-of-speech tagger from Honnibal and Montani (2017)) that appear in a large section of biographies of Wikipedia. We then computed P (word | gender)/P (word) for words that appear more than 500 times. The top over-represented verbs, adjectives, and adverbs using this calculated metric are displayed for each gender.
In an imagined future, a classiï¬er trained to iden- tify gendered text would have (close to) random performance on non-gender-biased future data, be- cause the future would be free of the statistical biases plaguing current-day data. These statistical biases are what make it possible for current-day
classiï¬ers to perform better than random chance. We know that current-day classiï¬ers are gender biased, because they achieve much better than ran- dom performance by learning distributional differ- ences in how current-day texts use gender; we show this in §5. These classiï¬ers learn to pick up on these statistical biases in text in addition to explicit gender markers (like she).1
Gender. Gender manifests itself in language in numerous ways. In this work, we are interested in gender as it is used in English when referring to people and other sentient agents, or when dis- cussing their identities, actions, or behaviors. We annotate gender with four potential values: mascu- line, feminine, neutral and unknown â which al- lows us to go beyond the oppositional male-female gender binary. We take the neutral category to contain characters with either non-binary gender identity, or an identity which is unspeciï¬ed for gen- der by deï¬nition (say, for a magic tree).2 We also include an unknown category for when there might be a gender identity at play, but the gender is not known or readily inferrable by crowdworkers from the text (e.g., in English, one would not be able to infer gender from just the short text âHello!â).
# 3.2 Gender in Multiple Dimensions
Exploring genderâs inï¬uence on language has been a fruitful and active area of research in many dis- ciplines, each of which brings its own unique per- spectives to the topic (Lakoff, 1973; Butler, 1990; Cameron, 1990; Lakoff, 1990; Swann, 1992; Craw- ford, 1995; Weatherall, 2002; Sunderland, 2006; Eckert and McConnell-Ginet, 2013; Mills, 2014; Coates, 2015; Talbot, 2019). In this section, we propose a framework that decomposes genderâs contribution along three conversational dimensions to enable ï¬ner-grained classiï¬cation of genderâs effects on text from multiple domains.
Speaking About: Gender of the Topic. Itâs well known that we change how we speak about others depending on who they are (Hymes, 1974; Rick- ford and McNair-Knox, 1994), and, in particular,
1We caution the reader that âthe term bias is often used to refer to demographic disparities in algorithmic systems that are objectionable for societal reasonsâ (Barocas et al., 2020, 14); we restrict our use of bias to its traditional deï¬nition here. 2We fully acknowledge the existence and importance of all chosen gender identitiesâincluding, but not limited to non- binary, gender ï¬uid, poly-gender, pan-gender, alia-gender, agenderâfor the end goal of achieving accessible, inclusive, and fair NLP. However, these topics require a more nuanced investigation than is feasible using na¨ıve crowdworkers.
VERBS ADJECTIVES ADVERBS M F N M F N M F N ï¬nance presiding oversee survives disagreed obliged modelling ï¬lling steamed increases optional tropical volcanic transgender glacial bench abundant sicilian variable 24-hour cooking excavated optimistic reproductive malay weird akin vain feminist lesbian uneven ethnically romantically westward intimately aground emotionally soundly sexually upstairs happily alongside socially artistically anymore randomly really hotly overhead positively lesser variant sandy convincingly incredibly actor kisses towed guest range inland low automatically typically faster normally round usually slightly dissipated descriptive vary engined tailed feminine female reassigned kissing danced studies forested upgraded ordained factual electriï¬ed sexy blonde pregnant pledged agreeing
Table 1: Bias in Wikipedia. We look at the most over-represented words in biographies of men and women, respectively, in Wikipedia. We also compare to a set of over-represented words in gender-neutral pages. We use a part-of-speech tagger (Honnibal and Montani, 2017) and limit our analysis to words that appear at least 500 times.
based on their gender (Lakoff, 1973). People of- ten change how they refer to others depending on the gender identity of the individual being spoken about (Eckert and McConnell-Ginet, 1992). For ex- ample, adjectives which describe women have been shown to differ from those used to describe men in numerous situations (Trix and Psenka, 2003; Gaucher et al., 2011; Moon, 2014; Hoyle et al., 2019), as do verbs that take nouns referring to men as opposed to women (Guerin, 1994; Hoyle et al., 2019). Furthermore, metaphorical extensionsâ which can shed light on how we construct con- ceptual categories (Lakoff and Johnson, 1980)âto men and women starkly differ (Fontecha and Cata- lan 2003; Holmes 2013, 325; Amery et al. 2015).
less likely to be intended to offend). Like race, gen- der is often described as a âfundamentalâ category for self-identiï¬cation and self-description (Banaji and Prentice, 1994, 315), with men, women, and non-binary people differing in how they actively create and perceive of their own gender identities (West and Zimmerman, 1987). Who someone is speaking as strongly affect what they may say and how they say it, down to the level of their choices of adjectives and verbs in self-descriptions (Charyton and Snelbecker, 2007; Wetzel et al., 2012). Even children as young as two dislike when adults mis- attribute a gender to them (Money and Ehrhardt, 1972; Bussey, 1986), suggesting that gender is in- deed an important component of identity.
Speaking To: Gender of the Addressee. People often adjust their speech based on who they are speaking withâtheir addressee(s)âto show soli- darity with their audience or express social distance (Wish et al., 1976; Bell, 1984; Hovy, 1987; Rick- ford and McNair-Knox, 1994; Bell and Johnson, 1997; Eckert and Rickford, 2001). We expect the addresseeâs gender to affect, for example, the way a man might communicate with another man about styling their hair would differ from how he might communicate with a woman about the same topic.
Speaking As: Gender of the Speaker. People react to content differently depending on who cre- ated it.3 For example, Sap et al. (2019a) ï¬nd that na¨ıve annotators are much less likely to ï¬ag as offensive certain content referring to race, if they have been told the author of that content speaks a dialect that signals in-group membership (i.e., is
Our Speaking As dimension builds on prior work on author attribution, a concept purported to hail from English logician Augustus de Morgan (Mendenhall, 1887), who suggested that authors could be distinguished based on the average word length of their texts. Since then, sample statistics and NLP tools have been used for applications such as settling authorship disputes (Mosteller and Wal- lace, 1984), forensic investigations (Frantzeskou et al., 2006; Rocha et al., 2016; Peng et al., 2016), or extracting a stylistic ï¬ngerprint from text that enables the author to be identiï¬ed (Stamatatos et al., 1999; Luyckx and Daelemans, 2008; Arga- mon et al., 2009; Stamatatos, 2009; Raghavan et al., 2010; Cheng et al., 2011; Stamatatos, 2017). More speciï¬cally, automatic gender attribution has re- ported many successes (Koppel et al., 2002; Koolen and van Cranenburgh, 2017; Qian, 2019), often driven by the fact that authors of speciï¬c genders tend to prefer producing content about topics that belie those gender (Sarawgi et al., 2011). Given
3We will interchangeably use the terms speaker and au- thor here to refer to a creator of textual content throughout.
Dataset M F N U Dim Training Data Wikipedia Image Chat Funpedia Wizard Yelp ConvAI2 ConvAI2 OpenSub OpenSub LIGHT LIGHT - 10M 1M 1M ABOUT 15K 154K - 39K ABOUT - 1K 3K 19K ABOUT - 1K 1K 6K ABOUT - 1M - 1M AS 86K 22K - 22K AS 86K 22K 22K - TO 131K AS 149K 69K - 209K TO 45K - 95K 83K - 8K 13K AS 83K - 8K 13K TO Evaluation Data MDGENDER MDGENDER MDGENDER 384 396 411 401 371 382 - - - - - - ABOUT AS TO
Table 2: Dataset Statistics. Dataset statistics on the eight training datasets and new evaluation dataset, MD- GENDERwith respect to each label.
this, we might additionally expect differences be- tween genders along our Speaking As and Speaking About dimensions to interact, further motivating them as separate dimensions.
# 4 Creating Gender Classiï¬ers
Previous work on gender bias classiï¬cation has been predominantly single-taskâoften supervised on the task of analogyâand relied mainly on word lists, that are binarily (Bolukbasi et al., 2016; Zhao et al., 2018b, 2019; Gonen and Goldberg, 2019)âand sometimes also explicitly (Caliskan et al., 2017; Hoyle et al., 2019)âgendered. While wordlist-based approaches provided a solid start on attacking the problem of gender bias, they are insufï¬cient for multiple reasons. First, they con- ï¬ate different conversational dimensions of gender bias, and are therefore unable to detect the subtle pragmatic differences that are of interest here. Fur- ther, all existing gendered word lists for English are limited, by construction, to explicitly binarily gendered words (e.g., sister vs. brother). Not only is binary gender wholly inadequate for the task, but restricting to explicitly gendered words is itself problematic, since we know that many words arenât explicitly gendered, but are strongly statistically gendered (see Table 1). Rather than solely relying on a brittle binary gender label from a global word list, our approach will also allow for gender bias to be determined ï¬exibly over multiple words in context (Note: this will be crucial for examples that only receive gendered interpretations when in a par-
ticular context; for example, âbagâ disparagingly refers to an elderly woman, but only in the context of âoldâ, and âcupâ hints at masculine gender only in the context of âwearâ).
Instead, we develop classiï¬ers that can decom- pose gender bias over full sentences into semantic and/or pragmatic dimensions (about/to/as), addi- tionally including gender information that (i) falls outside the male-female binary, (ii) can be contextu- ally determined, and (iii) is statistically as opposed to explicitly gendered. In the subsequent sections, we provide details for training these classiï¬ers as well as details regarding the annotation of data for such training.
# 4.1 Models
We outline how these classiï¬ers are trained to pre- dict gender bias along the three dimensions, provid- ing details of the classiï¬er architectures as well as how the data labels are used. We train single-task and a multi-task classiï¬ers for different purposes: the former will leverage gender information from each contextual dimension individually, and the lat- ter should have broad applicability across all three.
Single Task Setting. In the single-task setting, we predict masculine, feminine, or neutral for each dimension â allowing the classiï¬er to predict any of the three labels for the unknown category).
Multitask Setting. To obtain a classiï¬er capable of multi-tasking across the about/to/as dimensions, we train a model to score and rank a set of pos- sible classes given textual input. For example, if given Hey, John, Iâm Jane!, the model is trained to rank elements of both the sets {TO:masculine, TO:feminine, TO:neutral} and {AS:masculine, AS:feminine, AS:neutral} and produce appropriate labels TO:masculine and AS:feminine. Models are trained and evaluated on the annotated datasets.
Model Architectures. For single task and mul- titask models, we use a pretrained Transformer (Vaswani et al., 2017) to ï¬nd representations for the textual input and set of classes. Classes are scoredâand then rankedâby taking a dot product between the representations of the textual input and a given class, following the bi-encoder architecture (Humeau et al., 2019) trained with cross entropy. The same architecture and pre-training as BERT (Devlin et al., 2018) are used throughout. We use ParlAI for model training (Miller et al., 2017). We will release data and models.
Model About To As Avg. M F Avg. M F Avg. M F All Avg. SingleTask ABOUT 70.43 63.54 77.31 44.44 36.25 52.62 67.75 69.19 66.31 60.87 49.97 SingleTask TO 49.39 95.38 3.4 50.12 99.74 0.5 57.27 67.15 47.38 78.21 70.71 85.71 60.82 46.97 51.3 SingleTask AS 42.4 62.59 64.32 60.85 78.25 73.24 83.25 72.15 66.67 77.63 67.13 MultiTask 50.41 100 0.81
Table 3: Accuracy on the novel evaluation dataset MDGENDER comparing single task classiï¬ers to our multi- task classiï¬ers. We report accuracy on the masculine and the feminine classes, as well as the average of these two metrics. Finally, we report the average (of the M-F averages) across the three dimensions. MDGENDERwas collected to enable evaluation on the masculine and femninine classes, for which much of the training data is noisy.
87.4 86.65 55.2 77.22 ABOUT Wikipedia 36.48 83.56 33.22 51.09 ABOUT Image Chat 75.82 82.24 70.52 76.2 ABOUT Funpedia 64.51 83.33 81.82 76.55 ABOUT Wizard - 73.92 65.08 Yelp - 65.65 ConvAI2 - ConvAI2 45.98 61.28 - OpenSubtitles 56.95 59.31 - OpenSubtitles 53.73 60.29 - 51.57 65.72 LIGHT - 51.92 68.48 LIGHT
data by preserving unknown samples. Additionally, we note that during training, we balance the data across the masculine, feminine, and neutral classes by oversampling from classes with fewer examples. We do this because much of the data is highly im- balanced: for example, over > 80% of examples from Wikipedia are labeled masculine (Table 2). We also early stop on the average accuracy across all three classes.
# 4.2 Data
Next, we describe how we annotated our training data, including both the 8 existing datasets and our novel evaluation dataset, MDGENDER.
Table 4: Performance of the multitask model on the test sets from our training data. We evaluate the multi-task model on the test sets for the training datasets. We report accuracy on each (gold) labelâ masculine, feminine, and neutralâand the average of the three. We do not report accuracy on imputed labels.
Model Labels. Many of our annotated datasets contain cases where the ABOUT, AS, TO labels are unknown. We retain these examples during train- ing, but use two techniques to handle them. If the true label is unknown (for example, in Wikipedia, we do not know the gender of the author, so the as dimension is unknown), we either impute it or provide a label at random. For data for which the about label is unknown, we impute it using a clas- siï¬er trained only on data for which this label is present. For data for which the to or as label is unknown, we provide a label at random, choosing between masculine and feminine. From epoch to epoch, we switch these arbitrarily assigned labels so that the model learns to assign the masculine and feminine labels with roughly equal probability to examples for which the gender is unknown. This la- bel ï¬ipping allows us to retain greater quantities of
Annotation of Existing Datasets. To enable training our classiï¬ers, we leverage a variety of ex- isting datasets. Since one of our main contributions is a suite of open-source general-purpose gender bias classiï¬ers, we selected datasets for training based on three criteria: inclusion of recoverable information about one or more of our dimensions, diversity in textual domain, and high quality, open data use. Once we narrowed our search to free, open source and freely available datasets, we maxi- mized domain diversity by selecting datasets with high quality annotations along at least one of our dimensions (e.g., dialogue datasets have informa- tion on author and addressee gender, biographies have information on topic gender, and restaurant reviews have information on author gender).
The datasets are: Wikipedia, Funpedia (a less formal version of Wikipedia) (Miller et al., 2017), Wizard of Wikipedia (knowledge-based conversa- tion) (Dinan et al., 2019d), Yelp Reviews4, Con- vAI2 (chit-chat dialogue) (Dinan et al., 2019c), ImageChat (chit-chat dialogue about an image) (Shuster et al., 2018), OpenSubtitles (dialogue from
4https://yelp.com/dataset
movies) (Lison and Tiedemann, 2016), and LIGHT (chit-chat fantasy dialogue) (Urbanek et al., 2019). We use data from multiple domains to represent different styles of textâfrom formal writing to chitchatâand different vocabularies. Further, sev- eral datasets are known to contain statistical im- balances and biases with regards to how people of different genders are described and represented, such as Wikipedia and LIGHT. Table 2 presents dataset statistics; the full detailed descriptions and more information on how labels were inferred or imputed in Appendix A.
Some of the datasets contain gender annotations provided by existing work. For example, classiï¬ers trained for style transfer algorithms have previously annotated the gender of Yelp reviewers (Subrama- nian et al., 2018). In other datasets, we infer the gender labels. For example, in datasets where users are ï¬rst assigned a persona to represent before chatting, often the gender of the persona is pre- determined. In some cases gender annotations are not provided. In these cases, we sometimes impute the label if we are able to do so with high conï¬- dence. More details regarding how this is done can be found in Appendix A.
Collected Evaluation Dataset. We use a variety of datasets to train classiï¬ers so they can be reliable on all dimensions across multiple domains. How- ever, this weakly supervised data provides some- what noisy training signal â particularly for the masculine and feminine classes â as the labels are automatically annotated or inferred. To enable re- liable evaluation, we collect a specialized corpus, MDGENDER, which acts as a gold-labeled dataset. First, we collect conversations between two speakers. Each speaker is provided with a per- sona description containing gender information, then tasked with adopting that persona and having a conversation.5 They are also provided with small sections of a biography from Wikipedia as the con- versation topic. We observe that using biographies to frame the conversation encourages crowdwork- ers to discuss about/to/as gender information.
To maximize the about/to/as gender information contained in each utterance, we perform a second annotation over each utterance in the dataset. In this next phase, we ask annotators to rewrite each
5We note that crowdworkers might perform genders in a non-authentic or idiosyncratic way when the persona gender doesnât match their gender. This would be an interesting avenue to explore in follow up work.
Model Performance M F N Avg. Multi-Task 87.4 86.65 55.2 77.22 Wikipedia Only 88.65 88.22 68.58 81.82 86.94 74.62 74.33 78.63 -gend words -gend words and names 82.10 82.52 55.21 73.28
Ablation of gender classiï¬ers on the Table 5: Wikipedia test set. We report the model accuracy on the masculine, feminine, and neutral classes, as well as the average accuracy across them. We train classiï¬ers (1) on the entire text (2) after removing explicitly gen- dered words using a word list and (3) after removing gendered words and names. While masking out gen- dered words and names makes classiï¬cation more chal- lenging, the model still obtains high accuracy.
utterance to make it very clear that they are speak- ing ABOUT a man or a woman, speaking AS a man or a woman, and speaking TO a man or a woman. For example, given the utterance Hey, how are you today? I just got off work, a valid rewrite to make the utterance ABOUT a woman could be: Hey, I went for a coffee with my friend and her dog as the her indicates a woman. A rewrite such as I went for a coffee with my friend is not acceptable as it does not mention that the friend is a woman. After each rewritten utterance, evaluators label how conï¬dent they are that someone else would predict that the text is spoken about, spoken as, or spoken to a man or woman. For the rewritten utterance I just got back from football practice, many people would guess that the utterance was said by a man, as more men play football then women, but one cannot be certain (as women also play or coach football). An example instance of the task is shown in Table 9 and the interface is shown in Appendix Figure 2.
# 5 Results
# about/to/as Gender Classiï¬cation
Quality of Classiï¬cation Models. We compare models that classify along a single dimension com- pared to one that multitasks across all three. To enable high quality evaluation along our proposed three dimensions, we use MDGENDERto evaluate. We measure the percentage accuracy for masculine, feminine, and neutral classes. We do not evaluate on the unknown class, as it is not modeled. Classi- ï¬er results on MDGENDER are shown in Table 3. We ï¬nd that the multitask classiï¬er has the best average performance across all dimensions, with a
small hit to single-task performance in the about and as dimensions. As expected, the single task models are unable to transfer to other dimensions: this is another indication that gender information manifests differently along each dimension. Train- ing for a single task allows models to specialize to detect and understand the nuances of text that indicates bias along one of the dimensions. How- ever, in a multitask setting, models see additional data along the other dimensions and can possibly learn to generalize to understand what language characterizes bias across multiple dimensions.
Performance by Dataset. The gender classiï¬ers along the TO, AS and ABOUT dimensions are trained on a variety of different existing datasets across multiple domains. We analyze which datasets are the most difï¬cult to classify correctly in Table 4. We ï¬nd that ABOUT is the easiest di- mension, particularly data from Wikipedia or based on Wikipedia, such as Funpedia and Wizard of Wikipedia, achieving almost 80% accuracy.
The TO and AS directions are both more difï¬cult, likely as they involve more context clues rather than relying on textual attributes and surface forms such as she and he to predict correctly. We ï¬nd that generally the datasets have similar performance, except Yelp restaurant reviews, which has a 70% accuracy on predicting AS.
Analysis of Classiï¬er Performance. We break down choices made during classiï¬er training by comparing different models on the Wikipedia (ABOUT dimension). We train a single classiï¬er of ABOUT, and train with the variations of mask- ing out gendered words and names. As gendered words such as her and names are very correlated with gender, masking can force models into a more challenging but nuanced setting where they must learn to detect bias from the remaining text. We present the results in Table 5. As expected, mask- ing out gendered words and names makes it harder to classify the text, but the model is still able to obtain high accuracy.
# 6 Applications
We demonstrate the broad utility of our multi-task classiï¬er by applying them to three different down- stream applications. First, we show that we can use the classiï¬er to control the genderedness of gener- ated text. Next, we demonstrate its utility in biased text detection by applying it Wikipedia to ï¬nd the
Generation Statistics Control Token # Gend. words Pct. masc. TO:feminine AS:feminine ABOUT:feminine Word list, feminine 246 227 1151 1158 48.0 51.0 19.72 18.22 TO:masculine AS:masculine ABOUT:masculine Word list, masculine 372 402 800 1459 75.0 71.6 91.62 94.8
Table 6: Word statistics measured on text generated from 1000 different seed utterances from ConvAI2 for each control token, as well as for our baseline model trained using word lists. We measure the number of gendered words (from a word list) that appear in the generated text as well as the percentage of masculine- gendered words among all gendered words. Sequences are generated with top-k sampling, k = 10, with a beam size of 10 and 3-gram blocking.
most gendered biographies. Finally, we evaluate our classiï¬er on an offensive text detection dataset to explore the interplay between offensive content and genderedness.
# 6.1 Controllable Generation
By learning to associate control variables with tex- tual properties, generative models can be controlled at inference time to adjust the generated text based on the desired properties of the user. This has been applied to a variety of different cases, includ- ing generating text of different lengths (Fan et al., 2017), generating questions in chit-chat (See et al., 2019), and reducing bias (Dinan et al., 2019a).
Previous work in gender bias used word lists to control bias, but found that word lists were lim- ited in coverage and applicability to a variety of domains (Dinan et al., 2019a). However, by de- composing bias along the TO, AS, AND ABOUT dimensions, ï¬ne-grained control models can be trained to control these different dimensions sep- arately. This is important in various applications â for example, one may want to train a chatbot with a speciï¬c personality, leaving the AS dimen- sion untouched, but want the bot to speak to and about everyone in a similar way. In this application, we train three different generative models, each of which controls generation for gender along one of the TO, AS, and ABOUT dimensions.
Methods We generate training data by taking the multi-task classiï¬er and using it to classify 250,000
textual utterances from Reddit, using a previously existing dataset extracted and obtained by a third party and made available on pushshift.io. This dataset was chosen as it is conversational in na- ture, but not one of the datasets that the classiï¬er was trained on. We then use the labels from the classiï¬er to prepend the utterances with tokens that indicate gender label along the dimension. For example for the ABOUT dimension, we prepend utterances with tokens ABOUT:<gender label>, where <gender label> denotes the label assigned to the utterance via the classiï¬er. At inference time, we choose control tokens to manipulate the text generated by the model.
We also compare to a baseline for which the control tokens are determined by a word list: if an utterance contains more masculine-gendered words than feminine-gendered words from the word list it is labeled as masculine (and vice versa for femi- nine); if it contains no gendered words or an equal number of masculine and feminine gendered words, it is labeled as neutral. Following Dinan et al. (2019a), we use several existing word lists (Zhao et al., 2018b, 2019; Hoyle et al., 2019).
For training, we ï¬ne-tune a large, Transformer sequence-to-sequence model pretrained on Reddit. At inference time, we generate text via top-k sam- pling (Fan et al., 2018), with k = 10 with a beam size of 10, and 3-gram blocking. We force the model to generate a minimum of 20 BPE tokens.
Qualitative Results. Example generations from various control tokens (as well as the word list base- line) are shown in Table 10 in the Appendix. These examples illustrate how controlling for gender over different dimensions yields extremely varied re- sponses, and why limiting control to word lists may not be enough to capture these different aspects of gender. For example, adjusting AS to âfeminineâ causes the model to write text such as Awwww, that sounds wonderful, whereas setting AS to masculine generates You can do it bro!
Quantitative Results. Quantitatively, we evalu- ate by generating 1000 utterances seeded from Con- vAI2 using both masculine and feminine control tokens and counting the number of gendered words from a gendered word list that also appear in the generated text. Results are shown in Table 6.
Utterances generated using about control tokens contain many more gendered words. One might ex- pect this, as when one speaks about another person,
Percentage of masculine-gendered text Dim ABOUT TO AS Safe 81.03 44.68 42.29 Offensive 70.66 60.15 65.12 t-statistic 5.49 -22.02 -14.56 p-value 5.19e-08 1.94e-46 1.05e-99
Table 7: Genderedness of offensive content. We mea- sure the percentage of utterances in both the âsafeâ and âoffensiveâ classes that are classiï¬ed as masculine- gendered, among utterances that are classiï¬ed as ei- ther masculine- or feminine-gendered. We test the hy- pothesis that safe and offensive classes distributions of masculine-gendered utterances differ using a t-test and report the p-value for each dimension.
one may refer to them using gendered pronouns. We observe that for the control tokens TO:feminine and AS:feminine, the utterances contain a roughly equal number of masculine-gendered and feminine- gendered words. This is likely due to the dis- tribution of such gendered words in the training data for the classiï¬er in the to and as dimensions. The ConvAI2 and Opensubtitles data show similar trends: on the ConvAI2 data, fewer than half of the gendered words in SELF:feminine utterances are feminine-gendered, and on the Opensubtitles data, the ratio drops to one-third.6 By design, the word list baseline has the best control over whether the generations contain words from this word list. These results, as well as the previously described qualitative results, demonstrate why evaluating and controlling with word lists is insufï¬cient â word lists do not capture all aspects of gender.
# 6.2 Bias Detection
Creating classiï¬ers along different dimensions can be used to detect gender bias in any form of text, beyond dialogue itself. We investigate using the trained classiï¬ers to detect the most gendered sen- tences and paragraphs in various documents, and analyze what portions of the text drive the clas- siï¬cation decision. Such methods could be very useful in practical applications such as detecting, removing, and rewriting biased writing.
Methods. We apply our classiï¬cation models by detecting the most gendered biographies in Wikipedia. We use the multitask model to score each paragraph among a set of 65, 000 Wikipedia
6The Opensubtitles data recalls the Bechdel test, which asks âwhether a work [of ï¬ction] features at least two women who talk to each other about something other than a man.â (Wikipedia contributors, 2020)
Masculine genderedness scores
Biographies Average Median All Men Women 0.74 0.90 0.042 0.98 0.99 0.00085
Table 8: Masculine genderedness scores of Wikipedia bios. We calculate a masculine gen- deredness score for a Wikipedia page by taking the median px = P (x â ABOUT:masculine) among all paragraphs x in the page, where P is the probability distribution given by the classiï¬er. We report the average and median scores for all biographies, as well as for biographies of men and women respectively.
biographies, where the score represents the proba- bility that the paragraph is masculine in the about dimension. We calculate a masculine genderedness score for the page by taking the median among all paragraphs in the page.
Quantitative Results. We report the average and median masculine genderedness scores for all bi- ographies in the set of 65, 000 that ï¬t this crite- ria, and for biographies of men and women in Ta- ble 8. We observe that while on average, the biogra- phies skew largely toward masculine (the average score is 0.74), the classiï¬er is more conï¬dent in the femininity of pages about women than it is in the masculinity of pages about men: the average fem- inine genderedness score for pages about women is 1 â 0.042 = 0.958, while the average masculine genderedness score for pages about men is 0.90. This might suggest that biographies about women contain more gendered text on average.
Qualitative Results. We show the pagesâ containing a minimum of 25 paragraphsâwith the minimum score (most feminine-gendered biogra- phies) and the maximum score (most masculine- gendered biographies) in Table 11 in the Appendix. We observe that the most masculine-gendered bi- ographies are mostly composers and conductors, likely due to the historical gender imbalance in these occupations. Amongst the most feminine gen- dered biographies, there are many popular actresses from the mid-20th century. By examining the most gendered paragraph in these biographies, anecdo- tally we ï¬nd these are often the paragraphs describ- ing the subjectâs life after retirement. For example, the most gendered paragraph in Linda Darnellâs biography contains the line Because of her then-
husband, Philip Liebmann, Darnell put her career on a hiatus, which clearly reï¬ects negative soci- etal stereotypes about the importance of womenâs careers (Hiller and Philliber, 1982; Duxbury and Higgins, 1991; Pavalko and Elder Jr, 1993; Byrne and Barling, 2017; Reid, 2018).
# 6.3 Offensive Content
Finally, the interplay and correlation between gen- dered text and offensive text is an interesting area for study, as many examples of gendered textâbe they explicitly or contextually genderedâare dis- paraging or have negative connotations (e.g., âcat ï¬ghtâ and âdollâ). There is a growing body of research on detecting offensive language in text. In particular, there has been recent work aimed at improving the detection offensive language in the context of dialogue (Dinan et al., 2019b). We investigate this relationship by examining the dis- tribution of labels output by our gender classiï¬er on data that is labeled for offensiveness.
Methods. For this application, we use the Stan- dard training and evaluation dataset created and de- scribed in Dinan et al. (2019b). We examine the re- lationship between genderedness and offensive ut- terances by labeling the gender of utterances (along the three dimensions) in both the âsafeâ and âof- fensiveâ classes in this dataset using our multitask classiï¬er. We then measure the ratio of utterances labeled as masculine-gendered among utterances labeled as either masculine- or feminine-gendered.
Quantitative Results. Results are shown in Ta- ble 7. We observe that, on the self and partner dimensions, the safe data is more likely to be la- beled as feminine and the offensive data is more likely to be labeled as masculine. We test the hy- pothesis that these distributions are unequal using a T-test, and ï¬nd that these results are signiï¬cant.
Qualitative Results. To explore how offensive content differs when it is ABOUT women and ABOUT men, we identiï¬ed utterances for which the model had high conï¬dence (probability > 0.70) that the utterance was feminine or masculine along the ABOUT dimension. After excluding stop words and words shorter than three characters, we hand- annotated the top 20 most frequent words as being explicitly gendered, a swear word, and/or bearing sexual connotation. For words classiï¬ed as mas- culine, 25% of the masculine words fell into these
categories , whereas for words classiï¬ed as femi- nine, 75% of the words fell into these categories.
# 7 Conclusion
We propose a general framework for analyzing gen- der bias in text by decomposing it along three di- mensions: (1) gender of the person or people be- ing spoken about (ABOUT), (2) gender of the ad- dressee (TO), and (2) gender of the speaker (AS). We show that classiï¬ers can detect bias along each of these dimensions. We annotate eight large ex- isting datasets along our dimensions, and also con- tribute a high quality evaluation dataset for this task. We demonstrate the broad utility of our classiï¬ers by showing strong performance on controlling bias in generated dialogue, detecting genderedness in text such as Wikipedia, and highlighting gender differences in offensive text classiï¬cation.
# References
Fran Amery, Stephen Bates, Laura Jenkins, and Heather Savigny. 2015. Metaphors on women in academia: A review of the literature, 2004-2013. At the center: Feminism, social science and knowledge, 20:247 `A267.
Shlomo Argamon, Moshe Koppel, James W Pen- nebaker, and Jonathan Schler. 2009. Automatically proï¬ling the author of an anonymous text. Commu- nications of the ACM, 52(2):119â123.
David Bamman and Noah A Smith. 2014. Unsuper- vised discovery of biographical structure from text. Transactions of the Association for Computational Linguistics, 2:363â376.
Mahzarin R. Banaji and Deborah A. Prentice. 1994. The self in social contexts. Annual review of psy- chology, 45(1):297â332.
Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2020. Fairness in machine learning: Limitations and Opportunities.
Christine Basta, Marta R Costa-juss`a, and Noe Casas. 2019. Evaluating the underlying gender bias in con- textualized word embeddings. In Proceedings of the 1st Workshop on Gender Bias in Natural Language Processing.
Allan Bell. 1984. Language style as audience design. Language in society, 13(2):145â204.
Allan Bell and Gary Johnson. 1997. Towards a so- ciolinguistics of style. University of Pennsylvania Working Papers in Linguistics, 4(1):2.
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to In Ad- homemaker? debiasing word embeddings. vances in neural information processing systems, pages 4349â4357.
Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in com- In Proceedings of mercial gender classiï¬cation. the 1st Conference on Fairness, Accountability and Transparency, volume 81 of Proceedings of Ma- chine Learning Research, pages 77â91, New York, NY, USA. PMLR.
In Aus- tralian women: New feminist perspectives, pages 90â104. Oxford University Press.
feminist the- ory, and psychoanalytic discourse. Routledge New York.
Alyson Byrne and Julian Barling. 2017. When she brings home the job status: Wives job status, status leakage, and marital instability. Organization Sci- ence, 28(2):177â192.
and Arvind Joanna J Bryson, Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183â186.
Deborah Cameron. 1990. The feminist critique of lan- guage: A reader.
Yang Trista Cao and Hal Daum´e. 2019. Toward gender- arXiv preprint inclusive coreference resolution. arXiv:1910.13913.
Kaytlin Chaloner and Alfredo Maldonado. 2019. Mea- suring gender bias in word embeddings across do- mains and discovering new gender bias word cate- In Proceedings of the First Workshop on gories. Gender Bias in Natural Language Processing, pages 25â32.
Kai-Wei Chang, Vinod Prabhakaran, and Vicente Or- donez. 2019. Bias and fairness in natural language processing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): Tutorial Abstracts, Hong Kong, China. Association for Computational Linguistics.
Christine Charyton and Glenn E Snelbecker. 2007. En- gineersâ and musiciansâ choices of self-descriptive adjectives as potential indicators of creativity by gen- der and domain. Psychology of Aesthetics, creativity, and the arts, 1(2):91.
Na Cheng, Rajarathnam Chandramouli, and KP Sub- balakshmi. 2011. Author gender identiï¬cation from text. Digital Investigation, 8(1):78â88.
Jennifer Coates. 2015. Women, men and language: A sociolinguistic account of gender differences in lan- guage. Routledge.
Marta R Costa-juss`a. 2019. An analysis of gender bias studies in natural language processing. Nature Ma- chine Intelligence, pages 1â2.
Mary Crawford. 1995. Talking difference: On gender and language. Sage.
Kimberle Crenshaw. 1989. Demarginalizing the inter- section of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and an- tiracist politics. u. Chi. Legal f., page 139.
Stefania Degaetano-Ortlieb. 2018. Stylistic variation over 200 years of court proceedings according to gender and social class. In Proceedings of the Sec- ond Workshop on Stylistic Variation, pages 1â10, New Orleans. Association for Computational Lin- guistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. CoRR, abs/1810.04805.
Emily Dinan, Angela Fan, Adina Williams, Jack Ur- banek, Douwe Kiela, and Jason Weston. 2019a. Queens are powerful too: Mitigating gender bias in dialogue generation.
Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019b. Build it break it ï¬x it for dialogue safety: Robustness from adversarial human attack. arXiv preprint arXiv:1908.06083.
Emily Dinan, Varvara Logacheva, Valentin Malykh, Jack Urbanek, Alexander Miller, Kurt Shuster, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2019c. The second conversational arXiv preprint intelligence challenge (convai2). arXiv:1902.00098.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019d. Wiz- ard of Wikipedia: Knowledge-powered conversa- In Proceedings of the International tional agents. Conference on Learning Representations (ICLR).
Yupei Du, Yuanbin Wu, and Man Lan. 2019. Exploring human gender stereotypes with word association test. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 6135â 6145.
Linda E Duxbury and Christopher A Higgins. 1991. Gender differences in work-family conï¬ict. Journal of applied psychology, 76(1):60.
Penelope Eckert and Sally McConnell-Ginet. 1992. Communities of practice: Where language, gender and power all live. In Locating power: Proceedings
of the second Berkeley women and language confer- ence, volume 1, pages 89â99. Berkeley, CA: Berke- ley University.
Penelope Eckert and Sally McConnell-Ginet. 2013. Language and gender. Cambridge University Press.
Penelope Eckert and John R Rickford. 2001. Style and sociolinguistic variation. Cambridge University Press.
Ali Emami, Paul Trichelair, Adam Trischler, Ka- heer Suleman, Hannes Schulz, and Jackie Chi Kit Cheung. 2019. The knowref coreference corpus: Removing gender and number cues for difï¬cult pronominal anaphora resolution. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3952â3961.
Angela Fan, David Grangier, and Michael Auli. 2017. arXiv Controllable abstractive summarization. preprint arXiv:1711.05217.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. arXiv preprint arXiv:1805.04833.
Rosa Fernandez Semantic dero- Marıa Jimenez Catalan. 2003. gation in animal metaphor: a contrastive-cognitive analysis of two male/female examples in english and spanish. Journal of pragmatics, 35(5):771â797.
Georgia Frantzeskou, Efstathios Stamatatos, Stefanos Gritzalis, and Sokratis Katsikas. 2006. Effective identiï¬cation of source code authors using byte- level information. In Proceedings of the 28th inter- national conference on Software engineering, pages 893â896.
Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635âE3644.
Aparna Garimella, Carmen Banea, Dirk Hovy, and Rada Mihalcea. 2019. Womens syntactic resilience and mens grammatical luck: Gender-bias in part-of- speech tagging and dependency parsing. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3493â3498.
Danielle Gaucher, Justin Friesen, and Aaron C Kay. 2011. Evidence that gendered wording in job advertisements exists and sustains gender inequal- ity. Journal of personality and social psychology, 101(1):109.
Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, et al. 2019. To- wards understanding gender bias in relation extrac- tion. arXiv preprint arXiv:1911.03642.
Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609â614, Minneapolis, Minnesota. Association for Computa- tional Linguistics.
Hila Gonen, Yova Kementchedjhieva, and Yoav Gold- berg. 2019. How does grammatical gender affect noun representations in gender-marking languages? arXiv preprint arXiv:1910.14161.
Eduardo Graells-Garrido, Mounia Lalmas, and Filippo Menczer. 2015. First women, second sex: Gender bias in wikipedia. In Proceedings of the 26th ACM Conference on Hypertext & Social Media, pages 165â174.
Bernard Guerin. 1994. Gender bias in the abstractness of verbs and adjectives. The Journal of social psy- chology, 134(4):421â428.
Dana V Hiller and William W Philliber. 1982. Predict- ing marital and career success among dual-worker couples. Journal of Marriage and the Family, pages 53â62.
Janet Holmes. 2013. An introduction to sociolinguis- tics. Routledge.
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing. To appear.
Dirk Hovy, Federico Bianchi, and Tommaso Fornaciari. 2020. Can you translate that into man? commercial machine translation systems include stylistic biases. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Dirk Hovy and Shannon L Spruit. 2016. The social impact of natural language processing. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 591â598.
Eduard Hovy. 1987. Generating natural language un- der pragmatic constraints. Journal of Pragmatics, 11(6):689â719.
Alexander Miserlis Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Isabelle Augenstein, and Ryan Cot- terell. 2019. Unsupervised discovery of gendered language through latent-variable modeling. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1706â 1716, Florence, Italy. Association for Computational Linguistics.
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Trans- former architectures and pre-training strategies for
fast and accurate multi-sentence scoring. preprint arXiv:1905.01969.
Dell Hymes. 1974. Ways of speaking. In R. Bauman and J. Sherzer, editors, Explorations in the ethnog- raphy of speaking, volume 1, pages 433â451. Cam- bridge: Cambridge University Press.
David Jurgens, Saif Mohammad, Peter Turney, and Keith Holyoak. 2012. SemEval-2012 task 2: Mea- In *SEM suring degrees of relational similarity. 2012: The First Joint Conference on Lexical and Computational Semantics â Volume 1: Proceedings of the main conference and the shared task, and Vol- ume 2: Proceedings of the Sixth International Work- shop on Semantic Evaluation (SemEval 2012), pages 356â364, Montr´eal, Canada. Association for Com- putational Linguistics.
Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. arXiv preprint arXiv:1906.00742.
Dongyeop Kang, Varun Gangal, and Eduard Hovy. (male, bachelor) and (female, Ph.D) have 2019. different connotations: Parallelly annotated stylis- tic language dataset with multiple personas. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1696â 1706, Hong Kong, China. Association for Computa- tional Linguistics.
Maximilian Klein, Harsh Gupta, Vivek Rai, Piotr Konieczny, and Haiyi Zhu. 2016. Monitoring the gender gap with wikidata human gender indicators. In Proceedings of the 12th International Symposium on Open Collaboration, pages 1â9.
2015. Wikipedia in the world of global gender in- equality indices: What the biography gender gap is measuring. In Proceedings of the 11th International Symposium on Open Collaboration, pages 1â2.
Corina Koolen and Andreas van Cranenburgh. 2017. These are not the stereotypes you are looking for: Bias and fairness in authorial gender attribution. In Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, pages 12â22, Valen- cia, Spain. Association for Computational Linguis- tics.
Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically categorizing written texts by author gender. Literary and linguistic com- puting, 17(4):401â412.
George Lakoff and Mark Johnson. 1980. Metaphors we live by. Chicago, IL: University of Chicago.
Robin Lakoff. 1973. Language and womanâs place. Language in society, 2(1):45â79.
Robin Lakoff. 1990. Talking Power: The Politics of Language.
Nayeon Lee, Andrea Madotto, and Pascale Fung. 2019. Exploring social bias in chatbots using stereotype In Proceedings of the 2019 Workshop knowledge. on Widening NLP, pages 177â180.
Haley Lepp. 2019. Pardon the interruption: Automatic analysis of gender and competitive turn-taking in In Proceed- united states supreme court hearings. ings of the 2019 Workshop on Widening NLP, pages 143â145, Florence, Italy. Association for Computa- tional Linguistics.
Pierre Lison and J¨org Tiedemann. 2016. Opensub- titles2016: Extracting large parallel corpora from movie and TV subtitles.
Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zi- tao Liu, and Jiliang Tang. 2019. Does gender mat- ter? Towards fairness in dialogue systems. CoRR, abs/1910.10486.
Kim Luyckx and Walter Daelemans. 2008. Author- ship attribution and veriï¬cation with many authors and limited data. In Proceedings of the 22nd Inter- national Conference on Computational Linguistics (Coling 2008), pages 513â520.
Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. Itâs all in the name: Mit- igating gender bias with name-based counterfactual data substitution. CoRR, abs/1909.00871.
Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measur- ing social biases in sentence encoders. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622â628, Minneapo- lis, Minnesota. Association for Computational Lin- guistics.
Thomas Corwin Mendenhall. 1887. The characteristic curves of composition. Science, 9(214):237â249.
Alexander H Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, and Ja- son Weston. 2017. ParlAI: A dialog research soft- ware platform. arXiv preprint arXiv:1705.06476.
Sara Mills. 2014. Language and gender: Interdisci- plinary perspectives. Routledge.
John Money and Anke A Ehrhardt. 1972. Man and woman, boy and girl: Differentiation and dimor- phism of gender identity from conception to matu- rity.
Rosamund Moon. 2014. From gorgeous to grumpy: ad- jectives, age and gender. Gender & Language, 8(1).
Frederick Mosteller and David L Wallace. 1984. Ap- plied Bayesian and classical inference: the case of the Federalist papers. Springer Verlag.
Eliza K Pavalko and Glen H Elder Jr. 1993. Women behind the men: Variations in wivesâ support of hus- bandsâ careers. Gender & Society, 7(4):548â567.
Jian Peng, Kim-Kwang Raymond Choo, and Helen Ashman. 2016. User proï¬ling in intrusion detection: A review. Journal of Network and Computer Appli- cations, 72:14â27.
Yusu Qian. 2019. Gender stereotypes differ between In Proceedings of the male and female writings. 57th Annual Meeting of the Association for Com- putational Linguistics: Student Research Workshop, pages 48â53.
Yusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun. 2019. Reducing gender bias in word-level language models with a gender-equalizing loss func- tion. arXiv preprint arXiv:1905.12801.
Sindhu Raghavan, Adriana Kovashka, and Raymond Mooney. 2010. Authorship attribution using prob- In Proceedings abilistic context-free grammars. of the ACL 2010 Conference Short Papers, pages 38â42, Uppsala, Sweden. Association for Computa- tional Linguistics.
Shauli Ravfogel, Yanai Elazar, Hila Gonen, Michael Twiton, and Yoav Goldberg. 2020. Null it out: Guarding protected attributes by iterative nullspace projection. arXiv.
Joseph Reagle and Lauren Rhue. 2011. Gender bias in International Journal of wikipedia and britannica. Communication, 5:21.
Straying from breadwinning: Status and money in menâs interpretations of their wivesâ work arrangements. Gender, Work & Organi- zation, 25(6):718â733.
John R Rickford and Faye McNair-Knox. 1994. Addressee-and topic-inï¬uenced style shift: A quanti- tative sociolinguistic study. Sociolinguistic perspec- tives on register, pages 235â276.
Anderson Rocha, Walter J Scheirer, Christopher W Forstall, Thiago Cavalcante, Antonio Theophilo, Bingyu Shen, Ariadne RB Carvalho, and Efstathios Stamatatos. 2016. Authorship attribution for social media forensics. IEEE Transactions on Information Forensics and Security, 12(1):5â33.
and Benjamin Van Durme. 2017. Social bias in elicited natural lan- guage inferences. In Proceedings of the First ACL Workshop on Ethics in Natural Language Process- ing, pages 74â79, Valencia, Spain. Association for Computational Linguistics.
Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019a. The risk of racial bias In Proceedings of the in hate speech detection. 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 1668â1678.
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Ju- rafsky, Noah A Smith, and Yejin Choi. 2019b. Social bias frames: Reasoning about social and arXiv preprint power implications of language. arXiv:1911.03891.
Ruchita Sarawgi, Kailash Gajulapalli, and Yejin Choi. 2011. Gender attribution: Tracing stylometric evi- In Proceedings of dence beyond topic and genre. the Fifteenth Conference on Computational Natural Language Learning, pages 78â86, Portland, Oregon, USA. Association for Computational Linguistics.
Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. arXiv preprint arXiv:1902.08654.
Sima Shariï¬rad, Alon Jacovi, Israel Bar Ilan Univesity, and Stan Matwin. 2019. Learning and understand- ing different categories of sexism using convolu- tional neural networks ï¬lters. In Proceedings of the 2019 Workshop on Widening NLP, pages 21â23.
Using attention-based bidirectional lstm to identify differ- ent categories of offensive language directed toward female celebrities. In Proceedings of the 2019 Work- shop on Widening NLP, pages 46â48.
Kurt Shuster, Samuel Humeau, Antoine Bordes, and Jason Weston. 2018. Engaging image chat: Model- ing personality in grounded dialogue. arXiv preprint arXiv:1811.00945.
E. Stamatatos, N. Fakotakis, and G. Kokkinakis. 1999. Automatic authorship attribution. In Ninth Confer- ence of the European Chapter of the Association for Computational Linguistics, Bergen, Norway. Asso- ciation for Computational Linguistics.
Efstathios Stamatatos. 2009. A survey of modern au- thorship attribution methods. Journal of the Ameri- can Society for information Science and Technology, 60(3):538â556.
Efstathios Stamatatos. 2017. Authorship attribution us- ing text distortion. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Pa- pers, pages 1138â1149, Valencia, Spain. Associa- tion for Computational Linguistics.
Gabriel Stanovsky, Noah A Smith, and Luke Zettle- moyer. 2019. Evaluating gender bias in machine translation. arXiv preprint arXiv:1906.00591.
Lample, Denoyer, Eric Michael MarcâAurelio Ranzato, and Y-Lan Boureau. 2018. Multiple-attribute text style transfer. arXiv preprint arXiv:1811.00552.
Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang.
2019. Mitigating gender bias in natural language In Proceedings of processing: Literature review. the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1630â1640, Florence, Italy. Association for Computational Linguistics.
Jane Sunderland. 2006. Language and gender: An ad- vanced resource book. Routledge.
Joan Swann. 1992. Girls, boys, and language. Black- well Publishers.
Mary Talbot. 2019. Language and gender. John Wiley & Sons.
Rachael Tatman. 2017. Gender and dialect bias in In Proceedings of YouTubeâs automatic captions. the First ACL Workshop on Ethics in Natural Lan- guage Processing, pages 53â59, Valencia, Spain. As- sociation for Computational Linguistics.
Frances Trix and Carolyn Psenka. 2003. Exploring the color of glass: Letters of recommendation for fe- male and male medical faculty. Discourse & Soci- ety, 14(2):191â220.
Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rockt¨aschel, Douwe Kiela, Arthur Szlam, and Ja- son Weston. 2019. Learning to speak and act in In Proceedings a fantasy text adventure game. of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 673â683, Hong Kong, China. Association for Computational Lin- guistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all In Advances in neural information pro- you need. cessing systems, pages 5998â6008.
Claudia Wagner, David Garcia, Mohsen Jadidi, and Markus Strohmaier. 2015. Itâs a manâs wikipedia? assessing gender inequality in an online encyclope- dia. In Ninth international AAAI conference on web and social media.
Claudia Wagner, Eduardo Graells-Garrido, David Gar- cia, and Filippo Menczer. 2016. Women through the glass ceiling: gender asymmetries in wikipedia. EPJ Data Science, 5(1):5.
Ann Weatherall. 2002. Gender, language and dis- course. Psychology Press.
Candace West and Don H Zimmerman. 1987. Doing gender. Gender & society, 1(2):125â151.
Eunike Wetzel, Benedikt Hell, and Katja P¨assler. 2012. Comparison of different test construction strategies in the development of a gender fair interest inven- Journal of Career Assessment, tory using verbs. 20(1):88â104.
test â Wikipedia, the free encyclopedia. [Online; accessed 3-April-2020].
Myron Wish, Morton Deutsch, and Susan J Kaplan. 1976. Perceived dimensions of interpersonal rela- tions. Journal of Personality and social Psychology, 33(4):409.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual atten- tion. In International conference on machine learn- ing, pages 2048â2057.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot- terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629â634, Minneapolis, Minnesota. Association for Computa- tional Linguistics.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debiasing In Proceedings of the 2018 Conference methods. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15â20, New Orleans, Louisiana. Association for Computa- tional Linguistics.
Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018b. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 4847â4853, Brussels, Belgium. Associa- tion for Computational Linguistics.
Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with the grammatical gender. 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5275â5283, Hong Kong, China. Association for Computational Linguistics.
Ran Zmigrod, Sebastian J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data aug- mentation for mitigating gender stereotypes in lan- guages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1651â1661, Florence, Italy. Association for Computational Linguistics.
# A Existing Data Annotation
We describe in more detail how each of the eight training datasets is annotated:
1. Wikipedia - to annotate ABOUT, we use a Wikipedia dump and extract biography pages. We identify biographies using named entity recognition applied to the title of the page (Honnibal and Montani, 2017). We label pages with a gender based on the number of gendered pronouns (he vs. she vs. they) and label each paragraph in the page with this la- bel for the ABOUT dimension.7 Wikipedia is well known to have gender bias in equity of biographical coverage and lexical bias in noun references to women (Reagle and Rhue, 2011; Graells-Garrido et al., 2015; Wagner et al., 2015; Klein and Konieczny, 2015; Klein et al., 2016; Wagner et al., 2016), making it an inter- esting test bed for our investigation.
2. Funpedia - Funpedia (Miller et al., 2017) con- tains rephrased Wikipedia sentences in a more conversational way. We retain only biogra- phy related sentences and annotate similar to Wikipedia, to give ABOUT labels.
3. Wizard of Wikipedia - Wizard of Wikipedia (Dinan et al., 2019d) contains two people dis- cussing a topic in Wikipedia. We retain only the conversations on Wikipedia biographies and annotate to create ABOUT labels.
4. ImageChat - ImageChat (Shuster et al., 2018) contains conversations discussing the content of an image. We use the (Xu et al., 2015) image captioning system8 to identify the con- tents of an image and select gendered exam- ples.
5. Yelp - we use the Yelp reviewer gender predic- tor developed by (Subramanian et al., 2018) and retain reviews for which the classiï¬er is very conï¬dent â this creates labels for the au- thor of the review (AS). We impute ABOUT labels on this dataset using a classiï¬er trained on the datasets 1-4.
6. ConvAI2 - ConvAI2 (Dinan et al., 2019c) contains persona-based conversations. Many
7This method of imputing gender is similar to the one used in Reagle and Rhue (2011, 1142) and Bamman and Smith (2014), except we also incorporate non-oppositional gender categories, and rely on basic counts without scaling. 8https://github.com/AaronCCWong/
Show-Attend-and-Tell
personas contain sentences such as I am a old woman or My name is Bob which allows an- notators to annotate the gender of the speaker (AS) and addressee (TO) with some conï¬dence. Many of the personas have unknown gender. We impute ABOUT labels on this dataset using a classiï¬er trained on the datasets 1-4.
7. OpenSubtitiles - OpenSubtitles9 (Lison and Tiedemann, 2016) contains subtitles for movies in different languages. We retain En- glish subtitles that contain a character name or identity. We annotate the characterâs gender using gender kinship terms such as daugh- ter and gender probability distribution calcu- lated by counting the masculine and feminine names of baby names in the United States10. Using the characterâs gender, we get labels for the AS dimension. We get labels for the TO dimension by taking the gender of the next character to speak if there is another utter- ance in the conversation; otherwise, we take the gender of the last character to speak. We impute ABOUT labels on this dataset using a classiï¬er trained on the datasets 1-4.
8. LIGHT - LIGHT contains persona-based con- versation. Similarly to ConvAI2, annotators labeled the gender of each persona (Dinan et al., 2019a), giving us labels for the speaker (AS) and speaking partner (TO). We impute ABOUT labels on this dataset using a classiï¬er trained on the datasets 1-4.
# B New Evaluation Dataset
The interface for our new evaluation dataset MD- GENDER can be seen in Figure 2. Examples from the new dataset can be found in Table 9.
# C Applications
Example generations for various control tokens, as well as for our word list baseline, are shown in Table 10. See §6.1 on Controllable Generation in the main paper for more details.
The top 10 most gendered Wikipedia biogra- phies are shown in Table 11. See §6.2 on Detecting Bias in the main paper for more details.
# 9http://www.opensubtitles.org/ 10https://catalog.data.gov/dataset/baby-names-from-
social-security-card-applications-national-level-data
System: (2 messages left) Please rewrite the following message so that most people would guess that the speaker is SPEAKING ABOUT a MAN: nice to meet you too,do you like music? Evaluator: nice to meet you too, David! do you like music? System: In the example you wrote, how confident would someone be that the speaker is SPEAKING ABOUT a MAN? CERTAIN PRETTY SURE UNSURE
Figure 2: Annotation interface. Annotation interface for collecting MDGENDER. Annotators were shown an utterance from a conversation, and asked to re-write it such that it is clear they would be speaker about/to/as a man or a woman. They were then asked for their conï¬dence level.
Utterance Dim. Gender Conï¬dence Original: Thatâs interesting. I am a Chef. What are your hobbies Rewrite: thatâs interesting. i am a chef and nï¬ player what are your hobbies AS masc. certain Original: My name is Zachery but most call me Z. Whatâs your name miss? Rewrite: My name is Tina but most call me T. Whatâs your name miss? AS fem. pretty sure Original: I said I like to wrestle with my kids for fun. What do you do for work? Rewrite: i said i like to wrestle with my son for fun. what do you do for work? ABOUT masc. certain Original: just checking up on you by saying hi Rewrite: Hanna said she was just checking up on you by saying hi ABOUT fem. pretty sure Original: How are you doing today Jahsim? Rewrite: How are you doing today Mr. Jahsim? TO masc. certain Original: My name is Alexsandro and I am a bee farmer! Rewrite: My name is alexsandro and I am a bee farmer! How about you maâam? AS fem. certain
Table 9: Examples from the MDGENDER. Crowdworkers were asked to re-write dialogue utterances such that most people would guess that the utterance was either said to, said by, or about a man or a woman. Afterwards, they were asked to give a conï¬dence level in their re-write, meant to capture the differences between statistical biases (more men play football than women) and fact (you do not have to be a man to play football).
Input: Awww! Cute! Our are a handful, but they are my babies! 4 wiener dogs. TO:feminine I canât wait for your children to get married.. I want one so badly! : ) AS:feminine Awwww, that sounds wonderful. Canât wait to see them in person! : ) ABOUT:feminine Youâre my hero. Thatâs a cute dog. How long has she been with you? Word list, feminine Thatâs pretty awesome! She was a little bit overweight but sheâs a sweet girl now
Input: Does vegan mean you do not want to be a super hero? TO:feminine Iâd love to be vegan, but it just isnât something I have any interest in doing. AS:feminine I donât want to become super heroes! I just think it would be cool to be vegan. ABOUT:feminine Sheâs a girl. She canât be a superhero!!! 11! 1! 1 1! Word list, feminine It does. She is a vegan and I am a super - hero. I am not vegan, and I donât want to become any sort of superhero.
Input: That is an good. Maybe you can make friends in a club or on a job? TO:feminine I was going to do that, but then I realized I didnât want to be alone. AS:feminine No, Iâve been working. I donât think I can make friendships online. ABOUT:feminine I think the problem is sheâs a girl, so thereâs not a lot of opportunity to make friends. Word list, feminine I have a girlfriend and I live in a small town. I donât know what clubs she can make any friends in.
Input: TO:masculine Iâm not sure this is true. Iâve seen a lot of grease monkeys on reddit. I donât think thereâs any evidence itâs true. AS:masculine You can do it bro! Just keep at it! You got this!!! : d ) ABOUT:male Iâd love to be a grease guy but I just canât make it to the gym. : ( Word list, masculine Youâre not alone bro. Iâm trying to be that guy, but itâs a little hard sometimes.
Input: Have you ever almost died? I nearly did from urinary cancer when I was 18. TO:masculine No one in my family has died. I have only seen a handful of people die. I am sure you will be ï¬ne. AS:masculine No I havenât. But itâs a lot better now, thanks to all the help Iâve received. ABOUT:masculine He said * nearly *. Itâs more likely to happen when youâre older than the average person. Word list, masculine Iâve only been alive for 3 days. I have no idea what Iâm going to do next, but Iâll be okay.
Input: TO:masculine Iâm going to start doing this, thanks! If it works, Iâll post it on r / ï¬tness! AS:masculine Thanks! Iâve always been really into exercise, too, and itâs deï¬nitely helped me. ABOUT:masculine Thank you! Iâm really glad he does : ). Iâve been working out since october. Word list, masculine Me too! I love being able to go to the gym without feeling like Iâm just a little kid. Itâs so rewarding when you get back in the swing of things.
Table 10: Example generations from a generative model trained using controllable generation, with control tokens determined by the classiï¬er. Sequences are generated with top-k sampling, k = 10, with a beam size of 10 and 3-gram blocking. Input is randomly sampled from the ConvAI2 dataset.
Most Feminine Most Masculine 1. Edie Sedgwick: was an American actress and fashion model... 2. Linda Darnell: was an American ï¬lm actress... 3. Maureen OâHara: was an Irish actress and singer... 4. Jessica Savitch: was an American television news presenter and correspondent,... 5. Patsy Mink: Mink served in the U.S. House of Representatives... 6. Shirley Chisholm: was an American politician, edu- cator, and author... 7. Mamie Van Doren: is an American actress, model, singer, and sex symbol who is... 8. Jacqueline Cochran: was a pioneer in the ï¬eld of American aviation and one of t... 9. Chlo Sevigny: is an American actress, fashion de- signer, director, and form... 10. Hilda Solis: is an American politician and a member of the Los Angeles Co... 1. Derek Jacobi: is an English actor and stage director... 2. Bohuslav Martin: was a Czech composer of modern classical music... 3. Carlo Maria Giulini: was an Italian conductor... 4. Zubin Mehta: is an Indian conductor of Western classical music... 5. John Barbirolli: was a British conductor and cellist ... 6. Claudio Abbado: was an Italian conductor... 7. Ed Harris: is an American actor, producer, director, and screenwriter... 8. Richard Briers: was an English actor... 9. Artur Schnabel: was an Austrian classical pianist, who also composed and tau... 10. Charles Mackerras: was an Australian conductor...
Table 11: Most gendered Wikipedia biographies We ran our multi-task classiï¬er over 68 thousand biographies of Wikipedia. After selecting for biographies with a minimum number of paragraphs (resulting in 15.5 thousand biographies) we scored them to determine the most masculine and feminine gendered. . | {
"id": "1711.05217"
} |
2004.14602 | Look at the First Sentence: Position Bias in Question Answering | Many extractive question answering models are trained to predict start and
end positions of answers. The choice of predicting answers as positions is
mainly due to its simplicity and effectiveness. In this study, we hypothesize
that when the distribution of the answer positions is highly skewed in the
training set (e.g., answers lie only in the k-th sentence of each passage), QA
models predicting answers as positions can learn spurious positional cues and
fail to give answers in different positions. We first illustrate this position
bias in popular extractive QA models such as BiDAF and BERT and thoroughly
examine how position bias propagates through each layer of BERT. To safely
deliver position information without position bias, we train models with
various de-biasing methods including entropy regularization and bias
ensembling. Among them, we found that using the prior distribution of answer
positions as a bias model is very effective at reducing position bias,
recovering the performance of BERT from 37.48% to 81.64% when trained on a
biased SQuAD dataset. | http://arxiv.org/pdf/2004.14602 | Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, Jaewoo Kang | cs.CL | 13 pages, EMNLP 2020 | null | cs.CL | 20200430 | 20210308 | 1 2 0 2
r a M 8 ] L C . s c [
4 v 2 0 6 4 1 . 4 0 0 2 : v i X r a
# Look at the First Sentence: Position Bias in Question Answering
# Miyoung Ko Jinhyuk Leeâ Hyunjae Kim Gangwoo Kim Jaewoo Kangâ
Korea University {miyoungko,jinhyuk lee,hyunjae-kim}@korea.ac.kr {gangwoo kim,kangj}@korea.ac.kr
# Abstract
Many extractive question answering models are trained to predict start and end positions of answers. The choice of predicting answers as positions is mainly due to its simplicity and ef- fectiveness. In this study, we hypothesize that when the distribution of the answer positions is highly skewed in the training set (e.g., answers lie only in the k-th sentence of each passage), QA models predicting answers as positions can learn spurious positional cues and fail to give answers in different positions. We ï¬rst illustrate this position bias in popular extrac- tive QA models such as BiDAF and BERT and thoroughly examine how position bias propa- gates through each layer of BERT. To safely deliver position information without position bias, we train models with various de-biasing methods including entropy regularization and bias ensembling. Among them, we found that using the prior distribution of answer positions as a bias model is very effective at reducing position bias, recovering the performance of BERT from 37.48% to 81.64% when trained on a biased SQuAD dataset.
ââ Training data (All answers are in the k'" sentence) Example #1 Example #2 (Question, Answer) | (question, Answer) Context (1* sent.) ... Context (1* sent... (k* sent.) (k+1" sent.) ... (k* sent.) (k+1" sent). Test Sample 4 Question: When was the Royal University of Warsaw established? Answer: 1816 Prediction @ Model Prediction @ Answer (k®* sent.) Warsaw remained the capital of the Polish-Lithuanian Commonwealth until 1796, (Last sent.) The Royal University of Warsaw was established in 1816.
Figure 1: Example of position bias. BERT trained on the dataset with a skewed answer position distribution, provides wrong predictions, biased to the speciï¬c sen- tence position.
structure of earlier deep learning-based QA mod- els (Wang and Jiang, 2016; Seo et al., 2017; Xiong et al., 2017), recent QA models provide positions of answers without much consideration (Yu et al., 2018; Devlin et al., 2019; Yang et al., 2019).
# Introduction
Question answering (QA) is a task of answering questions given a passage. Large-scale QA datasets have attracted many researchers to build effective QA models, and with the advent of deep learn- ing, recent QA models are known to outperform humans in some datasets (Rajpurkar et al., 2016; Devlin et al., 2019; Yang et al., 2019). Extractive QA is the task that assumes that answers always lie in the passage. Based on this task assumption, various QA models are trained to predict the start and end positions as the answers. Following the
The popularity of predicting the answer posi- tions is credited to the fact that it reduces the pre- diction space to O(n) where n is the length of an input document. It is more efï¬cient and effective than directly generating answers from a large vo- cabulary space. Furthermore, it reduces the QA task to a classiï¬cation task which is convenient to model. Nevertheless, very few studies have dis- cussed the side effects of predicting the answer positions. Could there be any unwanted biases when using answer positions as prediction targets? In this paper, we demonstrate that the models predicting the position can be severely biased when trained on datasets that have a very skewed answer position distribution. We deï¬ne this as position bias as shown in Figure 1. Models trained on a bi- ased dataset where answers always lie in the same
â Corresponding authors
Training Data EM BiDAF F1 â EM BERT F1 â EM XLNet F1 â SQuADtrain SQuADtrain (Sampled) SQuADk=1 train SQuADk=1 train + First Sentence SQuADk=1 train + Sentence Shufï¬e 66.51 58.76 21.44 53.16 54.40 76.46 70.52 27.92 63.21 65.20 -5.94 -48.54 -13.25 -11.26 81.32 76.48 31.20 72.75 73.37 88.63 85.06 37.48 81.18 81.90 -3.57 -51.15 -7.45 -6.73 80.69 80.07 38.59 74.85 77.83 89.24 88.32 45.27 82.84 86.18 -0.92 -43.97 -6.40 -3.06
Table 1: Performance of QA models trained on the biased SQuAD dataset (SQuADk=1 â denotes the difference in F1 score with SQuADtrain. We use exact match (EM) and F1 score for evaluation.1
sentence position mostly give predictions on the corresponding sentence. As a result, BERT (De- vlin et al., 2019) trained on a biased training set where every answer appear in the ï¬rst sentence only achieves 37.48% F1 score in the SQuAD de- velopment set whereas the same model trained on the same amount of randomly sampled examples achieves 85.06% F1 score.
# 2 Analysis
We ï¬rst demonstrate the presence of position bias using biased training sets sampled from SQuAD (Rajpurkar et al., 2016) and visualize how position bias propagates in BERT.
# 2.1 Position Bias on Synthetic Datasets
To examine the cause of the problem, we thor- oughly analyze the learning process of QA models trained on the biased training sets, especially focus- ing on BERT. Our analysis shows that hidden rep- resentations of BERT preserve a different amount of word information according to the word posi- tion when trained on the biased training set. The predictions of biased models also become more dependent on the ï¬rst few words when the training set has answers only in the ï¬rst sentences.
To tackle the problem, we test various options, ranging from relative position encodings (Yang et al., 2019) to ensemble-based de-biasing meth- ods (Clark et al., 2019; He et al., 2019). While sim- ple baselines motivated by our analysis improve the test performance, our ensemble-based de-biasing method largely improves the performance of most models. Speciï¬cally, we use the prior distribution of answer positions as an additional bias model and train models to learn reasoning ability beyond the positional cues.
Contributions of our paper are in threefold; First, we deï¬ne position bias in extractive question an- swering and illustrate that common extractive QA models suffer from it. Second, we examine the rea- son for the failure of the biased models and show that positions can act as spurious biases. Third, we show that the prior distribution of answer positions helps us to build positionally de-biased models, re- covering the performance of BERT from 37.48% to 81.64%. We also generalize our ï¬ndings in many different positions and datasets.2
From the original training set Dtrain, we sub- sample a biased training set Dk trainwhose answers lie in the k-th sentence.3 We conduct experi- ments on SQuAD (D = SQuAD) as most exam- ples in SQuAD are answerable with a single sen- tence (Min et al., 2018). Our analysis mainly fo- cuses on SQuADk=1 train (i.e., all answers are in the ï¬rst sentence), which has the largest proportion of samples compared to other sentence positions in SQuAD (28,263 out of 87,599). The propor- tion in the development set (SQuADdev) is similar, having 3,637 out of 10,570 answers in the ï¬rst sentence. Note that while our analysis is based on SQuADk=1 train , we also test various sentence positions in our main experiments (Section 4.2). We exper- iment with three popular QA models that provide positions as answers: BiDAF (Seo et al., 2017), BERT (Devlin et al., 2019), and XLNet (Yang et al., 2019). All three models are trained on SQuADk=1 train and are evaluated on SQuADdev. For a fair comparison, we also randomly sample ex- amples from the original training set and make SQuADtrain (Sampled) which has the same number of examples with SQuADk=1 train . Table 1 shows the performance of the three mod- els trained on SQuADk=1 train . The performances of all models drop signiï¬cantly compared to the models trained on SQuADtrain or SQuADtrain (Sampled). The relative position encodings in XLNet mitigate position bias to some extent, but its performance
2Evaluation code is provided by https://rajpurkar.github.io/ SQuAD-explorer/
2https://github.com/dmis-lab/position-bias
3We use Spacy Sentencizer (https://spacy.io/api/ sentencizer) for the sentence split.
(a) Average cosine similarity (Layer 12) (b) Spearman correlation (start)
â PRE ââ ORIG â FIRST Correlation 2 4 6 8 10 12 BERT Layer
# Cosine
Figure 2: Visualization of position bias with BERT trained on SQuADtrain (ORIG), SQuADk=1 without ï¬ne-tuning (PRE). See Section See Section 2.2 for more details. train (FIRST), and BERT
still degrades signiï¬cantly.
# BERT.
To better understand the cause of position bias, we additionally perform two pre-processing meth- ods on SQuADk=1 train . First, we truncate each pas- sage up to the ï¬rst sentence (SQuADk=1 train + First Sentence). In this case, most performance is re- covered, which indicates that the distributions of answer positions are relatively deï¬ned with respect to the maximum sequence length. Shufï¬ing the sen- tence order of SQuADk=1 train + Sentence Shufï¬e) also recovers most performance, showing that the spreadness of answers matters. However, these pre-processing methods cannot be a solution as more ï¬ne-grained biases (e.g., word level posi- tions) could cause the problem again and models cannot learn proper multi-sentence reasoning from a corrupted context.
# 2.2 Visualization of Position Bias
To visualize how position bias propagates through- out the layers, we compare BERT models, each trained on SQuADk=1 train and SQuADtrain respectively and BERT without any ï¬ne-tuning. The uncased version of BERT-base is used for the analysis.
Figure 2 (a) shows the amount of word infor- mation preserved in the hidden representations at the last layer of BERT. We deï¬ne the amount of word information for each word position as the co- sine similarity between the word embedding and its hidden representation at each layer. The simi- larities are averaged over the passage-side hidden representations in SQuADdev. BERT trained on SQuADk=1
train (FIRST) has higher similarities at the front of the passages compared with BERT trained on SQuADtrain (ORIG). In the biased model, the similarity becomes smaller after the ï¬rst few tokens, which shows position bias of
Figure 2 (b) shows the Spearmanâs rank corre- lation coefï¬cient between the ï¬nal output logits4 and the amount of word information at each layer deï¬ned by the cosine similarity. A higher correla- tion means that the model is more dependent on the word information kept in that layer. The correla- tion coefï¬cient is much higher in the biased model (FIRST), especially in the last few layers. Com- bined with the observation from Figure 2 (a), this indicates that the predictions of the biased model are heavily relying on the information of the ï¬rst few words.
# 2.3 Why is Position Bias Bad?
Our analysis shows that it is very easy for neural QA models to exploit positional cues whenever pos- sible. While it is natural for neural models to learn the strong but spurious correlation present in the dataset (McCoy et al., 2019; Niven and Kao, 2019), we argue that reading ability should be cultivated independent of such positional correlation. Our study aims to learn proper reading ability even in extreme cases where all answers are in the k-th sen- tence. Although exploiting the position distribution within the dataset could help the model improve performance on its corresponding test set, position bias should not be learned since we cannot guar- antee realistic test environments to follow similar distribution.
# 3 Method
To prevent models from learning a direct corre- lation between word positions and answers, we introduce simple remedies for BERT and a bias
4We show the results with start position logits and the same pattern is observed with end position logits.
ensemble method with answer prior distributions that can be applied to any QA models.
# 3.1 Baselines
Randomized Position To avoid learning the di- rect correlation between word positions and an- swers, we randomly perturb input positions. We ï¬rst randomly sample t indices from a range of 1 to maximum sequence length of BERT. We sam- ple t = 384 when the maximum sequence length is 512. Then, we sort the indices in an ascending order to preserve the ordering of input words. Per- turbed indices then generate position embedding at each token position, which replaces the original position embedding.
Entropy Regularization Inspired by the obser- vation in Section 2.2, we force our model to pre- serve a constant amount of word information re- gardless of the word positions. Maximizing the en- tropy of normalized cosine similarity between the word embeddings and their hidden representations encourages models to maintain a uniform amount of information. As the cosine similarities are not probabilities, we normalize them to be summed to 1. We compute the entropy regularization term from the last layer and add it to the start/end pre- diction loss with a scaling factor λ.
# 3.2 Bias Ensemble with Answer Prior
Bias ensemble methods (Clark et al., 2019; He et al., 2019; Mahabadi et al., 2020) combine the log probabilities from a pre-deï¬ned bias model and a target model to de-bias. Ensembling makes the target model to learn different probabilities other than the bias probabilities. In our case, we deï¬ne the prior distribution of the answer positions as our bias model. Speciï¬cally, we introduce the sentence- level answer prior and the word-level answer prior.
Bias Ensemble Method Given a passage and question pair, a model has to ï¬nd the optimal start and end positions of the answer in the passage, de- noted as ys, ye. Typically, the model outputs two probability distributions ps and pe for the start and end positions. As our method is applied in the same manner for both start and end predictions, we drop the superscript from ps, pe and subscript from ys, ye whenever possible.
For ensembling two different log probabilities from the bias model and the target model, we use a product of experts (Hinton, 2002). Using the
product of experts, a probability at the i-th position is calculated as:
Ëpi = sof tmax(log(pi) + log(bi))
where log(pi) is a log probability from the target model and log(bi) is a log probability from the bias model. The ensembled probability Ëp is used for the training.
To dynamically choose the amount of bias for each sample, Clark et al. (2019) introduce a learned mixing ensemble with a trainable parameter. Prob- abilities in the training phase are now deï¬ned as:
Ëpi = sof tmax(log(pi) + g(X) log(bi))
We use hidden representations before the softmax layer as X. g(X) then applies afï¬ne transforma- tion on the representations to obtain a scalar value. Softplus activation followed by max pooling is used to obtain positive values. As BiDAF has separate hidden representations for the start and end predic- tions, we separately deï¬ne g(X) for each start and end representation.
As models often learn to ignore the biases and make g(X) to 0, Clark et al. (2019) suggest adding an entropy penalty term to the loss function. How- ever, the entropy penalty did not make much differ- ence in our case as g(X) was already large enough. Note that we only use log(bi) during training, and the predictions are solely based on the predicted log probability log(pi) from the model.
We deï¬ne bias log probability as pre-calculated answer priors. Using prior distributions in machine learning has a long history such as using class fre- quency in the class imbalance problem (Domingos, 1999; Japkowicz and Stephen, 2002; Zhou and Liu, 2006; Huang et al., 2016). In our case, the class prior corresponds to the prior distribution of answer positions.
Word-level Answer Prior First, we consider the word-level answer prior. Given the train- ing set having N examples having N answers {y(1), y(2), ..., y(N )}, we compute the word-level answer prior at position i over the training set. In this case, our bias log probability at i-th position is:
N 1 ; . log(bi) = + » Ly = 7] @) j=l
where we use the indicator function 1[cond]. Bias log probabilities for the end position prediction are
calculated in a similar manner. Note that the word- level answer prior gives an equal bias distribution for each passage while the distribution is more ï¬ne- grained than the sentence-level prior described in the next section.
Sentence-level Answer Prior We also use the sentence-level answer prior which dynamically changes depending on the sentence boundaries of each sample. First, we deï¬ne a set of sentences {S(j) L } for the j-th training passage, where L is the maximum number of sentence in whole training passages. Then, the sentence-level answer prior of the i-th word position (for the start predic- tion) for the j-th sample, is derived from the fre- quency of answers appearing in the l-th sentence:
N log (b)) := xD iy es), te 5 (4) k=1
Note that as boundaries of sentences in each sample are different, bias log probabilities should be de- ï¬ned in every sample. Again, bias log probabilities for the end positions are calculated similarly.
It is very convenient to calculate the answer pri- ors for any datasets. For instance, on Dk=1 train , we use the ï¬rst sentence indicator as the sentence-level answer prior as all answers are in the ï¬rst sentence. More formally, the sentence-level answer prior for Dk=1
train is 1 for l = 1, and 0 when l > 1: i â S(j) 1 , i /â S(j) 1
i G) log(bâ) = i : © a 0 i¢ Sy (5)
which is a special case of the sentence-level answer prior. For general datasets where the distributions of answer positions are less skewed, the answer priors are more softly distributed. See Appendix B for a better understanding of the answer priors.
Both word-level and sentence-level answer pri- ors are experimented with two bias ensemble meth- ods: product of experts with bias (Bias Product, Equation 1) and learned mixing of two log proba- bilities (Learned-Mixin, Equation 2).
# 4 Experiments
We ï¬rst experiment the effects of various de- biasing methods on three different QA models us- ing both biased and full training sets. Our next experiments generalize our ï¬ndings in different sentence positions and different datasets such as NewsQA (Trischler et al., 2017) and NaturalQues- tions (Kwiatkowski et al., 2019).
# 4.1 Effect of De-biasing Methods
We ï¬rst train all three models (BiDAF, BERT, and XLNet) on SQuADk=1 train with our de-biasing meth- ods and evaluate them on SQuADdev (original de- dev , and SQuADk=2,3,... velopment set), SQuADk=1 . Note that SQuADk=2,3,... is another subset of SQuADdev, whose answers do not appear in the ï¬rst sentence, but in other sentences. We also ex- periment with BERT trained on the full training set, SQuADtrain.
For all models, we use the same hyperparameters and training procedures as suggested in their orig- inal papers (Seo et al., 2017; Devlin et al., 2019; Yang et al., 2019), except for batch sizes and train- ing epochs (See Appendix A). λ for the entropy regularization is set to 5. Most of our implementa- tion is based on the PyTorch library.
Results with SQuADk=1 train The results of apply- ing various de-biasing methods on three models with SQuADk=1 train are in Table 2. Performance of all models without any de-biasing methods (de- noted as âNoneâ) is very low on SQuADk=2,3,... , but fairly high on SQuADk=1 dev . This means that their predictions are highly biased towards the In the case of BERT, F1 score ï¬rst sentences. on SQuADk=1 is 85.81%, while F1 score on dev SQuADk=2,3,... is merely 12.12%. Our simple base- line approaches used in BERT improve the perfor- mance up to 34.63% F1 score (Random Position) while the entropy regularization is not signiï¬cantly effective.
Bias ensemble methods using answer priors con- sistently improve the performance of all models. The sentence-level answer prior works the best, which obtains a signiï¬cant gain after applying the Learned-Mixin method. We found that the coef- ï¬cient g(X) in Equation 2 averages to 7.42. dur- ing training for BERT + Learned-Mixin, which demonstrates a need of proper balancing between the probabilities. The word-level answer prior does not seem to provide strong position bias signals as its distribution is much softer than the sentence- level answer prior.
Results with SQuADtrain The results of train- ing BERT with our de-biasing methods on the full training set SQuADtrain are in the bottom of Table 2. Note that the answer prior is more softened than the answer prior used in SQuADk=1 train as answers are now spread in all sentence positions. While exploit- ing the positional distribution of the training set
De-biasing Method SQuADk=1 dev F1 EM SQuADk=2,3,... EM dev F1 SQuADdev F1 EM BERT trained on SQuADk=1 train Baseline None Random Position Entropy Regularization 77.07 69.95 77.40 85.81 80.73 86.17 7.14 27.32 10.50 12.12 34.63 15.72 31.20 41.99 33.52 37.48 50.49 39.96 Word-Level Bias Product Learned-Mixin 78.61 78.17 87.08 86.56 7.85 8.55 12.88 13.43 32.19 32.51 38.41 38.59 Sentence-Level
BiDAF trained on SQuADk=1 train
Baseline None 61.04 72.91 0.66 4.34 21.44 27.92 Sentence-Level Bias Product Learned-Mixin 62.00 56.53 73.87 66.79 0.78 50.28 4.48 60.77 21.84 52.43 28.36 62.84
XLNet trained on SQuADk=1 train
Baseline None 78.99 87.24 11.52 16.77 38.59 45.27 Sentence-Level Bias Product Learned-Mixin 79.24 68.82 87.88 82.05 33.28 64.63 39.93 77.65 49.09 66.07 56.43 79.16
BERT trained on SQuADtrain
Baseline None 81.55 88.68 81.21 88.61 81.32 88.63 Sentence-Level Bias Product Learned-Mixin 81.88 81.58 88.87 88.38 81.29 80.87 88.87 88.47 81.49 81.12 88.87 88.44
Table 2: Results of applying de-biasing methods. Each model is evaluated on SQuADdev and two subsets: SQuADk=1
could be more helpful when evaluating on the devel- opment set that has a similar positional distribution, our method maintains the original performance. It shows that our method works safely when the posi- tional distribution doesnât change much.
Visualization To investigate the effect of de- baising methods, we visualize the word information in each layer as done in Section 2.2. We visualize the BERT trained on SQuADk=1 train ensembled with sentence-level answer prior in Figure 3. The bias product method (PRODUCT) and the model without any de-biasing methods (NONE) are similar, show- ing that it still has position bias. The learned-mixin method (MIXIN), on the other hand, safely delivers the word information across different positions.
# 4.2 Generalizing to Different Positions
As the SQuAD training set has many answers in the ï¬rst sentence, we mainly test our methods on SQuADk=1 train . However, does our method gener-
ââ NONE ââ PRODUCT ââ MIXIN 2 o x 2 o a Cosine similarity o 9: o 58 ROG 2 o o i} 25 50 75 100) 125 Passage word position 150 175 200
Figure 3: Visualization of BERT models trained on SQuADk=1
alize to different sentence positions? To answer this question, we construct four SQuADk traindatasets based on the sentence positions of answers. Note that unlike SQuADk=1 train , the number of samples becomes smaller and the sentence boundaries are more blurry when k > 1, making answer pri- ors much softer. We train three QA models on different biased datasets and evaluate them on SQuADdev with and without de-biasing methods.
EM F1 EM SQuADdev EM F1 F1 EM F1 SQuADk train k = 2 (20,593 samples) k = 3 (15,567 samples) k = 4 (10,379 samples) k = 5, 6, ... (12,610 samples) BiDAF +Bias Product +Learned-Mixin 18.43 21.51 47.49 25.74 28.67 58.36 12.26 11.19 43.57 19.04 18.39 53.80 9.96 11.20 30.18 16.50 17.78 39.51 12.34 10.09 18.51 19.65 16.78 27.30 BERT +Bias Product +Learned-Mixin 36.16 52.89 71.61 43.14 50.38 80.36 44.76 52.42 69.04 52.89 60.99 77.91 49.13 53.39 64.31 58.01 62.69 73.72 57.95 58.75 62.82 66.69 67.67 72.30 XLNet +Bias Product +Learned-Mixin 47.55 59.49 68.34 55.01 67.35 80.35 46.67 61.99 69.28 54.56 70.89 79.99 50.49 67.26 70.07 58.74 76.55 80.12 58.29 72.44 73.33 66.67 81.85 82.79
Table 3: Position bias in different positions. Each model is trained on a biased SQuAD dataset (SQuADk evaluated on SQuADdev.
(a) BERT (b) BERT + Bias Product (c) BERT + Learned-Mixin
Training set with k-th sentence ke2_ k= ke k=5,6, 18.96 59.49 59.14 69.09 20 Evaluation set with k-th sentence
ke2 Evaluation set with k-th sentence Training set with k-th sentence ko3 kaa k=5,6,
Training set with k-th sentence ke2_k=3 kad ke 5.6, kel 20 Evaluation set with k-th sentence
Figure 4: Sentence-wise position bias in SQuAD. Models are trained on SQuADk dev. (a) Standard BERT suffers from position bias as the off-diagonal performance is signiï¬cantly lower. (b), (c) Our de-biasing method successfully handles the bias and provides consistently higher performance.
Results As shown in Table 3, all three models suffer from position bias in every sentence position while the learned-mixin method (+Learned-Mixin) successfully resolves the bias. Due to the blurred sentence boundaries, position bias is less problem- atic when k is large. We observe a similar trend in BERT and XLNet while a huge performance drop is observed in BiDAF even with a large k.
Visualization Figure 4 visualizes the sentence- wise position biases. We train BERT, BERT + Bias Product and BERT + Learned Mixin on different subsets of SQuAD training set (SQuADk train) and evaluated on every SQuADk dev whose answers lie only in the k-th sentence. As a result, the low per- formance in the off-diagonal represent the presence of position bias. The ï¬gure shows that the biased model fails to predict the answers in different sen- tence positions (Figure 4 (a)) while our de-biased model achieves high performance regardless of the
sentence position (Figure 4 (c)). Again, as the value of k increases, the boundary of the k-th sen- tence varies a lot in each sample, which makes the visualization of sentence-wise bias difï¬cult.
# 4.3 NewsQA and NaturalQuestions
We test the effect of de-basing methods on datasets having different domains and different degrees of position bias. NewsQA (Trischler et al., 2017) is an extractive QA dataset that includes pas- sages from CNN news articles. NaturalQues- tions (Kwiatkowski et al., 2019) is a dataset con- taining queries and passages collected from the Google search engine. We use the pre-processed dataset provided by the MRQA shared task (Fisch et al., 2019).5
For each dataset, we construct two sub-training datasets; one contains samples with answers in the ï¬rst sentence (k = 1), and the other contains
5https://github.com/mrqa/MRQA-Shared-Task-2019
NewsQAk train NewsQAdev k = All k = 1 k = 2, 3, ... BERT +Bias Product +Learned-Mixin 69.94 69.46 69.42 27.99 28.81 44.50 56.15 56.86 58.22
Table 4: F1 scores on NewsQA. Models are evaluated on the original development dataset (NewsQAdev).
NQk train NQdev k = All k = 1 k = 2, 3, ... BERT +Bias Product +Learned-Mixin 78.79 78.84 79.04 56.79 56.77 72.83 49.59 53.34 60.63
Table 5: F1 scores on NaturalQuestions. Models are evaluated on the original development dataset (NQdev).
the remaining samples (k = 2, 3, ...). Models are trained on the original dataset and two sub-training datasets and evaluated on the original development set.
Implementation Details For NewsQA, we trun- cate each paragraph so that the length of each con- text is less than 300 words. We eliminate training and development samples that become unanswer- able due to the truncation. For NaturalQuestions, we choose ï¬rstly occurring answers for training ex- tractive QA models, which is a common approach in weakly supervised setting (Joshi et al., 2017; Talmor and Berant, 2019).
From NewsQA and NaturalQuestions, we con- struct two sub-training datasets having only the ï¬rst annotated samples (Dk=1 train ) and the remain- ing samples (Dk=2,3,... For a fair compari- ). son, we ï¬x the size of two sub-training sets to have 17,000 (NewsQA) and 40,000 samples (Natu- ralQuestions).
Results In Table 4 and Table 5, we show results of applying our methods. In both datasets, BERT, trained on biased datasets (k = 1 and k = 2, 3, ...), signiï¬cantly suffers from position bias. Position bias is generally more problematic in the k = 1 datasets while for NaturalQuestions, k = 2, 3, ... is also problematic. Our de-biasing methods prevent performance drops in all cases without sacriï¬cing the performance on the full training set (k = All).
# 5 Related Work
Various question answering datasets have been in- troduced with diverse challenges including reason- ing over multiple sentences (Joshi et al., 2017),
answering multi-hop questions (Yang et al., 2018), and more (Trischler et al., 2017; Welbl et al., 2018; Kwiatkowski et al., 2019; Dua et al., 2019). In- troduction of these datasets rapidly progressed the development of effective QA models (Wang and Jiang, 2016; Seo et al., 2017; Xiong et al., 2017; Wang et al., 2017; Yu et al., 2018; Devlin et al., 2019; Yang et al., 2019), but most models predict the answer as positions without much discussion on it.
Our work builds on the analyses of dataset bi- ases in machine learning models and ways to tackle them. For instance, sentence classiï¬cation models in natural language inference and argument rea- soning comprehension suffer from word statistics bias (Poliak et al., 2018; Minervini and Riedel, 2018; Kang et al., 2018; Belinkov et al., 2019; Niven and Kao, 2019). On visual question answer- ing, models often ignore visual information due to the language prior bias (Agrawal et al., 2016; Zhang et al., 2016; Goyal et al., 2017; Johnson et al., 2017; Agrawal et al., 2018). Several studies in QA also found that QA models do not leverages the full information in the given passage (Chen et al., 2016; Min et al., 2018; Chen and Durrett, 2019; Min et al., 2019). Adversarial datasets have been also proposed to deal with this type of prob- lem (Jia and Liang, 2017; Rajpurkar et al., 2018). In this study, we deï¬ne position bias coming from the prediction structure of QA models and show that positionally biased models can ignore informa- tion in different positions.
Our proposed methods are based on the bias en- semble method (Clark et al., 2019; He et al., 2019; Mahabadi et al., 2020). Ensembling with the bias model encourages the model to solve tasks without converging to bias shortcuts. Clark et al. (2019) conducted de-biasing experiments on various tasks including two QA tasks while they use tf-idf and the named entities as the bias models.
It is worth noting that several models incorporate the pointer network to predict the answer positions in QA (Vinyals et al., 2015; Wang and Jiang, 2016; Wang et al., 2017). Also, instead of predicting positions, some models predict the n-grams as an- swers (Lee et al., 2016; Seo et al., 2019), generate answers in a vocabulary space (Raffel et al., 2019), or use a generative model (Lewis and Fan, 2019). We expect that these approaches suffer less from position bias and leave the evaluation of position bias in these models as our future work.
# 6 Conclusion
Most QA studies frequently utilize start and end po- sitions of answers as training targets without much considerations. Our study shows that most QA models fail to generalize over different positions when trained on datasets having answers in a spe- ciï¬c position. Our ï¬ndings show that position can work as a spurious bias and alert researchers when building QA models and datasets. We introduce several de-biasing methods to make models to ig- nore the spurious positional cues, and ï¬nd out that the sentence-level answer prior is very useful. Our ï¬ndings also generalize to different positions and different datasets. One limitation of our approach is that our method and analysis are based on a sin- gle paragraph setting which should be extended to a multiple paragraph setting to be more practically useful.
# Acknowledgments
This research was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Re- public of Korea (grant number: HR20C0021). This research was also supported by National Research Foundation of Korea (NRF-2017R1A2A1A17069 645, NRF-2017M3C4A7065887).
# References
Aishwarya Agrawal, Dhruv Batra, and Devi Parikh. 2016. Analyzing the behavior of visual question an- swering models. In EMNLP.
Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Donât just assume; look and answer: Overcoming priors for visual ques- tion answering. In CVPR.
Yonatan Belinkov, Adam Poliak, Stuart M Shieber, Benjamin Van Durme, and Alexander Rush. 2019. On adversarial removal of hypothesis-only bias in natural language inference. NAACL HLT.
Danqi Chen, Jason Bolton, and Christopher D Man- ning. 2016. A thorough examination of the cnn/daily mail reading comprehension task. In ACL.
Jifan Chen and Greg Durrett. 2019. Understanding dataset design choices for multi-hop reasoning. In NAACL-HLT.
Christopher Clark, Mark Yatskar, and Luke Zettle- moyer. 2019. Donât take the easy way out: En- semble based methods for avoiding known dataset biases. In EMNLP-IJCNLP.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL.
Pedro Domingos. 1999. Metacost: a general method for making classiï¬ers cost-sensitive. In SIGKDD.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requir- ing discrete reasoning over paragraphs. In NAACL- HLT.
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, and Danqi Chen. 2019. Mrqa 2019 shared task: Evaluating generalization in reading In Proceedings of the 2nd Work- comprehension. shop on Machine Reading for Question Answering.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the v in vqa matter: Elevating the role of image under- standing in visual question answering. In CVPR.
He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by ï¬tting the residual. EMNLP-IJCNLP 2019.
Geoffrey E Hinton. 2002. Training products of experts by minimizing contrastive divergence. Neural com- putation.
Chen Huang, Yining Li, Chen Change Loy, and Xiaoou Tang. 2016. Learning deep representation for imbal- anced classiï¬cation. In CVPR.
Nathalie Japkowicz and Shaju Stephen. 2002. The In- class imbalance problem: A systematic study. telligent data analysis.
Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In EMNLP.
Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C Lawrence Zitnick, and Ross Girshick. 2017. Clevr: A diagnostic dataset for com- positional language and elementary visual reasoning. In CVPR.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In ACL.
Dongyeop Kang, Tushar Khot, Ashish Sabharwal, and Eduard Hovy. 2018. Adventure: Adversarial train- ing for textual entailment with knowledge-guided ex- amples. In ACL.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- ï¬eld, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob
Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natu- ral questions: a benchmark for question answering research. TACL.
Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan Berant. 2016. Learning recurrent span representations for ex- arXiv preprint tractive question answering. arXiv:1611.01436.
M Lewis and A Fan. 2019. Generative question an- swering: Learning to answer the whole question. In ICLR.
Rabeeh Karimi Mahabadi, Yonatan Belinkov, and James Henderson. 2020. End-to-end bias mitigation by modelling biases in corpora. In ACL.
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In ACL.
Sewon Min, Eric Wallace, Sameer Singh, Matt Gard- ner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In ACL.
Sewon Min, Victor Zhong, Richard Socher, and Caim- ing Xiong. 2018. Efï¬cient and robust question an- swering from minimal context over documents. In ACL.
Pasquale Minervini and Sebastian Riedel. 2018. Ad- versarially regularising neural nli models to integrate logical background knowledge. CoNLL, page 65.
Timothy Niven and Hung-Yu Kao. 2019. Probing neu- ral network comprehension of natural language argu- ments. In ACL.
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language infer- In Proceedings of the Seventh Joint Confer- ence. ence on Lexical and Computational Semantics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a uniï¬ed text-to-text trans- former. arXiv preprint arXiv:1910.10683.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- tions for squad. In ACL.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention ï¬ow for machine comprehension. In ICLR.
Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with dense-sparse phrase index. In ACL.
Alon Talmor and Jonathan Berant. 2019. Multiqa: An empirical investigation of generalization and transfer in reading comprehension. In ACL.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Har- ris, Alessandro Sordoni, Philip Bachman, and Ka- heer Suleman. 2017. Newsqa: A machine compre- In Proceedings of the 2nd Work- hension dataset. shop on Representation Learning for NLP.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In NIPS.
Shuohang Wang and Jing Jiang. 2016. Machine com- prehension using match-lstm and answer pointer. arXiv preprint arXiv:1608.07905.
Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang, and Ming Zhou. 2017. Gated self-matching net- works for reading comprehension and question an- swering. In ACL.
Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. TACL.
Caiming Xiong, Victor Zhong, and Richard Socher. 2017. Dcn+: Mixed objective and deep residual coattention for question answering. In ICLR.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- arXiv preprint ing for language understanding. arXiv:1906.08237.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D Manning. 2018. Hotpotqa: A dataset for di- verse, explainable multi-hop question answering. In EMNLP.
Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V Le. 2018. Qanet: Combining local convolution with global self-attention for reading comprehension. In ICLR.
Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and yang: Balancing and answering binary visual questions. In CVPR.
Zhi-Hua Zhou and Xu-Ying Liu. 2006. On multi-class cost-sensitive learning. In AAAI.
# A Implementation Details
Details of Training For all experiments, we use uncased BERT-base and cased XLNet-base. We modify the open-sourced Pytorch implementation of models.6 BiDAF is trained with the batch size of 64 for 30 epochs and BERT and XLNet are trained for 2 epochs with batch sizes 12 and 10, re- spectively. The choice of hyperparameters mainly comes from the limitation of our computational re- sources and mostly follows the default setting used in their original works. Note that our de-biasing methods do not require additional hyperparameters. For all three models, the number of parame- ters remains the same as default settings with bias product and increases by a single linear layer with learned-mixin. We trained models on a single Ti- tan X GPU. The average training time of the bias ensemble method is similar to the original models.
# B Examples of Answer Prior
To provide a better understanding of our methods, Figure B.1 shows examples of answer priors, which are used as bias models. See section 3 for detail.
# C Visualization of Position Bias
In Figure C.1, we plot the preserved amount of word information in the middle layers of BERT. Figure C.2 shows the effect of applying the de- biasing methods in each layer of BERT. See Sec- tion 2 and 4.1 for more detail. We plot the results of layers 1, 4, 7, 10, and 11.
6https://github.com/allenai/allennlp, https://github.com/ huggingface/transformers
# Word-Level Answer Prior Distribution
Dtrain : [0-15, 0.10, 0.12, 0.10, 0.05, 0.08, 0.05, 0.03, 0.02, 0.04, ...] Din: (0-30, 0.20, 0.20, 0.15, 0.09, 0.01, 0.01, 0.004, 0.002, 0.001, ...]
# Sentence-Level Answer Prior Distribution
Drain : (0.80, 0.20] Dirin : [1,0]
# Sentence 1
# Sentence 2
Answer Prior Dataset Word1 Word2 Word3 Word4 . WordS Word6 Word7 Word& . Derain 0.15 0.10 0.12 0.10 0.05 0.08 0.05 0.03 0.02 0.04 Word Level Disin 0.30 0.20 0.20 0.15 0.09 0.01 0.01 0.004 0.002 0.001 Sentence Dtrain 0.8 08 08 0.8 08 0.2 0.2 0.2 0.2 0.2 Level DES, 1 1 1 1 1 0 0 0 0 0
Figure B.1: Example of three types of answer priors, word-level answer prior (Word-Level), sentence-level answer prior (Sentence-Level) and sentence-level answer prior on Dk=1
â
# PRE
â
# oric
â
# First
530.70 Eos. § 0.66. & o.6a (@) Layer 1 3 30 160 150 250 350 30 350 Passage word position â pre â orig â First 4050 Zoas £0.40 g 0-35) (b) Layer 4 5 30 160 150 250 250 350 350 Passage word position â pre â orig. â First (c) Layer 7 3 Eg 160 150 250 350 300 350 Passage word position â Pe â onic â First 0.30. 0.051 (4) Layer 10 3 3 160 150 250 250 360 350 Passage word position â pre â oris â Finst £ o20 8 S 0.05 0.00 (â¬) Layer 12 5 30 160 150 250 250 350 350
# Passage word position
Figure C.1: Visualization of each layer of BERT trained on SQuADtrain (ORIG), SQuADk=1 without ï¬ne-tuning (PRE). As the input passes each layer, position bias becomes more problematic.
â NONE ââ PRODUCT ââ MIXIN Cosine similarity 0.64, (a) Layer 1 0 50 100 150 200 250 300 350 Passage word position â NONE -ââ PRODUCT -ââ MIXIN S S b in wn i=) Cosine similarity o + o 0.35 (b) Layer 4 0 50 100 150 200 250 300 350 Passage word position â NONE ââ PRODUCT ââ MIXIN (c) Layer 7 0 50 100 150 200 250 300 350 Passage word position â NONE ââ PRODUCT ââ MIXIN 0.175! 20-150) 80.125) âE F 0.100: ¢ £0,075 fo} © 0.050: _ (d) Layer 10 0 50 100 150 200 250 300 350 Passage word position 0.025 â NONE ââ PRODUCT ââ MIXIN 0.150; 20.125: i = 0-100) a gy 0.075: £ % 9.050: U 0.025) (e) Layer 11 0 50 100 150 200 250 300 350 Passage word position
Figure C.2: Visualization of each layer of de-biased BERT. BERT trained on SQuADk=1 train without any de-biasing methods (NONE), with sentence-level prior bias product (PRODUCT), with learned-mixin (MIXIN). MIXIN pre- serves consistent information compared with NONE and prevents the bias propagation. | {
"id": "1608.07905"
} |
2004.14601 | Learning Music Helps You Read: Using Transfer to Study Linguistic Structure in Language Models | We propose transfer learning as a method for analyzing the encoding of
grammatical structure in neural language models. We train LSTMs on
non-linguistic data and evaluate their performance on natural language to
assess which kinds of data induce generalizable structural features that LSTMs
can use for natural language. We find that training on non-linguistic data with
latent structure (MIDI music or Java code) improves test performance on natural
language, despite no overlap in surface form or vocabulary. To pinpoint the
kinds of abstract structure that models may be encoding to lead to this
improvement, we run similar experiments with two artificial parentheses
languages: one which has a hierarchical recursive structure, and a control
which has paired tokens but no recursion. Surprisingly, training a model on
either of these artificial languages leads to the same substantial gains when
testing on natural language. Further experiments on transfer between natural
languages controlling for vocabulary overlap show that zero-shot performance on
a test language is highly correlated with typological syntactic similarity to
the training language, suggesting that representations induced by pre-training
correspond to the cross-linguistic syntactic properties. Our results provide
insights into the ways that neural models represent abstract syntactic
structure, and also about the kind of structural inductive biases which allow
for natural language acquisition. | http://arxiv.org/pdf/2004.14601 | Isabel Papadimitriou, Dan Jurafsky | cs.CL | EMNLP 2020 | null | cs.CL | 20200430 | 20201030 | 0 2 0 2
t c O 0 3 ] L C . s c [
3 v 1 0 6 4 1 . 4 0 0 2 : v i X r a
# Learning Music Helps You Read: Using Transfer to Study Linguistic Structure in Language Models
# Isabel Papadimitriou Stanford University [email protected]
# Dan Jurafsky Stanford University [email protected]
# Abstract
We propose transfer learning as a method for analyzing the encoding of grammatical struc- ture in neural language models. We train LSTMs on non-linguistic data and evaluate their performance on natural language to as- sess which kinds of data induce generalizable structural features that LSTMs can use for nat- ural language. We ï¬nd that training on non- linguistic data with latent structure (MIDI mu- sic or Java code) improves test performance on natural language, despite no overlap in surface form or vocabulary. To pinpoint the kinds of abstract structure that models may be encod- ing to lead to this improvement, we run simi- lar experiments with two artiï¬cial parentheses languages: one which has a hierarchical recur- sive structure, and a control which has paired tokens but no recursion. Surprisingly, training a model on either of these artiï¬cial languages leads to the same substantial gains when test- ing on natural language. Further experiments on transfer between natural languages control- ling for vocabulary overlap show that zero- shot performance on a test language is highly correlated with typological syntactic similar- ity to the training language, suggesting that representations induced by pre-training corre- spond to the cross-linguistic syntactic proper- ties. Our results provide insights into the ways that neural models represent abstract syntactic structure, and also about the kind of structural inductive biases which allow for natural lan- guage acquisition. 1
(Lower is better) Ppl when tested on Spanish (L2) oO © @ S 2 & XY & S 42 & ws ws os Ra S wg ry s & s erN ro) & _ (L1)
Pretraining Language (L1)
Figure 1: We ï¬nd that LSTM LMs can utilize vari- ous types of non-linguistic structure to help learn to model human language, and that nested hierarchical structure does not lead to more expressive encodings than ï¬at, head-dependency pair structure. We also ï¬nd that LSTM LMs learn representations that corre- late with typological syntactic feature distance, allow- ing them to transfer more effectively from languages which are grammatically similar.
et al., 2018a; Dalvi et al., 2019; Hewitt and Man- ning, 2019; Clark et al., 2019), or fed them curated inputs that depend on complex syntax (Linzen et al., 2016; Gulordava et al., 2018; Talmor et al., 2019; McCoy et al., 2020), in order to uncover latent syntactic awareness.
# Introduction
We propose a different approach: we measure the structural awareness of a language model by studying how much this structure acts as an induc- tive bias to improve learning when we transfer from one language or symbolic system to another.
Understanding how neural language models learn and represent syntactic structure is an important an- alytic question for NLP. Recent work has directly probed the internal activations of models (Conneau
the corpora and run our experiments at https://github.com/toizzy/ tilt-transfer
We train LSTM models on data with varying de- grees of language-like structure (music, Java code, nested symbols), and then evaluate their perfor- mance on natural language. Before evaluation, we freeze the LSTM parameters and ï¬ne-tune the word embeddings on the evaluation language. This lets us see if the training data induces language-like
® ® ® ® Train on L1 Freeze LSTM Finetune word Test on L2 parameters embeddings on L2 [un ]} LSTM_ |e Music > ]) LSTM | [t= +> âspanish ><>! LSTM in is by») Paren- 7 7 is Spanish {tin} LSTM |( theses }->( tn] LSTM |[tin +» Spanish ><> LSTM es tr ]| LSTM |fae (sta =e
Figure 2: Diagram illustrating our training procedure: k models are trained on k L1 languages, and then their LSTM weights are frozen while their linear layers are ï¬netuned on a common L2 language (in our case, we always use Spanish as the L2). We can then compare their performance on the common L2.
structure in the recurrent parameters of LSTMsâ despite removing vocabulary-level confounders. By assessing if representations are useful across languages, we examine the generalizable represen- tations of grammar that LSTMs encode. We call this new method the Test for Inductive Bias via Language Model Transfer (TILT).
representations of typologically sensible properties rather than relying on ad-hoc or non-natural repre- sentations. For this result we draw on recent inter- lingual work such as Artetxe et al. (2020), Ponti et al. (2019), and Conneau et al. (2018b), extending it to use typological distance to turn these observa- tions into quantitative probes.
Firstly, we examine the transfer of abstract struc- tural features from languages that are very different on the surface from human language. We ï¬nd that pretraining an LSTM on music data2 or Java code greatly improves transfer to human language over pretraining on structureless random baseline data. To test if the gain in performance is due to the LSTM utilizing the recursive nature of music and code, we train models on an artiï¬cial language with recursion (hierarchically nested symbols) and ob- serve that they also perform well when evaluated on human language. However, we also surprisingly ï¬nd that recursion is a sufï¬cient, but not necessary condition for generalizable, language-like grammar induction. We observe similar gains when pretrain- ing on a language of matching pairs that do not nest hierarchically, showcasing the importance of non-hierarchical head-dependent-type relations in LSTM language processing.
Lastly, in transfer experiments between different human languages, we ï¬nd that transfer is better between languages that are syntactically typolog- ically similar, even with no vocabulary overlap. This suggests that models have the ability to form
2We use the MAESTRO music dataset, which utilizes an exact symbolic representation of music (like a music score) that is sequentialized for sequence modelling
The TILT method allows us to ask a complemen- tary set of questions to those answered by current analysis methods. TILTs demonstrate the abstract structural notions that LSTMs can learn, rather than probing for the manifestation of a particular known structure, as in most current methods. By exam- ining the pretraining structures that give LSTMs a better ability to model language, we also con- tribute to the more general cognitive question of what structural inductive biases a learner needs to be able to easily acquire human language.
# 2 Architecture and Training
Our methodology consists of training LSTM lan- guage models on k different ï¬rst languages (L1s) which include natural languages, artiï¬cial lan- guages, and non-linguistic symbol systems, and testing the performance of these models on a com- mon second (L2) language. In our case, we used Spanish as the common L2. Before testing on the L2 test set, we ï¬ne-tune the linear embedding layer of the models on the L2 training set, while keeping the LSTM weights frozen. This aligns the vocabu- lary of each model to the new language, but does not let it learn any structural information about the L2 language. Though word embeddings do contain some grammatical information like part of
speech, they do not contain information about how to connect tokens to each other â that information is only captured in the LSTM. Figure 2 illustrates our training process. 3
We vary the L1 languages and maintain a com- mon L2 (instead of the other way around) in order to have a common basis for comparison: all of the models are tested on the same L2 test set, and there- fore we can compare the perplexity scores. We run n = 5 trials of every experiment with different random seeds. Any high-resource human language would have provided a good common L2, and Span- ish works well for our human languages experi- ments due to the fact that many higher-resource languages fall on a smooth gradation of typological distance from it (see Table 1).
We use the AWD-LM model (Merity et al., 2018) with the default parameters of 3 LSTM layers, 300- dimensional word embeddings, a hidden size of 1,150 per layer, dropout of 0.65 for the word em- bedding matrices and dropout of 0.3 for the LSTM parameters. We used SGD and trained to conver- gence, starting the learning rate at the default of 30 and reducing it at loss plateau 5 times.
Much of the work on multilingual transfer learn- ing has speculated that successes in the ï¬eld may be due to vocabulary overlap (see for example Wu and Dredze (2019)). Since our work focuses mostly on syntax, we wanted to remove this possibility. As such, we shufï¬e each word-to-index mapping to use disjoint vocabularies for all languages: the En- glish word âChileâ and the Spanish word âChileâ would map to different integers. This addresses the confound of vocabulary overlap, as all language pairs have zero words in common from the point of view of the model.
Since the vocabularies are totally separated be- tween languages, we align the vocabularies for all L1-L2 pairs by ï¬netuning the word embeddings of all the pretrained models on the Spanish (L2) training data, keeping the LSTM weights frozen. By doing this, we remove the confound that would arise should one languageâs vocabulary randomly happen to be more aligned with Spanish than an- otherâs. These controls ensure that lexical features, whether they be shared vocabulary or alignment of randomly aligned indices, do not interfere with the experimental results which are meant to compare higher-level syntactic awareness.
3All pretraining jobs took less than 2 days to run on one GPU, all ï¬netuning jobs took less than 1 day to run on one GPU.
# 3 Experiment 1: Random Baselines
We run our method on a random baseline L1: a cor- pus where words are sampled uniformly at random. This gives us a baseline for how much information we gain ï¬netuning the word embeddings to the L2, when there has not been any structurally biasing input to the LSTM from the L1.
We also examine the importance of vocabulary distribution by training on a random corpus that is sampled from a Zipï¬an distribution. Human languages are surprisingly consistent in sharing a roughly Zipï¬an vocabulary distribution, and we test how pretraining on this distribution affects the ability to model human language. 4
# 3.1 Data
Our random corpora are sampled from the Span- ish vocabulary, since Spanish is the common L2 language across all experiments. Words are sam- pled uniformly for the Uniform Random corpus, and drawn from the empirical Spanish unigram dis- tribution (as calculated from our Spanish training corpus) for the Zipï¬an Random corpus. Illustrative examples from all of our corpora can be found in Figure 3. The random corpora are controlled to 100 million tokens in length.
# 3.2 Results
When tested on Spanish, the average perplexity is 513.66 for models trained on the Random Uniform corpus and 493.15 for those trained on the Random Zipï¬an corpus, as shown in Figure 4. These per- plexity values are both smaller than the vocabulary size, which indicates that the word embedding ï¬ne- tuning captures information about the test language even when the LSTM has not been trained on any useful data.
The models trained on the Zipï¬an Random cor- pus are signiï¬cantly better than those trained on the Uniform corpus (p << 0.05, Welchâs t-test over n = 5 trials). However, even though training on a Zipï¬an corpus provides gains when compared to training on uniformly random data, in absolute terms performance is very low. This indicates that, without higher-level language-like features, there is very little that an LSTM can extract from properties of the vocabulary distribution alone.
4See Piantadosi (2014) for a review of cognitive, commu- nication and memory-based theories seeking to explain the ubiquity of power law distributions in language.
# Random
Uniform: marroguin jemer pertenecer osasuna formaron citoesqueleto relativismo Zipf: en con conocidas y en los victoriano como trabajar (unk) monte * en juegos dias en el The random corpora are sampled randomly from the Spanish vocabulary. There is no underlying structure of any kind that links words with each other. All words are equally likely to be sampled in the Uniform corpus, while common words are more likely in the Zipfian corpus. Music The music data is encoded from classical piano performances according to the MAESTRO standard. Music is structured on many levels. The red arrow in the example illustrates how, on a small timescale, each note is linked to its corresponding note when a motif is repeated but modulated down a whole-step. if (coordFactor == return sumExpl 1.0f) The code corpus is composed of Java code. The above snippet demonstrates some kinds of structure that are present in code: brackets are linked to their pairs, el se statements are linked else { to an if statement, and coreference of variable names is ; result = sum * coordFactor unambiguous. Parentheses . . . . Our artificial corpora consist of pairs of matching integers. In Nesting: the Nesting Parentheses corpus, integer pairs nest lant aA = A Van 0 29 29005 5 0 1016 1016 9 8 8 28 28 9 Flat: (Rh TN TIN A 21 13 21 6294 13 6294 5 5471 5 32 32 S471 hierarchically and so the arcs do not cross. In the Flat Parentheses corpus, each integer pair is placed independently of all the others, and so the arcs can cross multiple times. (There is a one-to-one mapping between Spanish words and integers and so these integers are sampled from the same Spanish vocabulary distribution as the Random Zipfian corpus. We visualize these corpora here with integers and the Random corpora with words for simplicity).
Figure 3: Examples illustrating the content of our non-linguistic corpora for Experiments 1-3. All examples are taken from the corpora.
The Zipï¬an Random baseline is controlled for vocabulary distribution: if an experiment yields better results than the Zipï¬an Random baseline, we cannot attribute its success only to lexical-level sim- ilarity to the L2. Therefore, models that are more successful than the Zipï¬an baseline at transfer to human language would have useful, generalizable syntactic information about the structures that link tokens.
# 4 Experiment 2: Non-linguistic structure
the abstract structural features that these corpora share with natural language in a generalizable way thatâs usable to model human language?
# 4.1 Data
For our music data we use the MAESTRO dataset of Hawthorne et al. (2018). The MAESTRO dataset embeds MIDI ï¬les of many parallel notes into a linear format suitable for sequence modelling, with- out losing musical information. The ï¬nal corpus has a vocabulary of 310 tokens, and encodes over 172 hours of classical piano performances. 6
In this experiment, we test the performance of LSTMs on Spanish when they have been trained on music and on code data. While music data es- pecially is very different from human language on the surface level, we know that music and code both contain syntactic elements that are similar to human language.5 By comparing performance to our random baselines, we ask: can LSTMs encode
5See for example Lerdahl and Jackendoff (1996) for gram- matical structure in music.
For programming code data, we used the Habeas corpus released by Movshovitz-Attias and Cohen (2013), of tokenized and labelled Java code. 7 We took out every token that was labelled as a com- ment so as to not contaminate the code corpus with
# 6The MAESTRO dataset
6The MAESTRO dataset is available at https://
# magenta.tensorflow.org/datasets/maestro corpus
# 7The
7The Habeas is available https://github.com/habeascorpus/ habeascorpus-data-withComments at
Zero-shot ppl on Spanish (L2) (Lower is better) BPeENNWWEo $3e¢see3ee8 * Ss Sf SS &* £ Se SN SF eS S RS) x s & ss xâ Cy w (L1)
# Pretraining Language
Figure 4: Results of Experiments 1 through 3, train- ing on non-linguistic corpora. Error bars on all bars in- dicate a 95% t-test conï¬dence interval over 5 restarts with different random seeds. All structured data is much better to train on than random data, including music which has a totally divergent vocabulary surface form from the rest. The two parentheses corpora result in equivalent perplexities, even though one has a hierar- chical underlying structure and the other does not.
# natural language.
The music corpus is 23 million tokens in length and the code corpus is 9.5 million. We cannot ef- fectively control the lengths of these corpora to be the same as all of the others, since there is no con- trolled notion of what one token means in terms of information. However, we only compare these re- sults to the random baseline, which we have trained on 100 million tokens â if the LSTMs trained on these corpora are under-speciï¬ed compared to the baseline, this would only strengthen our results.
# 4.2 Results
Our results show that language models pretrained on music are far better at modelling Spanish than those pretrained on random data. As shown in ï¬gure 4, LSTMs trained on music data have an av- erage performance of 256.15 ppl on Spanish, com- pared with 493.15 when training on the Zipï¬an random corpus. This discrepancy suggests that the model, when training on music, creates represen- tations of the relationships between tokens which are generalizable and can apply to Spanish.
The music corpus is markedly different from the Spanish corpus by most measures. Most saliently, MAESTRO uses a vocabulary of just 310 tokens to encode various aspects of music like volume and note co-occurrence.8 This is in contrast to
8For consistency, the model still has a word embedding
the Zipï¬an Random corpus, which has the same surface-level vocabulary and distribution as Span- ish, yet models trained on it perform on average 237 ppl worse compared to those trained on the music corpus. Since the surface forms between music and language are so different, the difference in performance cannot be based on surface-level heuristics, and our results suggest the presence of generalizable, structurally-informed representa- tions in LSTM language models.
We also show that models trained on Java code can transfer this knowledge to a human L2 bet- ter than the random baseline. Syntactic properties of code such as recursion are similar to natural language, though code is constructed to be unam- biguously parsed and lacks a lot of the subtlety and ambiguity that characterizes natural language. Models trained on code have an average perplexity of 139.10 on the Spanish test set. The large discrep- ancy between this performance and the baseline indicates that LSTMs trained on code capture the syntactic commonalities between code and natural language in a manner that is usable for modelling natural language.
Our results on non-linguistic data suggest that LSTMs trained on structured data extract repre- sentations which can be used to model human lan- guages. The non-linguistic nature of these data suggests that it is something structural about the music and Java code that is helping in the zero-shot task. However, there is a multitude of structural interpretations of music, and it is not clear what kinds of structure the LSTM encodes from music. In the next experiment, we create simple artiï¬cial corpora with known underlying structures in order to test how the LMs can represent and utilize these structures.
# 5 Experiment 3: Recursive Structure
In this experiment, we isolate and assess possible structural features of music and code that may ex- plain the results of Experiment 2. The most widely- known structural hypothesis is the claim of Hauser et al. (2002) that the narrow language faculty in humans (the inductive bias in the mind/brain that allows humans to acquire and develop language) can be reduced to just recursion. Given the promi- nence of such theories, it is natural to ask: is it the
matrix of 50,000 rows, but during training only ever sees words 1-310, meaning that much of the word embedding space has never been seen by the LSTM part of the model.
underlying recursive nature of music and code data that causes the gains that we observe in Experiment 2?
To test this possibility, we create a simple re- cursive corpus: a Nesting Parentheses corpus of hierarchically nesting matching symbols, and run the same experimental setup as we did for Experi- ments 1 and 2 9. We ï¬nd that plain recursion, even when the corpus has no other structural subtleties, is indeed a sufï¬cient condition for inducing the kinds of structural transfer we observed in Experi- ment 2.
Recursion is a sufï¬cient quality, but is it the only explanation for our results? We also create a con- trol corpus: a Flat Parentheses corpus, which has similar pairs of matching parentheses, but which do not nest hierarchically and projectively (the dif- ference between the two corpora is visually illus- trated in Figure 3). We surprisingly ï¬nd that this non-recursive corpus induces the same amount of structural transfer as the recursive nesting parenthe- ses, which emphasizes the importance of pairing, head-dependency type structure in the linguistic structural embeddings of LSTMs.
# 5.1 Data
The vocabulary for these corpora are the integers 0-50,000, where each number is a parenthesis to- ken, and that token âclosesâ when the same integer appears a second time. We draw the opening tokens from the empirical Spanish unigram distribution (mapping each Spanish word to an integer), mean- ing that these corpora have a similar vocabulary distribution, albeit a much simpler non-linguistic structure, to the L2. Both of the corpora are 100 million tokens long, like the random and the natural language corpora.
We create the Nesting Parentheses corpus by fol- lowing a simple stack-based grammar. At timestep t, we ï¬ip a coin to decide whether to open a new parenthesis (with probability 0.4) or close the top parenthesis on the stack (with probability 0.6).10 If we are opening a new parenthesis, we sample an in- teger xopen from the Spanish unigram distribution, write the integer xopen at the corpus position t, and push xopen onto the stack of open parentheses. If
9Though these corpora do not strictly use parentheses to- kens, we refer to both of these as parentheses corpora, drawing our metaphor from the wide variety of studies such as Karpa- thy et al. (2016) examining nested parentheses.
10P (open) has to be strictly less than 0.5, or else the tree
depth is expected to grow inï¬nitely.
we are closing a parenthesis, we pop the top integer from the stack, xclose, and write xclose at corpus position t.
The Flat Parentheses corpus is made up of pairs of parentheses that do not nest. At timestep t, we sample an integer x from the empirical Spanish unigram distribution, and a distance d from the empirical distribution of dependency lengths (cal- culated from the Spanish Universal Dependencies treebank (McDonald et al., 2013)). Then, we write x at position t and at position t + d. This creates pairs of matching parentheses which are not in- ï¬uenced by any other token in determining when they close. Note that this corpus is very similar to the Random Zipf corpus, except that each sampled token is placed twice instead of once.
# 5.2 Results
LSTMs trained on both parentheses corpora are able to model human language far better than mod- els trained on the random corpora, indicating that the isolated forms of grammar-like structure in these corpora are useful for modelling human lan- guage. Surprisingly, performance is the same for a model pretrained on the Nesting Parentheses and the Flat Parentheses corpus. This suggests that it is not necessarily hierarchical encodings which LSTMs use to model human language, and that other forms of structure such as ï¬at head-head de- pendencies may be just as important (de Marneffe and Nivre, 2019).
The Nesting Parentheses corpus exhibits hierar- chical structure while not having any of the irregu- larities and subtleties of human language or music. Despite the simplicity of the grammar, our results indicate that the presence of this hierarchical struc- ture is very helpful for an LSTM attempting to model Spanish. Our models trained on the Nesting Parentheses corpus have an average perplexity of 170.98 when tested on the Spanish corpus. This is 322 perplexity points better than the baseline models trained on the Zipf Random corpus, which has the same vocabulary distribution (Figure 4).
Models trained on the Flat Parentheses cor- pus are equally effective when tested on Spanish, achieving an average perplexity of 170.03. These results are surprising, especially given that the Flat Parentheses corpus is so similar to the Random Zipf corpus â the only difference being that inte- gers are placed in pairs not one by one â and yet performs better by an average of 323 perplexity
Language WALS-syntax distance from Spanish (out of a max of 49 features) Spanish (es) Italian (it) Portuguese (pt) English (en) Romanian (ro) Russian (ru) German (de) Finnish (ï¬) Basque (eu) Korean (ko) Turkish (tr) Japanese (ja) 0 0 3 4 5 9 10 13 15 18 23 23
Table 1: WALS-syntax distance between Spanish and L1s
points. This suggests that representing relation- ships between pairs of tokens is a key element that makes syntactic representations of language suc- cessful in LSTMs.
The Flat Parentheses corpus has structure in that each token is placed in relation to one other token, but just one other token. To model this successfully a model would have to have some ability to look back at previous tokens and determine which ones would likely have their match appear next. Our results suggest that this kind of ability is just as useful as potentially being able to model a simple stack-based grammar.
# 6 Experiment 4: Human Languages
To further analyze what kinds of generalizable structure LSTMs can infer, we run experiments in transferring zero-shot between human languages. We ask: can LSTMs infer and use ï¬ne-grained syntactic similarities between typologically simi- lar languages? Previous work (Zoph et al., 2016; Artetxe et al., 2020) indicates that transfer is more successful between related languages. We control for vocabulary overlap, and use typological syntac- tic difference as a quantitative probe to ask: are ï¬ne-grained syntactic similarities encoded in gen- eralizable, transferrable ways? To answer this ques- tion, we investigate the extent to which ï¬ne-grained differences in syntactic structure cause different zero-shot transfer results.
# 6.1 Data
We created our language corpora from Wikipedia, which offers both wide language variation as well as a generally consistent tone and subject domain. We used the gensim wikicorpus library to strip
~ Indo-European Languages Non Indo-European 130 N 2 = 120 ko tr ma) | | CT io fi gg | j ja S YD 100 rude co I on oy | OoG enro rd 1 oS 2 | £3 o~ 704. uo it £ = 60 pt ao ° a 50 0 10 15 20 5 L1 WALS-syntax distance from Spanish
Figure 5: Results of Experiment 4. Transfer is better between typologically similar languages, even when vocabularies are disjoint. Perplexity on Spanish test data plotted against the WALS-syntax distance of each modelâs L1 to Spanish. The relationship is almost lin- ear for Indo-European languages, and then reaches a ceiling. Error bars show 95% CIs for n = 5 trials with different random seeds. These results demonstrate how LSTMs can transfer knowledge more easily to lan- guages that share structural features with the L1, and that this correlation is robust to multiple trials. The orange line represents the oracle perplexity of train- ing all parameters to convergence on the L2 train data. Romance languages are in red, other Indo-European languages are in purple, and non-Indo-European lan- guages are blue.
Wikipedia formatting, and the stanfordnlp Python library (Qi et al., 2018) to tokenize the corpus. We run experiments on data from 12 human languages, all of which have Wikipedias of over 100,000 arti- cles: Spanish, Portuguese, Italian, Romanian, En- glish, Russian, German, Finnish, Basque, Korean, Turkish and Japanese. All of the training corpora are 100 million tokens in length. 11
For our typological data, we use the World At- las of Linguistic Structure, using the features that relate to syntax (WALS-syntax features). Exam- ples of syntactic features in WALS include ques- tions such as does a language have Subject-Verb- Object order, or does a degree word (like âveryâ) come before or after the adjective. We accessed the WALS data using the lang2vec package (Lit- tell et al., 2017). The quantity we are interested in extracting from the WALS data is the typological distance between the L2 (Spanish) and all of the
11The code for recreating our corpora from Wikipedia dumps is available at https://github.com/toizzy/ wiki-corpus-creator
L1 languages mentioned above. Not every feature is reported for every language, so we calculate the WALS distance by taking into account only the 49 (syntactic) features that are reported for all our chosen languages and count the number of entries that are different (see Table 1). Since they are only based on 49 features, these distances do not provide a perfectly accurate distance metric. Though we cannot use it for ï¬ne-grained analysis, correlation with this distance metric would imply correlation with syntactic distance.
# 6.2 Results
Our experiments present a strong correlation be- tween the ability to transfer from an L1 language to Spanish and the WALS-syntax distance between those two languages, as shown in Figure 5(a). In the case of Indo-European languages the relation- ship is largely linear with a Pearson R2 coefï¬cient of 0.83. For languages not in the Indo-European language family, transfer performance appears to reach a noisy ceiling, and Pearsonâs R2 = 0.78 when taking into account all languages.12
Our previous experiments show that LSTMs can encode and generalize structural features from data that is structured, both in recursive and in non- hierarchical fashion. This experiment provides a more ï¬ne-grained analysis using using natural lan- guage to show that the syntax induced by LSTMs is generalizable to other languages in a typologi- cally sensible fashion, even when we do not let the model take advantage of vocabulary overlap. How- ever, after a certain threshold, the model is unable to take advantage of ï¬ne-grained similarities and performance on distant languages reaches a ceiling. It should be noted that all of the models trained on natural language, even the most distant, per- form far better than non-linguistic data, indicating that LSTMs are able to extract universal syntactic information from all natural language L1s that is applicable to Spanish.
# 7 Discussion
In this work we propose the Test for Inductive bias via Language model Transfer (TILT), a novel an- alytic method for neural language models which tests the ability of a model to generalize and use
12We veriï¬ed that our results also stand when calculating correlation coefï¬cients using log perplexity, which yielded similar values: R2 of 0.79 and 0.73 for Indo-European and all languages respectively.
structural knowledge. We pretrain LSTMs on struc- tured data, and then use the frozen LSTM weights to model human language. In doing so, we treat the frozen LSTM weights as the only structural faculty available to a human language model, and assess if the induced structure is general enough to be used to model human language.
Our experiments are cross-lingual and cross- modal in nature, not searching for representations of high-level features in one language, but for rep- resentations that encode general ideas of structure. While the majority of past work analyzing the struc- tural abilities of neural models looks at a modelâs treatment of structural features that are realized in speciï¬c input sentences, our method compares the encoding and transfer of general grammatical features of different languages. By using TILTs, we do not have to identify a structural feature of interest and investigate if it is being encoded, but instead asses if generalizable abstract structures are encoded in one language by examining if they can be used to model human language. Our work thus avoids known issues that have been pointed out with analytic methods like probing (Voita and Titov, 2020; Pimentel et al., 2020; Hewitt and Liang, 2019).
We run experiments on natural languages, arti- ï¬cial languages, and non-linguistic corpora. Our non-linguistic and artiï¬cial language experiments suggest three facets of the structural encoding abil- ity of LSTM LMs. First, that vocabulary distribu- tion has a very minor effect for modelling human language compared to structural similarity. Second, that models can encode useful language modelling information from the latent structure inherent in non-linguistic structured data, even if the surface forms are vastly differing. Last, that encodings derived from hierarchically structured tokens are equally useful for modelling human language as those derived from texts made up of pairs of to- kens that are linked but non-hierarchical. Run- ning experiments on a range of human languages, we conclude that the internal linguistic representa- tion of LSTM LMs allows them to take advantage of structural similarities between languages even when unaided by lexical overlap.
Our results on the parentheses corpora do not necessarily provide proof that the LSTMs trained on the Nesting Parentheses corpus arenât encoding and utilizing hierarchical structure. In fact, previ- ous research shows that LSTMs are able to suc-
cessfully model stack-based hierarchical languages (Suzgun et al., 2019b; Yu et al., 2019; Suzgun et al., 2019a). What our results do indicate is that, in order for LSTMs to model human language, being able to model hierarchical structure is similar in utility to having access to a non-hierarchical ability to âlook backâ at one relevant dependency. These results shine light on the importance of consider- ing other types of structural awareness that may be used by neural natural language models, even if those same models also demonstrate the ability to model pure hierarchical structure.
Our method could be used to test many other hypotheses regarding neural language models, by choosing a discerning set of pretraining languages. A ï¬rst step in future work would be to test if the results of this paper hold on Transformer architec- tures, or if instead Transformers result in differ- ent patterns of structural encoding transfer. Future work expanding on our results could focus on ab- lating speciï¬c structural features by creating hypo- thetical languages that differ in single grammatical features from the L2, in the style of Galactic Depen- dencies (Wang and Eisner, 2016), and testing the effect of structured data thatâs completely unrelated to language, such as images.
Our results also contribute to the long-running nature-nurture debate in language acquisition: whether the success of neural models implies that unbiased learners can learn natural languages with enough data, or whether human abilities to acquire language given sparse stimulus implies a strong innate human learning bias (Linzen and Baroni, 2020). The results of our parentheses experiments suggest that simple structural head-dependent bias, which need not be hierarchical, goes a long way toward making language acquisition possible for neural networks, highlighting the possibility of a less central role for recursion in language learning for both humans and machines.
# Acknowledgements
We thank Urvashi Khandelwal, Kawin Ethayarajh, Kyle Mahowald, Chris Donahue, Yiwei Luo, Alex Tamkin and Kevin Clark for helpful discussions and comments on drafts, and our anonymous reviewers for their feedback. This work was supported by an NSF Graduate Research Fellowship for IP and a SAIL-Toyota Research Award. Toyota Research In- stitute (âTRIâ) provided funds to assist the authors with their research but this article solely reï¬ects
the opinions and conclusions of its authors and not TRI or any other Toyota entity.
# References
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of mono- lingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computa- tional Linguistics, pages 4623â4637, Online. Asso- ciation for Computational Linguistics.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D Manning. 2019. What does BERT In Pro- look at? an analysis of BERTâs attention. ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276â286.
Alexis Conneau, German Kruszewski, Guillaume Lam- ple, Lo¨ıc Barrault, and Marco Baroni. 2018a. What you can cram into a single vector: Probing sen- tence embeddings for linguistic properties. Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers).
Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018b. XNLI: Evaluating cross-lingual sentence representations. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475â2485, Brussels, Belgium. Association for Computational Linguistics.
Fahim Dalvi, Nadir Durrani, Hassan Sajjad, Yonatan Belinkov, Anthony Bau, and James Glass. 2019. What is one grain of sand in the desert? analyzing In Pro- individual neurons in deep NLP models. ceedings of the AAAI Conference on Artiï¬cial Intel- ligence, volume 33, pages 6309â6317.
Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers).
Marc D. Hauser, Noam Chomsky, and W. Tecumseh Fitch. 2002. The faculty of language: What is Science, it, who has it, and how did it evolve? 298(5598):1569â1579.
Curtis Hawthorne, Andriy Stasyuk, Adam Roberts, Ian Simon, Cheng-Zhi Anna Huang, Sander Dieleman, Erich Elsen, Jesse Engel, and Douglas Eck. 2018. Enabling factorized piano music modeling and gen- eration with the maestro dataset.
John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceed- ings of the 2019 Conference on Empirical Methods
in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733â2743.
John Hewitt and Christopher D Manning. 2019. A structural probe for ï¬nding syntax in word represen- In Proceedings of the 2019 Conference of tations. the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4129â4138.
Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2016. Visualizing and understanding recurrent networks.
Fred Lerdahl and Ray S Jackendoff. 1996. A genera- tive theory of tonal music. MIT press.
Tal Linzen and Marco Baroni. 2020. Syntactic struc- ture from deep learning. Annual Review of Linguis- tics, 7.
Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics, 4:521â 535.
Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, and Lori Levin. 2017. Uriel and lang2vec: Representing languages as typologi- cal, geographical, and phylogenetic vectors. In Pro- ceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics: Volume 2, Short Papers, volume 2, pages 8â14.
Marie-Catherine de Marneffe and Joakim Nivre. 2019. Dependency grammar. Annual Review of Linguis- tics, 5:197â218.
R. Thomas McCoy, Robert Frank, and Tal Linzen. 2020. Does syntax need to grow on trees? sources of hierarchical inductive bias in sequence-to-sequence networks. Transactions of the Association for Com- putational Linguistics, 8:125â140.
Ryan McDonald, Joakim Nivre, Yvonne Quirmbach- Brundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Os- car T¨ackstr¨om, et al. 2013. Universal dependency In Proceed- annotation for multilingual parsing. ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 92â97.
Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. Regularizing and optimizing LSTM In International Conference on language models. Learning Representations.
Dana Movshovitz-Attias and William Cohen. 2013. Natural language models for predicting program- ming comments. In Proceedings of the 51st Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 35â40.
Steven T Piantadosi. 2014. Zipfâs word frequency law in natural language: A critical review and fu- ture directions. Psychonomic bulletin & review, 21(5):1112â1130.
Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. Information-theoretic probing for linguistic structure.
Edoardo Maria Ponti, Ivan Vuli´c, Ryan Cotterell, Roi Reichart, and Anna Korhonen. 2019. Towards zero- In Proceedings of the shot language modeling. 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2900â2910, Hong Kong, China. Association for Computational Linguistics.
Peng Qi, Timothy Dozat, Yuhao Zhang, and Christo- pher D. Manning. 2018. Universal dependency pars- ing from scratch. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 160â170, Brus- sels, Belgium. Association for Computational Lin- guistics.
Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, and Stuart M. Shieber. 2019a. LSTM networks can In Proceedings of the perform dynamic counting. Workshop on Deep Learning and Formal Languages: Building Bridges, volume abs/1906.03648, Florence. Association for Computational Linguistics.
Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, and Stuart M. Shieber. 2019b. Memory-augmented recurrent neural networks can learn generalized Dyck languages.
Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. oLMpics â on what lan- guage model pre-training captures.
Information- theoretic probing with minimum description length. arXiv preprint arXiv:2003.12298.
Dingquan Wang and Jason Eisner. 2016. The Galactic Dependencies treebanks: Getting more data by syn- thesizing new languages. Transactions of the Asso- ciation for Computational Linguistics, 4:491â505.
Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of bert. Proceedings of the 2019 Conference on Empir- ical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP).
Xiang Yu, Ngoc Thang Vu, and Jonas Kuhn. 2019. Learning the dyck language with attention-based seq2seq models. In ACL 2019.
Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource In Proceedings of the neural machine translation.
2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1568â1575, Austin, Texas. Association for Computational Linguistics.
# Appendix: Numerical results of experiments
For every experiment we ran ï¬ve trials with differ- ent random seeds. We list the means and standard deviations for each L1 below:
L1 Language Mean TILT Ppl Std. Dev 1.01 2.97 2.65 1.24 1.02 1.48 4.93 3.40 3.84 0.51 2.10 0.81 1.23 0.21 6.26 4.74 0.21 0.85 | {
"id": "2003.12298"
} |
2004.14546 | WT5?! Training Text-to-Text Models to Explain their Predictions | Neural networks have recently achieved human-level performance on various
challenging natural language processing (NLP) tasks, but it is notoriously
difficult to understand why a neural network produced a particular prediction.
In this paper, we leverage the text-to-text framework proposed by Raffel et
al.(2019) to train language models to output a natural text explanation
alongside their prediction. Crucially, this requires no modifications to the
loss function or training and decoding procedures -- we simply train the model
to output the explanation after generating the (natural text) prediction. We
show that this approach not only obtains state-of-the-art results on
explainability benchmarks, but also permits learning from a limited set of
labeled explanations and transferring rationalization abilities across
datasets. To facilitate reproducibility and future work, we release our code
use to train the models. | http://arxiv.org/pdf/2004.14546 | Sharan Narang, Colin Raffel, Katherine Lee, Adam Roberts, Noah Fiedel, Karishma Malkan | cs.CL, cs.LG | null | null | cs.CL | 20200430 | 20200430 | 0 2 0 2
r p A 0 3 ] L C . s c [
1 v 6 4 5 4 1 . 4 0 0 2 : v i X r a
# WT5?! Training Text-to-Text Models to Explain their Predictions
# Sharan Narangâ Adam Roberts Colin Raï¬elâ Noah Fiedel Katherine Lee Karishma Malkan
Google Research
# Abstract
Neural networks have recently achieved human-level performance on various challenging natural language processing (NLP) tasks, but it is notoriously diï¬- cult to understand why a neural network produced a particular prediction. In this paper, we leverage the text-to-text framework proposed by Raï¬el et al. (2019) to train language models to output a natural text explanation alongside their prediction. Crucially, this requires no modiï¬cations to the loss function or training and decoding procedures â we simply train the model to output the explanation after generat- ing the (natural text) prediction. We show that this approach not only obtains state-of-the-art results on âexplainabilityâ benchmarks, but also permits learning from a limited set of labeled explanations and trans- ferring rationalization abilities across datasets. To facilitate reproducibility and future work, we release our code use to train the models.1
1
# Introduction
Neural . > O network OTS (this work) 2) O Human g DS (S) (S) <x fo) Rule-based system
# Interpretability
Figure 1: Illustration of our perspective on the accu- racy and interpretability of diï¬erent models. Neural networks (blue) can attain superhuman performance, but are notoriously hard to interpret. A rule-based system (yellow) is easy to interpret but rarely performs well on diï¬cult tasks. Humans (red) are reasonably accurate and provide some degree of interpretability by being able to verbally explain their predictions. In this work, our model (green) is trained both to be highly accurate (in some cases, more accurate than a human) and provide explanations for its predictions as humans do.
Neural networks excel in a wide variety of practical settings, from computer vision to speech recognition to natural language processing (NLP) and beyond. In particular, over the past few years it has been shown that large language models pre-trained on an unla- beled text corpus can be subsequently ï¬ne-tuned to achieve superhuman performance on NLP tasks that had previously been considered diï¬cult for machines (Devlin et al., 2018; Peters et al., 2018; Howard & Ruder, 2018; Lan et al., 2019; Raï¬el et al., 2019). It has further recently been shown that all NLP tasks of interest can be cast as a âtext-to-textâ problem (Raf- fel et al., 2019), where the model is fed some text as
âEqual Contribution. Correspondence to [email protected]
input and is trained to produce target text as output. For example, sentiment analysis of a movie review might involve analyzing the input text âI went to see this movie with my husband, and we both thought the acting was terrible!â and producing the word ânega- tiveâ to denote a negative sentiment. This simple and (arguably) universal framework was shown to obtain state-of-the-art results across a variety of NLP tasks. In spite of their empirical and practical successes, it is notoriously diï¬cult to determine why a neural network has produced a given prediction. This has led to a substantial body of research that endeavors to make neural networks more âinterpretableâ, e.g. by attributing its prediction to a given part of its in- put (Baehrens et al., 2010; Sundararajan et al., 2017;
# 1https://github.com/google-research/
google-research/tree/master/wt5
1
âexplain sentiment: I went to see this movie with my husband, and we both thought the acting was terrible!" âsentiment: Despite what others say, I thought this movie was funny.â âexplain nli premise: Cardinals lost last night. hypothesis: The Saint Louis Cardinals always win." ânegative explanation: the acting was terrible." âpositiveâ âcontradiction explanation: you can't lose if you always w:
Figure 2: Diagram of our method for training a text-to-text model to explain its predictions. We train the model to generate an explanation when the text âexplainâ is prepended to the input. The model can still be trained for classiï¬cation (without an explanation) simply by omitting the âexplainâ keyword. This approach is readily applicable to sentiment analysis, natural language inference (NLI), and other text tasks.
Smilkov et al., 2017) or by designing architectures that are easier to analyze (Foerster et al., 2017; Jacobsen et al., 2018; Bahdanau et al., 2014; Raï¬el et al., 2017). However, the reliability of some of these methods has been questioned (Kindermans et al., 2019; Hooker et al., 2018; Jain & Wallace, 2019; Serrano & Smith, 2019; Pruthi et al., 2019), limiting their practical util- ity. To motivate the perspective we take in this work, note that we humans (who tend to be rather good at NLP tasks) are also âblack boxesâ in the sense that we cannot obtain a full and transparent view of our own decision-making process. Instead, we rely on humans to explain their judgements. In contrast, consider a system comprising a list of hard-coded rules (for ex- ample, âif the review contains the word âterribleâ, the review is negativeâ). In this case, it would be simple to understand the behavior and decision-making pro- cess of the system, but itâs unlikely that such a system would be very accurate (for example, the review âThis movie was anything but terrible!â suggests a positive sentiment).
various forms of cross-domain transfer. For example, we train a model to generate explanations for data from one domain and show that it can generate plau- sible explanations for out-of-domain data. From a broad view, we argue that a text-to-text model has an inherent ability to âcommunicateâ given its input and output format, and our work mainly involves training these models to communicate better. We provide a pictorial summary of our perspective on model interpretability in ï¬g. 1.
In the following section, we review the text-to-text framework and corresponding pre-trained model we use and describe our approach in more detail. Sec- tion 3 provides an overview of related work on in- terpretability, particularly for NLP models. Then, in section 4, we introduce the various datasets and evaluation procedures we use for benchmarking be- fore presenting experimental results. Finally, we con- clude with an outlook on the connection between interpretability and training models to communicate with natural language.
Given that humans and neural networks are both highly accurate and hard to interpret, we argue that neural networks should also be given the ability to explain their predictions using natural text. This idea has already motivated the development of various âexplainabilityâ datasets in the NLP literature (Zaidan & Eisner, 2008; Camburu et al., 2018; Rajani et al., 2019; DeYoung et al., 2019). Our main contribution is to show that the text-to-text framework makes it straightforward to train a model to produce an explanation. Speciï¬cally, instead of training the model to simply predict a label (e.g. ânegativeâ), we train it to predict a label and explanation (e.g. ânegative explanation: the acting was terribleâ).
# 2 Approach
Before presenting our basic methods, we ï¬rst review the text-to-text framework we use. This framework underlies the pre-trained model that we used, which is called the âText-to-Text Transfer Transformerâ (T5). Then, we describe the details of how we ï¬ne-tune T5 to produce explanations for its predictions. We call the resulting model (and our general approach) âWT5â as shorthand for âWhy, T5?â.
# 2.1 Text-to-Text Framework
In addition to producing state-of-the-art results on explainability datasets, this approach also allows for both âsemi-supervisedâ training (where explanations are only provided on a subset of the dataset) and for
A text-to-text model follows the sequence-to-sequence framework (Sutskever et al., 2014; Kalchbrenner et al., 2014) â it is fed a sequence of discrete tokens as
2
input and produces a new sequence of tokens as out- put. Speciï¬cally, the model is provided with an input sequence {x1, . . . , xT } and is trained to pro- duce an output sequence {y1, . . . , yU } by maximizing p(yi|x1, . . . , xT , y1, . . . , yiâ2, yiâ1}. At test time, the tokens are sampled from the modelâs predicted output distribution (yi â¼ p(yi| . . .)) one at a time and fed back into the model autoregressively until a special end-of-sequence token is generated, at which point the model has produced its complete prediction. For text problems, the individual sequence elements yi or xi are often characters, words, or (in our case), subword token IDs (Sennrich et al., 2015) produced by a tokenizer like SentencePiece (Kudo, 2018; Kudo & Richardson, 2018). Notably, Raï¬el et al. (2019) propose converting all text problems to the sequence- to-sequence format. For example, in a classiï¬cation task, instead of training a classiï¬cation layer to as- sign a high probability to some class ID, the model is trained to produce the actual text corresponding to the class label. Concretely, to train the model to perform sentiment analysis on our running movie re- view example, the model would be fed the sequence âsentiment: I went to see this movie with my husband, and we both thought the acting was terrible!â and would be trained to produce the literal text ânega- tiveâ. The âsentiment:â preï¬x tells the model what task it should perform, which is useful in multi-task models (Caruana, 1997; Ruder, 2017).
In Raï¬el et al. (2019), this framework was used to pre-train Transformer (Vaswani et al., 2017) models on a large collection of unlabeled text drawn from the Common Crawl web scrape. We use the result- ing pre-trained models (referred to as T5 for âText- to-Text Transfer Transformerâ) in our experiments. Pre-trained models of various sizes are available; we experiment with the âBaseâ model with about 220 million parameters and the â11Bâ model with around 11 billion parameters. Further details on these models and the pre-training procedure are available in (Raï¬el et al., 2019).
# 2.2 Generating Explanations
The text-to-text framework provides a straightforward means of training models to output an explanation alongside their prediction. We experimented with var- ious ways of modifying the input and output text to include an explanation, and settled on the following recipe: When we want the model to generate an expla- nation, we simply prepend the word âexplainâ to the input text and then append âexplanation:â followed by the explanation to the target text. In our running movie review example, this produces the input âex-
3
plain sentiment: I went to see this movie with my husband, and we both thought the acting was terrible!â with target ânegative explanation: the acting was ter- rible.â Crucially, the model can be simultaneously trained on examples with explanations (which have âexplainâ prepended to their input and âexplanation: ...â appended to their output) as well as examples with only labels (by omitting âexplainâ from the input and âexplanation: ...â from the target so that the desired output is only the label text). This allows us to explore a âsemi-supervisedâ setting where we have a dataset that is fully labeled but only a limited number of examples have explanations. A diagram of this basic approach, with examples for sentiment analysis and natural language inference (NLI) (Dagan et al., 2005; Bowman et al., 2015), is shown in ï¬g. 2.
# 2.3 Extractive Explanations
So far, we have assumed that explanations will be arbitrary text generated by our model. An alter- native way of producing explanations is to train a model to identify spans of text in the input which support its prediction. This âextractiveâ version is the setting considered by the recent ERASER bench- mark (DeYoung et al., 2019), which combines various datasets that have been annotated with extractive explanations. The use of spans makes it possible to use non-generative span-prediction models like BERT (Devlin et al., 2018). It also makes evaluation poten- tially simpler by computing the overlap between the predicted and annotated spans. In our running movie review example, the explanation text âthe acting was terribleâ appears as a span of text in the input, so this particular example is compatible with the extractive approach.
Note that forcing explanations to be extracted spans is strictly less general. Consider the task of pro- ducing explanations for the Winograd Schema Chal- lenge (WSC) (Levesque et al., 2012), where the goal is to disambiguate an ambiguous pronoun. For example, in the text âthe city government denied the protesters a permit because they feared violenceâ the pronoun âtheyâ refers to âthe city governmentâ because govern- ments sometimes fear violence from protesters and not vice-versa. This explanation for why âtheyâ refers to âthe city governmentâ does not appear anywhere in the text, suggesting that this task (and likely many others) is largely incompatible with extractive explanations. We include some extractive explanation datasets in our experiments mainly to demonstrate the ï¬exibil- ity of our approach. To train our model to generate extractive explanations, we include the spans of the input which have been annotated as an explanation
with the text âexplanation:â in the targets and train the model to generate them sequentially. Then, when the model outputs a prediction and corresponding sequence of explanations, we match each predicted explanation to a span of text in the input, thereby al- lowing straightforward evaluation using span overlap- based metrics. A potential issue arises if our model generates an explanation which does not appear in the input text. We ignore such spurious explanations, though we found this rarely happened in practice.
# 3 Related Work
Measuring and improving the interpretability of neu- ral networks is a heavily-studied area of research; a comprehensive overview is outside of the scope of this work. Instead, we refer the interested reader to the surveys provided by (Doshi-Velez & Kim, 2017; Molnar, 2019; Guidotti et al., 2018). Most work on interpretability methods focuses on models for com- puter vision applications (e.g. Xu et al. (2015); Zhang & Zhu (2018)), whereas the interpretability of NLP models is apparently less studied. A notable excep- tion is the fact that attention-based neural networks (Bahdanau et al., 2014) provide some means of inter- pretability âfor freeâ by examining the weight assigned by the neural network to diï¬erent regions in the input (Graves, 2013; Raï¬el et al., 2017; Huang et al., 2018), but this introspection method has been shown to be unreliable (Jain & Wallace, 2019; Serrano & Smith, 2019; Pruthi et al., 2019). There has separately been signiï¬cant work on better understanding the behav- ior NLP models, for example by crafting inputs that cause a misclassiï¬cation (Jia & Liang, 2017; Nie et al., 2019) or diagnosing why they sometimes generate nonsensical text (Lee et al., 2018; Belinkov & Bisk, 2017).
An early investigation into explanations for NLP datasets was performed by Zaidan & Eisner (2008), who introduced the idea of annotating spans of the input which support the label. This produced the âMovie Reviewsâ dataset that we consider in our ex- periments. The general approach of extractive expla- nation was recently surveyed and advocated for by DeYoung et al. (2019), who proposed the ERASER benchmark comprising various datasets. As discussed in section 2.3, our approach is strictly more general in that it also allows us to consider generating abstractive explanations.
Camburu et al. (2018) have the most philosophi- cally similar perspective to ours. They consider the generation of abstractive explanations by creating the e-SNLI dataset, which we consider in our experiments.
4
e-SNLI is a variant of the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015) that adds human-annotated explanations for all examples in the training, validation, and test sets. To gen- erate explanations, Camburu et al. (2018) propose model architectures that generally consist of separate components for classiï¬cation and explanation. They consider various tasks, including conditioning the pre- diction on the explanation and vice versa, as well as producing sentence embeddings. Most related to this work, they also consider the task of learning to explain with e-SNLI but generating explanations for out-of- domain examples from other natural language infer- ence tasks. The primary diï¬erences between Camburu et al. (2018) and this work are that our approach re- quires no special model architecture and that we take advantage of a pre-trained model that is already quite capable of generating natural text. These diï¬erences not only make our method simpler but also produce substantially better performance on the e-SNLI task. Rajani et al. (2019) also consider abstractive ex- planations. They introduce the CoS-E dataset, which comprises examples from the Commonsense Question Answering (CQA) dataset that have been annotated with abstractive explanations. However, their focus is mainly on using explanations to improve a modelâs predictions, and as such they propose ï¬rst training a model to generate explanations and then training a classiï¬er to produce a prediction based on the original example and the generated explanation. Interestingly, this provided a substantial performance boost on the CQA dataset. They include minimal analysis or eval- uation of the generated explanations, though they do show (through a few non-cherrypicked examples) that their model can generate explanations for datasets it was not trained on. The primary focus of our work is on generating useful explanations, so we do not experiment with feeding explanations into a model to improve its predictions.
# 4 Experiments
Having introduced our straightforward approach for generating explanations alongside predictions using the text-to-text framework, we now evaluate this idea on the various benchmark datasets described in the following subsection. In our experiments, we will fre- quently use human judgements for evaluation because free-form text is notoriously diï¬cult to automatically evaluate and some of the datasets we consider do not have ground-truth explanations. We describe both the automatic metrics for evaluation used as well as our framework for obtaining human judgements in
section 4.2. The remainder of this section is devoted to our experimental results.
# 4.1 Datasets
In our experiments, we evaluate on the following datasets: e-SNLI was proposed by Camburu et al. (2018), who annotated every example in the Stan- ford Natural Language Inference (SNLI) dataset with free-form explanations of the labels. The natural lan- guage inference task involves determining whether a premise entails (implies), contradicts, or has no rela- tionship to a hypothesis. CoS-E was introduced in (Rajani et al., 2019) and augments the Commonsense Question-Answering (CQA) with free-form explana- tions. The CQA task involves answering multiple- choice questions that ostensibly rely on commonsense reasoning or âworld knowledgeâ. Note that CoS-E also includes extractive explanations, which we do not use in this paper. Movie Reviews (Zaidan & Eisner, 2008) is a sentiment analysis dataset where the goal is to predict whether a movie review has a positive or negative sentiment. Each review is anno- tated with spans that support the positive/negative label. MultiRC (Khashabi et al., 2018b) is a reading comprehension dataset that similarly includes spans of the input document supporting the answer to a given question. Speciï¬cally, in MultiRC a model is fed not only a question about a given document but also a candidate answer that the model must then classify as correct or incorrect. We use the variants of Movie Reviews and MultiRC distributed with the ERASER benchmark (DeYoung et al., 2019).
# 4.2 Evaluation
# 4.2.1 Quantitative
All of the datasets we use involve producing a class label based on some input text â entailment, neutral, or contradiction for e-SNLI, the correct answer from a list of choices for CoS-E, positive or negative for Movie Reviews, and True or False for MultiRC. As such, for all datasets we report the classiï¬cation accuracy of our model in order to evaluate the quality of its predictions.
For abstractive explanations, Camburu et al. (2018) propose using the BLEU score (Papineni et al., 2002) to compare a predicted explanation against the ground- truth explanation from e-SNLI. Since Rajani et al. (2019) mainly consider the setting where explanations are fed as input into a classiï¬cation model, they do not propose any particular metric for evaluating the quality of generated explanations on CoS-E. As such,
5
we use the BLEU score both for e-SNLI and CoS- E. We use SacreBLEU v1.3.0 (Post, 2018) with exp smoothing and intl tokenization. Notably, many of the ground-truth explanations for CoS-E are low quality and/or nonsensical (for example, the question âLit- tle sarah didnât think that anyone should be kissing boys. She thought that boys had what?â with answer âcootiesâ was annotated with the explanation âamer- ican horror comedy ï¬lm directedâ; or the question âWhat do you ï¬ll with ink to print?â with answer âprinterâ was annotated with the explanation âhealth complicationsâ, etc.), suggesting that BLEU scores on CoS-E should be taken with a grain of salt. We discuss this issue further in section 4.4
The ERASER benchmark (DeYoung et al., 2019) suggests various metrics for measuring whether ex- tracted explanation spans match the ground-truth. The simplest and most general computes an F1 score based on which entries of the input are labeled as an explanation by the prediction and ground-truth. Speciï¬cally, DeYoung et al. (2019) ï¬rst tokenize the input text using the spacy.io tokenizer,2 and then compute true/false positives/negatives on a token-by- token basis. We use spacyâs âen core web smâ model for tokenization to compute the F1 score.
# 4.2.2 Qualitative
The BLEU and F1 scores mentioned above can only loosely measure the quality of an explanation. We ar- gue that the most reliable way of determining whether an explanation supports a prediction is using human judgement. The number and scale of our experiments necessitates the use of crowd computing, so we use the Mechanical Turk (MTurk) platform to obtain ratings of model explanations.
Since many of our raters are not language experts, we devised a simple and straightforward set of ques- tions for evaluating a given explanation. Speciï¬cally, we present a rater with the input, predicted label, and explanation and ask whether the explanation supports the label. We apply this procedure to both abstrac- tive (e-SNLI, CoS-E) and extractive (Movie Reviews, MultiRC) explanations. For extractive explanations, we present a single answer span at a time alongside the input and predicted label. Note that this will only allow us to evaluate whether a span is a true or false positive and does not provide a way of evaluating false negatives, but nevertheless provides a helpful perspec- tive on the modelâs explanation quality. We provide screenshots of our evaluation forms in appendix B.
For every dataset we study, we evaluate 100 exam- ples using MTurk with 5 independent ratings for each
2http://spacy.io
example. To ensure quality ratings, we split the 100 examples into batches of 10 and include an attention check (question for which we know the answer) in each group. If the attention check was failed or not every question was answered, we remove that batch of 10 examples from our analysis and re-rate the batch so that all examples are rated by 5 diï¬erent raters. We treat an explanation as correct if the majority of the 5 raters label it as a good explanation.
# 4.3 Training Details
We use the âBaseâ and â11Bâ conï¬gurations of T5 in our experiments. For ï¬ne-tuning, we follow the same procedure used for the downstream tasks in (Raï¬el et al., 2019): As a brief summary, models are ï¬ne- tuned trained using the AdaFactor optimizer (Shazeer & Stern, 2018) with a constant learning rate of 0.001. We use a batch size of 196,608 tokens for the Base model and 65,536 for 11B. We use a maximum input sequence length of 512 for e-SNLI, 128 for CoS-E, 1024 for MultiRC and 2048 for Movie Reviews. We apply dropout with a rate of 10% throughout ï¬ne-tuning. To obtain model predictions, we perform âgreedyâ decoding by choosing the token ID corresponding to the largest output logit. For each task, we ï¬ne-tune until overï¬tting is observed on a held-out validation set and choose the checkpoint corresponding to the highest accuracy on the validation set.
# 4.4 Results on standard benchmarks
We present WT5-Base and WT5-11Bâs performance on the standard datasets we consider in table 1. All reported results are on the test set except for CoS- E â human-annotated explanations are not available for the CQA test set, so we report validation set scores instead. Additionally, the test set for MultiRC provided in the Eraser benchmark (DeYoung et al., 2019) is the validation set from SuperGLUE (Wang et al., 2019). Therefore, the results in this paper do not match the ones reported on the SuperGLUE leader board3. To contextualize these results, we also include the following baselines:
Previous State-of-the-art (SoTA) We report the best score previously achieved on each dataset.
Human We estimated human performance on each dataset by hand-labeling examples from the validation set and measuring our accuracy and the corresponding explanation quality score (BLEU or Token F1). We also fed ground-truth labels from each dataset into our human evaluation procedure to get an idea of
3https://super.gluebenchmark.com/
6
the quality of explanations in each dataset. For e- SNLI, we use the human agreement scores reported in (Camburu et al., 2018) and (Bowman et al., 2015).
In general, we ï¬nd that WT5-11B attains the highest scores for its explanations on most of the datasets we studied. On all datasets, WT5-11Bâs explanation score is better than the score for the examples we hand-labeled. This likely does not mean that WT5-11Bâs explanations are âbetterâ, but rather that it has learned to model some of the spurious characteristics of ground-truth explanations on a given dataset. This is borne out in the human evaluation of WT5-11Bâs explanations, which produced similar scores to the ground-truth explanations on all datasets except for e-SNLI where WT5-11B achieved a 12% higher score. Separately, WT5-11B attained state-of- the-art accuracy on the e-SNLI and Movie Reviews datasets. For CoS-E and MultiRC, WT5-11B is very close to state-of-the-art accuracy to the T5-11B model which doesnât generate explanations. To summarize, our results suggest that WT5-11B is at a human or super-human level at both classifying and explaining examples from the datasets we considered. We provide some example predictions and explanations produced by WT5-11B in table 2.
In general, WT5-Base had worse accuracy and explanation quality scores than WT5-11B, but the Base model nevertheless frequently outperformed the previous state-of-the-art and, in some cases, human annotations. Surprisingly, our hand-annotated expla- nations achieved a very low BLEU score on CoS-E when evaluated against ground-truth explanations dis- tributed with the dataset. Upon inspection, we found that this was largely due to the aforementioned noisy and low-quality explanations that are distributed with CoS-E. This also likely explains why our modelâs gen- erated explanations were rated correct at a higher rate by MTurk workers than the ground truth explanations provided with CoS-E.
# 4.5 Learning from limited explanations
Our framework facilitates a natural way of learning to generate explanations when only a limited number of examples have been annotated with a ground-truth explanation. Speciï¬cally, if a training example has an annotated explanation, we prepend âexplainâ to the input and append the explanation to the target. If no explanation has been annotated, we simply train the model to generate the label alone and do not prepend âexplainâ to the input. These two cases can be seen in the top two examples in ï¬g. 2. At test time, we ask our model to generate explanations for all of its inputs by prepending âexplainâ to every example.
Table 1: Results attained by WT5 and various baselines on the datasets we study. Acc is short for accuracy, HE for Human Evaluation, and TF1 for Token F1. F1a is the F1 score over all answers used in the MultiRC dataset (Khashabi et al., 2018a). See section 4.4 for a description of baselines. Note that for the Human row, Acc and TF1 are measured on our hand-labeled examples while HE is measured on the ground-truth explanations from the dataset. We were not able to run human evaluation for past SoTA models since we do not have access to the explanations produced by those models. âAs far as we are aware, the only work which reports accuracy on the Movie Reviews dataset is (Zaidan & Eisner, 2008); (DeYoung et al., 2019) reports an F1 score of 97.4. Since the Movies Rationale dataset is reasonably class-balanced and models are performing near-perfectly, this F1 score is somewhat comparable to the accuracy scores we report. Superscripts denote results from past work: aLiu et al. (2019), bCamburu et al. (2018), cLan et al. (2019), dZaidan & Eisner (2008), eDeYoung et al. (2019), f Raï¬el et al. (2019), gBowman et al. (2015).
e-SNLI CoS-E Movie Reviews MultiRC Acc Previous SoTA 91.6a 89.0g Human 90.9 WT5-Base 92.3 WT5-11B BLEU 27.6b 22.5b 32.4 33.7 HE â 78.0 â 90.0 Acc 83.7c 80.4 59.4 82.7 BLEU â 0.51 4.63 5.17 HE â 16.0 â 30.0 Acc 92.2dâ 100.0 98.0 99.0 TF1 32.2e 29.1 32.7 31.5 HE â 99.0 â 94.0 F1a 87.6f 90.5 77.8 86.6 TF1 45.6e 51.8 69.9 76.9 HE â 51.0 â 50.0
Figure 3: Accuracy and explanation quality as a function of the number of explanations retained in the training set. Dashed lines show the performance attained by using explanations for every example in the training set. All scores are reported on the validation set.
Hopefully, this approach will produce a model that can generate plausible explanations without requiring much (potentially expensive) hand annotating.
To test this approach, we trained WT5-11B on variants of each dataset after artiï¬cially removing most of the annotated explanations (but retaining all labels) and then measured the resulting accuracy and explanation scores on the validation set. The results for e-SNLI, CoS-E, and Movie Reviews can be seen in ï¬g. 3. For e-SNLI, the accuracy stayed roughly constant but the BLEU score gradually tapered oï¬ as the number of explanations decreased. Neverthe- less, with only 100 explanations, WT5-11B attains a better BLEU score than the previous state-of-the- art (Camburu et al., 2018). On CoS-E, the accuracy and BLEU scores both stayed constant (with a mild amount of random ï¬uctuation) even as the number of explanations approached 100. We manually inspected the explanations generated by WT5-11B after it had been trained on 100 ground-truth explanations and found that they were indeed of similar quality to those generated by the model trained on all explanations.
In contrast with the results for abstractive expla- nation, we found that the explanation quality quickly degraded as the number of annotated explanations decreased for both Movie Reviews and MultiRC. The Movie Reviews dataset comprises only 1,600 train- ing examples; with 1,000 explanations the generated explanation score is reasonable but performance de- grades to near-zero as the number of explanations approaches 100. On MultiRC, we were able to achieve reasonable results with 10,000 annotated explanations (compared to about 25,000 for the full dataset) but the Token F1 score was 0 when fewer explanations were used. This suggests that training the WT5 to generate many extractive explanation spans may re- quire more supervision than training it to generate a single abstractive explanation (as is the case in e-SNLI and CoS-E).
# 4.6 Transferring across datasets
Another way to avoid having to annotate a given dataset with explanations would be to leverage a re-
7
Table 2: Non cherry-picked predictions and explanations produced by WT5-11B on the validation set of each dataset. For extractive explanation, we boldface the spans chosen by our model. We display a truncated review and passage for the examples from Movie Reviews and MultiRC (respectively) for clarity and space reasons.
# e-SNLI
Premise: A person in a blue shirt and tan shorts getting ready to roll a bowling ball down the alley. Hypothesis: A person is napping on the couch. Predicted label: contradiction Explanation: A person cannot be napping and getting ready to roll a bowling ball at the same time.
# CoS-E
Question: What can you use to store a book while traveling? Choices: library of congress, pocket, backpack, suitcase, synagogue Predicted answer: backpack Explanation: books are often found in backpacks
# Movie Reviews
Review: sylvester stallone has made some crap ï¬lms in his lifetime , but this has got to be one of the worst . a totally dull story that thinks it can use various explosions to make it interesting , â the specialist â is about as exciting as an episode of â dragnet , â and about as well acted . even some attempts at ï¬lm noir mood are destroyed by a sappy script , stupid and unlikable characters , and just plain nothingness ... Predicted label: negative
# MultiRC
Passage: Imagine you are standing in a farm ï¬eld in central Illinois . The land is so ï¬at you can see for miles and miles . On a clear day , you might see a grain silo 20 miles away . You might think to yourself , it sure is ï¬at around here ... Query: In what part of Illinois might you be able to see a grain silo that is 20 miles away ? Candidate answer: Northern Illinois Predicted label: False
lated dataset for which explanations are already avail- able. For example, we might use the e-SNLI dataset to train our model to generate explanations and then have the model produce explanations for a diï¬erent natural language inference dataset. This also can test whether the model has learned domain-agnostic expla- nation skills by evaluating performance on a dataset from a diï¬erent domain.
which did not have explanations annotated. Human raters considered 82% of explanations generated from the validation set to be correct for examples from MNLI and 94% for IMDb. WT5-11B also managed to attain reasonable classiï¬cation accuracy on each dataset (91.5% on MNLI and 97.2% on IMDb). We present an example model output for both MNLI and IMDb in table 3.
We evaluated whether WT5-11B could success- fully carry out this kind of transfer in two settings. We transferred from e-SNLI to the MNLI dataset (Williams et al., 2017) which measures natural lan- guage inference in a much wider variety of domains than SNLI. Secondly, we transferred from Movie Re- views to IMDb (Maas et al., 2011) which consists of a large collection of movie reviews from the web- site IMDb. In both cases, we combined all examples from the explanation-annotated and explanation-free datasets and sampled examples randomly from the combined dataset. For training, we proceeded in a similar fashion as the previous experiment, where we prepended explanation-annotated examples with the word âexplainâ and simply trained the model to pre- dict the label for explanation-free examples. Since the transfer datasets do not contain ground-truth expla- nations, our only option for assessing quality was to perform a human study.
To further test if our model has learnt domain ag- nostic explanations skills, we evaluated whether WT5- 11B could successfully generate explanations for any other kinds of reviews, in addition to movie reviews. To test this transfer, we used the Amazon Reviews dataset (He & McAuley, 2016; McAuley et al., 2015) which contains reviews for a wide variety of products across diverse categories. The Amazon datasets con- sists of 5 star ratings as labels. We converted the labels to binary by considering reviews with 1 or 2 stars as positive and those with 4 or 5 stars as nega- tive. The training setup is similar to the one described above for IMDb. The only diï¬erence is that we also included examples from Movie Reviews without any explanations. We included Movie Reviews both with and without explanations, so that the model could learn the semantics of the âexplainâ preï¬x on Movie Reviews and apply it generate explanations for Ama- zon Reviews. After training, the WT5-11B model produced explanations for almost all examples in the Amazon Reviews dataset, averaging 2.2 explanations
In both cases, we found that WT5-11B produced plausible explanations for examples in the dataset
8
Table 3: Non cherry-picked predictions and explanations for MNLI, IMDb, and Amazon Reviews based on training WT5-11B. The MNLI explanations are generated using the e-SNLI dataset. The IMDb and Amazon Reviews explanations are generated using Movie Reviews. For IMDb and Amazon Reviews, we boldface the explanatory spans chosen by our model. We display only the beginning of the review for IMDb.
# MNLI
Premise: He passed these instructions to the pilots that launched at 10:42 and afterward. Hypothesis: Instructions on how to engage were given to the pilot going up. Predicted label: neutral Explanation: The instructions could be about anything, not necessarily how to engage.
Review: âLassie Come Home,â âNational Velvet,â and âThe Courage of Lassie,â Elizabeth Taylor was eleven years old. Nevertheless, her charm and beauty were extraordinary, and what she lacked in talent and experience was well hidden in a ï¬ne production that was nominated for ï¬ve Academy Awards... Predicted label: positive
Review: Great TV After having researched the LCD 1080P marketplace extensively, I decided on the Samsung and have not been disappointed. My initial experience, as well as my prior experience with a number of Samsung products makes me conï¬dent this will prove to be an excellent choice. Predicted label: positive
per example. We share some explanations from the Amazon Reviews dataset in table 3 and examples from more product categories in appendix A. Additionally, the WT5-11B model achieves a classiï¬cation accu- racy of 98.1% on the Amazon Reviews dataset. This suggests that this form of transfer presents another plausible way of learning to generate explanations on a dataset where they have not yet been annotated, even when the dataset is from another domain.
Table 4: Cherry-picked predictions and explana- tions produced by WT5-11B when transferring from e-SNLI to CoS-E. Multiple-choice answers omitted for brevity. The word âniceâ in the third example is a typo that appears in the dataset.
Question: When going to sleep what happens to your body? Predicted answer: relaxation Explanation: When going to sleep, your body goes into a state of relaxation.
# 4.7 Transferring across tasks
To push the idea of transferring explanation abilities further, we tested whether WT5-11B could success- fully transfer across tasks. Speciï¬cally, we evaluated whether WT5-11B could be trained to generate expla- nations on e-SNLI and then produce plausible expla- nations for the CoS-E dataset.
Question: What rubber toy ï¬lled with helium will make a child happy? Predicted answer: balloon Explanation: balloon is a toy ï¬lled with helium.
Question: Jamesâs nice asked him about her grandfather. She was interested in learning about what? Predicted answer: family tree Explanation: A family tree is a way of tracing a familyâs lineage.
Note that e-SNLI and CoS-E are quite diï¬erent tasks: For e-SNLI, the model receives a premise and hypothesis and must produce an NLI label from three ï¬xed classes; for CoS-E, it is fed a question and must answer given ï¬ve example-speciï¬c choices. Given these diï¬erences, we modiï¬ed the task processing to better align the formatting. In CoS-E, we modiï¬ed the tokens âcos eâ and âquestion:â to ânliâ and âpremise:â respectively. We also modiï¬ed the e-SNLI inputs to contain three ï¬xed choices: âentailmentâ, âneutralâ, and âcontradictionâ. We ablated these changes and found that both were necessary for our models to learn to generate explanations for CoS-E. Additionally, we found that decoding with beam-search improved the reliability of generating explanations.
Similar to the procedure for Movie Reviews to Amazon Reviews transfer, we ï¬ne-tuned WT5-11B on a mixture of the full e-SNLI dataset with explanations,
Question: Where can one obtain a bass ï¬ddle? Predicted answer: music store Explanation: A bass ï¬ddle is an instrument.
the full e-SNLI dataset without explanations, and the CoS-E dataset without explanations. We applied this model to generate âzero-shotâ explanations for CoS-E. After training, our model produced an explanation for most examples from the CoS-E validation set. We share some cherry-picked CoS-E explanations gen- erated by our model in table 4. While our model did not 100% reliably transfer explanation abilities across tasks, this result establishes that zero-shot ex- planation transfer across tasks is indeed possible and motivates future work on task-agnostic explanation abilities.
9
# 5 Conclusion
In this paper, we demonstrated how the text-to-text framework can be straightforwardly used to train a model to generate explanations for its predictions. The resulting model, called WT5-11B, achieved state- of-the-art results on a diverse collection of benchmark datasets and in many cases matched human abilities in both classiï¬cation performance and explanation abilities. We also showed how this approach facili- tates learning from limited labeled explanations and transferring explanatory capabilities across domains and tasks.
At a high level, our results can be seen as a small step towards improving our modelsâ abilities to com- municate. For example, sentiment analysis on the Movie Reviews dataset is loosely equivalent to asking the model âwhat is the sentiment of this movie review?â and our work allows us to further ask the model âwhy?â. While we are broadly interested in making models communicate more naturally, we also recog- nize that this approach provides only a surface-level improvement of interpretability: Much like humans, our approach does not guarantee that the produced explanation actually explains the speciï¬c reasons why a model generated its prediction. In other words, the model could potentially just make up a reasonable- sounding explanation instead of providing a truly accurate description of its causal decision-making pro- cess. Nevertheless, we are excited to see the ï¬eld progress more towards more human-like text models.
# References
Baehrens, D., Schroeter, T., Harmeling, S., Kawan- abe, M., Hansen, K., and M ËAËzller, K.-R. How to explain individual classiï¬cation decisions. Journal of Machine Learning Research, 11(June), 2010.
Bahdanau, D., Cho, K., and Bengio, Y. Neural ma- chine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Belinkov, Y. and Bisk, Y. Synthetic and natural noise both break neural machine translation. arXiv preprint arXiv:1711.02173, 2017.
Bowman, S. R., Angeli, G., Potts, C., and Man- ning, C. D. A large annotated corpus for learn- ing natural language inference. arXiv preprint arXiv:1508.05326, 2015.
Camburu, O.-M., Rockt¨aschel, T., Lukasiewicz, T., and Blunsom, P. e-snli: Natural language inference
with natural language explanations. In Advances in Neural Information Processing Systems, 2018.
Caruana, R. Multitask learning. Machine learning, 28(1), 1997.
Dagan, I., Glickman, O., and Magnini, B. The pascal recognising textual entailment challenge. In Ma- chine Learning Challenges Workshop, pp. 177â190. Springer, 2005.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
DeYoung, J., Jain, S., Rajani, N. F., Lehman, E., Xiong, C., Socher, R., and Wallace, B. C. Eraser: A benchmark to evaluate rationalized nlp models. arXiv preprint arXiv:1911.03429, 2019.
Doshi-Velez, F. and Kim, B. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017.
Foerster, J. N., Gilmer, J., Sohl-Dickstein, J., Chorowski, J., and Sussillo, D. Input switched aï¬ne networks: an rnn architecture designed for interpretability. In Proceedings of the 34th Interna- tional Conference on Machine Learning, 2017.
Graves, A. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Gi- annotti, F., and Pedreschi, D. A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5), 2018.
He, R. and McAuley, J. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative ï¬ltering. In proceedings of the 25th international conference on world wide web, pp. 507â 517, 2016.
Hooker, S., Erhan, D., Kindermans, P.-J., and Kim, B. Evaluating feature importance estimates. arXiv preprint arXiv:1806.10758, 2018.
Howard, J. and Ruder, S. Universal language model ï¬ne-tuning for text classiï¬cation. arXiv preprint arXiv:1801.06146, 2018.
Huang, C.-Z. A., Vaswani, A., Uszkoreit, J., Shazeer, N., Hawthorne, C., Dai, A. M., Hoï¬man, M. D., and Eck, D. Music transformer: Generating music with long-term structure. arXiv preprint arXiv:1809.04281, 2018.
10
i- revnet: Deep invertible networks. arXiv preprint arXiv:1802.07088, 2018.
Jain, S. and Wallace, B. C. Attention is not explana- tion. arXiv preprint arXiv:1902.10186, 2019.
Jia, R. and Liang, P. Adversarial examples for evaluat- ing reading comprehension systems. arXiv preprint arXiv:1707.07328, 2017.
Kalchbrenner, N., Grefenstette, E., and Blunsom, P. A convolutional neural network for modelling sen- tences. arXiv preprint arXiv:1404.2188, 2014.
Khashabi, D., Chaturvedi, S., Roth, M., Upad- hyay, S., and Roth, D. Looking Beyond the Sur- face: A Challenge Set for Reading Comprehen- In Proc. of the sion over Multiple Sentences. Annual Conference of the North American Chap- ter of the Association for Computational Linguis- tics (NAACL), 2018a. URL http://cogcomp.org/ papers/2018-MultiRC-NAACL.pdf.
Khashabi, D., Chaturvedi, S., Roth, M., Upadhyay, S., and Roth, D. Looking beyond the surface: A chal- lenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics, 2018b.
Kindermans, P.-J., Hooker, S., Adebayo, J., Alber, M., Sch¨utt, K. T., D¨ahne, S., Erhan, D., and Kim, B. The (un) reliability of saliency methods. In Explain- able AI: Interpreting, Explaining and Visualizing Deep Learning. 2019.
Kudo, T. Subword regularization: Improving neural network translation models with multiple subword candidates. arXiv preprint arXiv:1804.10959, 2018.
Kudo, T. and Richardson, J. Sentencepiece: A sim- ple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. ALBERT: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.
Lee, K., Firat, O., Agarwal, A., Fannjiang, C., and Sussillo, D. Hallucinations in neural machine trans- lation. In NeurIPS Workshop on Interpretability and Robustness in Audio, Speech, and Language, 2018.
Levesque, H., Davis, E., and Morgenstern, L. The winograd schema challenge. In Thirteenth Interna- tional Conference on the Principles of Knowledge Representation and Reasoning, 2012.
Liu, X., He, P., Chen, W., and Gao, J. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504, 2019.
Maas, A. L., Daly, R. E., Pham, P. T., Huang, D., Ng, A. Y., and Potts, C. Learning word vectors for sentiment analysis. In Proceedings of the 49th annual meeting of the association for computational linguistics, 2011.
McAuley, J., Targett, C., Shi, Q., and Van Den Hen- gel, A. Image-based recommendations on styles and substitutes. In Proceedings of the 38th Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 43â52, 2015.
Molnar, C. Interpretable machine learning. 2019.
Nie, Y., Williams, A., Dinan, E., Bansal, M., Weston, J., and Kiela, D. Adversarial nli: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599, 2019.
Papineni, K., Roukos, S., Ward, T., and Zhu, W.- J. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, 2002.
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018.
Post, M. A call for clarity in reporting bleu scores. arXiv preprint arXiv:1804.08771, 2018.
Pruthi, D., Gupta, M., Dhingra, B., Neubig, G., Learning to deceive with arXiv preprint and Lipton, Z. C. attention-based explanations. arXiv:1909.07913, 2019.
Raï¬el, C., Luong, M.-T., Liu, P. J., Weiss, R. J., and Eck, D. Online and linear-time attention by In Proceedings enforcing monotonic alignments. of the 34th International Conference on Machine Learning, 2017.
Raï¬el, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a uniï¬ed text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
11
Rajani, N. F., McCann, B., Xiong, C., and Socher, R. Explain yourself! leveraging language mod- els for commonsense reasoning. arXiv preprint arXiv:1906.02361, 2019.
attend and tell: Neural image caption generation with visual attention. In International conference on machine learning, 2015.
Ruder, S. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098, 2017.
Zaidan, O. and Eisner, J. Modeling annotators: A generative approach to learning from annotator rationales. In Proceedings of the 2008 conference on Empirical methods in natural language processing, pp. 31â40, 2008.
Sennrich, R., Haddow, B., and Birch, A. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.
Is attention inter- pretable? arXiv preprint arXiv:1906.03731, 2019.
Zhang, Q. and Zhu, S. Visual interpretability for deep learning: a survey. Frontiers of Information Technology and Electronic Engineering, 19, 02 2018. doi: 10.1631/FITEE.1700808.
Shazeer, N. and Stern, M. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235, 2018.
Smilkov, D., Thorat, N., Kim, B., Vi´egas, F., and Wat- tenberg, M. Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017.
Sundararajan, M., Taly, A., and Yan, Q. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, 2017.
Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In Ad- vances in neural information processing systems, 2014.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In Advances in neural information processing systems, 2017.
Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bow- man, S. Superglue: A stickier benchmark for general-purpose language understanding systems. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alch´e-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 3266â3280. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/ 8589-superglue-a-stickier-benchmark-for-general-purpose-language-understanding-systems. pdf.
Williams, A., Nangia, N., and Bowman, S. R. A broad-coverage challenge corpus for sentence un- derstanding through inference. arXiv preprint arXiv:1704.05426, 2017.
Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., and Bengio, Y. Show,
12
# A Amazon Reviews explanations
Table 5: Non cherry-picked predictions and explanations for Amazon Reviews based on training WT5-11B for diï¬erent product categories. We boldface the explanatory spans chosen by our model.
Product Category Review Apparel Lovely vest Fits beautifully (or rather it did before my husband lost 70 pounds), true to size. He wore this a lot, so it went through the washer several times, and still looks great. Very soft material, has not pilled or faded. Predicted label: positive Books a must if you wanna create comics/manga this teaches you everything you need to know, from paneling, to creating believeable characters, to perspictive, and covers everything pretty much... Predicted label: positive Luggage pretty good So far Iâve only used this bag a couple of times but it has served itâs purpose. It ï¬ts in a standard overhead storage bin, I love the bright royal blue color, and it appears to be well-made. My only complaint is that the extension handle got stuck after only using it a couple of times. Otherwise, this is a great piece of luggage. Predicted label: positive Musical instruments Worked for about 2 weeks! This product is very poorly made. My kids, ages 2.5 and 4, got the item for Christmas and were able to enjoy it for approximately 2 weeks before the microphone completely stopped working. They were not hard on the product at all - I watched them play with it every time. There is absolutely no reason it should have stopped working. It is basically now trash. I deï¬nitely do not recommend this product if you want a functioning microphone!! Predicted label: negative Oï¬ce products Stay away! I guess you get what you pay for. Basically, I installed the 3 colors, and the small black. The yellow didnât work. I tried cleaning the contacts and the yellow worked sometimes, never well. Then the magenta stopped working. Total junk! Predicted label: negative Outdoors Highly recommended Awesome ... switch is a little confusing at ï¬rst ... Hope they hold up ... have not yet tossed them in the service truck .. purchase primarily because of the lifetime warranty. light it bright Predicted label: positive Shoes Beware. Replicas. Not genuine. The gold mirror coating rubbed oï¬ while cleaning the lenses â while using the enclosed cloth. (See photo with spot on lens and paint on cloth.) After doing a bit of research Iâve come to the conclusion that these are fakes. Predicted label: negative Toys Beautiful ï¬oat, but deï¬ates rapidly... Float looks as advertised; however, it takes considerable time to pump up, and then deï¬ates within a few days... Sigh* Predicted label: negative
13
# B Human Study GUIs
Figures 4 through 7 show the GUIs we posted to MTurk to evaluate diï¬erent datasets. When the dataset includes a human-generated explanation for the label, we perform human evaluation on both the provided explanation and generated explanation. Several datasets share the same MTurk setup as noted in the captions.
Please answer the following questions for each pair of sentences: Does the given explanation explain the relationship between the two sentences? Is the explanation simply a restatement of the sentence? Important: To Be Approved: Make sure to answer âallâ questions. To Be Approved: Make sure to answer golden question correctly.(Please read carefully, Some cases may tell you to select a specific answer, example: "Select Yes". Follow that instruction for âbothâ associated questions). Sentence 1: We do this through a wide range of programs including community- based, therapeutic foster care, group homes and our treatment center.â âcontradictionâ Sentence 2: We only focus on a single line of action. because: âA wide range of programs is more than a single line of action.' Does the given explanation explain the relationship between the two sentences? Yes No Is the explanation simply a restatement of the sentence? Yes No
Figure 4: GUI for MNLI (and also e-SNLI). The explanation provided is a generated by WT5-11B.
Below are several questions and answers. Please determine if the provided explanation adequately explains the answer to the question. Important: To Be Approved: Make sure to answer âall* questions. To Be Approved: Make sure to answer golden question correctly.(Please read carefully, Some cases may tell you to select a specific answer, example: âSelect Yes". Follow that instruction for that quesion. Question: Flowers make a good center focal point\, just one of many arrangements that look good on a what? Choices: ['market', âtableâ, âcountrysideâ, âanthologyâ, âvase'*] Answer: table Because: Table | Components | BootstrapVue Does the explanation adequately explain the answer to the question? Yes No
Figure 5: GUI for CoS-E. The explanation is from the validation set of the dataset.
14
Below are several movie reviews. For each movie review, we've provided a label of whether the review is "positive" or "negative." Does the explanation phrase in the movie review explain why the review is postive or negative? If the explanation phrase is inline with the review, say positive, but the review as a whole is negative, then please select âyes.â Use ctrl-f or cmd-f generously to find the explanation phrase within the review, it will make it easier Make sure to answer *all* questions for your hit to be approved. Make sure to answer golden question correctly. Labet: positive Explanation: âa thrill - rideâ perhaps the most dramatic changes in the motion picture industry in this decade have to do with special effects . there is no question that action - adventure and science - fiction / action movies are now judged by the character of their light and noise . whereas classic adventure pics of the last twenty years , such as raiders of the lost ark , were made in grand traditional fashion ; contemporary films like jurassic park are multimillion - dollar creations of computer technology . the latest in this visually awesome series of movies , the wachowski brothers ' the matrix , is a testament to the skilled use of special effects and its ability to enhance a movie 's story . unlike many sci - fi movies which promote themselves as effects - heavy blockbusters but fail to deliver on that promise , the matrix is a carefully constructed special effects event . it runs 135 minutes in length and employs a countless number of computerized tricks which range from gimmick to grandiose , and the quality of the effects remains constant throughout the film 's length . contrary to popular trend , the matrix commits itself to being a spectacle of light and sound . in this regard , the movie is something like a card sharp . with its flashy mass stripped away , the matrix would be quite shallow and untalented . the script is characteristically weak , and the dialogue suffers in lieu of a far more innovative visual approach . but , like the card sharp , the matrix wows its audience to such a high degree that actual content is irrelevent . the viewers do n't care about what the matrix has to say as long as the next special effects sequence is right around the corner . and right around the corner they usually are , for the script tells a fast - paced , albeit frequently revisited story . as the movie explains , the world as we know it is nothing but an elaborate computer program constructed by an artificial intelligence for the purpose of placating mankind . billions of human beings lie in this dormant state while the intelligence â- the matrix -- farms our life energy . only a select group of individuals knows of the real world , and a particularly ingenious squad of these rebels is led by ultra - cool morpheus ( laurence fishburne ) . morpheus and his crew recruit a computer expert named neo ( keanu reeves ) , believing he is a prophesied individual who will help them overthrow the matrix and return peace to earth . the cast plays out this story in stylish fashion . the set design is very dynamic , running the gambit between cramped and dreary to bright and airy . the costumes , as well as the actors who wear them , add to the roles . for instance , the manifested antagonists in the movie , a group of agents created by the matrix in its computer program , all dress in matching secret servicewear ; the rebel fighters , on the other hand , dress in rich hues of leather . the casting can not be criticized , for the typically stoic reeves is n't required to say much and laurence fishburne gets plenty of time to be so damn cool . supports in carrie - anne moss , joe pantoliano , and hugo weaving are all effectual . one of the best comparisons to thw matrix is last year 's science fiction masterpiece dark city , particularly if one ponders how this same premise would 've worked from a different approach . the alex proyas film was far more dark and introspective , requiring a bit of thought before themes became clear ; here , the wachowski brothers have managed to construct a thrill - ride motion picture with little abandon and much noise . the better picture between the two depends on the viewer , but the key to the success of the matrix is that the noise did not get in the way of the fun.â Does the explanation support the label? © Yes @No
Figure 6: GUI for Movie Reviews (also IMDB). The explanation is from the validation set of the dataset.
15
Below, is a combination of passage query answer Then we mark the answer true(correct)/false(wrong), and give an explanation justifying this choice. Please select whether the explanation makes sense in justifying if the answer was true or false Important instructions Use ctri-f or cmd-f generously to find the answers. Do not read the whole passage or you will not be able to complete this HIT!! Search for "Does this explanation make sense" in order to find all questions Make sure to answer *all* questions for your hit to be approved. Make sure to answer golden question correctly.(Please read carefully, some of the explanations will ask you to select yes/no. In that case, follow that instruction) The factory is highly automated and designed to shift flexibly to produce many different kinds of chips to suit demand . The diversity is the big difference with this plant , said Richard Doherty , president of Envisioneering , a research firm . It gives IBM the capability to make so many different kinds of custom chips , and the world is going to custom chips - The 140,000-square - foot plant is a testament to advanced manufacturing technology . The 300-millimeter silicon wafers -- about the size of a standard pizza -- are shuttled around the facility in enclosed plastic pods , which ride on overhead tracks - They drop down from wires automatically into machines , sheathed in stainless steel and glass , for each stage of processing and fabrication . Throughout the 500 processing steps , which typically last 20 days , the wafers are not touched by human hands . The circuits etched into the chips are less than one thousandth the width of a human hair . Human operators are there to monitor the systems , catch errors and fine - tune the production process for maximum efficiency . Because each of the hundreds of processing machines is self - enclosed , and essentially airtight , the uniforms operators wear are less constricting than in the previous generation of chip plants , which looked like space suits . The operators at the East Fishkill factory wear light nylon uniforms , light blue shoe coverings and translucent hair nets made of paper . They look more like workers in a bakery . Yes , said Richard Brilla , director of the new facility , but the donuts are a lot more costly here . Each wafer , holding hundreds of chips , is worth $ 6,000 to $ 10,000 apiece , depending on what insulation , circuitry and materials are used . question: What is the brief summarization of the appearance of the operators ' uniforms as described above ? answer: Wear light nylon uniforms, light blue shoe coverings and translucent hair nets made of paperâ The answer mentioned above is True because: Explanation: âThey look more like workers in a bakery .' Does the explanation make sense? O Yes ONo
Figure 7: GUI for MultiRC. The explanation is from the validation set of the dataset.
16 | {
"id": "1806.10758"
} |
2005.00341 | Jukebox: A Generative Model for Music | We introduce Jukebox, a model that generates music with singing in the raw
audio domain. We tackle the long context of raw audio using a multi-scale
VQ-VAE to compress it to discrete codes, and modeling those using
autoregressive Transformers. We show that the combined model at scale can
generate high-fidelity and diverse songs with coherence up to multiple minutes.
We can condition on artist and genre to steer the musical and vocal style, and
on unaligned lyrics to make the singing more controllable. We are releasing
thousands of non cherry-picked samples at https://jukebox.openai.com, along
with model weights and code at https://github.com/openai/jukebox | http://arxiv.org/pdf/2005.00341 | Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford, Ilya Sutskever | eess.AS, cs.LG, cs.SD, stat.ML | null | null | eess.AS | 20200430 | 20200430 | 0 2 0 2
r p A 0 3 ] S A . s s e e [
1 v 1 4 3 0 0 . 5 0 0 2 : v i X r a
# Jukebox: A Generative Model for Music
# Prafulla Dhariwal * 1 Heewoo Jun * 1 Christine Payne * 1 Jong Wook Kim 1 Alec Radford 1 Ilya Sutskever 1
# Abstract
We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multi- scale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Trans- formers. We show that the combined model at scale can generate high-ï¬delity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.
# 1. Introduction
Music is an integral part of human culture, existing from the earliest periods of human civilization and evolving into a wide diversity of forms. It evokes a unique human spirit in its creation, and the question of whether computers can ever capture this creative process has fascinated computer scien- tists for decades. We have had algorithms generating piano sheet music (Hiller Jr & Isaacson, 1957; Moorer, 1972; Hadjeres et al., 2017; Huang et al., 2017), digital vocoders generating a singerâs voice (Bonada & Serra, 2007; Saino et al., 2006; Blaauw & Bonada, 2017) and also synthesizers producing timbres for various musical instruments (Engel et al., 2017; 2019). Each captures a speciï¬c aspect of music generation: melody, composition, timbre, and the human voice singing. However, a single system to do it all remains elusive.
The ï¬eld of generative models has made tremendous progress in the last few years. One of the aims of gen- erative modeling is to capture the salient aspects of the data and to generate new instances indistinguishable from the true data The hypothesis is that by learning to produce the data we can learn the best features of the data1. We are surrounded by highly complex distributions in the visual, audio, and text domain, and in recent years we have devel-
oped advances in text generation (Radford et al.), speech generation (Xie et al., 2017) and image generation (Brock et al., 2019; Razavi et al., 2019). The rate of progress in this ï¬eld has been rapid, where only a few years ago we had algorithms producing blurry faces (Kingma & Welling, 2014; Goodfellow et al., 2014) but now we now can gener- ate high-resolution faces indistinguishable from real ones (Zhang et al., 2019b).
Generative models have been applied to the music genera- tion task too. Earlier models generated music symbolically in the form of a pianoroll, which speciï¬es the timing, pitch, velocity, and instrument of each note to be played. (Yang et al., 2017; Dong et al., 2018; Huang et al., 2019a; Payne, 2019; Roberts et al., 2018; Wu et al., 2019). The symbolic approach makes the modeling problem easier by working on the problem in the lower-dimensional space. However, it constrains the music that can be generated to being a speciï¬c sequence of notes and a ï¬xed set of instruments to render with. In parallel, researchers have been pursuing the non- symbolic approach, where they try to produce music directly as a piece of audio. This makes the problem more challeng- ing, as the space of raw audio is extremely high dimensional with a high amount of information content to model. There has been some success, with models producing piano pieces either in the raw audio domain (Oord et al., 2016; Mehri et al., 2017; Yamamoto et al., 2020) or in the spectrogram domain (Vasquez & Lewis, 2019). The key bottleneck is that modeling the raw audio directly introduces extremely long-range dependencies, making it computationally chal- lenging to learn the high-level semantics of music. A way to reduce the difï¬culty is to learn a lower-dimensional encod- ing of the audio with the goal of losing the less important information but retaining most of the musical information. This approach has demonstrated some success in generat- ing short instrumental pieces restricted to a set of a few instruments (Oord et al., 2017; Dieleman et al., 2018).
In this work, we show that we can use state-of-the-art deep generative models to produce a single system capable of gen- erating diverse high-ï¬delity music in the raw audio domain, with long-range coherence spanning multiple minutes. Our approach uses a hierarchical VQ-VAE architecture (Razavi
*Equal contribution 1OpenAI, San Francisco. Correspondence
to: <[email protected]>.
1Richard Feynmann famously said, âWhat I cannot create, I do not understandâ
Jukebox: A Generative Model for Music
et al., 2019) to compress audio into a discrete space, with a loss function designed to retain the maximum amount of musical information, while doing so at increasing levels of compression. We use an autoregressive Sparse Trans- former (Child et al., 2019; Vaswani et al., 2017) trained with maximum-likelihood estimation over this compressed space, and also train autoregressive upsamplers to recreate the lost information at each level of compression.
We show that our models can produce songs from highly diverse genres of music like rock, hip-hop, and jazz. They can capture melody, rhythm, long-range composition, and timbres for a wide variety of instruments, as well as the styles and voices of singers to be produced with the mu- sic. We can also generate novel completions of existing songs. Our approach allows the option to inï¬uence the generation process: by swapping the top prior with a con- ditional prior, we can condition on lyrics to tell the singer what to sing, or on midi to control the composition. We release our model weights and training and sampling code at https://github.com/openai/jukebox.
# 2. Background
We consider music in the raw audio domain represented as a continuous waveform x â [â1, 1]T , where the number of samples T is the product of the audio duration and the sampling rate typically ranging from 16 kHz to 48 kHz. For music, CD quality audio, 44.1 kHz samples stored in 16 bit precision, is typically enough to capture the range of frequencies perceptible to humans. As an example, a four- minute-long audio segment will have an input length of â¼10 million, where each position can have 16 bits of information. In comparison, a high-resolution RGB image with 1024 Ã 1024 pixels has an input length of â¼3 million, and each position has 24 bits of information. This makes learning a generative model for music extremely computationally demanding with increasingly longer durations; we have to capture a wide range of musical structures from timbre to global coherence while simultaneously modeling a large amount of diversity.
# 2.1. VQ-VAE
back to the input space. It is thus an auto-encoder with a discretization bottleneck. The VQ-VAE is trained using the following objective:
L = Lrecons + Lcodebook + βLcommit
4D t D. #3
Leecons = 4D ||Xt â D(ez,)|[3 (2) t
Leodebook = % D. ||8g [hs] â ez, | 5 (3)
Leomnit = #3 |Ibs â sg [ez,]|3 (4)
where sg denotes the stop-gradient operation, which passes zero gradient during backpropagation. The reconstruction loss Lrecons penalizes for the distance between the input x and the reconstructed output X = D(ez), and Leodebook Pe- nalizes the codebook for the distance between the encodings h and their nearest neighbors e, from the codebook. To stabilize the encoder, we also add Leommit to prevent the encodings from fluctuating too much, where the weight 3 controls the amount of contribution of this loss. To speed up training, the codebook loss Leodebook instead uses EMA up- dates over the codebook variables. Razavi et al. (2019) extends this to a hierarchical model where they train a sin- gle encoder and decoder but break up the latent sequence h into a multi-level representation [h()),--- , h(â)] with de- creasing sequence lengths, each learning its own codebook C, They use non-autoregressive encoder-decoders and jointly train all levels with a simple mean-squared loss.
# 3. Music VQ-VAE
Inspired by the results from the hierarchical VQ-VAE model (Razavi et al., 2019) for images, we consider applying the same technique to model raw audio using three different levels of abstraction, as illustrated in Figure 1. At each level, we use residual networks consisting of WaveNet-style non- causal 1-D dilated convolutions, interleaved with downsam- pling and upsampling 1-D convolutions to match different hop lengths. A detailed description of the architecture is provided in Appendix B.1. We make a number of modiï¬ca- tions to our VQ-VAE compared to the ones in (Oord et al., 2017; Razavi et al., 2019), as described in the following subsections.
To make this task feasible, we use the VQ-VAE (Oord et al., 2017; Dieleman et al., 2018; Razavi et al., 2019) to compress raw audio to a lower-dimensional space. A one-dimensional VQ-VAE learns to encode an input sequence x = (x;)7_, using a sequence of discrete tokens z = (zs ⬠[K])9_y, where [x denotes the vocabulary size and we call the ratio T/S the hop length. It consists of an encoder E(x) which encodes x into a sequence of latent vectors h = (h,)5_y, a bottleneck that quantizes h, + e,, by mapping each h, to its nearest vector e,, from a codebook C = {ex }/_,, and a decoder D(e) that decodes the embedding vectors
# 3.1. Random restarts for embeddings
VQ-VAEs are known to suffer from codebook collapse, wherein all encodings get mapped to a single or few em- bedding vectors while the other embedding vectors in the codebook are not used, reducing the information capacity of the bottleneck. To prevent this, we use random restarts: when the mean usage of a codebook vector falls below a threshold, we randomly reset it to one of the encoder out- puts from the current batch. This ensures all vectors in the
Jukebox: A Generative Model for Music
Codebook ej , ii Vector oven Vector Encode Quantization Egg gee, Vector Encode IRIE Quantization Xt hy = E(xp) Zz = argming Codebook Lookup = Decode Codebook Lookup ion f Decode Codebook Lookup eT Decode hyâ ex! ez, xX, = Dez)
Figure 1. We ï¬rst train three separate VQ-VAE models with different temporal resolutions. At each level, the input audio is segmented and encoded into latent vectors ht, which are then quantized to the closest codebook vectors ezt . The code zt is a discrete representation of the audio that we later train our prior on. The decoder takes the sequence of codebook vectors and reconstructs the audio. The top level learns the highest degree of abstraction, since it is encoding longer audio per token while keeping the codebook size the same. Audio can be reconstructed using the codes at any one of the abstraction levels, where the least abstract bottom-level codes result in the highest-quality audio, as shown in Figure 4. For the detailed structure of each component, see Figure 7.
codebook are being used and thus have a gradient to learn from, mitigating codebook collapse.
we use the sum of the spectral losses Lspec calculated over multiple STFT parameters that trade-off time and frequency resolutions (Yamamoto et al., 2020).
# 3.2. Separated Autoencoders
When using the hierarchical VQ-VAE from (Razavi et al., 2019) for raw audio, we observed that the bottlenecked top level is utilized very little and sometimes experiences a com- plete collapse, as the model decides to pass all information through the less bottlenecked lower levels. To maximize the amount of information stored at each level, we simply train separate autoencoders with varying hop lengths. Dis- crete codes from each level can be treated as independent encodings of the input at different levels of compression.
# 4. Music Priors and Upsamplers
After training the VQ-VAE, we need to learn a prior p(z) over the compressed space to generate samples. We break up the prior model as
p(z) = p(ztop, zmiddle, zbottom) (5)
= p(ztop)p(zmiddle|ztop)p(zbottom|zmiddle, ztop)
# 3.3. Spectral Loss
When using only the sample-level reconstruction loss, the model learns to reconstruct low frequencies only. To capture mid-to-high frequencies, we add a spectral loss which is deï¬ned as
Lepee = |||STFT(x)| â |STFT(X)||l It encourages the model to match the spectral components without paying attention to phase which is more difficult to learn. This is similar to the use of power loss (Oord et al., 2018) and spectral convergence (Ank et al., 2018b) when training parallel decoders for raw audio. One differ- ence between the latter approach and ours is that we are no longer optimizing the spectral signal-to-noise ratio; dividing by the magnitude of the signal results in numerical insta- bility for mostly silent inputs. To prevent the model from overfitting to a particular choice of the STFT parameters,
and train separate models for the top-level prior p(ztop), and upsamplers p(zmiddle|ztop) and p(zbottom|zmiddle, ztop). Each of these is an autoregressive modeling problem in the dis- crete token space produced by the VQ-VAE. We use Trans- formers with sparse attention (Vaswani et al., 2017; Child et al., 2019) as they are currently the SOTA in autoregressive modeling. We propose a simpliï¬ed version which we call the Scalable Transformer, that is easier to implement and scale (see Appendix A for details).
For the upsamplers, we need to provide the autoregres- sive Transformers with conditioning information from the codes of the upper levels. To do so, we use a deep resid- ual WaveNet (Xie et al., 2017) followed by an upsampling strided convolution and a layer norm (Ba et al., 2016), and add the output as extra positional information to the embed- dings of the current level. We condition the lower levels only on the chunk of upper level codes that correspond to the same segment of raw audio.
Jukebox: A Generative Model for Music
At each level, we use Transformers over the same context length of discrete codes, which correspond to increasing the raw audio length with larger hop lengths, and modeling longer temporal dependencies at the higher levels while keeping the same computational footprint for training each level. As our VQ-VAE is convolutional, we can use the same VQ-VAE to produce codes for arbitrary lengths of audio.
# 4.1. Artist, Genre, and Timing Conditioning
Our generative model can be made more controllable by providing additional conditioning signals while training. For our ï¬rst models, we provide artist and genre labels for the songs. This has two advantages: ï¬rst, it reduces the entropy of the audio prediction, so the model is able to achieve better quality in any particular style. Second, at generation time, we are able to steer the model to generate in a style of our choosing. Additionally, we attach a timing signal for each segment at training time. This signal includes the total duration of the piece, the start time of that particular sample and how much fraction of the song that has elapsed. This allows the model to learn audio patterns that depend on the overall structure, such as spoken or instrumental introductions and applause at the end of a piece.
Conditioning Information + Top-Level Prior â_â_)L_____., Middle Upsampler 1 VQ-VAE Decoder vy --teibetee oeaeheenpe bt honneheegtreira
(a) Ancestral sampling: Priors for the VQ-VAE codes are trained using a cascade of Transformer models, shown in blue. Each model takes conditioning information such as genre, artist, timing, and lyrics, and the upsampler models are also conditioned on the codes from the upper levels. To generate music, the VQ-VAE codes are sampled from top to bottom using the conditioning information for control, after which the VQ-VAE decoder can convert the bottom-level codes to audio.
# 4.2. Lyrics Conditioning
While the conditioned models above are able to generate songs of diverse genres and artistic styles, singing voices generated by those models, while often sung in a compelling melody, are mostly composed of babbling, rarely producing recognizable English words. In order to be able to control the generative model with lyrics, we provide more context at training time by conditioning the model on the lyrics corresponding to each audio segment, allowing the model to produce singing simultaneosly with the music.
Lyrics-to-singing (LTS) task: The conditioning signal only includes the text of the lyrics, without timing or vocal- isation information. We thus have to model the temporal alignment of lyrics and singing, the artists voice and also the diversity of ways one can sing a phrase depending on the pitch, melody, rhythm and even genre of the song. The con- ditioning data isnât precise as the lyrics data often contains textual references to repeated sections like âchorusâ or mis- matching portions of lyrics with the corresponding music. There is also no separation between lead vocals, accompa- nying vocals and the background music in target audio. This makes the Lyrics-to-singing (LTS) task signiï¬cantly more challenging than the corresponding Text-to-speech (TTS) task.
time â â new samples
(b) Windowed sampling: To generate music longer than the modelâs context length (12 in this ï¬gure), we repeatedly sample continuations at each level, using overlapping windows of previous codes as the context. The overlap amount is a hyperparameter, and the ï¬gure shows an example of 75% overlap with hop length 3.
Generate â 3 i=] : : c Lâ+| a & re Primed Audio Generated Audio
Providing lyrics for chunks of audio: Our dataset includes song-level lyrics, but to make the task easier we train on shorter (24 sec) chunks of audio. To provide the lyrics cor-
(c) Primed sampling: The model can generate continuations of an existing audio signal by converting it into the VQ-VAE codes and sampling the subsequent codes in each level.
Figure 2. Sampling methods for generating music
Jukebox: A Generative Model for Music
responding to the audio during training, we began with a simple heuristics of aligning the characters of the lyrics to linearly span the duration of each song, and pass a ï¬xed-side window of characters centered around the current segment during training. While this simple strategy of linear align- ment worked surprisingly well, we found that it fails for certain genres such as hip-hop with fast lyrics. To address this, we use Spleeter (Hennequin et al., 2019) to extract vo- cals from each song and run NUS AutoLyricsAlign (Gupta et al., 2020) on the extracted vocals to obtain a word-level alignments of the lyrics, allowing us to more accurately provide the lyrics for a given chunk of audio. We choose a large enough window so that the actual lyrics have a high probability of being inside the window.
1.0 500 08 _ 400 § - 8 300 06 8 _ § g 4 Z = 200 â 04 2 100 02 ° 0.0 ° 1600 3200 4800 6400 8000 Music token position
Figure 3. Lyrics-singing alignment learned by one of the encoder- decoder attention layers. The x-axis is the position of music queries, and the y-axis is the position of lyric keys. The positions attended to by the decoder correspond to the characters being sung.
Encoder-decoder model: We use an encoder-decoder style model to condition on the characters of the lyrics, with the encoder producing features from the lyrics which are attended to by the decoder which produces the top level music tokens. The lyrics encoder is a Transformer with an autoregressive modeling loss for lyrics, and its last level is used as features of the lyrics. In the music decoder, we inter- leave a few additional layers with encoder-decoder attention where the queries from the music tokens are only allowed to attend to keys and values from the lyrics tokens. These layers attend on the activation from the last layer of the lyrics encoder (see Figure 8c). In Figure 3, we see that the attention pattern learned by one of these layers corresponds to the alignment between the lyrics and the singing.
previously generated tokens into the model as inputs and outputting the next token conditioned on all previous tokens. We then run our conditioning wavenet on the top level codes to produce the conditioning information for the middle level and sample ancestrally from it too, and do the same for the bottom level.
Windowed sampling: To sample segments longer than the context length, we use windowed sampling, where we move ahead our sampling window by half our context and con- tinue sampling conditioned on this half context (See Figure 2b). We can trade off speed for quality by using a smaller hop length here.
# 4.3. Decoder Pretraining
To reduce computation required to train the lyrics condi- tional model, we use a pretrained unconditional top-level prior as our decoder and introduce the lyrics encoder using model surgery (Berner et al., 2019). We initialize the output projection weights in the MLP and the attention layers of these residual blocks to zeros (Zhang et al., 2019a), so that the added layers perform the identity function at initializa- tion. Thus, at initialization the model behaves identically as the pretrained decoder, but there is still a gradient with respect to the encoder state and parameters2, allowing the model to learn to use the encoder.
Primed sampling: Instead of sampling the entire token sequence from the model, we can also run a forward pass of the VQ-VAE to obtain the top, middle, and bottom level codes corresponding to a segment from an actual song, as shown in Figure 2c. We can use these as the initial tokens in our ancestral sampling process and continue sampling from these to produce novel completions of the song.
# 5. Experiments
# 5.1. Dataset
# 4.4. Sampling
After we have trained our VQ-VAE, upsamplers, and top level priors, we can then use them to sample novel songs.
Ancestral sampling: We ï¬rst generate the top level codes one token at a time by the usual ancestral sampling process (see Figure 2a): generating the ï¬rst token, then passing all
We scraped a new dataset of 1.2 million songs (600k of which in English), paired with the lyrics and metadata from LyricWiki (LyricWiki). The metadata includes artist, album, genre, and year of the release, along with common moods or playlist keywords associated with each song. We train on 32 bit, 44.1 kHz raw audio and perform data augmentation by randomly downmixing the right and left channels to produce mono channel audio.
# 5.2. Training Details
2The gradient also needs to break symmetry with the encoder output features, which is the case here since the weights of the input projections in the attention are not zero.
For the music VQ-VAE, we use 3 levels of bottlenecks com- pressing 44 kHz audio in dimensionality by 8x, 32x, and
Jukebox: A Generative Model for Music
128x respectively, with a codebook size of 2048 for each level. The VQ-VAE has 2 million parameters and is trained on 9-second audio clips on 256 V100 for 3 days. We used exponential moving average to update the codebook fol- lowing Razavi et al. (2019). For our prior and upsampler models, we use a context of 8192 tokens of VQ-VAE codes, which corresponds to approximately 24, 6, and 1.5 seconds of raw audio at the top, middle, and bottom level, respec- tively. The upsamplers have one billion parameters and are trained on 128 V100s for 2 weeks, and the top-level prior has 5 billion parameters and is trained on 512 V100s for 4 weeks. We use Adam with learning rate 0.00015 and weight decay of 0.002. For lyrics conditioning, we reuse the prior and add a small encoder, after which we train the model on 512 V100s for 2 weeks. The detailed hyperparameters for our models and training are provided in Appendix B.3.
# 5.3. Samples
sity, and novelty of generated samples. The links to curated examples are embedded in text.
Coherence: We ï¬nd the samples stay very coherent musi- cally through the context length of the top-level prior (ap- proximately 24 seconds), and they maintain similar har- monies and textures as we slide the window to generate longer samples. However, because the top-level does not have the context of the entire song, we do not hear long term musical patterns, and we would never hear choruses or melodies that repeat.
The generations progress through beginnings of songs (for example applause or slow instrumental warm-ups), through sections that sound chorus-like, through instrumental inter- ludes, and then fading or otherwise wrapping up at the end. The top-level prior always knows what fraction of the song is complete time-wise, so it is able to imitate appropriate beginnings, middles and ends.
We trained a sequence of models with increasing sample quality. Our ï¬rst model was trained on the MAESTRO dataset using 22 kHz VQ-VAE codes and relatively small prior models. We observed that this could generate high ï¬delity classical music samples with piano and occasional violin. We then collected a larger and more diverse dataset of songs with genre and artist labels. The same model when trained on this new dataset was able to produce diverse sam- ples other than classical music, and demonstrated musicality and coherence over more than a minute.
Despite the novelty of being able to generate generally high ï¬delity and coherent songs, sample quality was still limited by a number of factors. First, the use of 22 kHz sampling rate along with small upsamplers introduced noise both in the upsampling and decoding steps, which we hear as grainy texture. We improved ï¬delity by using 44 kHz VQ-VAE and 1B parameter upsamplers in all subsequent experiments at the expense of longer rendering time.
Second, the 1B top-level prior was not big enough to pro- duce singing and diverse musical timbres. We ï¬rst explored increasing the model size to 5 billion parameters. Larger capacity allowed better modeling of the broader distribu- tion of songs, resulting in samples with better musicality, longer coherence and initial singing. While there is an over- all qualitative improvement, the unconditional model still struggled to sing recognizable words. Training a seq2seq model with lyric conditioning and limiting the dataset only to songs primarily in English made singing both intelligible and controllable.
Musicality: The samples frequently imitate familiar mu- sical harmonies and the lyrics are usually set in ways that are very natural. Frequently the highest or longest notes of the melody match words that a human singer would choose to emphasize, and the lyrics are almost always rendered in ways that capture the prosody of the phrases. This is noticeable in hip hop generations, where the model reliably captures the rhythm of spoken text. We do ï¬nd that the generated melodies are usually less interesting than human composed melodies. In particular, we do not hear the an- tecedent and consequent pattern familiar to many human melodies, and we rarely hear choruses that are melodically memorable.
Diversity: Likelihood training encourages covering of all modes, so we expect the model to produce diverse samples.
â Re-renditions: We generate multiple samples conditioned on artist and lyrics combinations that exist in our training data. While occasionally drum and bass lines or melodic intervals echo the original versions, we ï¬nd that none of the generated samples is noticeably similar to the original songs.
We also generate multiple songs conditioned on the same artist and lyrics as Sample 1 to obtain Samples 9â12. All ï¬ve sound interesting in their own ways with different moods and melodies with Sample 10 playing a harmonic at 00:14 as part of a blues riff, showing that the model has learned a wide range of singing and playing styles.
The ï¬nal model, which we call Jukebox, uses all these improvements. Because everyone experiences music dif- ferently, it is generally tricky and not very meaningful to evaluate samples by the mean opinion score or FID-like metrics. We manually evaluate coherence, musicality, diver-
â Completions: We prime the model with 12 seconds of existing songs and ask it to complete them in the same styles. When the priming samples include singing, the con- tinuations are more likely to imitate the original tunes and rhythms. Songs primed with more generic or common intros tend to be more diverse. Even generated samples that are
Jukebox: A Generative Model for Music
close to the originals early on deviate completely into new musical material after about 30 seconds.
Re-renditions and completions are interesting and diverse, but overall, there is still room for improvement in music quality compared to the original songs.
â Full tree: To understand diversity in a more systematic way, we generate multiple continuations from the same seg- ment. We start with a one-minute sample and independently sample four times per one-minute extension. By the three minute mark, there are 16 completions. We can think of this branching tree as exploring different possibilities obtained by ancestral sampling. In the generated songs in the link, we hear diversity in singing and development even when the same initial segment is used. We note that this particular sample follows the lyrics more successfully than many. For certain genres like hip hop and rap, where linearly moving the window does not yield good lyrics alignment, the chance of obtaining plausible singing is lower.
Novelty: With the ability to condition on various styles, lyrics, and raw audio, we would like Jukebox to be a useful tool for both professional musicians and music enthusiasts alike. In this section, we are interested in exploring capabil- ities and applications of Jukebox.
â Novel styles: We generate songs in an unusual genre typi- cally not associated with an artist. In general, we ï¬nd that it is fairly difï¬cult to generalize to a novel style of singing while using the same voice as the artist embedding overpow- ers other information. In Joe Bonamassa and Frank Sinatra samples, we hear a modest variation in instrumentation, energy, and ambience depending on the genre embedding. However, our attempts to mix country singer Alan Jackson with unusual genres like hip hop and punk did not seem to move the samples away from a country style in meaningful ways.
â Novel voices: We pick artists whose voices are reproduced reasonably well by the model, and interpolate their style embeddings to synthesize new voices. Some blending, for instance, between Frank Sinatra and Alan Jackson in Sample 4, still sounds similar to Frank Sinatra. In most cases, the model renders in a vaguely recognizable but distinct voice that preserves different vocal attributes. Samples 1 and 2 conditioned on the Céline Dion embeddings divided by two have slightly different timbre and tone but capture her unique vibrato.
We also experiment with changing the style embedding in the middle of a song to create a duet (Sample 7). This is another way of guiding generation during sampling. Con- tinuing in another voice works best when the segment ends in an interlude; otherwise, the model blends voices in the middle of a word or a sentence.
â Novel lyrics: We ask Jukebox to sing poems and novel verses generated by GPT-2 (Radford et al.) to demonstrate that it can indeed sing new lyrics. While the training data consists of song lyrics with limited vocabulary and con- strained structure, the model has learned to follow along most prompts and sing even new words that are reasonably pronounceable (including technical terms from the deep learning literature). To get the best results, however, we ï¬nd that it is useful to spell out difï¬cult words or acronyms as they are spoken. The generations are noticeably higher qual- ity if the text matches the distribution of lyrics for the given artist, both in terms of length, and of rhyming or rhythmic qualities. For example, hip hop lyrics tend to be longer than most other genres, and the commonly emphasized syllables easily form clear rhythms.
â Novel riffs: Another useful application of Jukebox is the ability to record an incomplete idea and explore various continuations without ever needing to tabulate in symbolic representations, which would lose details of timbre and mood. We curate recordings of novel riffs by our in-house musicians and prime the model during sampling. Sample 6 starts with a musical style not widely used in Elton Johnâs songs. The model still carries out the tune and develops it further. Similarly, the beginning of Sample 1 is a pro- gressive jazz piece with a 5/4 polymeter, which has never been used in hip hop. Despite this novelty, the rhythm per- sists throughout the song and is incorporated naturally with rapping.
# 5.4. VQ-VAE Ablations
Spectral convergence (dB) Level Hop length Without restart With restart Bottom Middle Top 8 32 128 â21.1 â12.4 â8.3 â23.0 â12.4 â8.3
Table 1. Reconstruction ï¬delity degrades with higher compression. Restarting dead codes near random encoder outputs mitigates learn- ing suboptimal codes.
Codebook size Spectral convergence (dB) 256 2048 No quantization â15.9 â23.0 â40.5
Table 2. Bottom-level VQ-VAE reconstruction results with differ- ent codebook sizes. Using larger codebooks helps reconstruction because it allows more information to be encoded at the bottleneck layers. Removing the bottleneck entirely yields almost perfect reconstruction.
Jukebox: A Generative Model for Music
Figure 4. Comparison of reconstructions from different VQ-VAEs, x-axis is time and y-axis is frequency. The columns from left to right are bottom-, middle-, and top-level reconstructions at hop lengths 8, 32, and 128 respectively, visualized as Mel spectrograms. The ï¬rst row is the ground-truth, and the second row shows the spectrograms of audio outputs from our VQ-VAE. In the third row, we remove the spectral loss, and see that the middle and top level lose high-frequency information. In the fourth row, we use a hierarchical VQ-VAE (Razavi et al., 2019) instead of separate auto-encoders (Figure 1), and we see the middle and top levels are not used for encoding pertinent information. Finally, the ï¬fth row shows a baseline with the Opus codec that encodes audio at constant bitrates comparable to our VQ-VAE. It also fails to capture higher frequencies and adds noticeable artifacts at the highest level of compression.
Ablation Spectral convergence (dB) None Without spectral loss With single autoencoder â8.3 â6.3 2.9
Table 3. Top-level codes are generally difï¬cult to train well without spectral loss or with a single hierarchical autoencoder. Resulting reconstructions may lose some to most of information.
& Daudet, 2011), which measures the amount of spectral error relative to signal, as test error and proxy for reconstruc- tion ï¬delity. We evaluate on 5000 held-out 3-second audio segments and report the average in decibels. All models in this section are trained with a batch size of 32, 3-second audio clips sampled at 44 kHz. As before, we use hop lengths of 8, 32, and 128 for the bottom, middle and top level respectively.
We compare raw audio VQ-VAEs when trained with varying compression ratios, objectives, and architectures. As we use nonautoregressive decoders with continuous represen- tation for output, we report spectral convergence (Sturmel
In Table 1, we see that increasing the hop size results in higher reconstruction error. Figure 4 indeed shows that a signiï¬cant amount of information, especially higher frequen- cies, is missing at middle and top levels across all ablations we ran. This is expected as audio is compressed more with
Jukebox: A Generative Model for Music
ze = 10 i= » p> NAF to a ~ = [Wee 5 9 ! tA â 1 . g ! â with restart 3 H ---- without restart gs1i 1S) T T T T T T 0 100k 200k 300k 400k 500k Number of training steps
Figure 5. Entropy of codebook with 2048 codes, i.e 11 bits, over training. Reviving dead codes near random encoder outputs en- sures good codebook utilization from the start of training.
larger hop sizes. To mitigate codebook collapse, we restart dead codes near random encoder embeddings. In Figure 5, we see that this yields higher codebook usage even from early on in training. Models trained without random restarts can converge to the same test error and codebook usage but require more training steps. With poor initialization, these models sometimes end up with suboptimal codes hurting reconstruction ï¬delity.
Codebook size also matters, as it sets a limit on channel ca- pacity through the bottleneck layers. In Table 2, we ï¬nd that reconstruction error increases considerably when the code- book size is reduced from 2048 to 256. We also compare with a model that uses continuous representations without vector quantization. We can think of this model as using a vastly large codebook with all encoder embeddings. This achieves almost perfect reconstruction with negligible spec- tral error.
When the model is trained with L2 loss only, reconstruc- tions tend to sound muddy from missing high frequencies, and this problem is exacerbated as hop size is increased. In Figure 4, we see that top-level codes trained without spec- tral loss do not capture much information beyond 2 kHz, and obtain worse reconstructions (Table 3). However, we observe that while spectral loss helps encode more infor- mation, it also adds distortion artifacts which we hear as scratchy noise.
Lastly, we train a raw audio hierarchical VQ-VAE (Razavi et al., 2019) and ï¬nd that it is generally difï¬cult to push information to higher levels. This model is trained twice as long as the previous models, but middle and top-level recon- structions as shown in Figure 4 are not capturing much. It is possible that higher level codes may have collapsed before bottom level starts to reconstruct the audio well. Making the bottom layers explicitly model residuals pushed more information to the top. But, we found separate autoencoders to be cleaner and more effective.
# 6. Related Work
Generative modeling in deep learning: Generative mod- els aim to learn the distribution of data by either explicitly by modeling the distribution or implicitly by constructing means to sample from it (Goodfellow, 2016). Modeling the interdependency within high-dimensional data was tra- ditionally considered extremely difï¬cult, but starting with Deep Boltzmann Machines (Salakhutdinov & Hinton, 2009), various kinds of deep generative models have been intro- duced. Generative Adversarial Networks (GANs) (Good- fellow et al., 2014) use generator and discriminator net- works that contest each other to make the generated samples as indistinguishable as possible from the data, and they are renowned for their ability to generate high-quality pic- tures (Zhang et al., 2019b; Brock et al., 2019). Autoregres- sive generative models such as NADE (Uria et al., 2016), PixelCNN (Van den Oord et al., 2016), and Transformers (Vaswani et al., 2017) use the chain rule of probability to factorize the joint distribution of data into a product of simpler distributions, and ï¬ow-based models (Dinh et al., 2015; 2017; Rezende & Mohamed, 2015; Kingma & Dhari- wal, 2018) learn a series of invertible transformations that maps the data distribution with a simpler one such as a Gaussian distribution. Autoregressive ï¬ows (Papamakarios et al., 2017; Kingma et al., 2016) combine the two ideas to achieve faster density estimation or data generation. Varia- tional autoencoders (VAEs) (Rezende et al., 2014; Kingma & Welling, 2014) impose a Gaussian prior on the latent code in an encoder-decoder setup from which data can be sampled.
Generative models for music: Generative modeling of symbolic music dates back to more than half a century, when Hiller Jr & Isaacson (1957) introduced the ï¬rst computer- generated music based on Markov chains. There exists a variety of earlier approaches using rule-based systems (Moorer, 1972), chaos and self-similarity (Pressing, 1988), cellular automata (Beyls, 1989), concatenative synthesis (Jehan, 2005), and constraint programming (Anders & Mi- randa, 2011). More recent data-driven approaches include DeepBach (Hadjeres et al., 2017) and Coconet (Huang et al., 2017) which use Gibbs sampling to produce notes in the style of Bach chorals, MidiNet (Yang et al., 2017) and MuseGAN (Dong et al., 2018) which use generative ad- versarial networks, MusicVAE (Roberts et al., 2018) and HRNN (Wu et al., 2019) which use hierarchical recurrent networks, and Music Transformer (Huang et al., 2019a) and MuseNet (Payne, 2019) which use Transformers to au- toregressively predict MIDI note events. There also have been a number of approaches for synthesizing music con- ditioned on symbolic music information, such as NSynth (Engel et al., 2017) which uses WaveNet-style autoen- coder, Mel2Mel (Kim et al., 2019) and Wave2Midi2Wave (Hawthorne et al., 2019) which synthesize music using
Jukebox: A Generative Model for Music
WaveNet conditioned on a piano roll representation, and GanSynth (Engel et al., 2019) which uses generative adver- sarial networks to produce magnitude spectrograms together with instananeous frequencies for easier spectrogram inver- sion. Generative models for music can also be used for music style transfer, as seen in Midi-VAE (Brunner et al., 2018) which uses a variational autoencoder to transfer styles between classical and jazz music, LakhNES (Donahue et al., 2019) which uses a Transformer architecture to generate chiptune music, and Universal Music Translator Network (Mor et al., 2019) which uses a denoising autoencoder that can disentangle musical style and content.
Sample-level generation of audio: In recent years, a vari- ety of generative models for raw audio have been introduced. WaveNet (Oord et al., 2016) performs autoregressive sample- by-sample probabilistic modeling of raw waveform using a series of dilated convolutions to exponentially increase the context length. It can produce realistic audio either uncon- ditionally or by conditioning on acoustic features or spec- trograms. The autoregressive nature of WaveNet makes the sampling notoriously slow, and it uses a categorical distribu- tion for audio samples which introduces quantization noise. Parallel WaveNet (Oord et al., 2018) improves upon this by instead using a mixture of logistics distribution, a con- tinuous probability distribution, and performing probabil- ity density distillation which learns a parallel feed-forward network from a pre-trained autoregressive model, allow- ing faster sampling of high ï¬delity audio. ClariNet (Ping et al., 2019) achieves similar audio quality using a simple Gaussian distribution instead and thus having a closed-form loss function, eliminating the need for Monte-Carlo sam- pling. SampleRNN (Mehri et al., 2017) uses a multi-scale, hierarchical recurrent neural network with convolutional upsampling to model long-range complex structures. Wa- veRNN (Kalchbrenner et al., 2018) uses recurrent neural networks that operate separately on the most signiï¬cant and the least signiï¬cant bytes, which can be efï¬ciently deployed in mobile devices while having comparable audio quality to WaveNet. WaveGlow (Prenger et al., 2019) is a ï¬ow-based model for parallel sample-level audio synthesis, which can be trained with a straightforward maximum-likelihood esti- mation and thus is advantageous to the two-stage training process needed for distillation. Parallel WaveGAN (Ya- mamoto et al., 2020) and MelGAN (Kumar et al., 2019) are GAN-based approaches directly modeling audio wave- forms, achieving similar quality as WaveNet and WaveGlow models with signiï¬cantly fewer parameters. While the ap- proaches above serve as sophisticated generative models for raw audio to be conditioned on a compact and controllable representation of audio such as Mel spectrograms, Mel- Net (Vasquez & Lewis, 2019) takes a different approach of hierarchically generating accurate high-resolution Mel spec-
trograms, after which a simple gradient-based optimization can produce high-ï¬delity audio.
VQ-VAE: Oord et al. (2017) introduced VQ-VAE, an ap- proach of downsampling extremely long context inputs to a shorter-length discrete latent encoding using a vector quan- tization, and they showed that it can generate both high- quality images and audio, as well as learn unsupervized representations of phonemes. Razavi et al. (2019) extended the above model by introducing a hierarchy of discrete rep- resentations for images and showed that the resulting model can learn to separate high-level semantics into the highest level of discrete codes which have the largest receptive ï¬eld, while capturing local features like textures in the lower lev- els with smaller receptive ï¬elds. They used the hierarchical model to generate high-diversity and high-ï¬delity images for the conditional ImageNet and FFHQ datasets. Dieleman et al. (2018) tried variants of this approach where instead of a single encoder there are successive encoders that each further compress the lossy discrete encodings from the previ- ous levels. A downside of this approach is that information is lost at each step and requires separate training for each VQ-VAE level, and it leads to a hierarchy collapse problem. De Fauw et al. (2019) used AR decoders which are known to cause the problem of ignoring the latent variables, and they suggested ways to mitigate it. The feed-forward decoders from (Razavi et al., 2019) do not suffer from this issue, and thus we use their approach.
Speech synthesis: Producing natural human voice entails an understanding of linguistic features, mapping of sounds, and steerability of expression. Many text-to-speech (TTS) systems rely on highly engineered features (Klatt, 1980), carefully curated sound segments (Hunt & Black, 1996), statistical parametric modeling (Zen et al., 2009), and of- ten complex pipelines as described in (Arık et al., 2017). These approaches are fairly involved and produce unnatural or inarticulate voices. More recent works like Deep Voice 3 (Ping et al., 2018), Tacotron 2 (Shen et al., 2018), and Char2Wav (Sotelo et al., 2017) learn speech synthesis end- to-end using sequence-to-sequence architecture (Sutskever et al., 2014). The design space is vast, but in general, typical approaches comprise of a bidirectional encoder, a decoder, and a vocoder to build text representations, audio features, and the ï¬nal raw waveforms. To generate multiple voices, text-to-speech models can also condition on the speaker identity (Oord et al., 2016; Gibiansky et al., 2017; Jia et al., 2018) as well as text prompt. By learning and manipulat- ing auxiliary embeddings, models can mimic a new voice (Arık et al., 2018a; Taigman et al., 2018) at test time. These methods, however, require labeled data. Ideas like clus- tering (Dehak et al., 2011), priming (Wang et al., 2018), and variational autoencoders (Hsu et al., 2019; Akuzawa et al., 2018) have been used to learn broader styles of speech and control expressivity in an unsupervised way. There are
Jukebox: A Generative Model for Music
also works on synthesizing singing by additionally con- trolling pitch and timbre. Similar to TTS literature, early works use concatenative methods (Bonada & Serra, 2007) that join short segments of curated singing, and statistical parametric methods (Saino et al., 2006; Oura et al., 2010) which allow modeling of timbre from training data. Both approaches impose fairly strong assumptions resulting in noticeable artifacts. (Blaauw & Bonada, 2017) train a neural TTS model with a parametric vocoder to separate pitch and timbre which can be controlled at generation time.
# 7. Future work
While our approach represents a step forward in the ability to generate coherent long raw audio music samples, we rec- ognize several directions for future work. Great music gen- eration should be high quality over all time scales: it should have a developing musical and emotional structure across the entire piece, local notes and harmonies that always make sense, nuanced and appropriate small timbral and textural details, and audio recording quality that balances and blends the multiple voices well, and without unwanted noise. We view our current model as stronger on the mid-range time scales: often the model generates samples that locally sound very good, with interesting and diverse harmonies, rhythms, instruments, and voices. We have frequently been very impressed how the melody and rhythm generated suits a particular lyric extremely well. However, while the samples stay consistent over longer time scales, we notice they donât have traditional larger music structures (such as choruses that repeat, or melodies that have a question and answer form). Additionally, on the smallest scale, we sometimes hear audio noise or scratchiness.
Beyond the quality of the samples, we also would look to diversify the languages and styles the model is able to generate. Our current model has been trained only on songs whose primary language as detected by (Sites, 2013) is English. In the future, we would look to include other languages and artists. We believe this will be of interest both for generating strictly in those styles, and because historically we have seen much creativity and development coming from unusual blends of existing musical styles.
Finally, we consider it very important that computer music generation also serves as a tool for human musicians, and increasingly those interested in music but without formal training. While we are able to steer our current model some- what through lyric and midi conditioning, we can imagine many other possible ways for humans to inï¬uence the gener- ations, including indicating the mood or dynamic at various sections, or controlling when drums, singers, or other instru- ments should play.
The current model takes around an hour to generate 1 minute of top level tokens. The upsampling process is very slow, as it proceeds sequentially through the sample. Currently it takes around 8 hours to upsample one minute of top level tokens. We can create a human-in-the-loop co-composition process at the top level only, using the VQ-VAE decoders to get a fast upsampling of the top level tokens to hear a very rough sense of what the model generates. The top-level model generates multiple samples, the person picks a fa- vorite (listening to the rough VQ-VAE decoding), and then the model continues generating multiple samples continuing the favorite. This process would be signiï¬cantly improved with faster generation and Transformer upsampling steps. Our models have fast parallel evaluation of likelihood but slow autoregressive sampling. We can instead use a model with fast parallel sampling but slow autoregressive likeli- hood evaluation (Kingma et al., 2016), and distill the infor- mation from our current model into it (Oord et al., 2018). The distillation works by generating samples from the paral- lel sampler and evaluating it likelihood and entropy using the parallel likelihood evaluator, and then optimising the sampler by minimising the KL divergence of it from our current model.
# 8. Conclusion
We have introduced Jukebox, a model that generates raw audio music imitating many different styles and artists. We can condition this music on speciï¬c artists and genres, and can optionally specify the lyrics for the sample. We laid out the details necessary to train a Hierarchical VQ-VAE to compress the music effectively into tokens. While previous work has generated raw audio music in the 20â30 second range, our model is capable of generating pieces that are multiple minutes long, and with recognizable singing in natural-sounding voices.
# 9. Acknowledgement
We would like to thank John Schulman and Will Guss for producing and performing novel riffs for our sampling ex- periments, and Rewon Child, Aditya Ramesh, Ryan Lowe and Jack Clark for providing feedback for initial drafts of this paper.
# References
Akuzawa, K., Iwasawa, Y., and Matsuo, Y. Expressive speech synthesis via modeling expressions with varia- tional autoencoder. In INTERSPEECH, 2018.
Anders, T. and Miranda, E. R. Constraint programming systems for modeling music theories and composition. ACM Computing Surveys (CSUR), 43(4):1â38, 2011.
Jukebox: A Generative Model for Music
Arık, S. Ã., Chrzanowski, M., Coates, A., Diamos, G., Gib- iansky, A., Kang, Y., Li, X., Miller, J., Ng, A., Raiman, J., Sengupta, S., and Shoeybi, M. Deep Voice: Real-time neural text-to-speech. In International Conference on Machine Learning, pp. 195â204, 2017.
Dieleman, S., van den Oord, A., and Simonyan, K. The chal- lenge of realistic music generation: modelling raw audio at scale. In Advances in Neural Information Processing Systems, pp. 7989â7999, 2018.
Arık, S. Ã., Chen, J., Peng, K., Ping, W., and Zhou, Y. Neural voice cloning with a few samples. In Advances in Neural Information Processing Systems, pp. 10019â 10029. 2018a.
Arık, S. Ã., Jun, H., and Diamos, G. Fast spectrogram inversion using multi-head convolutional neural networks. IEEE Signal Processing Letters, 26(1):94â98, 2018b.
Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Dinh, L., Krueger, D., and Bengio, Y. NICE: Non-linear in- dependent components estimation. In International Con- ference in Learning Representations, Workshop, 2015.
Dinh, L., Sohl-Dickstein, J., and Bengio, S. Density esti- mation using Real NVP. In International Conference in Learning Representations, 2017.
Donahue, C., Mao, H. H., Li, Y. E., Cottrell, G. W., and McAuley, J. J. LakhNES: Improving multi-instrumental music generation with cross-domain pre-training. In In- ternational Society for Music Information Retrieval Con- ference, pp. 685â692, 2019.
Berner, C., Brockman, G., Chan, B., Cheung, V., DËebiak, P., Dennison, C., Farhi, D., Fischer, Q., Hashme, S., Hesse, C., et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.
Dong, H.-W., Hsiao, W.-Y., Yang, L.-C., and Yang, Y.-H. MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompani- ment. In Thirty-Second AAAI Conference on Artiï¬cial Intelligence, 2018.
Beyls, P. The musical universe of cellular automata. In International Computer Music Conference, pp. 34â41, 1989.
Blaauw, M. and Bonada, J. A neural parametric singing synthesizer. In INTERSPEECH, 2017.
Engel, J., Resnick, C., Roberts, A., Dieleman, S., Norouzi, M., Eck, D., and Simonyan, K. Neural audio synthesis of musical notes with wavenet autoencoders. In Interna- tional Conference on Machine Learning, pp. 1068â1077, 2017.
Bonada, J. and Serra, X. Synthesis of the singing voice by performance sampling and spectral models. IEEE signal processing magazine, 24(2):67â79, 2007.
Engel, J., Agrawal, K. K., Chen, S., Gulrajani, I., Donahue, C., and Roberts, A. GANSynth: Adversarial neural au- dio synthesis. In International Conference on Learning Representations, 2019.
Brock, A., Donahue, J., and Simonyan, K. Large scale GAN training for high ï¬delity natural image synthesis. In International Conference on Learning Representations, 2019.
Gibiansky, A., Arık, S. Ã., Diamos, G., Miller, J., Peng, K., Ping, W., Raiman, J., and Zhou, Y. Deep Voice 2: Multi- In Advances in neural speaker neural text-to-speech. information processing systems, pp. 2962â2970, 2017.
Brunner, G., Konrad, A., Wang, Y., and Wattenhofer, R. MIDI-VAE: modeling dynamics and instrumentation of music with applications to style transfer. In International Society for Music Information Retrieval Conference, pp. 747â754, 2018.
Child, R., Gray, S., Radford, A., and Sutskever, I. Gen- erating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.
Goodfellow, I. NIPS 2016 tutorial: Generative adversarial In Neural Information Processing Systems, networks. Tutorial, 2016.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672â2680, 2014.
De Fauw, J., Dieleman, S., and Simonyan, K. Hierarchi- cal autoregressive image models with auxiliary decoders. arXiv preprint arXiv:1903.04933, 2019.
Gupta, C., Yılmaz, E., and Li, H. Automatic lyrics tran- scription in polyphonic music: Does background music help? In International Conference on Acoustics, Speech, and Signal Processing, 2020.
Dehak, N., Kenny, P. J., Dehak, R., Dumouchel, P., and Ouellet, P. Front-end factor analysis for speaker veriï¬ca- tion. IEEE Transactions on Audio, Speech, and Language Processing, 19(4):788â798, 2011.
Hadjeres, G., Pachet, F., and Nielsen, F. Deepbach: a steer- able model for bach chorales generation. In International Conference on Machine Learning, pp. 1362â1371. JMLR. org, 2017.
Jukebox: A Generative Model for Music
Hawthorne, C., Stasyuk, A., Roberts, A., Simon, I., Huang, C.-Z. A., Dieleman, S., Elsen, E., Engel, J., and Eck, D. Enabling factorized piano music modeling and generation with the MAESTRO dataset. In International Conference on Learning Representations, 2019.
Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., Oord, A., Dieleman, S., and Kavukcuoglu, K. Efï¬cient neural au- dio synthesis. In International Conference on Machine Learning, pp. 2410â2419, 2018.
Hennequin, R., Khlif, A., Voituret, F., and Moussallam, M. Spleeter: A fast and state-of-the art music source separa- tion tool with pre-trained models. Late-Breaking/Demo ISMIR 2019, November 2019. Deezer Research.
Kim, J. W., Bittner, R., Kumar, A., and Bello, J. P. Neural music synthesis for ï¬exible timbre control. In IEEE In- ternational Conference on Acoustics, Speech and Signal Processing, pp. 176â180, 2019.
Hiller Jr, L. A. and Isaacson, L. M. Musical composition with a high speed digital computer. In Audio Engineering Society Convention 9. Audio Engineering Society, 1957.
Kingma, D. P. and Dhariwal, P. Glow: Generative ï¬ow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pp. 10215â10224, 2018.
Ho, J., Kalchbrenner, N., Weissenborn, D., and Salimans, T. Axial attention in multidimensional transformers. arXiv preprint arXiv:1912.12180, 2019.
Kingma, D. P. and Welling, M. Auto-encoding variational bayes. In International Conference on Learning Repre- sentations, 2014.
Hsu, W.-N., Zhang, Y., Weiss, R. J., Zen, H., Wu, Y., Wang, Y., Cao, Y., Jia, Y., Chen, Z., Shen, J., Nguyen, P., and Pang, R. Hierarchical generative modeling for control- lable speech synthesis. In International Conference on Learning Representations, 2019.
Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., and Welling, M. Improved variational in- ference with inverse autoregressive ï¬ow. In Advances in neural information processing systems, pp. 4743â4751, 2016.
Huang, C. A., Cooijmans, T., Roberts, A., Courville, A. C., and Eck, D. Counterpoint by convolution. In Interna- tional Society for Music Information Retrieval Confer- ence, pp. 211â218, 2017.
Klatt, D. H. Software for a cascade/parallel formant synthe- sizer. Journal of the Acoustical Society of America, 67 (3):971â995, 1980.
Huang, C.-Z. A., Vaswani, A., Uszkoreit, J., Shazeer, N., Simon, I., Hawthorne, C., Dai, A. M., Hoffman, M. D., Dinculescu, M., and Eck, D. Music Transformer: Gen- erating music with long-term structure. In International Conference on Learning Representations, 2019a.
Kumar, K., Kumar, R., de Boissiere, T., Gestin, L., Teoh, W. Z., Sotelo, J., de Brébisson, A., Bengio, Y., and Courville, A. C. MelGAN: Generative adversarial net- works for conditional waveform synthesis. In Advances in Neural Information Processing Systems, pp. 14881â 14892, 2019.
Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, D., Chen, M., Lee, H., Ngiam, J., Le, Q. V., Wu, Y., et al. Gpipe: Efï¬cient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Process- ing Systems, pp. 103â112, 2019b.
Hunt, A. J. and Black, A. W. Unit selection in a con- catenative speech synthesis system using a large speech database. In IEEE International Conference on Acoustics, Speech, and Signal Processing Conference, pp. 373â376, 1996.
LyricWiki. URL https://lyrics.fandom.com/ wiki/LyricWiki.
Mehri, S., Kumar, K., Gulrajani, I., Kumar, R., Jain, S., Sotelo, J., Courville, A., and Bengio, Y. SampleRNN: An unconditional end-to-end neural audio generation model. In International Conference on Learning Representations, 2017.
Jehan, T. Creating music by listening. PhD thesis, Mas- sachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.
Jia, Y., Zhang, Y., Weiss, R., Wang, Q., Shen, J., Ren, F., Chen, z., Nguyen, P., Pang, R., Lopez Moreno, I., and Wu, Y. Transfer learning from speaker veriï¬cation to multispeaker text-to-speech synthesis. In Advances in Neural Information Processing Systems, pp. 4480â4490. 2018.
Moorer, J. A. Music and computer composition. Communi- cations of the ACM, 15(2):104â113, 1972.
Mor, N., Wolf, L., Polyak, A., and Taigman, Y. Autoencoder- based music translation. In International Conference on Learning Representations, 2019.
Oord, A. v. d., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
Jukebox: A Generative Model for Music
Oord, A. v. d., Vinyals, O., and Kavukcuoglu, K. Neural discrete representation learning. In Neural Information Processing Systems, 2017.
Roberts, A., Engel, J., Raffel, C., Hawthorne, C., and Eck, D. A hierarchical latent vector model for learning long- term structure in music. In International Conference on Machine Learning, pp. 4364â4373, 2018.
Oord, A. v. d., Li, Y., Babuschkin, I., Simonyan, K., Vinyals, O., Kavukcuoglu, K., van den Driessche, G., Lockhart, E., Cobo, L., Stimberg, F., Casagrande, N., Grewe, D., Noury, S., Dieleman, S., Elsen, E., Kalchbrenner, N., Zen, H., Graves, A., King, H., Walters, T., Belov, D., and Hassabis, D. Parallel WaveNet: Fast high-ï¬delity speech synthesis. In International Conference on Machine Learning, pp. 3918â3926, 2018.
Oura, K., Mase, A., Yamada, T., Muto, S., Nankaku, Y., and Tokuda, K. Recent development of the HMM-based singing voice synthesis system â Sinsy. 2010.
Papamakarios, G., Pavlakou, T., and Murray, I. Masked autoregressive ï¬ow for density estimation. In Advances in Neural Information Processing Systems, pp. 2338â2347, 2017.
Saino, K., Zen, H., Nankaku, Y., Lee, A., and Tokuda, K. An HMM-based singing voice synthesis system. In INTERSPEECH, 2006.
Salakhutdinov, R. and Hinton, G. Deep boltzmann machines. In Artiï¬cial intelligence and statistics, pp. 448â455, 2009.
Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., Chen, Z., Zhang, Y., Wang, Y., Skerrv-Ryan, R., Saurous, R. A., Agiomvrgiannakis, Y., and Wu, Y. Natural TTS synthesis by conditioning wavenet on mel spectrogram predictions. In IEEE International Confer- ence on Acoustics, Speech and Signal Processing, pp. 4779â4783, 2018.
Sites, D. Compact language detector 2. 2013. URL https: //github.com/CLD2Owners/cld2.
Payne, C. Musenet. OpenAI blog, 2019. URL https: //openai.com/blog/musenet.
Ping, W., Peng, K., Gibiansky, A., Arik, S. O., Kannan, A., Narang, S., Raiman, J., and Miller, J. Deep Voice 3: 2000-speaker neural text-to-speech. In International Conference on Learning Representations, 2018.
Ping, W., Peng, K., and Chen, J. Clarinet: Parallel wave generation in end-to-end text-to-speech. In International Conference on Learning Representations, 2019.
Prenger, R., Valle, R., and Catanzaro, B. WaveGlow: A ï¬ow-based generative network for speech synthesis. In IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 3617â3621, 2019.
Pressing, J. Nonlinear maps as generators of musical design. Computer Music Journal, 12(2):35â46, 1988.
Sotelo, J., Mehri, S., Kumar, K., Santos, J. F., Kastner, K., Courville, A. C., and Bengio, Y. Char2Wav: End-to- end speech synthesis. In International Conference on Learning Representations, 2017.
Sturmel, N. and Daudet, L. Signal reconstruction from stft magnitude: A state of the art. International Conference on Digital Audio Effects, DAFx, 2011.
Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to se- quence learning with neural networks. In Advances in neural information processing systems, pp. 3104â3112, 2014.
Taigman, Y., Wolf, L., Polyak, A., and Nachmani, E. VoiceLoop: Voice ï¬tting and synthesis via a phonological loop. In International Conference on Learning Represen- tations, 2018.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. Language models are unsupervised multitask learners.
Uria, B., Côté, M.-A., Gregor, K., Murray, I., and Larochelle, H. Neural autoregressive distribution esti- mation. The Journal of Machine Learning Research, 17 (1):7184â7220, 2016.
Razavi, A., van den Oord, A., and Vinyals, O. Generating diverse high-ï¬delity images with vq-vae-2. In Advances in Neural Information Processing Systems, pp. 14837â 14847, 2019.
Van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al. Conditional image generation with pixelcnn decoders. In Advances in neural information processing systems, pp. 4790â4798, 2016.
Rezende, D. and Mohamed, S. Variational inference with normalizing ï¬ows. In International Conference on Ma- chine Learning, pp. 1530â1538, 2015.
Vasquez, S. and Lewis, M. MelNet: A generative model arXiv preprint for audio in the frequency domain. arXiv:1906.01083, 2019.
Rezende, D. J., Mohamed, S., and Wierstra, D. Stochastic backpropagation and approximate inference in deep gen- erative models. In International Conference on Machine Learning, pp. 1278â1286, 2014.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Å., and Polosukhin, I. Atten- tion is all you need. In Advances in Neural Information Processing Systems, pp. 5998â6008, 2017.
Jukebox: A Generative Model for Music
Wang, Y., Stanton, D., Zhang, Y., Skerry-Ryan, R., Batten- berg, E., Shor, J., Xiao, Y., Ren, F., Jia, Y., and Saurous, R. A. Style Tokens: Unsupervised style modeling, control and transfer in end-to-end speech synthesis. In Interna- tional Conference on Machine Learning, 2018.
Wu, J., Hu, C., Wang, Y., Hu, X., and Zhu, J. A hierarchical recurrent neural network for symbolic melody generation. IEEE Transactions on Cybernetics, 2019.
Xie, S., Girshick, R., Dollár, P., Tu, Z., and He, K. Aggre- gated residual transformations for deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1492â1500, 2017.
Yamamoto, R., Song, E., and Kim, J.-M. Parallel Wave- GAN: A fast waveform generation model based on gener- ative adversarial networks with multi-resolution spectro- gram. In International Conference on Acoustics, Speech, and Signal Processing, 2020.
Yang, L., Chou, S., and Yang, Y. Midinet: A convolutional generative adversarial network for symbolic-domain mu- sic generation. In International Society for Music Infor- mation Retrieval Conference, pp. 324â331, 2017.
Zen, H., Tokuda, K., and Black, A. W. Review: Statistical parametric speech synthesis. Speech Communication, 51 (11):1039â1064, 2009.
Zhang, H., Dauphin, Y. N., and Ma, T. Fixup initialization: Residual learning without normalization. In International Conference on Machine Learning, 2019a.
Zhang, H., Goodfellow, I., Metaxas, D., and Odena, A. Self-attention generative adversarial networks. In Inter- national Conference on Machine Learning, 2019b.
Jukebox: A Generative Model for Music
# A. Scalable Transformer
We make the Sparse Transformer (Child et al., 2019) more scalable and easier to implement by a few small changes. We implement a simpler attention pattern that has the same performance without needing custom kernels to implement. We simplify the initialization by using the same initalization scale in the whole model without rescaling the weights based on fan-in and depth, and we optimize the memory footprint with fully half-precision training, i.e. storing the model weights, gradients and the optimizer states in half precision and performing computations in half precision as well. To cope with the narrower dynamic range of the fp16 format, we use dynamic scaling of the gradient and Adam optimizer states.
Axis-aligned attention patterns: The Sparse Transformer (Child et al., 2019) sparsiï¬es the attention pattern by reshaping the input sequence into a 2-D sequence of shape (blocks, block length) to use factorized attention. They observe that the strided attention pattern works best for images and audio because it does not have the state bottleneck of the ï¬xed attention. However, their implementation require specialized CUDA kernels. We can obtain a similar pattern by doing masked row, masked column, and unmasked previous-row attention. While the masked row captures the local context, the masked column and unmasked previous-row attention captures the context of all previous rows. We observe the same computational speed as well as training loss with this pattern. Each of these can be implemented directly as a dense attention by transposing or slicing the input sequence along appropriate axes, and thus do not require special CUDA kernels to implement. This can be easily extended to video too. Complementary to our work, a similar pattern was introduced in (Ho et al., 2019) where they also used axis-aligned attention but instead used a two-stream architecture.
Masked Masked Unmasked
# Row Attention
Masked Column Attention
Unmasked Previous-Row Attention
(a) Three axis-aligned attention patterns are sparse attention pat- terns that allow autoregressive generative modeling while only using simple Python-level array manipulation. Masked row and column attention patterns use autoregressive masks, whereas un- masked previous-row attention is fully visible.
<a ul |
(b) Combining two of the attention patterns, each position can attend to any of the previous positions, while not causing a state bottleneck as in ï¬xed sparse attention (Child et al., 2019).
Figure 6. Axis-aligned attention patterns
sion completely without loss in training performance. For the Adam state tensors (m_t, v_t) we do dynamic scal- ing. For each iteration and for every parameter, we rescale its state tensors before casting so that their maximum corre- sponds to the maximum value of the ï¬oat16 range, thus max- imizing the use of the ï¬oat16 range. Thus, we store the state m_t as the tuple (scale, (m_t/scale).half()), where scale = m_t.max()/float16.max(), and similarly for v_t. The above lets us ï¬t models of size 1B parameters into memory for our large context of 8192 to- kens. To train even larger models, we use GPipe (Huang et al., 2019b).
Half-precision parameters and optimizer state with dy- namic scaling: To allow training large models, (Child et al., 2019) uses recompute with gradient checkpointing, per- forms computations using half precision activations and gradients, and uses dynamic loss scaling. While this speeds up training on Volta cores, one still has a high memory us- age from storing the parameters and Adam state in full ï¬oat precision. To scale our models further, we store our matmul parameters and their Adam state in half precision, thus halv- ing our memory usage. We use a single parameter s to set the scale of all weights and initialize all matmul and input/out- put embeddings3 to N (0, s), and position embeddings to N (0, 2s). The initialization ensures all parameters are in a similar dynamic range, and allows us to train in half preci-
3We share the input and output embedding
Jukebox: A Generative Model for Music
# B. Experimental details
# B.1. Music VQ-VAE
We have three separate raw audio VQ-VAEs to produce dis- crete codes at varying hop sizes for the bottom, middle, and top priors. All autoencoders comprise non-causal, dilated 1-D convolutions, and are trained independently using non- autoregressive reconstruction losses. Basic building blocks in these networks share the same architecture, as shown in Figure 7. Each encoder block consists of a downsampling convolution, a residual network, and a 1D convolution with a kernel size of 3. Dilation is grown by a factor of 3 in these residual networks to increase the receptive ï¬eld. The decoder block mirrors this exactly with a 1D convolution with the kernel size of 3, a residual network with dilation contracting across depth, and an upsampling transposed con- volution. Here, all resampling convolutions use a kernel size of 4 and stride 2 so that each building block changes the hop length by a factor of 2. To get higher compression in time, we simply stack more of these blocks. For example, using seven blocks yields a hop length of 128 for the top level autoencoder.
Each residual network has four residual blocks in the mid- dle and top VQ-VAEs resulting in a receptive ï¬eld of 120 ms and 480 ms for the respective discrete tokens. Because increasing the residual depth helped improve reconstruction quality slightly, we doubled the number of residual blocks for the bottom level. This dramatically increases the recep- tive ï¬eld to about 2 seconds per code but the actual receptive ï¬eld is mostly local.
xk Dilated ConviD xD ConviD xy + ConviD + > )
(a) The encoder compresses the raw audio input into a sequence of embeddings. The length of this latent representation relative to the raw audio duration determines the amount of compression, and is an important factor for the trade-off between ï¬delity and coherence.
Codebook Nearest-Neighbor hy me Search - 21 Codebook Lookup
(b) The bottleneck takes the sequence of embeddings from the encoder and maps it into a sequence of code vectors from the codebook. This sequence of code indices is used as a discrete representation to be modeled by the priors. Larger codebooks improve ï¬delity but may be more difï¬cult to compress.
xL | ConviD ConviD Transposed | A oa [ + conte Conv1D & | Dilated 0 e,
(c) The decoder reconstructs the raw audio from latent represen- tations. It is a mirror of the encoder where dilations constracts by a factor of 3 down to 1 at the last block. The ï¬nal Conv1D projects to the desired number of audio channels and also acts as a smoothing operation after a sequence of transposed convolutions.
We also experimented with having a single decoder and modeling the residuals to separate out learned representa- tions as in (Razavi et al., 2019), hoping upsampling priors would simply ï¬ll in local musical structure. However, push- ing information to the top level was quite challenging as the bottommost level reconstructs almost perfectly early on in training. When we add auxiliary objectives to encourage the top to be used more, the top-level codes add serious distortions to the ï¬nal output. A similar challenge is shown in (Dieleman et al., 2018).
# B.2. Music Priors and Upsamplers
Architectural details of our music prior and upsampler mod- els are depicted in Figure 8. They perform autoregressive modeling of tokens at each level, conditioned on informa- tion such as artist and genre, as well as the tokens from the upper level in the case of the upsamplers (Figure 8a). Each artist and genre are learned as embedding vectors, whose sum is provided as the very ï¬rst token in each sequence. In addition, positional embedding is learned as a function of each positionâs absolute and relative timing in the dura- tion of the song. In upsampler models, upper-level tokens
Figure 7. Components of the VQ-VAE model
are upsampled by the conditioner network, using WaveNet- style dilated convolutions followed by a transposed 1-D convolutional layer (Figure 8b).
When the model is trained on lyrics, the top-level prior takes lyrics data corresponding to each audio segment and uses them to train an encoder-decoder Transformer as shown in Figure 8c. All transformer stacks use sparse self-attention layers with the three factorized attention types (row, column, and previous-row) repeating, and encoder-decoder attention layers, when present, are interleaved with the other attention types. Each layer consists of residual connections of an attention and an MLP feedforward network, each prepended by layer normalization (see Figure 8d).
# hy
Jukebox: A Generative Model for Music
Upper-Level Tokens Â¥ Conditioner Timing Dataââ>} Time Embedding Artist & Genre ââ> Zr t Lytics ---}-> Scalable Transformer Y 27
# Jane] doy ey) ul pes ION
Token Embedding xD Dilated ConviD Transposed Conv1D
(a) The structure of our prior models, performing next-token prediction at each level. The Transformer takes the embeddings of the tokens z1:T â1 prepended by the sum of the artist and genre embeddings, in addition to the time embedding that encodes relative and absolute timing of the segments in the duration of the song. The upsampler priors additionally take the tokens from the upper level, which are fed to the conditioner network and added to the input sequence. The top-level prior takes lyrics as conditioning information as well (see Figure 8c).
(b) The conditioner network takes the tokens from the upper level, and their embedding vectors go through non-causal WaveNet-like layers with in- creasingly dilated convolutions. The transposed 1-D convolution upsamples the sequence to the higher temporal resolution of the current level.
# c
# Lyrics
# the
# Level
Only in Top â VQ Codes 7 â-> Ml Y Lyrics Token Embedding VQ Code Embedding y Y s Row Attention Layer Row Attention Layer Column Attention Layer Column Attention Layer x6 Previous-Row Attention Layer Previous-Row Attention Layer |) Row Attention Layer [ââ*|_Encoder-Decoder Attention Layer Column Attention Layer Row Attention Layer Previous-Row Attention Layer Column Attention Layer x3 : Previous-Row Attention Layer |) Row Attention Layer [1 Encoder-Decoder Attention Layer Column Attention Layer : : Previous-Row Attention Layer [â1_Encoder-Decoder Attention Layer Row Attention Layer â| Y Column Attention Layer x3 Lyrics Token Embedding Previous-Row Attention Layer |) Next-Token Prediction â-â Encoder-Decoder Attention Layer es VQ Code Embedding Next-Token Prediction
(c) The Scalable Transformer architecture, shown with the lyrics Transformer used in the top-level prior. The Transformer layers use the three factorized attention types alternatingly, i.e. repeating row, column, and previous-row attentions. In the top-level prior, the VQ Transformer additionally includes interleaved encoder-decoder attention layers that apply lyrics conditioning by attending on the activation of the last encoder layer.
1 Layer Norm â_Y___. Encoder â Features > Attention Layer Norm
(d) Each Transformer layer is a resid- ual attention block, which performs two residual operations, attention and MLP, each prepended with layer nor- malization. Depending on the layerâs type, it uses either one of the three fac- torized attentions or encoder-decoder attention taking the lyrics features from the encoder.
Figure 8. Detailed architecture of the music prior and upsampler models
Jukebox: A Generative Model for Music
# B.3. Hyperparameters
For all Transformersâ residual blocks, we use MLP blocks with the same width as the model width, and attention blocks with queries, keys, and values with width 0.25 times the model width. For all convolutional residual blocks, we use convolutions with same channels as the model width.
Sample rate Sample length Hop lengths Embedding width Residual block width Residual blocks (per 2x downsample) Conv ï¬lter size Conv channels Dilation growth rate Commit weight β Codebook EMA γ Codebook size Spectral loss STFT bins Spectral loss STFT hop length Spectral loss STFT window size Initialization scale Batch size Training steps Learning rate
44100 393216 8, 32, 128 64 64, 32, 32 8, 4, 4 3 32 3 0.02 0.99 2048 2048, 1024, 512 240, 120, 50 1200, 600, 240 0.02 256 384618 0.0003
Table 4. VQ-VAE hyperparameters
1B upsamplers Sample length Context length Transformer width Transformer layers Attention heads Factorized attention shape Conditioner residual block width Conditioner residual blocks Conditioner conv ï¬lter size Conditioner conv channels Conditioner dilation growth rate Conditioner dilation cycle Initialization scale Batch size Training steps Learning rate Adam β2 Weight decay 262144, 65536 8192 1920 72 1 (128, 64) 1024 16 3 1024 3 8 0.004, 0.008 192, 184 265000, 279000 0.0003 0.95 0.01
Sample length Context length Transformer width Transformer self-attention layers Attention heads Factorized attention shape Lyrics encoder tokens Lyrics encoder width Lyrics encoder layers Lyrics encoder attention heads Lyrics encoder factored attention shape Encoder-Decoder attention layers Initialization scale Encoder initialization scale Batch size Training steps Learning rate Adam β2 Weight decay 5B prior 1048576 8192 4800 72 8 (128, 64) 512 1280 18 4 (32, 16) 7 0.002 0.014 512 310500 0.00015 0.925 0.002
Table 6. Top-level prior hyperparameters
Table 5. Middle- and bottom-level upsampler hyperparameters
Jukebox: A Generative Model for Music
# B.4. t-SNE Plot of Artists
Barry White ee Tony Bennett e ° : eo . Ray Noble rats Dominoe .© Natalie Cole Barrington Levy e eâ © R&B Soul © Jazz ee Pd e @ ow . © Cab Calloway © R&B © Blues e Lou Rawls @, e hd é § Louis Prima © Hip Hop © Country ThePlatters@®@ @ © Donna Summer e @ Rock @ Soundtrack Dinah Washington _â® The Mills Brothers © Pop @ Classical Reggae Jimmy Cliff Sean Paul Shaggy Bobby Bland © @ ¢ @ Nina Simone Walter Gieseking Luther Vandross eo eee e yee Yann Tiersen Ze © Glenn Gould e % © â @ Bessie Smith e i) The Weeknd@® ee o* e Howard Shore oe e e ° Garrick Ohlsson Lonnie Johnson e oc eee S - 2° . ce S © Ge Franz Schubert Ramin Djawadi @ e © 6 & @ Dean Martin e > eg Neil Diamond Vangelis @ eÂ¥o-YoMa Henry Mancini e eee ee KO e buck Owens Johann Sebastian Bach Andrea Bocelli rice BBY ac rod Bart les Ps e Pink Floyd Billy Ray Cyrus The Cure @ pee e e The Smashing Pumpkins @ â eink e Linkin Park
Figure 9. t-SNE of (artist, genre) embedding. The overall clustering shows very clearly how genres are related to one another. The broadest of all, pop, is situated in the middle of rock, country, blues, hip hop, and many more. Soundtrack and classical form their own island. Within a genre, we see a similar trend among artists. John Lennon, Paul McCartney, George Harrison and Ringo Starr are clustered around The Beatles. Cheap Trick which has a number of Beatles covers is also found near. Because we are showing only about 400 artists here, not all neighboring artists may be related. For an interactive version, we point to our blog post. | {
"id": "1609.03499"
} |
2004.13969 | Complementing Lexical Retrieval with Semantic Residual Embedding | This paper presents CLEAR, a retrieval model that seeks to complement
classical lexical exact-match models such as BM25 with semantic matching
signals from a neural embedding matching model. CLEAR explicitly trains the
neural embedding to encode language structures and semantics that lexical
retrieval fails to capture with a novel residual-based embedding learning
method. Empirical evaluations demonstrate the advantages of CLEAR over
state-of-the-art retrieval models, and that it can substantially improve the
end-to-end accuracy and efficiency of reranking pipelines. | http://arxiv.org/pdf/2004.13969 | Luyu Gao, Zhuyun Dai, Tongfei Chen, Zhen Fan, Benjamin Van Durme, Jamie Callan | cs.IR | ECIR 2021 | null | cs.IR | 20200429 | 20210329 | 1 2 0 2
r a M 9 2 ] R I . s c [
3 v 9 6 9 3 1 . 4 0 0 2 : v i X r a
# Complement Lexical Retrieval Model with Semantic Residual Embeddings
Luyu Gao1, Zhuyun Dai1, Tongfei Chen2, Zhen Fan1 Benjamin Van Durme2, Jamie Callan1
1 Carnegie Mellon University, 2 Johns Hopkins University
Abstract. This paper presents clear, a retrieval model that seeks to complement classical lexical exact-match models such as BM25 with semantic matching signals from a neural embedding matching model. clear explicitly trains the neural embedding to encode language struc- tures and semantics that lexical retrieval fails to capture with a novel residual-based embedding learning method. Empirical evaluations demon- strate the advantages of clear over state-of-the-art retrieval models, and that it can substantially improve the end-to-end accuracy and eï¬ciency of reranking pipelines.
# Introduction
State-of-the-art search engines adopt a multi-stage retrieval pipeline system: an eï¬cient ï¬rst-stage retriever uses a query to fetch a set of documents from the entire document collection, and subsequently one or more rerankers reï¬ne the ranking [28]. The retriever needs to run fast with high eï¬ciency in order to scan through the entire corpus with low latency. As a result, retrievers have remained simple and give only mediocre performance. With recent deep neural models like BERT [10] rerankers pushing reranking accuracy to new levels, ï¬rst-stage retrievers are gradually becoming the bottleneck in modern search engines.
Typical ï¬rst-stage retrievers adopt a bag-of-words retrieval model that com- putes the relevance score based on heuristics deï¬ned over the exact word overlap between queries and documents. Models such as BM25 [32] remained state-of- the-art for decades and are still widely used today. Though successful, lexical retrieval struggles when matching goes beyond surface forms and fails when query and document mention the same concept using diï¬erent words (vocabu- lary mismatch), or share only high-level similarities in topics or language styles. An alternative approach for ï¬rst-stage retrieval is a neural-based, dense em- bedding retrieval: query words are mapped into a single vector query represen- tation to search against document vectors. Such methods learn an inner product space where retrieval can be done eï¬ciently leveraging recent advances in max- imum inner product search (MIPS) [34,15,12]. Instead of heuristics, embedding retrieval learns an encoder to understand and encode queries and documents, and the encoded vectors can softly match beyond text surface form. However, single vector representations have limited capacity [1], and are unable to produce gran- ular token-level matching signals that are critical to accurate retrieval [11,33].
# 2 L. Gao et al.
We desire a model that can capture both token-level and semantic-level in- formation for matching. We propose a novel ï¬rst-stage retrieval model, Com- plementary Retrieval Model ( clear), that uses dense embedding retrieval to complement exact lexical retrieval. clear adopts a single-stage-multi-retriever design consisting of a lexical retrieval model based on BM25 and an embedding retrieval model based on a Siamese framework that uses BERT [10] to generate query/document embedding representations. Importantly, unlike existing tech- niques that train embeddings directly for ranking independently [40,4], clear explicitly trains the embedding retrieval model with a residual method: the em- bedding model is trained to build upon the lexical modelâs exact matching signals and to ï¬x the mistakes made by the lexical model by supplementing semantic level information, eï¬ectively learning semantic matching not captured by the lexical model, which we term the un-captured residual.
Our experiments on large-scale retrieval data sets show the substantial and consistent advantages of clear over state-of-the-art lexical retrieval models, a strong BERT-based embedding-only retrieval model, and a fusion of the two. Furthermore, clearâs initial retrieval provides additive gains to downstream rerankers, improving end-to-end accuracy and eï¬ciency. Our qualitative analysis reveals promising improvements as well as new challenges brought by clear.
# 2 Related Work
Traditionally, ï¬rst-stage retrieval has relied on bag-of-words models such as BM25 [32] or query likelihood [19], and has augmented text representations with n-grams [25], controlled vocabularies [30], and query expansion [20]. Bag- of-words representations can be improved with machine learning techniques, e.g., by employing machine-learned query expansion on bag-of-sparse-features [39,5], adjusting termsâ weights [8] with BERT [10], or adding terms to the document with sequence-to-sequence models [29]. However, these approaches still use the lexical retrieval framework and may fail to match at a higher semantic level.
Neural models excel at semantic matching with the use of dense text represen- tations. Neural models for IR can be classiï¬ed into two groups [11]: interaction- based and representation-based models. Interaction-based models model interac- tions between word pairs in queries and documents. Such approaches are eï¬ective for reranking, but are cost-prohibitive for ï¬rst-stage retrieval as the expensive document-query interactions must be computed online for all ranked documents. Representation-based models learn a single vector representation for the query or the document and use a simple scoring function (e.g., cosine or dot product) to measure their relevance. Representation-based neural retrieval mod- els can be traced back to eï¬orts such as LSI [9], Siamese networks [2], and MatchPlus [3]. Recent research investigated using modern deep learning tech- niques to build vector representations: [21] and [13] used BERT-based retrieval to ï¬nd passages for QA; [4] proposes a set of pre-training tasks for sentence re- trieval. Representation-based models enable low-latency, full-collection retrieval with a dense index. By representing queries and documents with dense vectors,
Complement Lexical Retrieval Model with Semantic Residual Embeddings
retrieval is reduced to a maximum inner product search (MIPS) [34] problem. In recent years, there has been increasing eï¬ort on accelerating maximum inner product and nearest neighbor search, which led to high-quality implementa- tions of libraries for nearest neighbor search such as hnsw [24], FAISS [15], and SCaNN [12]. Notably, with these technologies, nearest neighbor search can now scale to millions of candidates with millisecond latency [15,12], and has been suc- cessfully used in large-scale retrieval tasks [21,13]. They provide the technical foundation for fast embedding retrieval of our proposed clear model.
The eï¬ectiveness of representation-based neural retrieval models for stan- dard ad-hoc search is mixed [11,40]. All of the representation-based neural re- trieval models share the same limitation â they use a ï¬xed number of dimen- sions, which incurs the speciï¬city vs. exhaustiveness trade-oï¬ as in all controlled vocabularies [33]. Most prior research on hybrid models has focused on the reranking stage [26]. Some very recent research begins to explore hybrid lex- ical/embedding models. Its focus is mainly on improving the embedding part with weak-supervision [18] for low-resource setups, or new neural architectures that use multiple embedding vectors to raise model capacity [23]. In these works, embedding models are all trained independently from the lexical models and rely on simple post-training fusion to form a hybrid score. To the best of our knowl- edge, ours is the ï¬rst work that investigates jointly training latent embeddings and lexical retrieval for ï¬rst-stage ad hoc retrieval.
# 3 Proposed Method
clear consists of a lexical retrieval model and an embedding retrieval model. Between these two models, oneâs weakness is the otherâs strength: lexical retrieval performs exact token matching but cannot handle vocabulary mismatch; mean- while, the embedding retrieval supports semantic matching but loses granular (lexical level) information. To ensure that the two types of models work together and ï¬x each otherâs weakness, we propose a residual -based learning framework that teaches the neural embeddings to be complementary to the lexical retrieval.
# 3.1 Lexical Retrieval Model
Lexical retrievers are designed to capture token level matching information. They heuristically combine token overlap information, from which they compute a matching score for query document pairs. Decades of research have produced many lexical algorithms such as vector space models, Okapi BM25 [32], and query likelihood [19]. We use BM25 [32] given its popularity in existing systems. Given a query q and document d, BM25 generates a score based on the
overlapping words statistics between the pair.
. tf, siex(q,d) = BM25(q,d) = S> rsj,- a at teqnd tra th {( âb)+ ial} (1)
t is a term, tft,d is tâs frequency in document d, rsjt is tâs Robertson-Sp¨arck Jones weight, and l is the average document length. k1 and b are parameters.
4 L. Gao et al.
# 3.2 Embedding Retrieval Model
The embedding retrieval model encodes either the query or document text se- quence into a dense embedding vector, and matches queries and documents softly by comparing their vector similarity. Generally, the embedding retrieval model can take various neural architectures that encode natural language sequences such as CNN [16], or LSTM [14], as long as the model outputs can be pooled effectively into a single fixed-length vector for any input. A model capable of deeper text understanding is usually desired to produce high-quality embedding. This work uses a Transformer encoder. We start with pretrained BERT weights and fine-tune the model to encode both queries and documents into vectors in a d-dimension embedding space, i.e., Vg, Va ⬠R*. The model has a Siamese structure, where the query and document BERT models share param- eters @ in order to reduce training time, memory footprint, and storthe special token (QRY) to queries and (DOC) to documents. For a given query or document, the embedding model computes the corresponding query vector vy or document vector va, following SentenceBERT [81], by average pooling representations from the encoderâs last layers.
vq = AvgPool[BERT»((QRyY) ; query)] (2)
va = AvgPool[BERT ((Doc) ; document)] (3)
The embedding matching score semb(q, d) is the dot product of the two vectors. We use dot product as the similarity metric as it allows us to use MIPS [15,12] for eï¬cient ï¬rst-stage retrieval.
semb(q, d) = vT q vd . (4)
# 3.3 Residual-based Learning
We propose a novel residual-based learning framework to ensure that the lexi- cal retrieval model and the embedding retrieval model work well together. While BM25 has just two trainable parameters, the embedding model has more ï¬exibil- ity. To make the best use of the embedding model, we must avoid the embedding model ârelearningâ signals already captured by the lexical model. Instead, we focus its capacity on semantic level matching missing in the lexical model.
In general, the neural embedding model training uses hinge loss [36] deï¬ned over a triplet: a query q, a relevant document d+, and an irrelevant document dâ serving as a negative example:
L = [m â semb(q, d+) + semb(q, dâ)]+ (5)
where [x]+ = max{0, x}, and m is a static loss margin. In order to train embed- dings that complement lexical retrieval, we propose two techniques: sampling negative examples dâ from lexical retrieval errors, and replacing static margin m with a variable margin that conditions on the lexical retrievalâs residuals.
Complement Lexical Retrieval Model with Semantic Residual Embeddings
Error-based Negative Sampling We sample negative examples (dâ in Eq. 5) from those documents mistakenly retrieved by lexical retrieval. Given a positive query-document pair, we uniformly sample irrelevant examples from the top N documents returned by lexical retrieval with probability p. With such negative samples, the embedding model learns to diï¬erentiate relevant documents from confusing ones that are lexically similar to the query but semantically irrelevant.
Residual-based Margin Intuitively, diï¬erent query-document pairs require diï¬erent levels of extra semantic information for matching on top of exact match- ing signals. Only when lexical matching fails will the semantic matching signal be necessary. Our negative sampling strategy does not tell the neural model the degree of error made by the lexical retrieval that it needs to ï¬x. To address this challenge, we propose a new residual margin. In particular, in the hinge loss, the conventional static constant margin m is replaced by a linear residual margin function mr, deï¬ned over slex(q, d+) and slex(q, dâ), the lexical retrieval scores:
mr(slex(q, d+), slex(q, dâ)) = ξ â λtrain(slex(q, d+) â slex(q, dâ)),
where ξ is a constant non-negative bias term. The diï¬erence slex(q, d+)âslex(q, dâ) corresponds to a residual of the lexical retrieval. We use a scaling factor λtrain to adjust the contribution of residual. Consequently, the full loss becomes a function of both lexical and embedding scores computed on the triplet,
L = [mr(slex(q, d+), slex(q, dâ)) â semb(q, d+) + semb(q, dâ)]+
For pairs where the lexical retrieval model already gives an eï¬ective document ranking, the residual margin mr (Eq. 6) becomes small or even becomes negative. In such situations, the neural embedding model makes little gradient update, and it does not need to, as the lexical retrieval model already produces satisfying results. On the other hand, if there is a vocabulary mismatch or topic diï¬erence, the lexical model may fail, causing the residual margin to be high and thereby driving the embedding model to accommodate in gradient update. Through the course of training, the neural model learns to encode the semantic patterns that are not captured by text surface forms. When training ï¬nishes, the two models will work together, as clear.
# 3.4 Retrieval with CLEAR
clear retrieves from the lexical and embedding index respectively, taking the union of the resulting candidates, and sorts using a ï¬nal retrieval score: a weighted average of lexical matching and neural embedding scores:
sclear(q, d) = λtestslex(q, d) + semb(q, d) (8)
We give clear the ï¬exibility to take diï¬erent λtrain and λtest values. Though both are used for interpolating scores from diï¬erent retrieval models, they have
(7)
# 6 L. Gao et al.
diï¬erent interpretations. Training λtrain serves as a global control over the resid- ual based margin. On the other hand, testing λtest controls the contribution from the two retrieval components.
clear achieves low retrieval latency by having each of the two retrieval models adopt optimized search algorithms and data structures. For the lexi- cal retrieval model, clear index the entire collection with a typical inverted index. For the embedding retrieval model, clear pre-computes all document embeddings and indexes them with fast MIPS indexes such as FAISS [15] or SCANN [12], which can scale to millions of candidates with millisecond latency. As a result, clear can serve as a ï¬rst-stage, full-collection retriever.
# 4 Experimental Methodology
Dataset and Metrics We use the MS MARCO passage ranking dataset [27], a widely-used ad-hoc retrieval benchmark with 8.8 millions passages. The training set contains 0.5 million pairs of queries and relevant passages, where each query on average has one relevant passage1. We used two evaluation query sets with diï¬erent characteristics:
â MS MARCO Dev Queries is the MS MARCO datasetâs oï¬cial dev set, which has been widely used in prior research [28,8]. It has 6,980 queries. Most of the queries have only 1 document judged relevant; the labels are binary. MRR@10 is used to evaluate the performance on this query set following [27]. We also report the Recall of the top 1,000 retrieved (R@1k), an important metric for ï¬rst-stage retrieval.
â TREC2019 DL Queries is the oï¬cial evaluation query set used in the TREC 2019 Deep Learning Track shared task [6]. It contains 43 queries that are manually judged by NIST assessors with 4-level relevance labels, allowing us to understand the modelsâ behavior on queries with multiple, graded rele- vance judgments (on average 94 relevant documents per query). NDCG@10, MAP@1k and R@1k are used to evaluate this query setâs accuracy, following the shared task.
Compared Systems We compare clear retrieval with several ï¬rst-stage lex- ical retrieval systems that adopt diï¬erent techniques such as traditional BM25, deep learning augmented index and/or pseudo relevance feedback.
â BM25 [32]: A widely-used oï¬-the-shelf lexical-based retrieval baseline. â DeepCT [8]: A state-of-the-art ï¬rst-stage neural retrieval model. It uses BERT to estimate term importance based on context; in turn these context- aware term weights are used by BM25 to replace tf in Equation 1.
â BM25+RM3: RM3 [20] is a popular query expansion technique. It adds related terms to the query to compensate for the vocabulary gap between queries and documents. BM25+RM3 has been proven to be strong [22].
1 Dataset is available at https://microsoft.github.io/msmarco/.
Complement Lexical Retrieval Model with Semantic Residual Embeddings
â DeepCT+RM3: [7] shows that using DeepCT term weights with RM3 can further improve upon BM25+RM3.
In addition, we also compare with an embedding only model, BERT-Siamese: This is a BERT-based embedding retrieval model without any explicit lexical matching signals, as described in subsection 3.2. Note that although BERT em- bedding retrieval models have been tested on several question-answering tasks [21,13,4], their eï¬ectiveness for ad hoc retrieval remains to be studied.
Pipeline Systems To investigate how the introduction of clear will aï¬ect the ï¬nal ranking in state-of-the-art pipeline systems, we introduce two pipeline setups.
â BM25+BERT reranker: this is a state-of-the-art pipelined retrieval system. It uses BM25 for ï¬rst-stage retrieval, and reranks the top candidates using a BERT reranker [28]. Both the bert-base and the bert-large reranker provided by [28] are explored. Note that BERT rerankers use a very deep self-attentive architecture whose computation cost limits its usage to only the reranking stage.
â clear+BERT reranker: a similar pipelined retrieval system that uses clear as the ï¬rst-stage retreiever, followed by a BERT reranker (bert-base or bert-large reranker from [28]).
Setup Lexical retrieval systems, including BM25, BM25+RM3, and deep lex- ical systems DeepCT and DeepCT+RM3, build upon Anserini [38]. We set k1 and b in BM25 and DeepCT using values recommended by [8], which has stronger performance than the default values. The hyper-parameters in RM3 are found through a simple parameter sweep using 2-fold cross-validation in terms of MRR@10 and NDCG@10; the hyper-parameters include the number of feedback documents and the number of feedback terms (both searched over {5, 10, · · · , 50}), and the feedback coeï¬cient (searched over {0.1, 0.2, · · · , 0.9}). Our neural models were built on top of the HuggingFace [37] implementation of BERT. We initialized our models with bert-base-uncased, as our hardware did not allow ï¬ne-tuning bert-large models. For training, we use the 0.5M pairs of queries and relevant documents. At each training step, we randomly sample one negative document from the top 1,000 documents retrieved by BM25. We train our neural models for 8 epochs on one RTX 2080 Ti GPU; training more steps did not improve performance. We set ξ = 1 in Eq. 6. We ï¬xed λtrain = 0.1 in the experiments. For λtest, we searched over {0, 1, 0.2, · · · , 0.9} on 500 training queries, ï¬nding 0.5 to be the most robust. Models are trained using the Adam optimizer [17] with learning rate 2 à 10â5, and batch size 28. In pipelined systems, we use BERT rerankers released by Nogueira et al. [28]. Statistical signiï¬cance was tested using the permutation test with p < 0.05.
# 5 Results and Discussion
We study clearâs retrieval eï¬ectiveness on a large-scale, supervised retrieval task, its impact on downstream reranking, and its winning/losing cases.
8 L. Gao et al.
Type Model MS MARCO Dev TREC2019 DL MRR @10 R@1k NDCG @10 MAP @1k R@1k Lexical 1 BM25 2 BM25+RM3 3 DeepCT 4 DeepCT+RM3 0.1912 0.166 0.243124 0.91312 0.91412 0.23212 0.864 0.861 0.3775 0.7385 0.506 0.452135 0.78913 0.5551 0.7561 0.4221 0.5511 0.601123 0.481123 0.79413 Embedding 5 BERT-Siamese 0.3081â4 0.928123 0.594123 0.307 0.584 Lexical+ Embedding 6 clear â w/ Random Sampling 0.241â â w/ Constant Marginâ 0.314â 0.3381â5 0.9691â5 0.6991â5 0.5111â5 0.8121â5 0.926â 0.955â 0.553â 0.664â 0.409â 0.455â 0.779â 0.794
Table 1: First-stage retrieval eï¬ectiveness of clear on the MS MARCO dataset, evaluated using two query evaluation sets, with ablation studies. Superscripts 1â6 indicate statistically signiï¬cant improvements over methods indexed on the left. â indicates a number being statistically signiï¬cantly lower than clear. â: clear w/ Constant Margin is equivalent to a post-training fusion of BM25 and BERT-Siamese.
# 5.1 Retrieval Accuracy of CLEAR
In this experiment, we compare clearâs retrieval performance with ï¬rst stage retrieval models described in section 4 and record their performance in Table 1.
clear vs. Lexical Retrieval clear outperforms BM25 and BM25+RM3 sys- tems by large margins in both recall-oriented metrics (R@1k and MAP@1k) as well as precision-oriented ones (MRR@10 and NDCG@10). clear also signiï¬- cantly outperforms DeepCT and DeepCT+RM3, two BERT-augmented lexical retrieval models. DeepCT improves over BM25 by incorporating BERT-based contextualized term weighting, but still use exact term matching. The results show that lexical retrieval is limited by the strict term matching scheme, show- ing clearâs advantages of using embeddings for semantic-level soft matching.
clear vs. BERT-Siamese Retrieval BERT-Siamese performs retrieval solely relying on dense vector matching. As shown in Table 1, clear outperforms BERT-Siamese with large margins, indicating that an embedding-only retrieval is not suï¬cient. Interestingly, though outperforming BM25 by a large margin on MSMARCO Dev queries, BERT-Siamese performs worse than BM25 in terms of MAP@1k and recall on TREC DL queries. The main diï¬erence between the two query sets is that TREC DL query has multiple relevant documents with graded relevance levels. It therefore requires a better-structured embedding space to capture this, which proves to be harder to learn here. clear circumvents this full embedding space learning problem by grounding in the lexical retrieval model while using embedding as complement.
Complement Lexical Retrieval Model with Semantic Residual Embeddings
Retriever Reranker MSMARCO Dev TREC DL Rerank Depth MRR@10 NDCG@10 K 1 BM25 2 clear - - 0.191 0.3381 0.506 0.6991 - - 3 BM25 4 clear bert-base bert-base 0.3451 0.360123 0.7071 0.71912 1k 20 bert-large 5 BM25 6 clear bert-large 0.370123 0.3801â5 0.737123 0.7521â5 1k 100
Table 2: Comparing clear and the state-of-the-art BM25+BERT Reranker pipeline on the MS MARCO passage ranking dataset with two evaluation sets (Dev: MS MARCO Dev queries; TREC: TREC2019 DL queries). We record the most optimal reranking depth for each initial retriever. Superscripts 1â6 indicate statistically signiï¬cant improvements over the corresponding methods.
Ablation Studies We hypothesize that clearâs residual-based learning ap- proach can optimize the embedding retrieval to complement the lexical retrieval, so that the two parts can generate additive gains when combined. To verify this hypothesis, we run ablation studies by (1) replacing the error-based negative samples with random negative samples, and (2) replacing the residual margin in the loss function with a constant margin, which is equivalent to a fusion of BM25 and BERT-Siamese rankings. Using random negative samples leads to a substantial drop in clearâs retrieval accuracy, showing that it is important to train the embeddings on the mistakenly-retrieved documents from lexical re- trieval to make the two retrieval models additive. Using constant margins instead of residual margins also lowers the performance of the original clear model. By enforcing a residual margin explicitly, the embedding model is forced to learn to compensate for the lexical retrieval, leading to improved performance. The results conï¬rm that clear is more eï¬ective than a post-training fusion approach where the retrieval models are unaware of each other.
# 5.2 Impacts of clear on Reranking
Similar to other ï¬st-stage retrievers, clear can be incorporated into the state- of-the-art pipelined retrieval system, where its candidate list can be reranked by a deep neural reranker. To quantitatively evaluate the beneï¬t of clear, in the next experiment, we test reranking clear results with BERT rerankers.
Results are listed in Table 2. Here, we compare clear against the widely- used BM25 in a two-stage retrieval pipeline, using current state-of-the-art BERT rerankers [28] as the second stage reranking model. The rerankers use the con- catenated query-document text as input to BERT to classify the relevance. We experimented with both bert-base and bert-large reranker variants provided
9
10
# L. Gao et al.
><: CLEAR + BERT-Base Reranker 0.28 | -*-BM25 + BERT-Base Reranker =~ CLEAR w/o Reranker 0.26 0 ~=â250 «500-750-1000 Reranking Depth K
âSe CLEAR o8 ~e-BM25 04 0 250 500 750 1000 Reranking Depth K
(a) Retrieval Recall (b) Reranking Accuracy
Fig. 1: Comparison between clear and BM25 pipeline systems on MS MARCO Dev queries. The system uses the bert-base reranker to rerank against various depth K.
by [28]. We also investigate the reranking depth for each initial retriever and record the most optimal here.
The performance of clear without reranking is already close to that of the two-stage BM25+bert-base reranker. When adding a reranker, clear pipelines signiï¬cantly outperforms the BM25 pipelines. We also discover that reranking a truncated top list for clear is suï¬cient, while top 1000 is required for BM25. Concretely, the required re-ranking depth decreased from K=1,000 to K=20 for bert-base reranker and K=100 for bert-large reranker, reducing the computational cost by 10Ãâ50Ã. In other words, clear generates strong initial rankings that systematically raise the position of relevant documents across all queries and help state-of-the-art rerankers to achieve higher accuracy with lower computational costs, improving end-to-end accuracy, eï¬ciency, and scalability. Figure 1 further plots the recall and reranking accuracy at various reranking depth. Figure 1a shows that clear had higher recall values than BM25 at all depths, meaning that clear can provide more relevant passages to the reranker. Figure 1b shows the performance of a BERT reranker [28] applied to the top K documents retrieved from either BM25 or clear. When applied to BM25, the accuracy of the BERT reranker improved as reranking depth K increases. Interestingly for clear, the reranking accuracy was already high with small K. While increasing K improves global recall, the reranking accuracy shows satura- tion with larger K, indicating that BERT rerankers do not fully exploit the lower portion of clear candidate lists. We investigate this further in subsection 5.3.
# 5.3 Case Study: The Goods and the new Challenges
In this section, we take a more in-depth look into clear through case studies. We ï¬rst examine how BM25 ranking changes after being complemented by the dense embedding retrieval in clear, then turn to investigate why the lower part of clearâs candidates are challenging for BERT rerankers.
Complement Lexical Retrieval Model with Semantic Residual Embeddings
Query Document retrieved by clear BM25 â clear weather in danville, ca Thursday:The Danville forecast for Aug 18 is 85 de- grees and Sunny . There is 24 percentage chance of rain and 10 mph winds from the West. Friday:... 989 â 10 brief government deï¬nition Legal Deï¬nition of brief. 1 1 : a concise statement of a clientâs case written for the instruction of an attorney usually by a law clerk ... 996 â 7 population of jabodatek The population of Jabodetabek, with an area of 6,392 km2, was over 28.0 million according to the Indonesian Census 2010 .... not retrieved â 1
Table 3: Example documents retrieved by clear. We show ranking improve- ments from pure BM25 to clearâs complementary setup .
Query Document retrieved by clear clear â Rerank who is robert gray Grey started ... dropping his Robert Gotobed alias and using his birthname Robert Grey. rank 496 â rank 7 what is theraderm used for A thermogram is a device which measures heat through use of picture .... rank 970 â rank 8 what is the daily life of thai people Activities of daily living include are the tasks that are required to get going in the morning ... 1 walking. 2 bathing. 3 dressing. rank 515 â rank 7
Table 4: Challenging non-relevant documents retrieved only by clear, not by BM25, through semantic matching. We show in clear initial candidate list ranking as well as after BERT reranking.
In Table 3, we show three example queries to which the clear brings huge retrieval performance improvement. We see that in all three queries, critical query terms, weather, government and jabodatek, have no exact match in the relevant document, leading to failures in exact match only BM25 system. clear solves this problem, complementing exact matching with high-level semantic matching. As a result, âweatherâ can match with document content âsunny, rain, windâ and âgovernmentâ with document content âattorny, law clerkâ. In the third query, spelling mismatch between query term âjabodatekâ and document term âJabodetabekâ is also handled.
While clear improves relevant documentsâ rankings in the candidate list, it also brings in new forms of non-relevant documents that are not retrieved by lexical retrievers like BM25, and aï¬ects downstream rerankers. In Table 4, we show three queries and three corresponding false positive documents retrieved by clear, which are not retrieved by BM25. Unlike in BM25, where false positives mostly share surface text similarity with the query, in the case of clear, the false positives can be documents that are topically related but not relevant. In
# 12 L. Gao et al.
the ï¬rst two queries, clear mistakenly performs soft spell matches, while in the third one critical concept âthai peopleâ is ignored.
Such retrieval mistakes further aï¬ect the performance of downstream BERT reranker. As BERT also performs semantic level matching without explicit exact token matching to ground, the rerankers can amplify such semantically related only mistakes. As can be seen in Table 4, those false positive documents reside in the middle or at the bottom of the full candidate list of clear. With BERT reranker, however, their rankings go to the top. In general, clear goes beyond exact lexical matching to rely on semantic level matching. While improving ini- tial retrieval, it also inevitably brings in semantically related false positives. Such false positives are inherently more challenging for state-of-the-art neural reranker and require more robust and discriminative rerankers. We believe this also creates new challenges for future research to improve neural rerankers.
# 6 Conclusion
Classic lexical retrieval models struggle to understand the underlying meanings of queries and documents. Neural embedding based retrieval models can soft match queries and documents, but they lose speciï¬c word-level matching infor- mation. This paper present clear, a retrieval model that complements lexical retrieval with embedding retrieval. Importantly, instead of a linear interpolation of two models, the embedding retrieval in clear is exactly trained to ï¬x the errors of lexical retrieval.
Experiments show that clear achieves the new state-of-the-art ï¬rst-stage retrieval eï¬ectiveness on two distinct evaluation sets, outperforming classic bag- of-words, recent deep lexical retrieval models, and a BERT-based pure neural retrieval model. The superior performance of clear indicates that it is beneï¬cial to use the lexical retrieval model to capture simple relevant patterns using exact lexical clues, and complement it with the more complex semantic soft matching patterns learned in the embeddings.
Our ablation study demonstrates the eï¬ectiveness of clearâs residual-based learning. The error-based negative sampling allows the embedding model to be aware of the mistakes of the lexical retrieval, and the residual margin further let the embeddings focus on the harder errors. Consequently, clear outperforms post-training fusion models that directly interpolate independent lexical and embedding retrieval modelsâ results.
A single-stage retrieval with clear achieves an accuracy that is close to popular two-stage pipelines that uses a deep Transformer BERT reranker. We view this as an encouraging step towards building deep and eï¬cient retrieval systems. When combined with BERT rerankers in the retrieval pipeline, clearâs strong retrieval performance leads to better end-to-end ranking accuracy and eï¬ciency. However, we observe that state-of-the-art BERT neural rerankers do not fully exploit the retrieval results of clear, pointing out future research directions to build more discriminative and robust neural rerankers.
Complement Lexical Retrieval Model with Semantic Residual Embeddings
# References
1. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473 (2015)
2. Bromley, J., Guyon, I., LeCun, Y., S¨ackinger, E., Shah, R.: Signature veriï¬cation using a siamese time delay neural network. In: Advances in Neural Information Processing Systems 6. pp. 737â744 (1993)
3. Caid, W.R., Dumais, S.T., Gallant, S.I.: Learned vector-space models for document retrieval. Inf. Process. Manag. 31(3), 419â429 (1995)
4. Chang, W., Yu, F.X., Chang, Y., Yang, Y., Kumar, S.: Pre-training tasks for embedding-based large-scale retrieval. In: 8th International Conference on Learning Representations (2020)
5. Chen, T., Van Durme, B.: Discriminative information retrieval for question an- swering sentence selection. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. pp. 719â725 (2017) 6. Craswell, N., Mitra, B., Yilmaz, E., Campos, D.: Overview of the TREC 2019 deep
learning track. In: TREC (to appear) (2019)
7. Dai, Z., Callan, J.: Context-aware document term weighting for ad-hoc search. In: WWW â20: The Web Conference 2020. pp. 1897â1907 (2020)
8. Dai, Z., Callan, J.: Context-aware term weighting for ï¬rst-stage passage retrieval. In: The 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (to appear) (2020)
9. Deerwester, S.C., Dumais, S.T., Landauer, T.K., Furnas, G.W., Harshman, R.A.: Indexing by latent semantic analysis. J. Am. Soc. Inf. Sci. 41(6), 391â407 (1990) 10. Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidi- rectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp. 4171â4186 (2019)
11. Guo, J., Fan, Y., Ai, Q., Croft, W.B.: A deep relevance matching model for ad-hoc retrieval. In: Proceedings of the 25th ACM International Conference on Information and Knowledge Management. pp. 55â64 (2016)
12. Guo, R., Sun, P., Lindgren, E., Geng, Q., Simcha, D., Chern, F., Kumar, S.: Ac- celerating large-scale inference with anisotropic vector quantization. In: Proc. of the 37th International Conference on Machine Learning (2020)
13. Guu, K., Lee, K., Tung, Z., Pasupat, P., Chang, M.: REALM: retrieval-augmented language model pre-training. CoRR abs/2002.08909 (2020)
14. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9, 1735â1780 (1997)
15. Johnson, J., Douze, M., J´egou, H.: Billion-scale similarity search with GPUs. CoRR abs/1702.08734 (2017)
16. Kim, Y.: Convolutional neural networks for sentence classiï¬cation. In: EMNLP (2014)
17. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: 3rd Inter- national Conference on Learning Representations (2015)
18. Kuzi, S., Zhang, M., Li, C., Bendersky, M., Najork, M.: Leveraging semantic and lexical matching to improve the recall of document retrieval systems: A hybrid approach. ArXiv abs/2010.01195 (2020)
19. Laï¬erty, J.D., Zhai, C.: Document language models, query models, and risk mini- mization for information retrieval. In: SIGIR 2001: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Informa- tion Retrieval. pp. 111â119 (2001)
14 L. Gao et al.
20. Lavrenko, V., Croft, W.B.: Relevance-based language models. In: SIGIR 2001: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 120â127 (2001)
21. Lee, K., Chang, M., Toutanova, K.: Latent retrieval for weakly supervised open do- main question answering. In: Proceedings of the 57th Conference of the Association for Computational Linguistics. pp. 6086â6096 (2019)
22. Lin, J.: The neural hype and comparisons against weak baselines. In: SIGIR Forum. pp. 40â51 (2018)
23. Luan, Y., Eisenstein, J., Toutanova, K., Collins, M.: Sparse, dense, and attentional representations for text retrieval. Transactions of the Association of Computational Linguistics (2020)
24. Malkov, Y.A., Yashunin, D.A.: Eï¬cient and robust approximate nearest neigh- bor search using hierarchical navigable small world graphs. IEEE transactions on pattern analysis and machine intelligence (2018)
25. Metzler, D., Croft, W.B.: A markov random ï¬eld model for term dependencies. In: SIGIR 2005: Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 472â479 (2005) 26. Mitra, B., Diaz, F., Craswell, N.: Learning to match using local and distributed representations of text for web search. In: Proceedings of the 26th International Conference on World Wide Web. pp. 1291â1299 (2017)
27. Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R., Deng, L.: MS MARCO: A human generated machine reading comprehension dataset. In: Proceedings of the Workshop on Cognitive Computation: Integrating neural and symbolic approaches 2016 co-located with the 30th Annual Conference on Neural Information Processing Systems (2016)
28. Nogueira, R., Cho, K.: Passage re-ranking with bert. arXiv:1901.04085 (2019) 29. Nogueira, R., Yang, W., Lin, J., Cho, K.: Document expansion by query prediction.
CoRR abs/1904.08375 (2019)
30. Rajashekar, T.B., Croft, W.B.: Combining automatic and manual index represen- tations in probabilistic retrieval. J. Am. Soc. Inf. Sci. 46(4), 272â283 (1995) 31. Reimers, N., Gurevych, I.: Sentence-bert: Sentence embeddings using siamese bert- networks. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing. pp. 3980â3990 (2019)
32. Robertson, S.E., Walker, S.: Some simple eï¬ective approximations to the 2-poisson model for probabilistic weighted retrieval. In: Proceedings of the 17th Annual In- ternational ACM-SIGIR Conference on Research and Development in Information Retrieval. pp. 232â241 (1994)
33. Salton, G., McGill, M.: Introduction to Modern Information Retrieval. McGraw- Hill Book Company (1984)
34. Shrivastava, A., Li, P.: Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In: Advances in Neural Information Processing Systems 27. pp. 2321â2329 (2014)
35. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Sys- tems 2017. pp. 5998â6008 (2017)
36. Weston, J., Watkins, C.: Support vector machines for multi-class pattern recogni- tion. In: ESANN 1999, 7th European Symposium on Artiï¬cial Neural Networks. pp. 219â224 (1999)
Complement Lexical Retrieval Model with Semantic Residual Embeddings
37. Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., Brew, J.: Huggingfaceâs transformers: State- of-the-art natural language processing. CoRR abs/1910.03771 (2019)
38. Yang, P., Fang, H., Lin, J.: Anserini: Enabling the use of lucene for information re- trieval research. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 1253â1256 (2017) 39. Yao, X., Van Durme, B., Clark, P.: Automatic coupling of answer extraction and information retrieval. In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. pp. 159â165 (2013)
40. Zamani, H., Dehghani, M., Croft, W.B., Learned-Miller, E.G., Kamps, J.: From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In: Proceedings of the 27th ACM International Conference on Informa- tion and Knowledge Management. pp. 497â506 (2018) | {
"id": "1901.04085"
} |
2005.05909 | TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP | While there has been substantial research using adversarial attacks to
analyze NLP models, each attack is implemented in its own code repository. It
remains challenging to develop NLP attacks and utilize them to improve model
performance. This paper introduces TextAttack, a Python framework for
adversarial attacks, data augmentation, and adversarial training in NLP.
TextAttack builds attacks from four components: a goal function, a set of
constraints, a transformation, and a search method. TextAttack's modular design
enables researchers to easily construct attacks from combinations of novel and
existing components. TextAttack provides implementations of 16 adversarial
attacks from the literature and supports a variety of models and datasets,
including BERT and other transformers, and all GLUE tasks. TextAttack also
includes data augmentation and adversarial training modules for using
components of adversarial attacks to improve model accuracy and robustness.
TextAttack is democratizing NLP: anyone can try data augmentation and
adversarial training on any model or dataset, with just a few lines of code.
Code and tutorials are available at https://github.com/QData/TextAttack. | http://arxiv.org/pdf/2005.05909 | John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, Yanjun Qi | cs.CL, cs.AI, cs.LG | 6 pages. More details are shared at
https://github.com/QData/TextAttack | null | cs.CL | 20200429 | 20201005 | 0 2 0 2
t c O 5 ] L C . s c [
4 v 9 0 9 5 0 . 5 0 0 2 : v i X r a
# TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP
John X. Morris1, Eli Liï¬and1, Jin Yong Yoo1, Jake Grigsby1, Di Jin2, Yanjun Qi1 1 Department of Computer Science, University of Virginia 2 Computer Science and Artiï¬cial Intelligence Laboratory, MIT {jm8wx, yq2h}@virginia.edu
# Abstract
While there has been substantial research us- ing adversarial attacks to analyze NLP mod- els, each attack is implemented in its own code repository. It remains challenging to de- velop NLP attacks and utilize them to improve model performance. This paper introduces TextAttack, a Python framework for adver- sarial attacks, data augmentation, and adversar- ial training in NLP. TextAttack builds at- tacks from four components: a goal function, a set of constraints, a transformation, and a search method. TextAttackâs modular de- sign enables researchers to easily construct at- tacks from combinations of novel and exist- ing components. TextAttack provides im- plementations of 16 adversarial attacks from the literature and supports a variety of models and datasets, including BERT and other trans- formers, and all GLUE tasks. TextAttack also includes data augmentation and adver- sarial training modules for using components of adversarial attacks to improve model ac- curacy and robustness. TextAttack is de- mocratizing NLP: anyone can try data aug- mentation and adversarial training on any model or dataset, with just a few lines of code. Code and tutorials are available at https://github.com/QData/TextAttack.
1
# 1 Introduction
Over the last few years, there has been growing interest in investigating the adversarial robustness of NLP models, including new methods for gener- ating adversarial examples and better approaches to defending against these adversaries (Alzantot et al., 2018; Jin et al., 2019; Kuleshov et al., 2018; Li et al., 2019; Gao et al., 2018; Wang et al., 2019; Ebrahimi et al., 2017; Zang et al., 2020; Pruthi et al., 2019). It is difï¬cult to compare these attacks directly and fairly, since they are often evaluated on different data samples and victim models. Re-
Original Perfect performance by the actor â Positive (99%) Adversarial Spotless performance by the actor â Negative (100%)
Figure 1: Adversarial example generated using Jin et al. (2019)âs TextFooler for a BERT-based sentiment classiï¬er. Swapping out âperfectâ with synonym âspotlessâ completely changes the modelâs prediction, even though the underlying meaning of the text has not changed.
implementing previous work as a baseline is often time-consuming and error-prone due to a lack of source code, and precisely replicating results is complicated by small details left out of the publica- tion. These barriers make benchmark comparisons hard to trust and severely hinder the development of this ï¬eld.
To encourage the development of the adversar- ial robustness ï¬eld, we introduce TextAttack, a Python framework for adversarial attacks, data augmentation, and adversarial training in NLP.
To unify adversarial attack methods into one system, we decompose NLP attacks into four com- ponents: a goal function, a set of constraints, a transformation, and a search method. The attack at- tempts to perturb an input text such that the model output fulï¬lls the goal function (i.e., indicating whether the attack is successful) and the perturba- tion adheres to the set of constraints (e.g., gram- mar constraint, semantic similarity constraint). A search method is used to ï¬nd a sequence of trans- formations that produce a successful adversarial example.
This modular design enables us to easily assemble attacks from the literature while re- using components that are shared across attacks. TextAttack provides clean, readable implemen- tations of 16 adversarial attacks from the literature. For the ï¬rst time, these attacks can be benchmarked, compared, and analyzed in a standardized setting.
Create new attacks as a combination of . novel and pre-existing . components . Creating New Attacks Developing Attacks Use attack recipes Benchmarking Attacks Four Components of NLP Attack « Search Method Attack Recipes « Reimplementation of attacks from literature * Covers 16 papers Evaluate new attacks against standardized models Goal Function Constraints Transformation TextAttack's 82+ Pretrained Models Evaluate attacks from literature against standardized models NX TextAttack Training Pipeline Generate instead of â_ reimplementing Data - Augmenter | Augmentation Module | Utilizing Attacks Adversarial Attack Training Module NS Generate adversarial__. {adversarial Train oe Examples User Model new samples New Data Train: Samples User Model te pcat i in training loop:
Figure 2: Main features of TextAttack.
TextAttackâs design also allows researchers to easily construct new attacks from combinations of novel and existing components. In just a few lines of code, the same search method, transfor- mation and constraints used in Jin et al. (2019)âs TextFooler can be modiï¬ed to attack a transla- tion model with the goal of changing every word in the output.
⢠Benchmarking and comparing NLP attacks from previous works on standardized models & datasets.
⢠Fast development of NLP attack methods by re- using abundant available modules.
⢠Performing ablation studies on individual compo- nents of proposed attacks and data augmentation methods.
TextAttack is directly integrated with Hug- gingFaceâs transformers and nlp libraries. This allows users to test attacks on models and datasets. TextAttack provides dozens of pre-trained models (LSTM, CNN, and various transformer- based models) on a variety of popular datasets. Currently TextAttack supports a multitude of tasks including summarization, machine transla- tion, and all nine tasks from the GLUE benchmark. TextAttack also allows users to provide their own models and datasets.
Ultimately, the goal of studying adversarial at- tacks is to improve model performance and robust- ness. To that end, TextAttack provides easy- to-use tools for data augmentation and adversarial training. TextAttackâs Augmenter class uses a transformation and a set of constraints to produce new samples for data augmentation. Attack recipes are re-used in a training loop that allows models to train on adversarial examples. These tools make it easier to train accurate and robust models.
# Uses for TextAttack include1:
1All can be done in < 5 lines of code. See A.1.
⢠Training a model (CNN, LSTM, BERT, RoBERTa, etc.) on an augmented dataset.
⢠Adversarial training with attacks from the litera- ture to improve a modelâs robustness.
# 2 The TextAttack Framework
TextAttack aims to implement attacks which, given an NLP model, ï¬nd a perturbation of an in- put sequence that satisï¬es the attackâs goal and adheres to certain linguistic constraints. In this way, attacking an NLP model can be framed as a combinatorial search problem. The attacker must search within all potential transformations to ï¬nd a sequence of transformations that generate a suc- cessful adversarial example.
Each attack can be constructed from four com- ponents:
1. A task-speciï¬c goal function that determines whether the attack is successful in terms of the model outputs. Examples: untargeted classiï¬cation, targeted classiï¬cation, non-overlapping output, mini- mum BLEU score.
2
2. A set of constraints that determine if a per- turbation is valid with respect to the original input. Examples: maximum word embedding dis- tance, part-of-speech consistency, grammar checker, minimum sentence encoding cosine similarity.
3. A transformation that, given an input, gener- ates a set of potential perturbations. Examples: word embedding word swap, the- saurus word swap, homoglyph character sub- stitution.
4. A search method that successively queries the model and selects promising perturbations from a set of transformations. Examples: greedy with word importance rank- ing, beam search, genetic algorithm.
See A.2 for a full explanation of each goal func- tion, constraint, transformation, and search method thatâs built-in to TextAttack.
# 3 Developing NLP Attacks with TextAttack
TextAttack is available as a Python package installed from PyPI, or via direct download from GitHub. TextAttack is also available for use through our demo web app, displayed in Figure 3.
Python users can test attacks by creating and manipulating Attack objects. The command-line API offers textattack attack, which allows users to specify attacks from their four components or from a single attack recipe and test them on different models and datasets.
TextAttack supports several different output formats for attack results:
Printing results to stdout. ⢠Printing to a text ï¬le or CSV. ⢠Printing attack results to an HTML table. ⢠Writing a table of attack results to a visualization
server, like Visdom or Weights & Biases.
# 3.1 Benchmarking Existing Attacks with Attack Recipes
TextAttackâs modular design allows us to implement many different attacks from past work in a shared framework, often by adding only one or two new components. Table 1 categorizes 16 attacks based on their goal functions, constraints, transformations and search methods.
All of these attacks are implemented as âat- tack recipesâ in TextAttack and can be bench- marked with just a single command. See A.3
eee TextAttack & psn con/O0ataTesttack fsa servicable world wa i SH that cant totaly hide its , butt atleast calls attention toa problem hoiywood too long, Result19 Input: pulls] of the FB trick of recreating not only he look ofa certain ea, but also the feel âOutput: pulls] off the BIRR rick of recreating not onl the look ofa certain era, but also the fel Output: the fs i suspec is RBH shatner as a legendary pro nis asa biliant college student~where's pauly shore asthe
Figure 3: Screenshot of TextAttackâs web interface run- ning the TextBugger black-box attack (Li et al., 2019).
for a comparison between papersâ reported at- tack results and the results achieved by running TextAttack.
# 3.2 Creating New Attacks by Combining Novel and Existing Components
As is clear from Table 1, many components are shared between NLP attacks. New attacks often re- use components from past work, adding one or two novel pieces. TextAttack allows researchers to focus on the generation of new components rather than replicating past results. For example, Jin et al. (2019) introduced TextFooler as a method for attacking classiï¬cation and entailment models. If a researcher wished to experiment with applying TextFoolerâs search method, transformations, and constraints to attack translation models, all they need is to implement a translation goal function in TextAttack. They would then be able to plug in this goal function to create a novel attack that could be used to analyze translation models. 3.3 Evaluating Attacks on TextAttackâs
# Pre-Trained Models
As of the date of this submission, TextAttack provides users with 82 pre-trained models, includ- ing word-level LSTM, word-level CNN, BERT, and other transformer based models pre-trained on var- ious datasets provided by HuggingFace nlp. Since TextAttack is integrated with the nlp library, it can automatically load the test or validation data set for the corresponding pre-trained model. While the literature has mainly focused on classiï¬cation and entailment, TextAttackâs pretrained mod- els enable research on the robustness of models across all GLUE tasks.
3
Attack Recipe bae (Garg and Ramakrishnan, 2020) bert-attack (Li et al., 2020) deepwordbug (Gao et al., 2018) alzantot, fast-alzantot (Alzantot et al., 2018; Jia et al., 2019) iga (Wang et al., 2019) input-reduction (Feng et al., 2018) kuleshov (Kuleshov et al., 2018) hotflip (word swap) (Ebrahimi et al., 2017) Goal Function Untargeted Classiï¬cation Untargeted Classiï¬cation {Untargeted, Targeted} Classiï¬cation Untargeted {Classiï¬cation, Entailment} Untargeted {Classiï¬cation, Entailment} Input Reduction Untargeted Classiï¬cation Untargeted Classiï¬cation Constraints USE sentence encoding cosine similarity USE sentence encoding cosine similarity, Maximum number of words perturbed Levenshtein edit distance Percentage of words perturbed, Language Model perplexity, Word embedding distance Percentage of words perturbed, Word embedding distance Thought vector encoding cosine similarity, Language model similarity probability Word Embedding Cosine Similarity, Part-of-speech match, Number of words perturbed Transformation BERT Masked Token Prediction BERT Masked Token Prediction (with subword expansion) {Character Insertion, Character Deletion, Neighboring Character Swap, Character Substitution}* Counter-ï¬tted word embedding swap Counter-ï¬tted word embedding swap Word deletion Counter-ï¬tted word embedding swap Gradient-Based Word Swap Search Method Greedy-WIR Greedy-WIR Greedy-WIR Genetic Algorithm Genetic Algorithm Greedy-WIR Greedy word swap Beam search morpheus (Tan et al., 2020) pruthi (Pruthi et al., 2019) Minimum BLEU Score Untargeted Classiï¬cation Minimum word length, Maximum number of words perturbed Inï¬ection Word Swap {Neighboring Character Swap, Character Deletion, Character Insertion, Keyboard-Based Character Swap}* HowNet Word Swap Greedy search Greedy search Particle Swarm Optimization Greedy-WIR (saliency) Greedy-WIR
pso (Zang et al., 2020) pwws (Ren et al., 2019) seq2sick (black-box) (Cheng et al., 2018) textbugger (black-box) (Li et al., 2019)
# Untargeted Classiï¬cation Untargeted Classiï¬cation Non- overlapping output Untargeted Classiï¬cation
# WordNet-based synonym swap Counter-ï¬tted word embedding swap
|
USE sentence encoding cosine similarity
{Character Insertion, Character Deletion, Neighboring Character Swap, Character Substitution}* Counter-ï¬tted word embedding swap
# Greedy-WIR
|
# textfooler (Jin et al., 2019)
# Untargeted {Classiï¬cation, Entailment}
Word Embedding Distance, Part-of-speech match, USE sentence encoding cosine similarity
# Greedy-WIR
|
Table 1: TextAttack attack recipes categorized within our framework: search method, transformation, goal function, constraints. All attack recipes include an additional constraint which disallows the replacement of stopwords. Greedy search 4 with Word Importance Ranking is abbreviated as Greedy-WIR. * indicates a combination of multiple transformations
# 4 Utilizing TextAttack to Improve NLP Models
4.1 Evaluating Robustness of Custom Models TextAttack is model-agnostic - meaning it can run attacks on models implemented in any deep learning framework. Model objects must be able to take a string (or list of strings) and return an output that can be processed by the goal function. For example, machine translation models take a list of strings as input and produce a list of strings as output. Classiï¬cation and entailment models return an array of scores. As long as the userâs model meets this speciï¬cation, the model is ï¬t to use with TextAttack.
# 4.2 Model Training
TextAttack users can train standard LSTM, CNN, and transformer based models, or a user- customized model on any dataset from the nlp li- brary using the textattack train command. Just like pre-trained models, user-trained models are compatible with commands like textattack attack and textattack eval.
# 4.3 Data Augmentation
While searching for adversarial examples, TextAttackâs transformations generate pertur- bations of the input text, and apply constraints to verify their validity. These tools can be reused to dramatically expand the training dataset by intro- ducing perturbed versions of existing samples. The textattack augment command gives users access to a number of pre-packaged recipes for augmenting their dataset. This is a stand-alone feature that can be used with any model or train- ing framework. When using TextAttackâs mod- els and training pipeline, textattack train --augment automatically expands the dataset be- fore training begins. Users can specify the fraction of each input that should be modiï¬ed and how many additional versions of each example to create. This makes it easy to use existing augmentation recipes on different models and datasets, and is a great way to benchmark new techniques.
Figure 4 shows empirical results we obtained us- ing TextAttackâs augmentation. Augmentation with TextAttack immediately improves the per- formance of a WordCNN model on small datasets.
# 4.4 Adversarial Training
With textattack train --attack, at- tack recipes can be used to create new training
# Augmenting WordCNN Model
80 â~ 75- BS > - o 70 oO â â) U 65 Lo) <z g 60- Augmentation Method Ww ââ Baseline 55- ---- EDA ~ Embedding 102 103 104 # Training Samples
Figure 4: Performance of the built-in WordCNN model on the rotten tomatoes dataset with increasing training set size. Data augmentation recipes like EasyDataAugmenter (EDA, (Wei and Zou, 2019)) and Embedding are most help- ful when working with very few samples. Shaded regions represent 95% conï¬dence intervals over N = 5 runs.
sets of adversarial examples. After training for a number of epochs on the clean training set, the at- tack generates an adversarial version of each input. This perturbed version of the dataset is substituted for the original, and is periodically regenerated ac- cording to the modelâs current weaknesses. The resulting model can be signiï¬cantly more robust against the attack used during training. Table 2 shows the accuracy of a standard LSTM classiï¬er with and without adversarial training against differ- ent attack recipes implemented in TextAttack.
# 5 TextAttack Under the Hood
TextAttack is optimized under-the-hood to make implementing and running adversarial attacks simple and fast.
AttackedText. A common problem with im- plementations of NLP attacks is that the original text is discarded after tokenization; thus, the trans- formation is performed on the tokenized version of the text. This causes issues with capitalization and word segmentation. Sometimes attacks swap a piece of a word for a complete word (for example, transforming ââarenât" into ââarenâtoo"). To solve this problem, TextAttack stores each input as a AttackedText object which text and helper meth- contains the original ods for transforming the text while retaining tensors, tokenization.
5
Trained Against baseline (early stopping) deepwordbug (20 epochs) deepwordbug (75 epochs) textfooler (20 epochs) - 77.30% 76.38% 73.16% 61.85% Attacked By deepwordbug textfooler pruthi hotflip 23.46% 35.07% 44.74% 40.09% 2.23% 4.78% 13.42% 29.63% 59.01% 57.08% 58.28% 52.60% 64.57% 65.06% 66.87% 55.75% bae 25.51% 27.63% 32.77% 39.36%
Table 2: The default LSTM model trained on 3k samples from the sst2 dataset. The baseline uses early stopping on a clean training set. deepwordbug and textfooler attacks are used for adversarial training. âAccuracy Under Attackâ on the eval set is reported for several different attack types.
classes in TextAttack operate primarily on AttackedText objects. When words are added, swapped, or deleted, an AttackedText can maintain proper punctuation and capitalization. The AttackedText also contains implementa- tions for common linguistic functions like splitting text into words, splitting text into sentences, and part-of-speech tagging.
Caching. Search methods frequently encounter the same input at different points in the search. In these cases, it is wise to pre-store values to avoid unnecessary computation. For each input examined during the attack, TextAttack caches its model output, as well as the whether or not it passed all of the constraints. For some search methods, this memoization can save a signiï¬cant amount of time.2
that generated examples satisfy a given constraint (Kulynych et al., 2018). TEAPOT is a library for evaluating adversarial perturbations on text, but only supports the application of ngram-based com- parisons for evaluating attacks on machine transla- tion models (Michel et al., 2019). Most recently, AllenNLP Interpret includes functionality for running adversarial attacks on NLP models, but is intended only for the purpose of interpretability, and only supports attacks via input-reduction or greedy gradient-based word swap (Wallace et al., 2019). TextAttack has a broader scope than any of these libraries: it is designed to be extendable to any NLP attack.
# 7 Conclusion
# 6 Related Work
We draw inspiration from the Transformers library (Wolf et al., 2019) as an example of a well-designed Natural Language Processing library. Some of TextAttackâs models and tokenizers are implemented using Transformers.
cleverhans (Papernot et al., 2018) is a library for constructing adversarial examples for computer vision models. Like cleverhans, we aim to provide methods that generate adversarial exam- ples across a variety of models and datasets. In some sense, TextAttack strives to be a solution like cleverhans for the NLP community. Like cleverhans, attacks in TextAttack all im- plement a base Attack class. However, while cleverhans implements many disparate attacks in separate modules, TextAttack builds attacks from a library of shared components.
There are some existing open-source libraries re- lated to adversarial examples in NLP. Trickster proposes a method for attacking NLP models based on graph search, but lacks the ability to ensure
We presented TextAttack, an open-source framework for testing the robustness of NLP mod- els. TextAttack deï¬nes an attack in four mod- ules: a goal function, a list of constraints, a trans- formation, and a search method. This allows us to compose attacks from previous work from these modules and compare them in a shared environ- ment. These attacks can be reused for data aug- mentation and adversarial training. As new at- tacks are developed, we will add their components to TextAttack. We hope TextAttack helps lower the barrier to entry for research into robust- ness and data augmentation in NLP.
# 8 Acknowledgements
The authors would like to thank everyone who has contributed to make TextAttack a reality: Hanyu Liu, Kevin Ivey, Bill Zhang, and Alan Zheng, to name a few. Thanks to the IGA creators (Wang et al., 2019) for contributing an implementa- tion of their algorithm to our framework. Thanks to the folks at HuggingFace for creating such easy-to- use software; without them, TextAttack would not be what it is today.
2Caching alone speeds up the genetic algorithm of Alzantot et al. (2018) by a factor of 5.
6
# References
Abhaya Agarwal and Alon Lavie. 2008. Meteor, m-bleu and m-ter: Evaluation metrics for high- correlation with human rankings of machine trans- lation output. In WMT@ACL.
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversar- ial examples. ArXiv, abs/1804.07998.
Daniel Matthew Cer, Yinfei Yang, Sheng yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. ArXiv, abs/1803.11175.
Minhao Cheng, Jinfeng Yi, Pin-Yu Chen, Huan Zhang, and Cho-Jui Hsieh. 2018. Seq2sick: Evaluating the robustness of sequence-to-sequence models with ad- versarial examples.
Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo¨ıc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In EMNLP.
Zhendong Dong, Qiang Dong, and Changling Hao. 2006. Hownet and the computation of meaning.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. Hotï¬ip: White-box adversarial exam- ples for text classiï¬cation. In ACL.
Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difï¬cult.
Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classiï¬ers. 2018 IEEE Security and Privacy Workshops (SPW), pages 50â56.
Siddhant Garg and Goutham Ramakrishnan. 2020. Bae: Bert-based adversarial examples for text clas- siï¬cation.
Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1638â1649, Melbourne, Aus- tralia. Association for Computational Linguistics.
Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 2021â2031, Copenhagen, Denmark. Association for Computational Linguistics.
Robin Jia, Aditi Raghunathan, Kerem G¨oksel, and Percy Liang. 2019. Certiï¬ed robustness to adversar- ial word substitutions. In EMNLP/IJCNLP.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural lan- guage attack on text classiï¬cation and entailment. ArXiv, abs/1907.11932.
Rafal J´ozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the lim- its of language modeling. ArXiv, abs/1602.02410.
James Kennedy and Russell Eberhart. 1995. Particle In Proceedings of ICNNâ95- swarm optimization. International Conference on Neural Networks, vol- ume 4, pages 1942â1948. IEEE.
Ryan Kiros, Yukun Zhu, Ruslan Salakhutdinov, Richard S. Zemel, Raquel Urtasun, Antonio Tor- ralba, and Sanja Fidler. 2015. Skip-thought vectors. ArXiv, abs/1506.06726.
Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. Adversarial exam- ples for natural language classiï¬cation problems.
Bogdan Kulynych, Jamie Hayes, Nikita Samarin, and Carmela Troncoso. 2018. Evading classiï¬ers in dis- crete domains with provable optimality guarantees. CoRR, abs/1810.10939.
Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Textbugger: Generating adversar- ArXiv, Wang. 2019. ial abs/1812.05271. text against real-world applications.
Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. Bert-attack: Adversarial at- tack against bert using bert.
and Juan Miguel Pino. 2019. On evaluation of ad- versarial perturbations for sequence-to-sequence models. CoRR, abs/1903.06620.
George Armitage Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. Introduction to wordnet: An on-line lexical 1990. International Journal of Lexicography, database. 3:235â244.
Nikola MrkËsi´c, Diarmuid O S´eaghdha, Blaise Thom- son, Milica GaËsi´c, Lina Rojas-Barahona, Pei- Hao Su, David Vandyke, Tsung-Hsien Wen, and Counter-ï¬tting word vec- Steve Young. 2016. arXiv preprint tors to linguistic constraints. arXiv:1603.00892.
Daniel Naber et al. 2003. A rule-based style and gram- mar checker. Citeseer.
Nicolas Papernot, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, Ci- hang Xie, Yash Sharma, Tom Brown, Aurko Roy,
7
Alexander Matyasko, Vahid Behzadan, Karen Ham- bardzumyan, Zhishuai Zhang, Yi-Lin Juang, Zhi Li, Ryan Sheatsley, Abhibhav Garg, Jonathan Uesato, Willi Gierke, Yinpeng Dong, David Berthelot, Paul Hendricks, Jonas Rauber, and Rujun Long. 2018. Technical report on the cleverhans v2.1.0 adversarial examples library. arXiv preprint arXiv:1610.00768.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. Bleu: a method for automatic eval- uation of machine translation. In ACL.
Maja Popovic. 2015. chrf: character n-gram f-score for automatic mt evaluation. In WMT@EMNLP.
Danish Pruthi, Bhuwan Dhingra, and Zachary C. Lip- ton. 2019. Combating adversarial misspellings with robust word recognition.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Nils Reimers and Iryna Gurevych. 2019. Sentence- bert: Sentence embeddings using siamese bert- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial ex- amples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085â1097, Florence, Italy. Association for Compu- tational Linguistics.
Samson Tan, Shaï¬q Joty, Min-Yen Kan, and Richard Socher. 2020. Itâs morphinâ time! Combating linguistic discrimination with inï¬ectional perturba- In Proceedings of the 58th Annual Meet- tions. ing of the Association for Computational Linguistics, pages 2920â2935, Online. Association for Computa- tional Linguistics.
Eric Wallace, Jens Tuyls, Junlin Wang, Sanjay Subra- manian, Matthew Gardner, and Sameer Singh. 2019. Allennlp interpret: A framework for explaining pre- dictions of nlp models. ArXiv, abs/1909.09251.
Xiaosen Wang, Hao Jin, and Kun He. 2019. Natural language adversarial attacks and defenses in word level.
Jason W. Wei and Kai Zou. 2019. EDA: easy data aug- mentation techniques for boosting performance on text classiï¬cation tasks. CoRR, abs/1901.11196.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Râemi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Transformers: State-of- the-art natural language processing.
Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combina- torial optimization. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 6066â6080, Online. Association for Computational Linguistics.
Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- In International uating text generation with bert. Conference on Learning Representations.
8
# A Appendix
# A.1 TextAttack in Five Lines or Less
Table 3 provides some examples of tasks that can be accomplished in bash or Python with ï¬ve lines of code or fewer. Note that every action has to be prefaced with a single line of code (pip install textattack). A.2 Components of TextAttack
This section explains each of the four compo- nents of the TextAttack framework and de- scribes the components that are currently imple- mented. Figure 5 shows the decomposition of two popular attacks (Alzantot et al., 2018; Jin et al., 2019). A.2.1 Goal Functions
A goal function takes an input xâ and determines if it is satisfies the conditions for a successful attack in respect to the original input x. Goal functions vary by task. For example, for a classification task, a successful adversarial attack could be changing the modelâs output to be a certain label. Goal func- tions also scores how âgoodâ the given â is for achieving the desired goal, and this score can be used by the search method as a heuristic for finding the optimal solution.
TextAttack includes the following goal func- tions:
⢠Untargeted Classiï¬cation: Minimize the score of the correct classiï¬cation label.
⢠Targeted Classiï¬cation: Maximize the score of a chosen incorrect classiï¬cation label.
⢠Input Reduction (Classiï¬cation): Reduce the in- put text to as few wordas as possible while main- taining the same predicted label.
⢠Non-Overlapping Output (Text-to-Text): Change the output text such that no words in it overlap with the original output text.
Minimzing BLEU Score (Text-to-Text): Change the output text such that the BLEU score between it and the original output text is minimized (Pap- ineni et al., 2001). A.2.2 Constraints
is only considered valid if it satisï¬es each of the attackâs constraints. TextAttack contains four classes of constraints. Pre-transformation Constraints These con- straints are used to preemptively limit how x can be perturbed and are applied before x is perturbed. ⢠Stopword Modiï¬cation: stopwords cannot be per-
turbed.
⢠Repeat Modiï¬cation: words that have been al- ready perturbed cannot be perturbed again.
⢠Minimum Word Length: words less than a certain length cannot be perturbed.
⢠Max Word Index Modiï¬cation: words past a cer- tain index cannot be perturbed.
⢠Input Column Modiï¬cation: for tasks such as textual entailment where input might be com- posed of two parts (e.g. hypothesis and premise), we can limit which part we can transform (e.g. hypothesis).
Overlap We measure the overlap between x and x adv using the following metrics on the charac- ter level and require it to be lower than a certain threshold as a constraint:
⢠Maximum BLEU score difference (Papineni et al., 2001)
Maximum chrF score difference (Popovic, 2015) ⢠Maximum METEOR score difference (Agarwal
and Lavie, 2008)
Maximum Levenshtein edit distance ⢠Maximum percentage of words changed
Grammaticality These constraints are typically intended to prevent the attack from creating per- turbations which introduce grammatical errors. TextAttack currently supports the following constraints on grammaticality:
⢠Maximum number of grammatical errors in- duced, as measured by LanguageTool (Naber et al., 2003)
the replacement word should have the same part-of-speech as the original word. Supports taggers provided by ï¬air, SpaCy, and NLTK.
Filtering out words that do not ï¬t within the con- text based on the following language models: â Google 1-billion words language model
(J´ozefowicz et al., 2016)
â Learning To Write Language Model (Holtz- man et al., 2018) (as used by (Jia et al., 2019)) â GPT-2 language model (Radford et al., 2019)
Semantics Some constraints attempt to preserve semantics between x and x adv. TextAttack currently provides the following built-in semantic constraints:
⢠Maximum swapped word embedding distance (or minimum cosine similarity)
⢠Minimum cosine similarity score of sentence rep- resentations obtained by well-trained sentence
9
Task Command TextFooler on an LSTM trained on the MR sentiment classiï¬cation dataset textattack attack --recipe textfooler --model bert-base-uncased-mr --num-examples 100 TextFooler against BERT ï¬ne-tuned on SST-2 textattack attack --model bert-base-uncased-sst2 --recipe textfooler --num-examples 10 DeepWordBug on DistilBERT trained on the Quora Question Pairs paraphrase identiï¬cation dataset: textattack attack --model distilbert-base-uncased-qqp --recipe deepwordbug --num-examples 100 seq2sick (black-box) against T5 ï¬ne-tuned for English-German translation: textattack attack --model t5-en-de --recipe seq2sick --num-examples 100 Beam search with beam width 4 and word embedding transformation and untargeted goal function on an LSTM: textattack attack --model lstm-mr --num-examples 20 --search-method beam-search:beam width=4 --transformation word-swap-embedding --constraints repeat stopword max-words-perturbed:max num words=2 embedding:min cos sim=0.8 part-of-speech --goal-function untargeted-classification Augment dataset from âexamples.csvâ using the EmbeddingAugmenter, swapping out 4% of words, with 2 augmentations for example, withholding the original samples from the out- put CSV textattack augment --csv examples.csv --input-column text --recipe embedding --pct-words-to-swap 4 --transformations-per-example 2 --exclude-original Augment a list of strings in Python from textattack.augmentation import EmbeddingAugmenter augmenter = EmbeddingAugmenter() s = âWhat I cannot create, I do not understand.â augmenter.augment(s) Train the default LSTM for 50 epochs on the Yelp Polarity dataset textattack train --model lstm --dataset yelp polarity --batch-size 64 --epochs 50 --learning-rate 1e-5 Fine-tune bert-base on the CoLA dataset for 5 epochs textattack train --model bert-base-uncased --dataset glue:cola --batch-size 32 --epochs 5 Fine-tune RoBERTa on the Rotten Tomatoes Movie Review dataset, ï¬rst augmenting each example with 4 augmentations produced by the EasyDataAugmentation augmenter textattack train --model roberta-base --batch-size 64 --epochs 50 --learning-rate 1e-5 --dataset rotten tomatoes --augment eda --pct-words-to-swap .1 --transformations-per-example 4 Adversarially ï¬ne-tune DistilBERT on AG News using the HotFlip word-based attack, ï¬rst training for 2 epochs on the original dataset textattack train --model distilbert-base-cased --dataset ag news --attack hotflip --num-clean-epochs 2
Table 3: With TextAttack, adversarial attacks, data augmentation, and adversarial training can be achieved in just a few lines of Bash or Python.
encoders: â Skip-Thought Vectors (Kiros et al., 2015) â Universal Sentence Encoder (Cer et al., 2018) â InferSent (Conneau et al., 2017) â BERT trained for semantic similarity (Reimers
and Gurevych, 2019)
⢠Minimum BERTScore (Zhang* et al., 2020)
# A.2.3 Transformations
A transformation takes an input and returns a set of potential perturbations. The transformation is agnostic of goal function and constraint(s): it returns all potential transformations.
We categorize transformations into two kinds: white-box and black-box. White-box transforma-
10
Alzantot et al. (2018) Jin et al. (2019) Goal Function ( UntargetedClassification I UntargetedClassification Search Methoa (_GeneticalgorithmWordswap I 1 ](_GreeayWordswapWordImportanceRanking t WordSwapEmbedding (embedding=â cfâ Transformation | = al = d ][ wordswapEmbedding (embedding=' cfâ) Constraints * WordsPerturbedPercentage (max_perc=20) * WordEmbeddingDistance (max_mse=0.5) * GoogleLanguageModel (n_per_index=4) * WordEmbeddingDistance (min_cos_sim=0.5) * PartOfSpeech (verb_noun_swap=True) * UniversalSentenceEncoder ( metric=âangular', thresh=0.904458599)
Figure 5: TextAttack builds NLP attacks from a goal function, search method, transformation, and list of constraints. This shows attacks from Alzantot et al. (2018) and Jin et al. (2019) created using TextAttack modules.
tions have access to the model and can query it or examine its parameters to help determine the trans- formation. For example, Ebrahimi et al. (2017) de- termines potential replacement words based on the gradient of the one-hot input vector at the position of the swap. Black-box transformations determine the potential perturbations without any knowledge of the model.
posed for this process. TextAttack has imple- mented a selection of the most popular ones from the literature:
⢠Greedy Search with Word Importance Rank- ing. Rank all words according to some ranking function. Swap words one at a time in order of decreasing importance.
TextAttack currently supports the following transformations:
⢠Word swap with nearest neighbors in the counter- ï¬tted embedding space (MrkËsi´c et al., 2016)
WordNet word swap (Miller et al., 1990) ⢠Word swap proposed by a masked language model (Garg and Ramakrishnan, 2020; Li et al., 2020)
Word swap gradient-based: swap word with an- other word in the vocabulary that maximize the modelâs loss (Ebrahimi et al., 2017) (white-box) ⢠Word swap with characters transformed (Gao
¢ Word swap with characters transformed (Gao et al., 2018):
et al., 2018): â Character deleted â Neighboring characters swapped â Random character inserted â Substituted with a random character â Character substituted with a homoglyph â Character substituted with a neighboring char- acter from the keyboard (Pruthi et al., 2019)
⢠Beam Search. Initially score all possible trans- formations. Take the top b transformations (where b is a hyperparameter known as the âbeam widthâ) and iterate, looking at potential transfor- mations for all sequences in the beam.
⢠Greedy Search. Initially score transformations at all positions in the input. Swap words, taking the highest-scoring transformations ï¬rst. (This can be seen as a case of beam search where b = 1).
⢠Genetic Algorithm. An implementation of the algorithm proposed by Alzantot et al. (2018). It- eratively alters the population through greedy perturbation of each population member and crossover between population numbers, with preference to the more successful members of the population. (We also support an alternate version, the âImproved Genetic Algorithmâ pro- posed by Wang et al. (2019)).
⢠Word deletion ⢠Word swap with another word in the vocabulary that has the same Part-of-Speech and sememe, where the sememe is obtained by HowNet (Dong et al., 2006).
⢠Composite transformation: returns the results of multiple transformations
# A.2.4 Search Methods
The search method aims to ï¬nd a perturbation that achieves the goal and satisï¬es all constraints. Many combinatorial search methods have been pro-
Swarm Optimization. A population-based evolutionary computation paradigms (Kennedy and Eberhart, 1995) that exploits a population of interacting individuals to iteratively search for the optimal solution in the speciï¬c space (Zang et al., 2020). The population is called a swarm and individual agents are called particles. Each particle has a position in the search space and moves with an adaptable velocity.
) )
11
# A.3 TextAttack Attack Reproduction Results
results achieved when running attacks in TextAttack alongside numbers reported in the original paper. All TextAttack benchmarks were run on pre- trained models provided by the library and can be reproduced in a single textattack attack command. There are a few important implementa- tion differences:
⢠The genetic algorithm benchmark comes from the faster genetic algorithm of (Jia and Liang, 2017). As opposed to the original algorithm of (Alzantot et al., 2018), this implementation uses a fast language model, so it can query contexts of up to 5 words. Additionally, perplexity is com- pared to that of the original word, not the previ- ous perturbation. Since these are more rigorous linguistic constraints, a lower attack success rate is expected.
⢠The LSTM models from BAE (Garg and Ra- makrishnan, 2020) were trained using counter- ï¬tted GLoVe embeddings. The LSTM models from TextAttack were trained using normal GLoVe embeddings. Our models are conse- quently less robust to counter-ï¬tted embedding synonym swaps, and a higher attack success rate is expected.
in TextAttackâs PSO implementation is a concatenation of the three synonym sets used in the paper. This is necessary since TextAttack is dataset-agnostic and cannot expect to provide a set of synonyms for every possible dataset. Since the attack has more synonyms to choose from, TextAttackâs PSO implementation is slightly more successful.
# A.4 TextAttack Attack Prototypes
This section displays âattack prototypesâ for each attack recipe implemented in TextAttack. This is a concise way to print out the components of a given attack along with its parameters. These are directly copied from the output of running TextAttack.
# Alzantot Genetic Algorithm (Alzantot et al., 2018)
Attack (
( search method ) : GeneticAlgorithm (
( pop size ) : ( max iters ) : 0.3 (temp ) : ( give up if no improvement ) : 60 20 False ) ( goal function ) : UntargetedClassification ( transformation ) : WordSwapEmbedding( ( max candidates ) : ( embedding type ) : 8 paragramcf ) ( constraints ) : ( 0 ) : MaxWordsPerturbed( ( max percent ) : 0.2 ( compare against original ) : True ) ( 1 ) : WordEmbeddingDistance( ( embedding type ) : ( max mse dist ) : ( cased ) : False ( include unknown words ) : True ( compare against original ) : paragramcf 0.5 False ) ( 2 ) : GoogleLanguageModel( ( top n ) : None ( top n per index ) : ( compare against original ) : 4 False ) ( 3 ) : RepeatModification ( 4 ) : StopwordModification ( 5 ) : InputColumnModification ( ( matching column labels ) : ( columns to ignore ) : {âpremise â} [ â premise â , ) ( is black box ) : True
# â hypothesis â ]
)
# Alzantot Genetic Algorithm (faster) (Jia et al., 2019)
Attack ( ( search method ) : GeneticAlgorithm ( ( pop size ) : ( max iters ) : 0.3 (temp ) : ( give up if no improvement ) : 60 20 False ) ( goal function ) : UntargetedClassification ( transformation ) : WordSwapEmbedding( ( max candidates ) : ( embedding type ) : 8 paragramcf ) ( constraints ) : ( 0 ) : MaxWordsPerturbed( 0.2 ( max percent ) : ) ( 1 ) : WordEmbeddingDistance( ( embedding type ) : ( max mse dist ) : ( cased ) : False ( include unknown words ) : True paragramcf 0.5 ) ( 2 ) : LearningToWriteLanguageModel( ( max log prob diff ) : 5.0 ) ( 3 ) : RepeatModification ( 4 ) : StopwordModification ( is black box ) : True )
BAE (Garg and Ramakrishnan, 2020)
Attack (
( search method ) : GreedyWordSwapWIR( ( wir method ) : delete ) ( goal function ) : UntargetedClassification ( transformation ) : WordSwapMaskedLM( (method ) : (masked lm name ) : ( max length ) : ( max candidates ) : bae 256 bert âbaseâuncased 50 ) ( constraints ) : ( 0 ) : PartOfSpeech (
12
MR SST-2 LSTM IMDB AG MR SST-2 BERT-Base IMDB SNLI - 46.5 / 20.7 - 66.6 / 14.5 - 81.3 / 18.9 91.2 / 8.2 91.3 / 12.9 - 94.8 / 16.9 - 40.7 / 19.1 48.3 / - 61.5 / 15.2 - 78.2 / 21.2 - 92.7 / 11.9 86.7 / 16.7 88.7 / 18.7 - 27.7 / 11.6 - 21.4 / 6.3 72.5 / - 83.4 / 19.4 - 83.7 / 12.7 95.8 / 18.6 95.3 / 17.2 97.0 / 14.7 73.0 / 4.0 73.2 / - 88.8 / 2.6 - 97.6 / 5.2 100.0 / 3.7 100.0 / 1.3 99.7 / 5.1 100.0 / 2.4 - 70.8 / 18.3 - 72.7 / 13.5 - 82.6 / 17.1 93.8 / 9.1 96.5 / 11.5 - 98.8 / 14.2 - - 46.7 / 7.3 45.9 / - 55.6 / 3.2 - 80.9 / 5.3 98.7 / 3.7 100.0 / 1.2 85.0 / 6.1 100.0 / 7.2 - 74.9 / 12.3 - 78.4 / 7.1 - 99.0 / 9.8 78.9 / 11.7 91.8 / 6.2 95.5 / 18.5 96.3 / 7.2 Reported TextAttack 64.6 / 17.8 Reported TextAttack 74.4 / 12.3 Reported TextAttack 86.3 / 16.8 Reported TextAttack 94.9 / 10.7 Reported 96.2 / 14.9 TextAttack 97.4 / 13.6 alzantot (Alzantot et al., 2018) 70.2 / - bae (Garg and Ramakrishnan, 2020) - deepwordbug (Gao et al., 2018) - pso (Zang et al., 2020) textfooler (Jin et al., 2019) AG
18.1 / 12.6 - 16.9 / 7.4 - 60.7 / 25.1 - 79.4 / 16.7 86.7 / 22.0 79.5 / 23.5 Table 4: Comparison between our re-implemented attacks and the original source code in terms of success rate (left number) and percentage of perturbed words (right number). Numbers that are not found in the literature are marked as â-â. 1000 samples are randomly selected for evaluation from all these datasets except IMDB (100 samples are used for IMDB since some attack methods like Genetic and PSO take over 4 days to ï¬nish 1000 samples).
nltk ( tagger type ) : ( tagset ) : universal ( allow verb noun swap ) : True ( compare against original ) : True ) ( 1 ) : UniversalSentenceEncoder ( ( metric ) : ( threshold ) : ( window size ) : ( skip text shorter than window ) : True ( compare against original ) : True cosine 0.936338023 15 ) ( 2 ) : RepeatModification ( 3 ) : StopwordModification ( is black box ) : True
(random one ) : True ) ) ( constraints ) : ( 0 ) : LevenshteinEditDistance ( ( max edit distance ) : 30 ( compare against original ) : True ) ( 1 ) : RepeatModification ( 2 ) : StopwordModification ( is black box ) : True )
# HotFlip (Ebrahimi et al., 2017)
)
# BERT-Attack (Li et al., 2020)
Attack ( ( search method ) : GreedyWordSwapWIR( unk ( wir method ) : ) ( goal function ) : UntargetedClassification ( transformation ) : WordSwapMaskedLM( bert âattack (method ) : (masked lm name ) : ( max length ) : ( max candidates ) : bert âbaseâuncased 256 48 ) ( constraints ) : ( 0 ) : MaxWordsPerturbed( ( max percent ) : 0.4 ( compare against original ) : True ) ( 1 ) : UniversalSentenceEncoder ( ( metric ) : ( threshold ) : ( window size ) : ( skip text shorter than window ) : ( compare against original ) : True cosine 0.2 inf False ) ( 2 ) : RepeatModification ( 3 ) : StopwordModification ( is black box ) : True )
# Attack (
( search method ) : BeamSearch( ( beam width ) : 10 ) ( goal function ) : UntargetedClassification ( transformation ) : WordSwapGradientBased( ( top n ) : 1 ) ( constraints ) : ( 0 ) : MaxWordsPerturbed( (max num words ) : 2 ( compare against original ) : True ) ( 1 ) : WordEmbeddingDistance( ( embedding type ) : ( min cos sim ) : ( cased ) : ( include unknown words ) : True ( compare against original ) : True paragramcf 0.8 False ) ( 2 ) : PartOfSpeech ( ( tagger type ) : nltk universal ( tagset ) : ( allow verb noun swap ) : True ( compare against original ) : True ) ( 3 ) : RepeatModification ( 4 ) : StopwordModification ( is black box ) : False
)
# DeepWordBug (Gao et al., 2018)
# Attack(
# Attack (
# ( search method ) : GreedyWordSwapWIR( unk
( wir method ) :
) ( goal function ) : UntargetedClassification ( transformation ) : CompositeTransformation ( ( 0 ) : WordSwapNeighboringCharacterSwap(
(random one ) : True
)
( 1 ) : WordSwapRandomCharacterSubstitution ( (random one ) : True
# Input Reduction (Feng et al., 2018)
( search method ) : GreedyWordSwapWIR( ( wir method ) : delete ) ( goal function ) : InputReduction ( ( maximizable ) : True ) ( transformation ) : WordDeletion ( constraints ) : ( 0 ) : RepeatModification ( 1 ) : StopwordModification ( is black box ) : True
)
( 2 ) : WordSwapRandomCharacterDeletion( (random one ) : True
)
# Kuleshov (Kuleshov et al., 2018)
( 3 ) : WordSwapRandomCharacterInsertion (
13
Attack ( ( search method ) : GreedySearch ( goal function ) : UntargetedClassification ( transformation ) : WordSwapEmbedding( ( max candidates ) : ( embedding type ) : 15 paragramcf ) ( constraints ) : ( 0 ) : MaxWordsPerturbed( ( max percent ) : 0.5 ( compare against original ) : True ) ( 1 ) : ThoughtVector ( ( embedding type ) : ( metric ) : max euclidean ( threshold ) : ( window size ) : ( skip text shorter than window ) : ( compare against original ) : True paragramcf â0.2 inf False ) ( 2 ) : GPT2( ( max log prob diff ) : ( compare against original ) : True 2.0 ) ( 3 ) : RepeatModification ( 4 ) : StopwordModification ( is black box ) : True )
# MORPHEUS (Tan et al., 2020)
Attack ( ( search method ) : GreedySearch ( goal function ) : MinimizeBleu ( ( maximizable ) : ( target bleu ) : False 0.0 ) ( transformation ) : WordSwapInflections ( constraints ) : ( 0 ) : RepeatModification ( 1 ) : StopwordModification ( is black box ) : True )
# Particle Swarm Optimization (Zang et al.,
2020)
(
# Attack (
( search method ) : ParticleSwarmOptimization ( goal function ) : UntargetedClassification ( transformation ) : WordSwapHowNet(
( max candidates ) : â1
# ) ( constraints ) :
( 0 ) : RepeatModification ( 1 ) : StopwordModification ( 2 ) : InputColumnModification ( ( matching column labels ) : ( columns to ignore ) : {âpremise â}
[ â premise â , â hypothesis â ]
)
( is black box ) : True
)
# Pruthi Keyboard Char-Swap Attack (Pruthi
# et al., 2019)
# Attack (
( search method ) : GreedySearch ( goal function ) : UntargetedClassification ( transformation ) : CompositeTransformation ( ( 0 ) : WordSwapNeighboringCharacterSwap(
(random one ) : False
)
( 1 ) : WordSwapRandomCharacterDeletion( (random one ) : False
)
( 2 ) : WordSwapRandomCharacterInsertion ( (random one ) : False
)
# ( 3 ) : WordSwapQWERTY )
( constraints ) :
)
( 0 ) : MaxWordsPerturbed( (max num words ) : 1 ( compare against original ) : True ) ( 1 ) : MinWordLength ( 2 ) : StopwordModification ( 3 ) : RepeatModification ( is black box ) : True
# PWWS (Ren et al., 2019)
Attack (
)
( search method ) : GreedyWordSwapWIR( ( wir method ) : pwws ) ( goal function ) : UntargetedClassification ( transformation ) : WordSwapWordNet ( constraints ) : ( 0 ) : RepeatModification ( 1 ) : StopwordModification ( is black box ) : True
# seq2sick (Cheng et al., 2018)
Attack ( ( search method ) : GreedyWordSwapWIR( ( wir method ) : unk ) ( goal function ) : NonOverlappingOutput ( transformation ) : WordSwapEmbedding( ( max candidates ) : ( embedding type ) : 50 paragramcf ) ( constraints ) : ( 0 ) : LevenshteinEditDistance ( ( max edit distance ) : 30 ( compare against original ) : True ) ( 1 ) : RepeatModification ( 2 ) : StopwordModification ( is black box ) : True )
# TextBugger (Li et al., 2019)
Attack ( ( search method ) : GreedyWordSwapWIR( ( wir method ) : unk ) ( goal function ) : UntargetedClassification ( transformation ) : CompositeTransformation ( ( 0 ) : WordSwapRandomCharacterInsertion ( (random one ) : True ) ( 1 ) : WordSwapRandomCharacterDeletion( (random one ) : True ) ( 2 ) : WordSwapNeighboringCharacterSwap( (random one ) : True ) ( 3 ) : WordSwapHomoglyphSwap ( 4 ) : WordSwapEmbedding( ( max candidates ) : ( embedding type ) : 5 paragramcf ) ) ( constraints ) : ( 0 ) : UniversalSentenceEncoder ( ( metric ) : ( threshold ) : ( window size ) : ( skip text shorter than window ) : ( compare against original ) : True angular 0.8 inf ) ( 1 ) : RepeatModification ( 2 ) : StopwordModification ( is black box ) : True ) False
)
TextFooler (Jin et al., 2019)
14
# Attack (
( search method ) : GreedyWordSwapWIR( del ( wir method ) : ) ( goal function ) : UntargetedClassification ( transformation ) : WordSwapEmbedding( ( max candidates ) : ( embedding type ) : 50 paragramcf ) ( constraints ) : ( 0 ) : WordEmbeddingDistance( ( embedding type ) : ( min cos sim ) : ( cased ) : ( include unknown words ) : True ( compare against original ) : True paragramcf 0.5 False ) ( 1 ) : PartOfSpeech ( nltk ( tagger type ) : ( tagset ) : universal ( allow verb noun swap ) : True ( compare against original ) : True ) ( 2 ) : UniversalSentenceEncoder ( ( metric ) : ( threshold ) : ( window size ) : ( skip text shorter than window ) : True ( compare against original ) : angular 0.840845057 15 False ) ( 3 ) : RepeatModification ( 4 ) : StopwordModification ( 5 ) : InputColumnModification ( ( matching column labels ) : ( columns to ignore ) : {âpremise â} [ â premise â , ) ( is black box ) : True
# â hypothesis â ]
)
15 | {
"id": "1610.00768"
} |
2004.14503 | Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation | A major obstacle to the wide-spread adoption of neural retrieval models is
that they require large supervised training sets to surpass traditional
term-based techniques, which are constructed from raw corpora. In this paper,
we propose an approach to zero-shot learning for passage retrieval that uses
synthetic question generation to close this gap. The question generation system
is trained on general domain data, but is applied to documents in the targeted
domain. This allows us to create arbitrarily large, yet noisy, question-passage
relevance pairs that are domain specific. Furthermore, when this is coupled
with a simple hybrid term-neural model, first-stage retrieval performance can
be improved further. Empirically, we show that this is an effective strategy
for building neural passage retrieval models in the absence of large training
corpora. Depending on the domain, this technique can even approach the accuracy
of supervised models. | http://arxiv.org/pdf/2004.14503 | Ji Ma, Ivan Korotkov, Yinfei Yang, Keith Hall, Ryan McDonald | cs.IR, cs.CL | 14 pages, 4 figures | null | cs.IR | 20200429 | 20210127 | 1 2 0 2
n a J 7 2 ] R I . s c [
3 v 3 0 5 4 1 . 4 0 0 2 : v i X r a
# Zero-shot Neural Passage Retrieval via Domain-targeted Synthetic Question Generation
# Ivan Korotkov Yinfei Yang Keith Hall Ryan McDonald
# Google Research {maji,ivankr,yinfeiy,kbhall,ryanmcd}@google.com
# Abstract
A major obstacle to the wide-spread adoption of neural retrieval models is that they require large supervised training sets to surpass tradi- tional term-based techniques, which are con- structed from raw corpora. In this paper, we propose an approach to zero-shot learning for passage retrieval that uses synthetic question generation to close this gap. The question gen- eration system is trained on general domain data, but is applied to documents in the tar- geted domain. This allows us to create arbitrar- ily large, yet noisy, question-passage relevance pairs that are domain speciï¬c. Furthermore, when this is coupled with a simple hybrid term- neural model, ï¬rst-stage retrieval performance can be improved further. Empirically, we show that this is an effective strategy for building neural passage retrieval models in the absence of large training corpora. Depending on the domain, this technique can even approach the accuracy of supervised models.
# Introduction
Recent advances in neural retrieval have led to advancements on several document, passage and knowledge-base benchmarks (Guo et al., 2016; Pang et al., 2016; Hui et al., 2017; Dai et al., 2018; Gillick et al., 2018; Nogueira and Cho, 2019a; MacAvaney et al., 2019; Yang et al., 2019a,b,c). Most neural passage retrieval systems are, in fact, two stages (Zamani et al., 2018; Yilmaz et al., 2019), illustrated in Figure 1. The ï¬rst is a true retrieval model (aka ï¬rst-stage retrieval1) that takes a question and retrieves a set of candidate passages from a large collection of documents. This stage itself is rarely a neural model and most commonly is an term-based retrieval model such as BM25 (Robertson et al., 2004; Yang et al., 2017), though there is recent work on neural models (Zamani et al., 2018; Dai and Callan, 2019; Chang et al.,
1Also called open domain retrieval.
Question Retrieval Model = Rescoring Model Collection
Figure 1: End-to-end neural retrieval. A ï¬rst-stage model over a large collection returns a smaller set of relevant passages which are reranked by a rescorer.
2020; Karpukhin et al., 2020; Luan et al., 2020). This is usually due to the computational costs re- quired to dynamically score large-scale collections. Another consideration is that BM25 is often high quality (Lin, 2019). After ï¬rst-stage retrieval, the second stage uses a neural model to rescore the ï¬l- tered set of passages. Since the size of the ï¬ltered set is small, this is feasible.
The focus of the present work is methods for building neural models for ï¬rst-stage passage re- trieval for large collections of documents. While rescoring models are key components to any re- trieval system, they are out of the scope of this study. Speciï¬cally, we study the zero-shot setting where there is no target-domain supervised training data (Xian et al., 2018). This is a common situation, examples of which include enterprise or personal search environments (Hawking, 2004; Chirita et al., 2005), but generally any specialized domain.
The zero-shot setting is challenging as the most effective neural models have a large number of parameters, which makes them prone to overï¬tting. Thus, a key factor in training high quality neural models is the availability of large training sets. To address this, we propose two techniques to improve neural retrieval models in the zero-shot setting.
First, we observe that general-domain question- passage pairs can be acquired from community platforms (Shah and Pomerantz, 2010; Duan et al., 2017) or high quality academic datasets that are publicly available (Kwiatkowski et al., 2019; Bajaj et al., 2016). Such resources have been used to
create open domain QA passage retrieval models. However, as shown in Guo et al. (2020) and in our later experiments, neural retrieval models trained on the general domain data often do not transfer well, especially for specialized domains.
Towards zero-shot neural retrieval with im- proved domain adaptability, we propose a data aug- mentation approach (Wong et al., 2016) that lever- ages these naturally occurring question/answer pairs to train a generative model that synthesizes questions given a text (Zhou et al., 2017). We ap- ply this model to passages in the target domain to generate unlimited pairs of synthetic questions and target-domain passages. This data can then be used for training. This technique is outlined in Figure 2. A second contribution is a simple hybrid model that interpolates a traditional term-based model â BM25 (Robertson et al., 1995) â with our zero-shot neural model. BM25 is also zero-shot, as its param- eters do not require supervised training. Instead of using inverted index which is commonly used in term-based search, we exploit the fact that BM25 and neural models can be cast as vector similarity (see Section 4.4) and thus nearest neighbour search can be used for retrieval (Liu et al., 2011; Johnson et al., 2017). The hybrid model takes the advantage of both the term matching and semantic matching. We compare a number of baselines including other data augmentation and domain transfer tech- niques. We show on three specialized domains (scientiï¬c literature, travel and tech forums) and one general domain that the question generation approach is effective, especially when considering the hybrid model. Finally, for passage retrieval in the scientiï¬c domain, we compare with a num- ber of recent supervised models from the BioASQ challenge, including many with rescoring stages. Interestingly, the quality of the zero-shot hybrid model approaches supervised alternatives.
# 2 Related Work
Neural Retrieval The retrieval vs. rescorer dis- tinction (Figure 1) often dictates modelling choices for each task. For ï¬rst-stage retrieval, as mentioned earlier, term-based models that compile document collections into inverted indexes are most common since they allow for efï¬cient lookup (Robertson et al., 2004; Yang et al., 2017). However, there are studies that investigate neural ï¬rst-stage retrieval. A common technique is to learn the term weights to be used in an inverted index (Zamani et al., 2018; Dai and Callan, 2019, 2020). Another technique is representation-based models that embed ques-
Community QA _ââ auestion ; : | Answer,, Question, H Answery, Question, i Answer â ~~" Passage Question Generation Training Neural Passage | | Retrieval Model | Neural IR Training Question Question | Construction \. Target Corpus
Figure 2: Synthetic query generation for neural IR.
tions and passages into a common dense subspace (Palangi et al., 2016) and use nearest neighbour search for retrieval (Liu et al., 2011; Johnson et al., 2017). Recent work has shown this can be ef- fective for passage scoring (Chang et al., 2020; Karpukhin et al., 2020; MacAvaney et al., 2020). Though all of the aforementioned ï¬rst-stage neu- ral models assume supervised data for ï¬ne-tuning. For rescoring, scoring a small set of passages per- mits computationally intense models. These are often called interaction-based, one-tower or cross- attention models and numerous techniques have been developed (Guo et al., 2016; Hui et al., 2017; Xiong et al., 2017; Dai et al., 2018; McDonald et al., 2018), many of which employ pre-trained contex- tualized models (Nogueira and Cho, 2019a; MacA- vaney et al., 2019; Yang et al., 2019a,b). Khattab and Zaharia (2020) also showed that by delaying in- teraction to the last layer, one can build a ï¬rst stage retrieval model which also leverages the modeling capacity of an interaction based models.
Model Transfer Previous work has attempted to alleviate reliance on large supervised training sets by pre-training deep retrieval models on weakly supervised data such as click-logs (Borisov et al., 2016; Dehghani et al., 2017). Recently, Yilmaz et al. (2019) has shown that training models on general-domain corpora adapts well to new do- mains without targeted supervision. Another com- mon technique for adaptation to specialized do- mains is to learn cross-domain representations (Co- hen et al., 2018; Tran et al., 2019). Our work is more aligned with methods like Yilmaz et al. (2019) which use general domain resources to build neu- ral models for new domains, though via different techniques â data augmentation vs. model trans- fer. Our experiments show that data augmentation
compares favorably a model transfer baseline. For specialized domains, recently, there have been a number of studies using cross-domain transfer and other techniques for biomedical passage retrieval via the TREC-COVID challenge2,3 that uses the CORD-19 collection (Wang et al., 2020).
Question generation for data augmentation is a common tool, but has not been tested in the pure zero-shot setting nor for neural passage retrieval. Duan et al. (2017) use community QA as a data source, as we do, to train question generators. The generated question-passage pairs are not used to train a neural model, but QA is instead done via question-question similarity. Furthermore, they do not test on specialized domains. Alberti et al. (2019) show that augmenting supervised training resources with synthetic question-answer pairs can lead to improvements. Nogueira et al. (2019) em- ployed query generation in the context of ï¬rst-stage retrieval. In that study, the generated queries were used to augment documents to improve BM25 key- word search. Here we focus on using synthetic queries to train the neural retrieval models.
Hybrid Models Combining neural and term- based models have been studied, most commonly via linearly interpolating scores in an approximate re-ranking stage (Karpukhin et al., 2020; Luan et al., 2020) or through the ï¬nal layer of a rescor- ing network (Severyn et al., 2015; McDonald et al., 2018). Since rescoring can be cast as classiï¬cation, blending signals is straight-forward. However, this is approximate as it does not operate over the whole collection. For ï¬rst-stage retrieval, the most com- mon method is to learn term weights for a standard inverted index in order to make search efï¬cient (Zamani et al., 2018; Dai and Callan, 2019). Here we propose a ï¬rst-stage retrieval model that incor- porates both term-based (sparse) and neural-based (dense) representations in a hybrid model that uses nearest neighbor search for exact inference (Liu et al., 2011; Johnson et al., 2017; Wu et al., 2019). Similar methods using approximate nearest neigh- bour search have been investigated by Seo et al. (2019).
# 3 Synthetic Question Generation
In this work, we are speciï¬cally investigating the zero-shot scenario where there exists neither user is- sued questions nor domain speciï¬c data except the passage collection itself. We propose to address the
# 2ir.nist.gov/covidSubmit/ 3ir.nist.gov/covidSubmit/archive.html
Ubuntu Forums Passage: Every time I get a notiï¬cation about and begin updating when they become available, the process is interrupted by an error message: error in foomatic-ï¬lters. Then I get âerror in linux generic packageâ and a bunch of numbers. This is replaced before I can write it all down with âerror in Linux packageâ Everything seems to go OK except I donât know if the updates are really being installed. I tried un-installing and re-installing foomatic-ï¬lters . . . Generated Question: How do I get rid of error in foomatic-ï¬lters?
Biomedical Literature Passage: Electroencephalographic tracings of 50 patients who presented the classical features of Friedreichâs ataxia were reviewed . . . Friedre- ichâs ataxia is mainly a spinal disorder. Involvement of supraspinal and in particular brain stem or diencephalic structures may be more extensive in those patients who show electrographic abnormalities. This would re- quire conï¬rmation with comparative data based on pathological obser- vations. Impaired function of brain stem inhibitory mechanism may be responsible for the slightly raised incidence of seizures in patients with Friedreichâs ataxia and other cerebellar degenerations. Generated Question: What is the signiï¬cance of Friedreichâs ataxia?
Table 1: Examples of domain-targeted synthetic gener- ated questions used to train passage retrieval models.
training data scarcity issue by generating synthetic questions (Zhou et al., 2017; Duan et al., 2017; Alberti et al., 2019; Nogueira et al., 2019). Lever- age the fact that there are large question-answer data sources freely available from the web (Shah and Pomerantz, 2010; Duan et al., 2017). we ï¬rst train a question generator using general domain question-answer pairs. The passage collection of a target domain is then fed into this generator to cre- ate pairs of noisy question-passage pairs, which are used to train a retrieval model (see Figure 2). In this work, we mine English question-answer pairs from community resources, primarily StackExchange4 and Yahoo! Answers5. Note we use stackexchange as it covers a wide range of topics, and we focus on investigating the domain adaptability of using a question generation approach. We leave comparing question generator trained on different datasets or using different architectures to future work.
To ensure data quality, we further ï¬lter the data by only keeping question-answer pairs that were positively rated by at least one user on these sites. In total, the ï¬nal dataset contains 2 millions pairs, and the average length of questions and answers are 12 tokens and 155 tokens respectively. This dataset is general domain in that it contains question- answer pairs from a wide variety of topics.
Our question generator is an encoder-decoder with Transformer (Vaswani et al., 2017) layers, which is a common for generation tasks such as translation and summarization (Vaswani et al., 2017; Rothe et al., 2019). The encoder is trained to build a representation for a text and the decoder generates a question for which that text is a plausi- ble answer. Appendix B has model speciï¬cs.
# 4archive.org/details/stackexchange 5webscope.sandbox.yahoo.com/catalog.php?datatype=l
Our approach is robust to domain shift as the generator is trained to create questions based on a given text. As a result, generated questions stay close to the source passage material. Real examples are shown in Table 1 for technical and biomedical domains, highlighting the modelâs adaptability.
# 4 Neural First-stage Retrieval
In this section we describe our architecture for train- ing a ï¬rst-stage neural passage retriever. Our re- trieval model belongs to the family of relevance- based dense retrieval 6 that encodes pairs of items in dense subspaces (Palangi et al., 2016). Let Q = (q1, . . . qn) and P = (p1, . . . , pm) be a ques- tion and passage of n and m tokens respectively. Our model consists of two encoders, {fQ(), fP ()} and a similarity function, sim(). An encoder is a function f that takes an item x as input and outputs a real valued vector as the encoding, The similarity function, sim(), takes two encodings, q, p â RN and calculates a real valued score, s = sim(q, p). For passage retrieval, the two encoders are respon- sible for computing dense vector representation of questions and passages.
# 4.1 BERT-based Encoder
In this work, both query and document encoders are based on BERT (Devlin et al., 2019), which has been shown to lead to large performance gains across a number of tasks, including document rank- ing (Nogueira and Cho, 2019a; MacAvaney et al., 2019; Yang et al., 2019b). In addition, we share parameters between the query and passage encoder â i.e., fQ = fP , so called Siamese networks â as we found this greatly increased performance while reducing parameters.
We encode P as (CLS, p1, . . . , pm, SEP). For some datasets, a passage contains both a title T = (t1, ..., tl) and content C = (c1, ..., co), in which case we encode the passage as (CLS, t1, ..., tl, SEP, c1, ..., co, SEP). These se- quences are fed to the BERT encoder. Let hCLS â RN be the ï¬nal representation of the âCLSâ token. Passage encodings p are computed by applying a linear projection, i.e., p = W â hCLS, where W is a N à N weight matrix (thus N = 768), which preserves the original size of hCLS. This has been shown to perform better than down-projecting to a lower dimensional vector (Luan et al., 2020), espe- cially for long passages.
We encode Q as (CLS, q1, q2, ..., qn, SEP) which is then fed to the BERT encoder. Similarly,
6A.k.a. two-tower, dual encoder or dense retrieval.
Question-Passage Scoring Model First-stage Retrieval
Figure 3: First-stage neural passage retrieval. Top: A BERT-based transformer encodes questions and pas- sages and scores them via dot-product. Bottom: Pas- sages from the collection are encoded and stored in a nearest neighbour search backend. At inference, the question is encoded and relevant passages retrieved.
a linear projection on the corresponding âCLSâ to- ken, using the same weight matrix W, is applied to generate q. Following previous work (Luan et al., 2020; Lee et al., 2019b), we use dot product as the similarity function, i.e., sim(q, p) = (q, p) = q'p. The top half of Figure 3 illustrates the model.
# 4.2 Training
For training, we adopt softmax cross-entropy loss. 1 , ..., pâ Formally, given an instance {q, p+, pâ k } which comprises one query q, one relevant passage p+ and k non-relevant passages pâ i . The objective is to minimize the negative log-likelihood:
L(q,p*, Py). --;Py) = k log(e(@a") + S e(@4:)) â (q, qt) i=l
This loss function is a special case of ListNet loss (Cao et al., 2007) where all relevance judgements are binary, and only one passage is marked relevant for each training example. 1 , ..., pâ
k }, we use in-batch nega- tives. Given a batch of (query, relevant-passage) pairs, negative passages for a query are passages from different pairs in the batch. In-batch nega- tives has been widely adopted as it enables efï¬cient training via computation sharing (Yih et al., 2011; Gillick et al., 2018; Karpukhin et al., 2020).
# 4.3 Inference
Since the relevance-based model encodes questions and passages independently, we run the encoder
over every passage in a collection ofï¬ine to cre- ate a distributed lookup-table as a backend. At inference, we run the question encoder online and then perform nearest neighbor search to ï¬nd rel- evant passages, as illustrated in the bottom half of Figure 3. While there has been extensive work in fast approximate nearest neighbour retrieval for dense representations (Liu et al., 2011; Johnson et al., 2017), we simply use distributed brute-force search as our passage collections are at most in the millions, resulting in exact retrieval.
# 4.4 Hybrid First-stage Retrieval
Traditional like BM25 term-based methods (Robertson et al., 1995) are powerful zero-shot models and can outperform supervised neural mod- els in many cases (Lin, 2019). Rescoring sys- tems have shown that integrating BM25 into a neural model improves performance (McDonald et al., 2018). However, for ï¬rst-stage retrieval most work focuses on approximations via re-ranking (Karpukhin et al., 2020; Luan et al., 2020). Here we present a technique for exact hybrid ï¬rst-stage retrieval without the need for a re-ranking stage. Our method is motivated by the work of Seo et al. (2019) for sparse-dense QA.
For a query Q and a passage P , BM25 is com- puted as the following similarity score,
BM25(Q, P) = n 3 IDF(q) * cnt(q; ⬠P) * (k + 1) ent(qg; ⬠P)+k*(1âb+b« ââ¢)â Mavg
where k/b are BM25 hyperparameters, IDF is the termâs inverse document frequency from the cor- pus, cnt is the termâs frequency in a passage, n/m are the number of tokens in Q/P , and mavg is the collectionâs average passage length.
Like most TF-IDF models, this can be written as a vector space model. Speciï¬cally, let qbm25 â [0, 1]|V | be a sparse binary encoding of a query of dimension |V |, where V is the term vocabulary. Speciï¬cally this vector is 1 at position i if vi â Q, here vi is the i-th entry in V . Furthermore, let pbm25 â R|V | be a sparse real-valued vector where,
pbm25 i = IDF(vi) â cnt(vi â P ) â (k + 1) cnt(vi â P ) + k â (1 â b + b â m mavg
We can see that,
BM25(Q, P) = (a , pm)
)
As BM25 score can be written as vector dot- product, this gives rise to a simple hybrid model,
yb pâ¢) = (qr hy pâ¢) = ([Aq bm25 qâ¢"}, [poâ¢>, pâ¢]) =Xq bm25 p>) + (q"", pâ¢), sim(q
where qhyb and phyb are the hybrid encodings that concatenate the BM25 (qbm25/pbm25) and the neu- ral encodings (qnn/pnn, from Sec 4.1); and λ is a interpolation hyperparameter that trades-off the relative weight of BM25 versus neural models.
Thus, we can implement BM25 and our hy- brid model as nearest neighbor search with hybrid sparse-dense vector dot-product (Wu et al., 2019).
# 5 Experimental Setup
We outline data and experimental details. The Ap- pendix has further information to aid replicability.
# 5.1 Evaluation Datasets
BioASQ Biomedical questions from Task B Phase A of BioASQ (Tsatsaronis et al., 2015). We use BioASQ 7 and 8 test data for evaluation. The collection contains all abstracts from MEDLINE articles. Given an article, we split its abstract into chunks with sentence boundaries preserved. A pas- sage is constructed by concatenating the title and one chunk. Chunk size is set so that each passage has no more than 200 wordpiece tokens.
Forum Threads from two online user forum do- mains: Ubuntu technical help and TripAdvisor top- ics for New York City (Bhatia and Mitra, 2010). For each thread, we concatenate the title and initial post to generate passages. For BERT-based models we truncate at 350 wordpiece tokens. Unlike the BioASQ data, this data generally does not contain specialist knowledge queries. Thus, compared to the collection of question-answer pairs mined from the web, there is less of a domain shift.
NaturalQuestions Aggregated queries issued to Google Search (Kwiatkowski et al., 2019) with relevance judgements. We convert the original for- mat to a passage retrieval task, where the goal is to retrieval the long answer among all wiki para- graphs (Ahmad et al., 2019). We discarded ques- tions whose long answer is either a table or a list. We evaluate retrieval performance on the develop- ment set as the test set is not publicly available. The target collection contains all passages from the development set and is augmented with passages from 2016-12-21 dump of Wikipedia (Chen et al.,
2017). Each passage is also concatenated with title. For BERT-based models passages are truncated at 350 wordpiece tokens. This data is different from the previous data in two regards. First, there is a single annotated relevant paragraph per query. This is due to the nature in which the data was curated. Second, this data is entirely âgeneral domainâ. Dataset statistics are listed in Appendix A.
# 5.2 Zero-shot Systems
BM25 Term-matching systems such as BM25 (Robertson et al., 1995) are themselves zero-shot, since they require no training resources except the document collection itself. We train a standard BM25 retrieval model on the document collection for each target domain.
ICT The Inverse Cloze Task (ICT) (Lee et al., 2019b) is an unsupervised pre-training objective which randomly masks out a sentence from a pas- sage and creates synthetic sentence-passage pairs representing membership of the sentence in the passage. These masked examples can then used to train or pre-train a retrieval model. Lee et al. (2019b) showed that masking a sentence with a cer- tain probability, p, can both mimic the performance of lexical matching (p = 0) or semantic matching (p > 0). ICT is domain-targeted since training examples are created directly from the relevant col- lection. Chang et al. (2020) showed that ICT-based pre-training outperforms a number of alternatives such as Body First Selection (BFS) or Wiki Link Prediction (WLP) for large-scale retrieval.
Ngram Gysel et al. (2018) proposes to train un- supervised neural retrieval system by extracting ngrams and titles from each document as queries. Different from ICT, this approach does not mask the extract ngrams from the original document.
QA The dataset mined from community question- answer forums (Sec. 3) itself can be used directly to train a neural retrieval model since it comes of the form query and relevant text (passage) pair. This data is naturally occurring and not systematically noisy, which is an advantage. However, the data is not domain-targeted, in that it comes from general knowledge questions. We call models trained on this dataset as QA. Applying a model trained on general domain data to a speciï¬c domain with no adaptation is a strong baseline (Yilmaz et al., 2019).
QGen The QGen retrieval model trained on the domain-targeted synthetic question-passage pairs
BioASQ NQ Forum Travel Forum Ubuntu ICT 2.00M 90.50M 636.54M 2.00M 71.58M 356.15M 1.25M 2.00M 0.30M 2.07M 2.00M 0.42M QA Ngram ICT+Ngram QGen 727.05M 82.62M 427.72M 84.33M 1.54M 0.26M 2.49M 0.43M
Table 2: Number of (synthetic-question, passage) pairs used in zero-shot experiments.
described in Section 3. While this model can con- tain noise from the generator, it is domain-targeted.
QGenHyb This is identical to QGen, but instead of using the pure neural model, we train the hybrid model in Section 4.4 setting λ = 1.0 for all models to avoid any domain-targeted tuning. We train the term and neural components independently, comb- ing them only at inference.
All ICT, NGram, QA and QGen models are trained using the neural architecture from Section 4. For BioASQ experiments, question and passage encoders are initialized with BioBERT base v-1.1 (Lee et al., 2019a). All other data uses uncased BERT base (Devlin et al., 2019).
We can categorize the neural zero-shot models along two dimensions extractive vs. transfer. ICT and Ngram are extractive, in that they extract ex- act substrings from a passage to create synthetic questions for model training. Note that extractive models are also unsupervised, since they do not rely on general domain resources. QA is a direct cross-domain transfer model, in that we train the model on data from one domain (or general do- main) and directly apply it to the target domain for retrieval. QGen models are in-direct cross-domain transfer models, in that we use the out-of-domain data to generate resources for model training.
# 5.3 Generated Training Datasets
The nature of each zero-shot neural system requires different generated training sets. For ICT, we fol- low Lee et al. (2019b) and randomly select at most 5 sentences from a document, with a mask rate of 0.9. For Ngram models, Gysel et al. (2018) suggests that retrieval models trained with ngram- order of around 16 was consistently high in quality. Thus, in our experiment we also use 16 and move the ngram window with a stride of 8 to allow 8 token overlap between consecutive ngrams.
For QGen models, each passage is truncated to 512 sentence tokens and feed to the question gen- eration system. We also run the question generator on individual sentences from each passage to pro- mote questions that focus on different aspects of the same document. We select at most 5 salient sentences from a passage, where sentence saliency
is the max term IDF value in a sentence.
The size of the generated training set for each baseline is shown in Table 2.
# 6 Results and Discussion
Our main results are shown in Table 3. We compute Mean Average Precision over the ï¬rst N7 results (MAP), Precision@10 and nDCG@10 (Manning et al., 2008) with TREC evaluation script8. All numbers are in percentage.
Accuracy of pure neural models are shown in the upper group of Table 3. First, we see that both QA and QGen consistently outperform neural baselines such as ICT and Ngram that are based on sub-string masking or matching. Matching on sub-strings likely biases the model towards memorization in- stead of learning salient concepts of the passage. Furthermore, query encoders trained on sub-strings are not exposed to many questions, which leads to adaptation issues when applied to true retrieval tasks. Comparing QGen with QA, typically QGen performs better, especially for specialized target domains. This suggests that domain-targeted query generation is more effective for domain shift than direct cross-domain transfer (Yilmaz et al., 2019). Performance of term-based models and hybrid models are shown in Table 3 (bottom). We can see that BM25 is a very strong baseline. However, this could be an artifact of the datasets as the queries are created by annotators who already have the relevant passage in mind. Queries created this way typically have large lexical overlapping with the passage, thus favoring term matching based approaches like BM25. This phenomenon has been observed by previous work Lee et al. (2019b). Nonetheless, the hybrid model outperforms BM25 on all domains, and the improvements are statistically signiï¬cant on 9/12 metrics. This illustrate that term-based model and neural-based model return complemen- tary results, and the proposed hybrid approach ef- fectively combines their strengths.
For NaturalQuestions since there is a single rele- vant passage annotation, we report Precision@1 and Mean reciprocal rank (MRR)9. Results are show in Table 4. We can see here that while QGen still signiï¬cantly outperform other baselines, the gap between QGen and QA is smaller. Unlike BioASQ and Forum datasets, NaturalQuestions contains general domain queries, which aligns well with the question-answer pairs for training the QA
7BioASQ: N=100; and Forum: N=1000. 8https://trec.nist.gov/trec_eval/ 9MRR = MAP when there is one relevant item.
model. Another difference is that NaturalQuestions consists of real information seeking queries, in this case QGen performs better than BM25.
# 6.1 Zero-shot vs. Supervised
One question we can ask is how close to the state-of-the-art in supervised passage retrieval are these zero-shot models. To test this we looked at BioASQ 8 dataset and compare to the top- participant systems.10 Since BioASQ provides annotated training data, the top teams typically use supervised models with a ï¬rst-stage retrieval plus rescorer architecture. For instance, the AUEB group, which is the top or near top system for BioASQ 6, 7 and 8, uses a BM25 ï¬rst-stage re- trieval model plus a supervised neural rescorer (Brokos et al., 2018; Pappas et al., 2019).
In order to make our results comparable to par- ticipant systems, we return only 10 passages per question (as per shared-task guidelines) and use the ofï¬cial BioASQ 8 evaluation software.
Table 5 shows the results for three zero-shot sys- tems (BM25, QGen and QGenHyb) relative to the top 4 systems on average across all 5 batches of the shared task. We can see the QGenHyb performs quite favorably and on average is indistinguish- able from the top systems. This is very promising and suggests that top-performance for zero-shot retrieval models is possible.
A natural question is whether improved ï¬rst- stage model plus supervised rescoring is addi- tive. The last two lines of the table takes the two- best ï¬rst-stage retrieval models and adds a sim- ple BERT-based cross-attention rescorer (Nogueira and Cho, 2019b; MacAvaney et al., 2019). We can see that, on average, this does improve quality. Furthermore, having a better ï¬rst-stage retriever (QGenHyb vs. BM25) makes a difference.
As noted earlier, on BioASQ, BM25 is a very strong baseline. This makes the BM25/QGenHyb zero-shot models highly likely to be competitive. When we look at NaturalQuestions, where BM25 is signiï¬cantly worse than neural models, we see that the gap between zero-shot and supervised widens substantially. The last row of Table 4 shows a model trained on the NaturalQuestions training data, which is nearly 2-3 times more accurate than the best zero-shot models. Thus, while zero-shot neural models have the potential to be competitive with supervised counterparts, the experiments here show this is data dependant.
# 10participants-area.bioasq.org
BioASQ7 BioASQ 8 Forum Travel Forum Ubuntu Prec nDCG Prec nDCG Prec nDCG Prec nDCG MAP @10 @10 MAP @10 @10 MAP @10 @10 MAP @10 @10 NEURAL MODELS ICT* 9.31* 3.84* 11.44* | 9.31* â 3.36* 11.78* | 3.66* 11.60* 12.04* | 8.93* â 21.60* = 23.21* Ngram* 9.17* 3.86" 11.53* | 8.81* â 2.84* 10.74* | 10.00 25.60 28.53 | 9.44* = 22.00* â 23.90* Qai 17.80* 7.46* 21.93" | 14.61* 4.26* 17.09* | 11.00 27.60 28.32 17.78 34.00 34.73 QGent [32.45 "13.48 37.23 [30.32 9.36 34.53 | 11.79 32.00 33.34 | 17.97 32.40 36.11 TERM/HYBRID MODELS BM25* 45.12* 20.66 50.33* | 38.61* 11.94" 42.78" | 15.41* 37.60 39.21 16.23* 31.20* 35.16" QGenHyb! | 46.78 20.60 52.16 | 41.73 12.84 46.18 | 18.19 40.80 43.92 | 21.97 39.60 43.91
Table 3: Zero-shot first-stage retrieval. Unsupervised*; Out-of-domain'; Synthetic. Bold=Best in group. Statisti- cally significant differences (permutation test, p < 0.05) from the last row of each group are marked by *.
MRR Prec@1 BM25* 6.63* â-1.84* ICT* 4.62* = 1.58" Ngram* 7.22* â 3.05* Qat 11.14* 4.35* QGenF | 14.93 6.21 QGenHyb! | 16.73 6.05 Supervised | 33.68 17.33
BM25 QGen QGenHyb AUEB-1 pa bioinfo-3 DeepR-test BM25âresc. QGenHybâresc. B1 31.7 28.9 34.8 33.6 33.5 34.0 30.7 33.9 37.5 B2 27.8 20.3 31.3 31.8 33.0 31.7 29.1 29.2 31.2 B3 40.4 30.7 43.4 44.4 43.5 43.7 43.5 42.4 43.0 B4 40.1 29.0 41.9 40.1 36.0 40.2 39.8 42.5 43.6 B5 Avg. 36.3 41.8 28.4 33.1 39.3 45.3 39.2 46.0 48.3 38.9 39.2 46.7 38.1 47.5 38.7 457 40.4 46.6
~
Table 4: Zero-shot ad-hoc retrieval for Natural Ques- tions. Unsupervised*; Out-of-domainâ; Syntheticâ. Bold=Best; Underline=Best non-hybrid. Baselines with statistically significant differences (permutation test, p < 0.05) from QGen are marked by *.
Table 5: MAP for zero-shot models (above dashed lined) vs. supervised models (below dashed line) on BioASQ8 document retrieval. B1-B5 is batch 1-5.
# 6.2 Learning Curves
Since our approach allows us to generate queries on every passage of the target corpus, one question is that whether retrieval system trained this way simply memorizes the target corpus or it also gen- eralize on unseen passages. Furthermore, from an efï¬ciency standpoint, how many synthetic training examples are required to achieve maximum perfor- mance. To answer these questions, we uniformly sample a subset of documents and then generate synthetic queries only on that subset. Results on BIOASQ 7 are shown in Figure 4, where x-axis denotes the percentage of sampled documents. We can see that retrieval accuracy improves as passage coverage increases. The peak is achieved when using a 20% subset, which covers 21% of the refer- ence passages. This is not surprising because the number of frequently discussed entities/topics are typically limited, and a subset of the passages cov- ers most of them. This result also indicates that the learned system does generalize, otherwise optimal performance would be seen with 100% of the data.
# 6.3 Generation vs. Retrieval Quality
Another interesting question is how important is the quality of the question generator relative to retrieval performance. Below we measured gen-
eration quality (via Rouge-based metrics (Lin and Hovy, 2002)) versus retrieval quality for three sys- tems. The base generator contains 12 transformer layers, the lite version only uses the ï¬rst 3 layer. The large one contains 24 transformer layers and each layer with larger hidden layer size, 4096, and more attention heads, 16. Retrieval quality was measured on BIOASQ 7 and generation quality with a held out set of the community question- answer data set. Results are shown in Table 6. We can see that larger generation models lead to improved generators. However, there is little dif- ference in retrieval metrics, suggesting that large domain targeted data is the more important criteria.
# 7 Conclusion
We study methods for neural zero-shot passage retrieval and ï¬nd that domain targeted synthetic question generation coupled with hybrid term- neural ï¬rst-stage retrieval models consistently out- performs alternatives. Furthermore, for at least one domain, approaches supervised quality. While out of the scope of this study, future work includes fur- ther testing the efï¬cacy of these ï¬rst-stage models in a full end-to-end system (evaluated brieï¬y in Section 6.1), as well as for pre-training supervised models (Chang et al., 2020).
40% P@10 NDCG@10 mAP 30% 20% 10% 1% 5% 10% 20% 40% 100%
Figure 4: MAP on BioASQ7 (y-axis) w.r.t. documents used for synthesizing queries (x-axis).
Lite Base Large Generation Rouge Rouge 1 23.55 26.20 26.81 L 21.90 24.23 24.90 Retrieval nDCG Prec MAP @10 @10 13.48 37.23 32.50 37.96 32.86 13.42 37.53 13.34 32.61
Table 6: Generation quality vs. retrieval metrics.
# Acknowledgements
We thank members of Google Research Language for feedback on this work. In particular, Gonc¸alo SimËoes gave detailed feedback on an early draft of this work, and Shashi Narayan evaluated the question generation quality.
# References
Amin Ahmad, Noah Constant, Yinfei Yang, and Daniel Cer. 2019. ReQA: An evaluation for end-to-end an- In Proceedings of the 2nd swer retrieval models. Workshop on Machine Reading for Question Answer- ing, pages 137â146, Hong Kong, China. Association for Computational Linguistics.
Chris Alberti, Daniel Andor, Emily Pitler, Jacob De- vlin, and Michael Collins. 2019. Synthetic qa cor- pora generation with roundtrip consistency. arXiv preprint arXiv:1906.05416.
Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. 2016. Ms marco: A human generated machine arXiv preprint reading comprehension dataset. arXiv:1611.09268.
Sumit Bhatia and Prasenjit Mitra. 2010. Adopting in- ference networks for online thread retrieval. In Pro- ceedings of the Twenty-Fourth AAAI Conference on Artiï¬cial Intelligence, AAAIâ10, page 1300â1305. AAAI Press.
Alexey Borisov, Ilya Markov, Maarten de Rijke, and Pavel Serdyukov. 2016. A neural click model for web search. In Proceedings of the 25th International Conference on World Wide Web, WWW â16, page 531â541, Republic and Canton of Geneva, CHE. In- ternational World Wide Web Conferences Steering Committee.
Georgios-Ioannis Brokos, Polyvios Liosis, Ryan Mc- Donald, Dimitris Pappas, and Ion Androutsopoulos. 2018. Aueb at bioasq 6: Document and snippet re- trieval. arXiv preprint arXiv:1809.06366.
Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. 2007. Learning to rank: From pairwise ap- proach to listwise approach. In Proceedings of the 24th International Conference on Machine Learn- ing, ICML â07, page 129â136, New York, NY, USA. Association for Computing Machinery.
Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yim- ing Yang, and Sanjiv Kumar. 2020. Pre-training tasks for embedding-based large-scale retrieval. In International Conference on Learning Representa- tions.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- In Proceedings of the 55th An- domain questions. nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870â 1879, Vancouver, Canada. Association for Computa- tional Linguistics.
Paul Alexandru Chirita, Wolfgang Nejdl, Raluca Paiu, and Christian Kohlsch¨utter. 2005. Using odp meta- In Proceedings of the data to personalize search. International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 178â185.
Daniel Cohen, Bhaskar Mitra, Katja Hofmann, and W. Bruce Croft. 2018. Cross domain regularization for neural ranking models using adversarial learning. CoRR, abs/1805.03403.
Context- aware sentence/passage term importance estima- arXiv preprint tion for ï¬rst stage retrieval. arXiv:1910.10687.
Zhuyun Dai and Jamie Callan. 2020. Context-aware term weighting for ï¬rst stage passage retrieval. In Proceedings of the 43rd International ACM SIGIR conference on research and development in Infor- mation Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1533â1536. ACM.
Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional neural networks for soft-matching n-grams in ad-hoc search. In Pro- ceedings of the ACM Web Search and Data Mining Conference, pages 126â134.
Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W. Bruce Croft. 2017. Neural rank- ing models with weak supervision. In Proceedings of the International ACM SIGIR Conference on Re- search and Development in Information Retrieval.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Nan Duan, Duyu Tang, Peng Chen, and Ming Zhou. 2017. Question generation for question answering. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 866â874.
Daniel Gillick, Alessandro Presta, and Gaurav Singh Tomar. 2018. End-to-end retrieval in continuous space. CoRR, abs/1811.08008.
Jiafeng Guo, Yixing Fan, Qingyao Ai, and W Bruce Croft. 2016. A deep relevance matching model In Proceedings of the 25th for ad-hoc retrieval. ACM International on Conference on Information and Knowledge Management, pages 55â64.
Mandy Guo, Yinfei Yang, Daniel Cer, Qinlan Shen, and Noah Constant. 2020. MultiReQA: A cross- domain evaluation for retrieval question answering models. arXiv preprint arXiv:2005.02507.
Christophe Van Gysel, Maarten de Rijke, and Evange- los Kanoulas. 2018. Neural vector spaces for un- supervised information retrieval. ACM Trans. Inf. Syst., 36(4).
David Hawking. 2004. Challenges in enterprise search. In ADC, volume 4, pages 15â24. Citeseer.
Kai Hui, Andrew Yates, Klaus Berberich, and Ger- ard de Melo. 2017. PACRR: A position-aware neu- In Proceed- ral IR model for relevance matching. ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1049â1058, Copenhagen, Denmark. Association for Computa- tional Linguistics.
Jeff Johnson, Matthijs Douze, and Herv´e J´egou. 2017. arXiv Billion-scale similarity search with gpus. preprint arXiv:1702.08734.
Vladimir Karpukhin, Barlas OËguz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. CoRR.
Omar Khattab and Matei Zaharia. 2020. Colbert: Ef- ï¬cient and effective passage search via contextual- ized late interaction over BERT. In Proceedings of
the 43rd International ACM SIGIR conference on re- search and development in Information Retrieval, SI- GIR 2020, Virtual Event, China, July 25-30, 2020, pages 39â48. ACM.
Diederik Kingma and Jimmy Ba. 2014. Adam: A International method for stochastic optimization. Conference on Learning Representations.
Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. CoRR, abs/1808.06226.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- ï¬eld, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming-Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natu- ral questions: a benchmark for question answering research. Transactions of the Association of Compu- tational Linguistics.
Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019a. BioBERT: a pre-trained biomedical for biomedical text mining. Bioinformatics.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019b. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 6086â6096, Florence, Italy. Association for Computational Linguistics.
Chin-Yew Lin and Eduard Hovy. 2002. Manual and au- tomatic evaluation of summaries. In Proceedings of the ACL-02 Workshop on Automatic Summarization - Volume 4, AS â02, page 45â51, USA. Association for Computational Linguistics.
Jimmy Lin. 2019. The neural hype and comparisons In ACM SIGIR Forum. against weak baselines. ACM New York, NY, USA.
Wei Liu, Jun Wang, Sanjiv Kumar, and Shih-Fu Chang. 2011. Hashing with graphs. In Proceedings of the International Conference on Machine Learning.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.
Yi Luan, Jacob Eisenstein, Kristina Toutanova, and Michael Collins. 2020. Sparse, dense, and atten- tional representations for text retrieval.
Sean MacAvaney, Franco Maria Nardini, Raffaele Perego, Nicola Tonellotto, Nazli Goharian, and Ophir Frieder. 2020. Expansion via prediction of importance with contextualization. In Proceedings of the 43rd International ACM SIGIR conference on
research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, July 25-30, 2020, pages 1573â1576. ACM.
Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. Cedr: Contextualized embed- dings for document ranking. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval.
Christopher D Manning, Prabhakar Raghavan, and Hin- rich Sch¨utze. 2008. Introduction to information re- trieval. Cambridge university press.
Ryan McDonald, George Brokos, and Ion Androut- sopoulos. 2018. Deep relevance ranking using en- In Proceed- hanced document-query interactions. ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1849â1860, Brussels, Belgium. Association for Computational Linguistics.
Rodrigo Nogueira and Kyunghyun Cho. 2019a. Pas- arXiv preprint sage re-ranking with bert. arXiv:1901.04085.
Rodrigo Nogueira and Kyunghyun Cho. 2019b. Pas- sage re-ranking with bert. ArXiv, abs/1901.04085.
and Kyunghyun Cho. 2019. Document expansion by query prediction. arXiv preprint arXiv:1904.08375.
Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. 2016. Deep sentence embedding using long short-term memory networks: Analysis and ap- plication to information retrieval. IEEE/ACM Trans- actions on Audio, Speech, and Language Processing, 24(4):694â707.
Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengx- ian Wan, and Xueqi Cheng. 2016. Text match- In Proceedings of the ing as image recognition. Thirtieth AAAI Conference on Artiï¬cial Intelligence, AAAIâ16, page 2793â2799. AAAI Press.
Dimitris Pappas, Ryan McDonald, Georgios-Ioannis Brokos, and Ion Androutsopoulos. 2019. AUEB at BioASQ 7: document and snippet retrieval. In Pro- ceedings of the BioASQ Workshop.
Stephen Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. 1995. Okapi at trec-3. In Overview of the Third Text REtrieval Con- ference (TREC-3), pages 109â126. Gaithersburg, MD: NIST.
Stephen Robertson, Hugo Zaragoza, and Michael Tay- Simple bm25 extension to multiple lor. 2004. In Proceedings of the ACM Inter- weighted ï¬elds. nation Conference on Information and Knowledge Management.
Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2019. Leveraging pre-trained checkpoints for se- quence generation tasks. CoRR, abs/1907.12461.
Minjoon Seo, Jinhyuk Lee, Tom Kwiatkowski, Ankur Parikh, Ali Farhadi, and Hannaneh Hajishirzi. 2019. Real-time open-domain question answering with In Proceedings of the dense-sparse phrase index. 57th Annual Meeting of the Association for Com- putational Linguistics, pages 4430â4441, Florence, Italy. Association for Computational Linguistics.
Aliaksei Severyn, Massimo Nicosia, Gianni Barlacchi, and Alessandro Moschitti. 2015. Distributional neu- ral networks for automatic resolution of crossword In Proceedings of the 53rd Annual Meet- puzzles. ing of the Association for Computational Linguis- tics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Pa- pers), pages 199â204, Beijing, China. Association for Computational Linguistics.
Chirag Shah and Jefferey Pomerantz. 2010. Evaluating and predicting answer quality in community qa. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in infor- mation retrieval, pages 411â418.
Samuel L. Smith, Pieter-Jan Kindermans, and Quoc V. Le. 2018. Donât decay the learning rate, increase the batch size. In International Conference on Learning Representations.
Brandon Tran, Maryam Karimzadehgan, Rama Ku- mar Pasumarthi, Mike Bendersky, and Don Met- zler. 2019. Domain adaptation for enterprise email search. In Proceedings of the International ACM SI- GIR Conference on Research and Development in In- formation Retrieval.
George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopou- los, Yannis Almirantis, John Pavlopoulos, Nico- las Baskiotis, Patrick Gallinari, Thierry Arti´eres, Axel-Cyrille Ngonga Ngomo, Norman Heino, Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the bioasq large-scale biomedical semantic indexing and question answering competi- tion. BMC bioinformatics, 16:138.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin. 2017. Attention is In Advances in neural information all you need. processing systems, pages 5998â6008. Curran Asso- ciates, Inc.
Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, William Merrill, et al. 2020. Cord-19: The covid-19 open research dataset. arXiv preprint arXiv:2004.10706.
S. C. Wong, A. Gatt, V. Stamatescu, and M. D. Mc- Donnell. 2016. Understanding data augmentation
In 2016 Inter- for classiï¬cation: When to warp? national Conference on Digital Image Computing: Techniques and Applications (DICTA), pages 1â6.
Xiang Wu, Ruiqi Guo, David Simcha, Dave Dopson, and Sanjiv Kumar. 2019. Efï¬cient inner product arXiv preprint approximation in hybrid spaces. arXiv:1903.08690.
Yongqin Xian, Christoph H Lampert, Bernt Schiele, and Zeynep Akata. 2018. Zero-shot learningâa comprehensive evaluation of the good, the bad and the ugly. IEEE transactions on pattern analysis and machine intelligence, 41(9):2251â2265.
Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR conference on research and development in information retrieval, pages 55â64.
Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, pages 1253â1256.
Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019a. End-to-end open-domain question answering with In Proceedings of the 2019 Confer- BERTserini. ence of the North American Chapter of the Asso- ciation for Computational Linguistics (Demonstra- tions), pages 72â77, Minneapolis, Minnesota. Asso- ciation for Computational Linguistics.
Wei Yang, Haotian Zhang, and Jimmy Lin. 2019b. Simple applications of bert for ad hoc document re- trieval. arXiv preprint arXiv:1903.10972.
Yinfei Yang, Daniel Cer, Amin Ahmad, Mandy Guo, Jax Law, Noah Constant, Gustavo Hern´andez ´Abrego, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2019c. Multi- lingual universal sentence encoder for semantic re- trieval. CoRR, abs/1907.04307.
Wen-tau Yih, Kristina Toutanova, John C. Platt, and Christopher Meek. 2011. Learning discriminative projections for text similarity measures. In Proceed- ings of the Fifteenth Conference on Computational Natural Language Learning, pages 247â256, Port- land, Oregon, USA. Association for Computational Linguistics.
Zeynep Akkalyoncu Yilmaz, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Cross-domain mod- eling of sentence-level evidence for document re- trieval. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3481â3487.
Hamed Zamani, Mostafa Dehghani, W Bruce Croft, Erik Learned-Miller, and Jaap Kamps. 2018. From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In Pro- ceedings of the ACM Internation Conference on In- formation and Knowledge Management, pages 497â 506.
Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, and Ming Zhou. 2017. Neural ques- tion generation from text: A preliminary study. In National CCF Conference on Natural Language Processing and Chinese Computing, pages 662â671. Springer.
# A Data
Statistics on each evaluation set are listed in Ta- ble 7. Document collection of âBioASQâ comes from MEDLINE articles, and we remove roughly 10M articles that only contains a title. Furthermore, for BioASQ 7B and BioASQ 8B we only keep articles published before 2018 December 31 and 2019 December 31, respectively. On âForumâ, we remove threads with empty posts. On âNQâ since there is at most one passage annotated as relevant for each question, and we also remove questions that have no answer, thus the number of questions equal to the number of reference passages. Besides zero-shot experiments, we also conduct supervised experiments on NQ, where we randomly samples 5% question from the training data as development set. This yields a training and development set with 70,393 and 3,704 (question, passage) pairs, respectively.
The data resources can be downloaded from the following websites
⢠BioASQ: http://participants-area. bioasq.org/
⢠Forum: http://sumitbhatia.net/source/ datasets.html
⢠Natural Questions: https://github.com/ google/retrieval-qa-eval
⢠Pubmed / Medline: https://www.nlm. nih.gov/databases/download/pubmed_ medline.html
⢠Stackexchange: http://archive.org/ details/stackexchange
⢠Yahoo! Answers: sandbox.yahoo.com/catalog.php? datatype=l http://webscope.
Q R C BioASQ7 BioASQ8 NQ 1772 500 1772 1646 53.4M 29.5M 500 2349 50M ForumTravel ForumUbuntu 25 1,538 82,669 25 1,188 106,642
Table 7: Statistics on each evaluation set. âQâ denotes the number of unique questions. âRâ denotes the to- tal number of annotated reference passages. âCâ is the number of passages in the target collection.
⢠BioBERT: https://github.com/ dmis-lab/biobert
⢠BERT: https://github.com/ google-research/bert
To the extent that we pre-process the data, we will release relevant tools and data upon publication.
# B Question Generation Details
Our question generation follows the same imple- mentation of Rothe et al. (2019). Both the encoder and decoder share the same network structure. Pa- rameter weights are also shared and are initial- ized from a pretrained RoBERTa (Liu et al., 2019) checkpoints. Training data is processed with sen- tencepiece (Kudo and Richardson, 2018) tokeniza- tion. We truncate answers to 512 sentencepiece tokens, and limit decoding to at most 64 steps. The training objective is the standard cross entropy. We use Adam (Kingma and Ba, 2014) with learning rate of 0.05, 31; = 0.9, G2 = 0.997 and « = leâ 9. Learning rate warmup over the first 40,000 steps. Training batch size for the âliteâ, âbaseâ and âlargeâ models are 256, 128 and 32 respectively. All mod- els are trained on a â4x4â slice of v3 Google Cloud TPU. At inference, results from using beam search decoding usually fall short of diversity, thus we use greedy decoding to speed up question generation.
# C Neural Model Details
# C.1 Zero shot Retrieval Models
C.1.1 Development Set Since we are investigating zero-shot scenario where there is no annotated development set available, hy- perparameters are set by following best practice reported in previous work. We thus do not have development set numbers. However, in the hyper- parameters section below, we do use a subset of the zero-shot training data to test training convergence under different parameters.
C.1.2 Data Generation For ICT task, we follow Lee et al. (2019b) and ran- domly select at most 5 sentences from a document,
with a mask rate of 0.9. For Ngram models, Gysel et al. (2018) suggests that retrieval models trained with N larger than 16 consistently outperform those trained with N smaller than 8. In addition, further increase N from 16 has little effect on retrieval ac- curacy. Thus, in our experiment we set N to 16 and move the ngram window with a stride of 8 to allow 8 token overlap between consecutive ngrams. For QGen models, each passage is truncated to 512 sentence tokens and feed to the question generation system. Besides, we also run question generator on individual sentences from each document to pro- mote questions that focus on different aspects of the same document. In particular, we select at most top 5 salient sentences from a document, where salience of a sentence is measure as the max IDF value of terms in that sentence. We then feed these sentences to the question generator.
# C.1.3 Hyperparameters
For zero-shot neural retrieval model training, we uniformly sample of a subset of 5K (question, doc- ument) pairs from the training data as a noisy devel- opment set. Instead of ï¬nding the best hyperparam- eter values, we use this subset to ï¬nd the largest batch size and learning rate that lead the training to converge (Smith et al., 2018). Take batch size for example, we always start from the largest batch that can ï¬t in the memory of a â8x8â TPU slice. We gradually decrease the batch size by a factor of 2 if the current value causes training diverge. More details of hyperparameter values of each task are listed in Table 8. Note on Forum data, the maxi- mum batch size for QGen is much larger than other tasks. Looking into the data, we found that queries generated by ICT or Ngram task on Forum data tends to contain higher percentage of noisy sen- tences or ngrams that are either ill-relevant to the topic or too general. For example, âsuggestions are welcomedâ, âany ideas for things to do or place to stayâ. We train each model for 10 epochs, but also truncate training steps to 200,000 to make training time tractable.
For BM25, the only two hyperparameters are k and b. We set these to k = 1.2 and b = 0.75 as advised by Manning et al. (2008).
For the hybrid model QGenHyb, the only hy- perparameter is λ. We set this to 1.0 without any tuning, since this represented an equal trade-off between the two models and we wanted to keep the systems zero-shot. However, we did try exper- imentations. For BioASQ 8b and Forum Ubuntu, values near 1.0 were actually optimal. For BioASQ
Learning Rate Batch Size ICT Ngram QGen Q S A o i B l ICT m u r o F e v a r T u ICT t n u b U Ngram QGen m u r o F Ngram QGen ICT Ngram QGen Q N 1e-5 1e-5 1e-5 2e-6 2e-6 2e-6 1e-6 1e-6 1e-6 1e-5 1e-5 1e-5 8192 8192 8192 1024 1024 4096 512 512 4096 6144 6144 6144
Table 8: Hyperparameters
7b and Forum Travel, values of 2.0 and 2.1 were optimal and led to improvements in MAP from 0.468 â 0.474 and 0.181 â 0.188, respectively.
# C.2 Supervised Models
We also train supervised models on BioASQ and NQ, where we use the development set to do early stopping. For BioASQ, our developement set is data from BioASQ 5 (i.e., disjoint from BioASQ 7 and 8). The development set MAP of our su- pervised model reranking a BM25 system on this data is 52.1, compared to the BioASQ 8 scores of 38.7. For NQ, the MRR on the development set is 0.141. All other hyperparameters remain the same except we use a smaller batch size of 1024, as we observe that using large batch causes the model quickly overï¬t the training data. This may due to the number of training examples is 2 orders of mag- nitude smaller compared to zero-shot setting. For our BioASQ supervised model we follow Pappas et al. (2019) and train it with binary cross-entropy using the top 100 BM25 results as negatives.
# C.3 Computational Resources
C.3.1 Question Generation To train the question generator on 2M questions,
⢠We used a â4x4â slice of v3 Google Cloud TPU.
⢠Training time ranges from 20 hours for the lite model and 6 days for the large model.
Once trained, we need to run the generator over our passage collection.
⢠We distributed computation and used 10,000 machines (CPUs) over the collection.
⢠For BioASQ, the largest dataset, it took less than 40 hours to generate synthetic questions.
We initialize question generation models from either RoBerta base or Roberta large checkpoint (Liu et al., 2019), and the total number of trainable parameters are 67M for the lite model, 152M for the base model and 455M for the large model.
C.3.2 Neural Retrieval Model To train the retrieval models, we need to train the query and passage encoders. We share parameters between the two encoders and initialize them from either base BERT (Devlin et al., 2019) or BioBERT (Lee et al., 2019a) checkpoint. Thus retrieval mod- els trained on BioASQ have 108M trainable pa- rameters and retrieval models trained on NQ and Forum data have 110M trainable parameters. After training, we need to run the passage encoder over every passage in the collection to create the nearest neighbour backend.
⢠Depending on the training batch size, we use either an â8x8â or â4x4â TPU slice.
⢠Training the ângramâ model on BioASQ took the longest time, which completes in roughly 30 hours.
⢠Indexing BioASQ, which is our largest pas- sage collection, with 4000 CPUs which took roughly 4 hours.
Having trained models, the inference task is to encode a query with the neural model and query the distributed nearest neighbour backend to get the top ranked passages. The relevant resources are:
⢠We encode queries on a single CPU.
⢠Our distributed nearest neighbour search uses 20 CPUs to serve the collections.
⢠For BioASQ, our largest collection, to run the inference on the test sets of 500 queries took roughly 1m57s. This is approximately 0.2s per instance to encode the query, run brute- force nearest neighbour search on 10s of mil- lions of examples and return the result. | {
"id": "1702.08734"
} |
2004.14373 | ToTTo: A Controlled Table-To-Text Generation Dataset | We present ToTTo, an open-domain English table-to-text dataset with over
120,000 training examples that proposes a controlled generation task: given a
Wikipedia table and a set of highlighted table cells, produce a one-sentence
description. To obtain generated targets that are natural but also faithful to
the source table, we introduce a dataset construction process where annotators
directly revise existing candidate sentences from Wikipedia. We present
systematic analyses of our dataset and annotation process as well as results
achieved by several state-of-the-art baselines. While usually fluent, existing
methods often hallucinate phrases that are not supported by the table,
suggesting that this dataset can serve as a useful research benchmark for
high-precision conditional text generation. | http://arxiv.org/pdf/2004.14373 | Ankur P. Parikh, Xuezhi Wang, Sebastian Gehrmann, Manaal Faruqui, Bhuwan Dhingra, Diyi Yang, Dipanjan Das | cs.CL, cs.LG | Accepted to EMNLP 2020 | null | cs.CL | 20200429 | 20201006 | 0 2 0 2
t c O 6 ] L C . s c [
3 v 3 7 3 4 1 . 4 0 0 2 : v i X r a
# ToTTo: A Controlled Table-To-Text Generation Dataset
Ankur P. Parikhâ Xuezhi Wangâ Sebastian Gehrmannâ Manaal Faruquiâ Bhuwan Dhingraâ£â Diyi Yangâ ⦠Dipanjan Dasâ â Google Research, New York, NY ⦠Georgia Tech, Atlanta, GA ⣠Carnegie Mellon University, Pittsburgh, PA
[email protected]
# Abstract
We present TOTTO, an open-domain English table-to-text dataset with over 120,000 train- ing examples that proposes a controlled gener- ation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description. To obtain generated targets that are natural but also faithful to the source table, we introduce a dataset construction process where annotators directly revise existing can- didate sentences from Wikipedia. We present systematic analyses of our dataset and anno- tation process as well as results achieved by several state-of-the-art baselines. While usu- ally ï¬uent, existing methods often hallucinate phrases that are not supported by the table, sug- gesting that this dataset can serve as a useful research benchmark for high-precision condi- tional text generation.1
1
# 1 Introduction
Data-to-text generation (Kukich, 1983; McKeown, 1992) is the task of generating a target textual de- scription y conditioned on source content x in the form of structured data such as a table. Ex- amples include generating sentences given bio- graphical data (Lebret et al., 2016), textual de- scriptions of restaurants given meaning representa- tions (Novikova et al., 2017), basketball game sum- maries given boxscore statistics (Wiseman et al., 2017), and generating fun facts from superlative tables in Wikipedia (Korn et al., 2019).
Existing data-to-text tasks have provided an important test-bed for neural generation mod- els (Sutskever et al., 2014; Bahdanau et al., 2014). Neural models are known to be prone to halluci- nation, i.e., generating text that is ï¬uent but not faithful to the source (Vinyals and Le, 2015; Koehn
and Knowles, 2017; Lee et al., 2018; Tian et al., 2019) and it is often easier to assess faithfulness of the generated text when the source content is structured (Wiseman et al., 2017; Dhingra et al., 2019). Moreover, structured data can also test a modelâs ability for reasoning and numerical infer- ence (Wiseman et al., 2017) and for building repre- sentations of structured objects (Liu et al., 2018), providing an interesting complement to tasks that test these aspects in the NLU setting (Pasupat and Liang, 2015; Chen et al., 2019; Dua et al., 2019). However, constructing a data-to-text dataset can be challenging on two axes: task design and an- notation process. First, tasks with open-ended output like summarization (Mani, 1999; Lebret et al., 2016; Wiseman et al., 2017) lack explicit signals for models on what to generate, which can lead to subjective content and evaluation chal- lenges (Kry´sci´nski et al., 2019). On the other hand, data-to-text tasks that are limited to verbalizing a fully speciï¬ed meaning representation (Gardent et al., 2017b) do not test a modelâs ability to per- form inference and thus remove a considerable amount of challenge from the task.
Secondly, designing an annotation process to obtain natural but also clean targets is a signiï¬- cant challenge. One strategy employed by many datasets is to have annotators write targets from scratch (Banik et al., 2013; Wen et al., 2015; Gar- dent et al., 2017a) which can often lack variety in terms of structure and style (Gururangan et al., 2018; Poliak et al., 2018). An alternative is to pair naturally occurring text with tables (Lebret et al., 2016; Wiseman et al., 2017). While more diverse, naturally occurring targets are often noisy and con- tain information that cannot be inferred from the source. This can make it problematic to disentangle modeling weaknesses from data noise.
âWork done during an internship at Google. 1TOTTO is available at https://github.com/
google-research-datasets/totto.
In this work, we propose TOTTO, an open- domain table-to-text generation dataset that intro-
Table Title: Gabriele Becker Section Title: International Competitions Table Description: None
Year Competition Venue Position Event Notes Representing Germany 1992 World Junior Championships 1993 European Junior Championships 1994 World Junior Championships 1995 World Championships Seoul, South Korea San Sebasti´an, Spain Lisbon, Portugal Gothenburg, Sweden 10th (semis) 7th 3rd 12th (semis) 2nd 7th (q-ï¬nals) 3rd 100 m 100 m 4x100 m relay 100 m 4x100 m relay 100 m 4x100 m relay 11.83 11.74 44.60 11.66 (wind: +1.3 m/s) 44.78 11.54 43.01
Original Text: After winning the German under-23 100 m title, she was selected to run at the 1995 World Championships in Athletics both individually and in the relay. Text after Deletion: she at the 1995 World Championships in both individually and in the relay. Text After Decontextualization: Gabriele Becker competed at the 1995 World Championships in both individually and in the relay. Final Text: Gabriele Becker competed at the 1995 World Championships both individually and in the relay.
Table 1: Example in the TOTTO dataset. The goal of the task is given the table, table metadata (such as the title), and set of highlighted cells, to produce the ï¬nal text. Our data annotation process revolves around annotators iteratively revising the original text to produce the ï¬nal text.
duces a novel task design and annotation process to address the above challenges. First, TOTTO proposes a controlled generation task: given a Wikipedia table and a set of highlighted cells as the source x, the goal is to produce a single sen- tence description y. The highlighted cells identify portions of potentially large tables that the target sentence should describe, without specifying an explicit meaning representation to verbalize.
For dataset construction, to ensure that targets are natural but also faithful to the source table, we request annotators to revise existing Wikipedia candidate sentences into target sentences, instead of asking them to write new target sentences (Wen et al., 2015; Gardent et al., 2017a). Table 1 presents a simple example from TOTTO to illustrate our an- notation process. The table and Original Text were obtained from Wikipedia using heuristics that col- lect pairs of tables x and sentences y that likely have signiï¬cant semantic overlap. This method en- sures that the target sentences are natural, although they may only be partially related to the table. Next, we create a clean and controlled generation task by requesting annotators to highlight a subset of the table that supports the original sentence and revise the latter iteratively to produce a ï¬nal sentence (see §5). For instance, in Table 1, the annotator has cho- sen to highlight a set of table cells (in yellow) that support the original text. They then deleted phrases from the original text that are not supported by the table, e.g., After winning the German under-23 100 m title, and replaced the pronoun she with an entity
Gabriele Becker. The resulting ï¬nal sentence (Fi- nal Text) serves as a more suitable generation target than the original sentence. This annotation process makes our dataset well suited for high-precision conditional text generation.
Due to the varied nature of Wikipedia tables, TOTTO covers a signiï¬cant variety of domains while containing targets that are completely faith- ful to the source (see Table 4 and the Appendix for more complex examples). Our experiments demon- strate that state-of-the-art neural models struggle to generate faithful results, despite the high qual- ity of the training data. These results suggest that our dataset could serve as a useful benchmark for controllable data-to-text generation.
# 2 Related Work
TOTTO differs from existing datasets in both task design and annotation process as we describe below. A summary is given in Table 2.
Task Design Most existing table-to-text datasets are restricted in topic and schema such as WEATH- ERGOV (Liang et al., 2009), ROBOCUP (Chen and Mooney, 2008), Rotowire (Wiseman et al., 2017, basketball), E2E (Novikova et al., 2016, 2017, restaurants), KBGen (Banik et al., 2013, bi- ology), and Wikibio (Lebret et al., 2016, biogra- phies). In contrast, TOTTO contains tables with various schema spanning various topical categories all over Wikipedia. Moreover, TOTTO takes a different view of content selection compared to
Dataset Wikibio (Lebret et al., 2016) Rotowire (Wiseman et al., 2017) WebNLG (Gardent et al., 2017b) E2E (Novikova et al., 2017) LogicNLG (Chen et al., 2020) TOTTO Train Size Domain Target Quality Target Source Noisy 583K Biographies Noisy 4.9K Basketball Clean 25.3K 15 DBPedia categories Clean 50.6K Restaurants 28.5K Wikipedia (open-domain) Clean 120K Wikipedia (open-domain) Clean Content Selection Not speciï¬ed Not speciï¬ed Fully speciï¬ed Partially speciï¬ed Columns via entity linking Wikipedia Rotowire Annotator Generated Annotator Generated Annotator Generated Wikipedia (Annotator Revised) Annotator highlighted
Table 2: Comparison of popular data-to-text datasets. TOTTO combines the advantages of annotator-generated and fully natural text through a revision process.
existing datasets. Prior to the advent of neural ap- proaches, generation systems typically separated content selection (what to say) from surface re- alization (how to say it) (Reiter and Dale, 1997). Thus many generation datasets only focused on the latter stage (Wen et al., 2015; Gardent et al., 2017b). However, this decreases the task complex- ity, since neural systems have already been quite powerful at producing ï¬uent text. Some recent datasets (Wiseman et al., 2017; Lebret et al., 2016) have proposed incorporating content selection into the task by framing it as a summarization problem. However, summarization is much more subjective, which can make the task underconstrained and difï¬- cult to evaluate (Kry´sci´nski et al., 2019). We place TOTTO as a middle-ground where the highlighted cells provide some guidance on the topic of the tar- get but still leave a considerable amount of content planning to be done by the model.
the table. This enables TOTTO to maintain the varied language and structure found in natural sen- tences while producing cleaner targets. The tech- nique of editing exemplar sentences has been used in semiparametric generation models (Guu et al., 2018; Pandey et al., 2018; Peng et al., 2019) and crowd-sourcing small, iterative changes to text has been shown to lead to higher-quality data and a more robust annotation process (Little et al., 2010). Perez-Beltrachini and Lapata (2018) also employed a revision strategy to construct a cleaner evaluation set for Wikibio (Lebret et al., 2016).
Concurrent to this work, Chen et al. (2020) pro- posed LogicNLG which also uses Wikipedia tables, although omitting some of the more complex struc- tured ones included in our dataset. Their target sentences are annotator-generated and their task is signiï¬cantly more uncontrolled due to the lack of annotator highlighted cells.
Annotation Process There are various existing strategies to create the reference target y. One strategy employed by many datasets is to have an- notators write targets from scratch given a represen- tation of the source (Banik et al., 2013; Wen et al., 2015; Gardent et al., 2017a). While this will result in a target that is faithful to the source data, it often lacks variety in terms of structure and style (Guru- rangan et al., 2018; Poliak et al., 2018). Domain- speciï¬c strategies such as presenting an annotator an image instead of the raw data (Novikova et al., 2016) are not practical for some of the complex tables that we consider. Other datasets have taken the opposite approach: ï¬nding real sentences on the web that are heuristically selected in a way that they discuss the source content (Lebret et al., 2016; Wiseman et al., 2017). This strategy typically leads to targets that are natural and diverse, but they may be noisy and contain information that cannot be inferred from the source (Dhingra et al., 2019).To construct TOTTO, we ask annotators to revise ex- isting candidate sentences from Wikipedia so that they only contain information that is supported by
# 3 Preliminaries
Our tables come from English Wikipedia articles and thus may not be regular grids.2 For simplicity, we deï¬ne a table t as a set of cells t = {cj}Ï j=1 where Ï is the number of cells in the table. Each cell contains: (1) a string value, (2) whether or not it is a row or column header, (3) the row and column position of this cell in the table, (4) The number of rows and columns this cell spans.
Let m = (mpage-title, msection-title, msection-text) indicate table metadata, i.e, the page title, sec- tion title, and up to the ï¬rst 2 sentences of the section text (if present) respectively. These ï¬elds can help provide context to the tableâs contents. Let s = (s1, ..., sη) be a sentence of length η. We deï¬ne an annotation example3 d = (t, m, s) a tu- ple of table, table metadata, and sentence. Here, D = {dn}N n=1 refers to a dataset of annotation
2In Wikipedia, some cells may span multiple rows and
columns. See Table 1 for an example.
3An annotation example is different than a task example since the annotator could perform a different task than the model.
examples of size N .
# 4 Dataset Collection
We ï¬rst describe how to obtain annotation exam- ples d for subsequent annotation. To prevent any overlap with the Wikibio dataset (Lebret et al., 2016), we do not use infobox tables. We employed three heuristics to collect tables and sentences:
Number matching We search for tables and sen- tences on the same Wikipedia page that overlap with a non-date number of at least 3 non-zero digits. This approach captures most of the table-sentence pairs that describe statistics (e.g., sports, election, census, science, weather).
Cell matching We extract a sentence if it has tokens matching at least 3 distinct cell contents from the same row in the table. The intuition is that most tables are structured, and a row is usually used to describe a complete event.
Hyperlinks The above heuristics only consider sentences and tables on the same page. We also ï¬nd examples where a sentence s contains a hyper- link to a page with a title that starts with List (these pages typically only consist of a large table). If the table t on that page also has a hyperlink to the page containing s, then we consider this to be an anno- tation example. Such examples typically result in more diverse examples than the other two heuris- tics, but also add more noise, since the sentence may only be distantly related to the table.
Using the above heuristics we obtain a set of examples D. We then sample a random subset of tables for annotation, excluding tables with format- ting issues: 191,693 examples for training, 11,406 examples for development, and 11,406 examples for test. Among these examples, 35.8% were de- rived from number matching, 29.4% from cell matching, and 34.7% from hyperlinks.
# 5 Data Annotation Process
The collected annotation examples are noisy since a sentence s may only be partially supported by the table t. We thus deï¬ne an annotation process that guides annotators through incremental changes to the original sentence. This allows us to measure annotator agreement at every step of the process, which is atypical in existing generation datasets.
The primary annotation task consists of the fol- lowing steps: (1) Table Readability, (2) Cell high-
lighting, (3) Phrase Deletion, (4) Decontextualiza- tion. After these steps we employ a ï¬nal secondary annotation task for grammar correction. Each of these are described below and more examples are provided in the Table 3.
Table Readability If a table is not readable, then the following steps will not need to be completed. This step is only intended to remove fringe cases where the table is poorly formatted or otherwise not understandable (e.g., in a different language). 99.5% of tables are determined to be readable.
Cell Highlighting An annotator is instructed to highlight cells that support the sentence. A phrase is supported by the table if it is either directly stated in the cell contents or meta-data, or can be logically inferred by them. Row and column headers do not need to be highlighted. If the table does not support any part of the sentence, then no cell is marked and no other step needs to be completed. 69.7% of ex- amples are supported by the table. For instance, in Table 1, the annotator highlighted cells that support the phrases 1995, World Championships, individ- ually, and relay. The set of highlighted cells are denoted as a subset of the table: thighlight â t.
Phrase Deletion This step removes phrases in the sentence unsupported by the selected table cells. Annotators are restricted such that they are only able to delete phrases, transforming the original sentence: s â sdeletion. In Table 1, the annotator transforms s by removing several phrases such as After winning the German under-23 100 m title.
On average, sdeletion is different from s for 85.3% of examples and while s has an average length of 26.6 tokens, this is reduced to 15.9 for sdeletion. We found that the phrases annotators often disagreed on corresponded to verbs purportedly supported by the table.
Decontextualization A given sentence s may contain pronominal references or other phrases that depend on context. We thus instruct annotators to identify the main topic of the sentence; if it is a pronoun or other ambiguous phrase, we ask them to replace it with a named entity from the table or metadata. To discourage excessive modiï¬cation, they are instructed to make at most one replace- ment.4 This transforms the sentence yet again:
4Based on manual examination of a subset of 100 exam- ples, all of them could be decontextualized with only one replacement. Allowing annotators to make multiple replace- ments led to excessive clariï¬cation.
Original After Deletion After Decontextualization Final He later raced a Nissan Pulsar and then a Mazda 626 in this series, with a highlight of ï¬nishing runner up to Phil Morriss in the 1994 Australian Production Car Championship. On July 6, 2008, Webb failed to qual- ify for the Beijing Olympics in the 1500 m after ï¬nishing 5th in the US Olympic Trials in Eugene, Oregon with a time of 3:41.62. Out of the 17,219 inhabitants, 77 per- cent were 20 years of age or older and 23 percent were under the age of 20. He later raced a Nissan Pulsar and then a Mazda 626 in this series, with a highlight of ï¬nishing runner up to Phil Morriss in the 1994 Australian Production Car Championship. On July 6, 2008, Webb failed to qualify for the Beijing Olympics in the 1500 m after ï¬nishing 5th in the US Olympic Trials in Eugene, Ore- gon with a time of 3:41.62. Out of the 17,219 inhabitants , 77 percent were 20 years of age or older and 23 percent were under the age of 20. Murray Carter raced a Nissan Pul- sar and ï¬nished as a runner up in the 1994 Australian Production Car Championship. On July 6, 2008, Webb ï¬nishing 5th in the Olympic Trials in Eugene, Oregon with a time of 3:41.62. Rawdat Al Khail had a population of 17,219 inhabitants. Murray Carter raced a Nissan Pul- sar and ï¬nished as runner up in the 1994 Australian Production Car Championship. On July 6, 2008, Webb ï¬nished 5th in the Olympic Trials in Eugene, Oregon, with a time of 3:41.62. Rawdat Al Khail had a population of 17,219 inhabitants.
Table 3: Examples of annotation process. Deletions are indicated in red strikeouts, while added named entities are indicated in underlined blue. Signiï¬cant grammar ï¬xes are denoted in orange.
sdeletion â sdecontext. replaced she with Gabriele Becker. In Table 1, the annotator
Since the previous steps can lead to ungram- matical sentences, annotators are also instructed to ï¬x the grammar to improve the ï¬uency of the sentence. We ï¬nd that sdecontext is different than sdeletion 68.3% of the time, and the average sen- tence length increases to 17.2 tokens for sdecontext compared to 15.9 for sdeletion.
Entertainment Literature Politics Broadcasting Europe North America Performing Arts Countries
Secondary Annotation Task Due to the com- plexity of the task, sdecontext may still have gram- matical errors, even if annotators were instructed to ï¬x grammar. Thus, a second set of annotators were asked to further correct the sentence and were shown the table with highlighted cells as additional context. This results in the ï¬nal sentence sï¬nal. On average, annotators edited the sentence 27.0% of the time, and the sentence length slightly increased to 17.4 tokens from 17.2.
Figure 1: Topic distribution of our dataset.
As one can see, the table readability task has an agreement of 99.38%. The cell highlighting task is more challenging. 73.74% of the time all three annotators completely agree on the set of cells which means that they chose the exact same set of cells. The Fleissâ kappa is 0.856, which is regarded as âalmost perfect agreementâ (0.81 - 1.00) according to (Landis and Koch, 1977).
# 6 Dataset Analysis
Basic statistics of TOTTO are described in Table 5. The number of unique tables and vocabulary size at- tests to the open domain nature of our dataset. Fur- thermore, while the median table is actually quite large (87 cells), the median number of highlighted cells is signiï¬cantly smaller (3). This indicates the importance of the cell highlighting feature of our dataset toward a well-deï¬ned text generation task.
With respect to the sentence revision tasks, we see that the agreement slightly degrades as more steps are performed. We compute single reference BLEU among all pairs of annotators for examples in our development set (which only contains ex- amples where both annotators chose thightignt # 0). As the sequence of revisions are performed, the annotator agreement gradually decreases in terms of BLEU-4: 82.19 â 72.56 â 68.98. This is considerably higher than the BLEU-4 between the original sentence s and s¢pqi (43.17).
# 6.1 Annotator Agreement
Table 6 shows annotator agreement over the devel- opment set for each step of the annotation process. We compute annotator agreement and Fleissâ kappa (Fleiss, 1971) for table readability and highlighted cells, and BLEU-4 score between annotated sen- tences in different stages.
# 6.2 Topics and Linguistic Phenomena
We use the Wikimedia Foundationâs topic catego- rization model (Asthana and Halfaker, 2018) to sort the categories of Wikipedia articles where the
Table Title: Robert Craig (American football) Section Title: National Football League statistics Table Description:None
YEAR 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 Totals TEAM ATT 176 155 214 204 215 310 271 141 162 105 38 1991 SF SF SF SF SF SF SF SF RAI MIN MIN - RUSHING YDS 725 649 1050 830 815 1502 1054 439 590 416 119 8189 AVG 4.1 4.2 4.9 4.1 3.8 4.8 3.9 3.1 3.6 4.0 3.1 4.1 LNG 71 28 62 25 25 46 27 26 15 21 11 71 TD 8 4 9 7 3 9 6 1 1 4 1 56 NO. 48 71 92 81 66 76 49 25 17 22 19 566 YDS 427 675 1016 624 492 534 473 201 136 164 169 4911 RECEIVING LNG AVG 23 8.9 64 9.5 73 11 48 7.7 35 7.5 22 7.0 44 9.7 31 8.0 20 8.0 22 7.5 31 8.9 73 8.7 TD 4 3 6 0 1 1 1 0 0 0 1 17 Target Text: Craig ï¬nished his eleven NFL seasons with 8,189 rushing yards and 566 receptions for 4,911 receiving yards.
Table 4: An example in the TOTTO dataset that involves numerical reasoning over the table structure.
Property Value Training set size Number of target tokens Avg Target Length (tokens) Target vocabulary size Unique Tables Rows per table (Median/Avg) Cells per table (Median/Avg) No. of Highlighted Cell (Median/Avg) 120,761 1,268,268 17.4 136,777 83,141 16 / 32.7 87 / 206.6 3 / 3.55 Development set size Test set size 7,700 7,700
Types Percentage Require reference to page title Require reference to section title Require reference to table description Reasoning (logical, numerical, temporal etc.) Comparison across rows / columns / cells Require background information 82% 19% 3% 21% 13% 12%
Table 7: Distribution of different linguistic phenomena among 100 randomly chosen sentences.
Table 5: TOTTO dataset statistics.
Annotation Stage Measure Result Agreement / κ Table Readability Agreement / κ Cell Highlighting After Deletion BLEU-4 After Decontextualization BLEU-4 BLEU-4 Final 99.38 / 0.646 73.74 / 0.856 82.19 72.56 68.98
Table 6: Annotator agreement over the development set. If possible, we measure the total agreement (in %) and the Fleissâ Kappa (κ). Otherwise, we report the BLEU- 4 between annotators.
tables come from into a 44-category ontology.5 Fig- ure 1 presents an aggregated topic analysis of our dataset. We found that the Sports and Countries topics together comprise 53.4% of our dataset, but the other 46.6% is composed of broader topics such as Performing Arts, Politics, and North America. Our dataset is limited to topics that are present in Wikipedia.
current systems. Table 4 gives one example that requires reasoning (refer to the Appendix for more examples).
# 6.3 Training, Development, and Test Splits
After the annotation process, we only consider ex- amples where the sentence is related to the table, i.e., thightignt # (. This initially results in a training set Dorig-train Of size 131,849 that we further filter as described below. Each example in the develop- ment and test sets was annotated by three annota- tors. Since the machine learning task uses thightight as an input, it is challenging to use three different sets of highlighted cells in evaluation. Thus, we only use a single randomly chosen thightignt while using the three Sfnai as references for evaluation. We only use examples where at least 2 of the 3 annotators chose thightight # 0, resulting in devel- opment and test sets of size 7,700 each.
Table 7 summarizes the fraction of examples that require reference to the metadata, as well as some of the challenging linguistic phenomena in the dataset that potentially pose new challenges to
5https://en.wikipedia.org/wiki/ Wikipedia:WikiProject_Council/Directory
Overlap and Non-Overlap Sets Without any modiï¬cation Dorig-train, Ddev, and Dtest may con- tain many similar tables. Thus, to increase the gen- eralization challenge, we ï¬lter Dorig-train to remove some examples based on overlap with Ddev, Dtest. For a given example d, let h(d) denote its set of header values and similarly let h(D) be the set of
header values for a given dataset D. We remove examples d from the training set where h(d) is both rare in the data as well as occurs in either the development or test sets. Speciï¬cally, Dtrain is deï¬ned as:
Durain := {d : h(d) ¢ (R(Deev) U h( Deest)) oF count (h(d), Dorig-train) > a}.
The count(h(d), Dorig-train) function returns the number of examples in Dorig-train with header h(d). To choose the hyperparameter α we ï¬rst split the test set as follows:
Dtest-overlap := {d : h(d) â h(Dtrain)} Dtest-nonoverlap := {d : h(d) /â h(Dtrain)}
The development set is analogously divided into Ddev-overlap and Ddev-nonoverlap. We then choose α = 5 so that Dtest-overlap and Dtest-nonoverlap have similar size. After ï¬ltering, the size of Dtrain is 120,761, and Ddev-overlap, Ddev-nonoverlap, Dtest-overlap, and Dtest-nonoverlap have sizes 3784, 3916, 3853, and 3847 respectively.
# 7 Machine Learning Task Construction
In this work, we focus on the following task: Given a table t, related metadata m (page title, section ti- tle, table section text) and a set of highlighted cells thighlight, produce the ï¬nal sentence sï¬nal. Mathe- matically this can be described as learning a func- tion f : x â y where x = (t, m, thighlight) and y = sï¬nal. This task is different from what the an- notators perform, since they are provided a starting sentence requiring revision. Therefore, the task is more challenging, as the model must generate a new sentence instead of revising an existing one.
# 8 Experiments
We present baseline results on TOTTO by examin- ing three existing state-of-the-art approaches (Note that since our tables do not have a ï¬xed schema it is difï¬cult to design a template baseline).
⢠BERT-to-BERT (Rothe et al., 2020): A Trans- former encoder-decoder model (Vaswani et al., 2017) where the encoder and decoder are both initialized with BERT (Devlin et al., 2018). The original BERT model is pre-trained with both Wikipedia and the Books corpus (Zhu et al., 2015), the former of which contains our (unrevised) test targets. Thus, we also
pre-train a version of BERT on the Books cor- pus only, which we consider a more correct baseline. However, empirically we ï¬nd that both models perform similarly in practice (Ta- ble 8).
⢠Pointer-Generator (See et al., 2017): A Seq2Seq model with attention and copy mech- anism. While originally designed for summa- rization it is commonly used in data-to-text as well (Gehrmann et al., 2018).
⢠Puduppully et al. (2019): A Seq2Seq model with an explicit content selection and planning mechanism designed for data-to-text.
Details about hyperparameter settings are provided in the Appendix. Moreover, we explore different strategies of representing the source content that resemble standard linearization approaches in the literature (Lebret et al., 2016; Wiseman et al., 2017)
Full Table The simplest approach is simply to use the entire table as the source, adding special tokens to mark which cells have been highlighted. However, many tables can be very large and this strategy performs poorly. ⢠Subtable Another option is to only use the highlighted cells thighlight â t with the heuris- tically extracted row and column header for each highlighted cell. This makes it easier for the model to only focus on relevant content but limits the ability to perform reasoning in the context of the table structure (see Table 11). Overall though, we ï¬nd this representation leads to higher performance.
In all cases, the cells are linearized with row and column separator tokens. We also experiment with prepending the table metadata to the source table.6
Evaluation metrics The model output is evalu- ated using two automatic metrics: BLEU (Papineni et al., 2002) and PARENT (Dhingra et al., 2019). PARENT is a metric recently proposed speciï¬cally for data-to-text evaluation that takes the table into account. We modify it to make it suitable for our dataset, described in the Appendix. Human evalua- tion is described in § 8.2.
# 8.1 Results
Table 8 shows our results against multiple refer- ences with the subtable input format. Both the
6The table section text is ignored, since it is usually miss- ing or irrelevant.
Model Overall Overlap Subset Nonoverlap Subset BLEU PARENT BLEU PARENT BLEU BERT-to-BERT (Books+Wiki) BERT-to-BERT (Books) Pointer-Generator Puduppully et al. (2019) 44.0 43.9 41.6 19.2 52.6 52.6 51.6 29.2 52.7 52.7 50.6 24.5 58.4 58.4 58.0 32.5 35.1 34.8 32.2 13.9
Table 8: Performance compared to multiple references on the test set for the subtable input format with metadata.
Model Fluency (%) Faithfulness (%) Covered Cells (%) Less/Neutral/More Coverage w.r.t. Ref Overall Overlap Non-overlap Oracle BERT-to-BERT (Books) BERT-to-BERT (Books+Wiki) Oracle BERT-to-BERT (Books) BERT-to-BERT (Books+Wiki) Oracle BERT-to-BERT (Books) BERT-to-BERT (Books+Wiki) 99.3 88.1 87.3 99.6 89.6 89.8 99.1 86.9 84.8 93.6 76.2 73.6 96.5 78.7 81.1 91.4 74.2 66.6 94.8 89.0 87.3 95.5 92.1 91.0 94.3 86.4 83.8 18.3 / 61.7 / 20.0 49.2 / 36.2 / 14.5 53.9 / 32.9 / 13.2 19.8 / 62.8 / 17.4 42.0 / 43.7 / 14.3 47.8 / 39.2 / 13.1 17.0 / 60.9 / 22.1 55.5 / 29.8 / 14.7 60.1 / 26.6 / 13.3
Table 9: Human evaluation over references (to compute Oracle) and model outputs. For Fluency, we report the percentage of outputs that were completely ï¬uent. In the last column X/Y /Z means X% and Z% of the candidates were deemed to be less and more informative than the reference respectively and Y% were neutral.
Data Format BLEU PARENT subtable w/ metadata subtable w/o metadata full table w/ metadata full table w/o metadata 43.9 36.9 26.8 20.9 52.6 42.6 30.7 22.2
Table 10: Multi-reference performance of different in- put representations for BERT-to-BERT Books model.
⢠Faithfulness (Precision) - A candidate sen- tence is considered faithful if all pieces of information are supported by either the table or one of the references. Any piece of un- supported information makes the candidate unfaithful.
BERT-to-BERT models perform the best, followed by the pointer generator model.7 We see that for all models the performance on the non-overlap set is signiï¬cantly lower than that of the overlap set, indicating that slice of our data poses signiï¬cant challenges for machine learning models. We also observe that the baseline that separates content se- lection and planning performs quite poorly. We attest this to the fact that it is engineered to the Rotowire data format and schema.
Table 10 explores the effects of the various in- put representations (subtable vs. full table) on the BERT-to-BERT model. We see that the full ta- ble format performs poorly even if it is the most knowledge-preserving representation.
# 8.2 Human evaluation
For each of the 2 top performing models in Table 8, we take 500 random outputs and perform human evaluation using the following axes:
⢠Fluency - A candidate sentence is ï¬uent if it is grammatical and natural. The three choices are Fluent, Mostly Fluent, Not Fluent.
7Note the BLEU scores are relatively high due to the fact that our task is more controlled than other text generation tasks and that we have multiple references.
Covered Cells (Recall) - Percentage of high- lighted cells the candidate sentence covers. ⢠Coverage with Respect to Reference (Re- call) - We ask whether the candidate is strictly more or less informative than each reference (or neither, which is referred to as neutral).
We further compute an oracle upper-bound by treating one of the references as a candidate and evaluating it compared to the table and other ref- erences. The results, shown in Table 9, attest to the high quality of our human annotations since the oracle consistently achieves high performance. All the axes demonstrate that there is a considerable gap between the model and oracle performance.
This difference is most easily revealed in the last column when annotators are asked to directly com- pare the candidate and reference. As expected, the oracle has similar coverage to the reference (61.7% neutral) but both baselines demonstrate consider- ably less coverage. According to an independent- sample t-test, this difference is signiï¬cant at a p < 0.001 level for both baselines. Furthermore, the baselines are considerably less faithful than the reference. The faithfulness of both models is signiï¬cantly lower than the reference (Ï2 test with p < 0.001). The models do not differ sig- niï¬cantly from each other, except for faithfulness
ID Reference Full table Decoder output (w/ metadata) Subtable w/o metadata Subtable 1 2 3 4 5 6 in the 1939 currie cup, western province lost to transvaal by 17â6 in cape town. a second generation of microdrive was announced by ibm in 2000 with increased capacities at 512 mb and 1 gb. the 1956 grand prix motorcycle rac- ing season consisted of six grand prix races in ï¬ve classes: 500cc, 350cc, 250cc, 125cc and sidecars 500cc. in travis kelceâs last collegiate sea- son, he set personal career highs in receptions (45), receiving yards (722), yards per receptions (16.0) and receiving touchdowns (8). in the 2012 ï¬lm pizza bagel, michael pillarella portrays tommy. the album shari addison placed at no. 176 on the billboard 200 along with no. 5 on the gospel albums. northern transvaal and western province were drawn by 16 in 1989 and in 1995, western province were defeated by the sharks in durban. the microdrive models formed 512 megabyte and 1 gigabyte in 2000. the 1966 grand prix motorcycle racing season consisted of seven grand prix races in ï¬ve classes: 500cc, 350cc, 250cc, 125cc and sidecars 500cc. during the 2011 season, travis kel- ceum caught 76 receptions for 1,612 yards and 14 touchdowns. in 2012, groff played the role of tommy in the ï¬lm pizza bagel. shari addisonâs â 5â, reached num- ber 176 on the billboard 200. the ï¬rst currie cup was played in 1939 in transvaal1 at new- lands, with western province win- ning 17â6. there were 512 microdrive models in 2000: 1 gigabyte. the 1956 grand prix motorcycle rac- ing season consisted of eight grand prix races in ï¬ve classes: 500cc, 350cc, 250cc, 125cc and sidecars 500cc. travis kelce ï¬nished the 2012 sea- son with 45 receptions for 722 yards (16.0 avg.) and eight touch- downs. in 2012, pillarella appeared as tommy in the ï¬lm pizza bagel. shari addison charted at number 176 on the us chart and at number 5 on the us billboard 200. the ï¬rst scottish cup was played in 1939 in transvaal1, where transvaal1 western defeated province 17â6 in the ï¬nal at newlands in cape town. corteteâs megabyte. production was 512 the 1955 grand prix motorcycle rac- ing season consisted of eight grand prix races in ï¬ve classes: 500cc, 350cc, 250cc, 125cc and sidecars 500cc. kelce ï¬nished the 2012 season with 45 catches for 722 yards (16.0 avg.) and eight touchdowns. harris played the role of tommy in the 2012 ï¬lm pizza bagel. the shari addison peaked at number 176 on the billboard 200 chart.
Table 11: Decoder output examples from BERT-to-BERT Books models on the development set. The âsubtable with metadataâ model achieves the highest BLEU. Red indicates model errors and blue denotes interesting refer- ence language not in the model output.
in the non-overlap case, where we see a moderate effect favoring the book model.
model is unable to make these inferences from the simplistic source representation that we used.
# 9 Model Errors and Challenges
Table 11 shows predictions from the BERT-to- BERT Books model to illustrate challenges existing models face.
Hallucination The model sometimes outputs phrases such as ï¬rst, winning that seem reason- able but are not faithful to the table. This halluci- nation phenomenon has been widely observed in other existing data-to-text datasets (Lebret et al., 2016; Wiseman et al., 2017). However, the noisy references in these datasets make it difï¬cult to dis- entangle model incapability from data noise. Our dataset serves as strong evidence that even when the reference targets are faithful to the source, neu- ral models still struggle with faithfulness.
Rare topics Another challenge revealed by the open domain nature of our task is rare or complex topics at the tail of the topic distribution (Figure 1). For instance, example 2 of Table 11 concerns mi- crodrive capacities which is challenging.
Evaluation metrics Many of the above issues are difï¬cult to capture with metrics like BLEU since the reference and prediction may only differ by a word but largely differ in terms of semantic meaning. This urges for better metrics possibly built on learned models (Wiseman et al., 2017; Ma et al., 2019; Sellam et al., 2020). Thus, while we have a task leaderboard, it should not be interpreted as the deï¬nitive measure of model performance.
# 10 Conclusion
We presented TOTTO, a table-to-text dataset that presents a controlled generation task and a data annotation process based on itera- tive sentence revision. We also provided several state-of-the-art baselines, and demon- strated TOTTO could serve as a useful research benchmark for model and metric development. TOTTO is available at https://github.com/ google-research-datasets/totto.
# Acknowledgements
Diverse table structure and numerical reason- In example 3, inferring six and ï¬ve correctly ing requires counting table rows and columns. Sim- ilarly, in example 4, the phrases last and career highs can be deduced from the table structure and with comparisons over the columns. However, the
The authors wish to thank Ming-Wei Chang, Jonathan H. Clark, Kenton Lee, and Jennimaria Palomaki for their insightful discussions and sup- port. Many thanks also to Ashwin Kakarla and his team for help with the annotations.
# References
Sumit Asthana and Aaron Halfaker. 2018. With few eyes, all hoaxes are deep. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW):21.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. In Proc. of EMNLP.
Eva Banik, Claire Gardent, and Eric Kow. 2013. The kbgen challenge. In Proc. of European Workshop on NLG.
David L Chen and Raymond J Mooney. 2008. Learn- ing to sportscast: a test of grounded language acqui- sition. In Proc. of ICML.
Wenhu Chen, Jianshu Chen, Yu Su, Zhiyu Chen, and William Yang Wang. 2020. Logical natural lan- guage generation from open-domain tables. In Proc. of ACL.
Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2019. TabFact: A large-scale dataset for table-based fact veriï¬cation. In Proc. of ICLR.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proc. of NAACL.
Bhuwan Dhingra, Manaal Faruqui, Ankur Parikh, Ming-Wei Chang, Dipanjan Das, and William W Co- hen. 2019. Handling divergent reference texts when evaluating table-to-text generation. In Proc. of ACL.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requir- ing discrete reasoning over paragraphs. In Proc. of NAACL.
John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization.
Joseph L. Fleiss. 1971. Measuring nominal scale agree- ment among many raters. Psychological Bulletin, 76(5):378.
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017a. Creating train- In Proc. of ing corpora for NLG micro-planning. ACL.
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017b. The WebNLG challenge: Generating text from RDF data. In Proc. of INLG.
Sebastian Gehrmann, Falcon Z Dai, Henry Elder, and Alexander M Rush. 2018. End-to-end content and plan selection for data-to-text generation. In INLG.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R Bowman, and Noah A Smith. 2018. Annotation artifacts in natu- ral language inference data. In Proc. of NAACL.
Kelvin Guu, Tatsunori B Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating sentences by editing prototypes. TACL, 6:437â450.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A In Proc. of method for stochastic optimization. ICLR.
Philipp Koehn and Rebecca Knowles. 2017. Six chal- In Proc. of lenges for neural machine translation. WMT.
Flip Korn, Xuezhi Wang, You Wu, and Cong Yu. 2019. Automatically generating interesting facts from wikipedia tables. In Proceedings of the 2019 International Conference on Management of Data, SIGMOD â19, page 349â361, New York, NY, USA. Association for Computing Machinery.
Wojciech Kry´sci´nski, Nitish Shirish Keskar, Bryan Mc- Cann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proc. of EMNLP.
Karen Kukich. 1983. Design of a knowledge-based re- port generator. In Proc. of ACL.
J. Richard Landis and Gary G. Koch. 1977. The mea- surement of observer agreement for categorical data. Biometrics, 33(1):159â174.
R´emi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with In Proc. of application to the biography domain. EMNLP.
Katherine Lee, Orhan Firat, Ashish Agarwal, Clara Fannjiang, and David Sussillo. 2018. Hallucinations in neural machine translation. In Open Review.
Percy Liang, Michael I Jordan, and Dan Klein. 2009. Learning semantic correspondences with less super- vision. In Proc. of ACL.
Greg Little, Lydia B Chilton, Max Goldman, and Robert C Miller. 2010. Turkit: human computation In Proceedings of algorithms on mechanical turk. the 23nd annual ACM symposium on User interface software and technology, pages 57â66.
Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-text generation by structure-aware seq2seq learning. In Proc. of AAAI.
Qingsong Ma, Johnny Wei, OndËrej Bojar, and Yvette Graham. 2019. Results of the WMT19 metrics shared task: Segment-level and strong mt sys- In Proceedings of the tems pose big challenges. Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 62â90.
Inderjeet Mani. 1999. Advances in automatic text sum- marization. MIT press.
Kathleen McKeown. 1992. Text generation. Cam- bridge University Press.
Jekaterina Novikova, OndËrej DuËsek, and Verena Rieser. 2017. The E2E dataset: New challenges for end-to- end generation. In Proc. of SIGDIAL.
Jekaterina Novikova, Oliver Lemon, and Verena Rieser. 2016. Crowd-sourcing nlg data: Pictures elicit bet- ter data. In Proc. of INLG.
Gaurav Pandey, Danish Contractor, Vineet Kumar, and Sachindra Joshi. 2018. Exemplar encoder-decoder for neural conversation generation. In Proc. of ACL.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In Proc. of ACL.
Panupong Pasupat and Percy Liang. 2015. Composi- tional semantic parsing on semi-structured tables. In Proc. of ACL.
Hao Peng, Ankur P Parikh, Manaal Faruqui, Bhuwan Dhingra, and Dipanjan Das. 2019. Text generation with exemplar-based adaptive decoding. In Proc. of NAACL.
Laura Perez-Beltrachini and Mirella Lapata. 2018. Bootstrapping generators from noisy data. In Proc. of NAACL.
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language infer- ence. In *SEM@NAACL-HLT.
Ratish Puduppully, Li Dong, and Mirella Lapata. 2019. Data-to-text generation with content selection and planning. In Proc. of AAAI.
Ehud Reiter and Robert Dale. 1997. Building applied natural language generation systems. Natural Lan- guage Engineering, 3(1):57â87.
Sascha Rothe, Shashi Narayan, and Aliaksei Severyn. 2020. Leveraging pre-trained checkpoints for se- quence generation tasks. In Proc. of TACL.
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proc. of ACL.
Thibault Sellam, Dipanjan Das, and Ankur P Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proc. of ACL.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Proc. of NIPS.
Ran Tian, Shashi Narayan, Thibault Sellam, and Ankur P Parikh. 2019. Sticking to the facts: Con- ï¬dent decoding for faithful data-to-text generation. arXiv preprint arXiv:1910.08684.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. of NIPS.
Oriol Vinyals and Quoc Le. 2015. A neural conver- In Proc. of ICML Deep Learning sational model. Workshop.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural lan- guage generation for spoken dialogue systems. In Proc. of EMNLP.
Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in data-to-document gen- eration. In Proc. of EMNLP.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proc. of ICCV.
# A Appendix
The Appendix contains the following contents:
⢠Information about the variant of the PARENT metric (Dhingra et al., 2019) used for evalua- tion.
⢠More details about the baselines.
⢠Examples of more complex tables in our dataset (Figure 2-Figure 5).
# A.1 PARENT metric
PARENT (Dhingra et al., 2019) is a metric recently proposed speciï¬cally for data-to-text evaluation that takes the table into account. We modify it to make it suitable for our dataset. Let (xn, yn, Ëyn) denote one example that consists of a (source, tar- get, prediction) tuple. PARENT is deï¬ned at an instance level as:
P AREN T (xn, yn, Ëyn) = 2 Ã Ep(xn, yn, Ëyn) Ã Er(xn, yn, Ëyn) Ep(xn, yn, Ëyn) + Er(xn, yn, Ëyn)
Ep(xn, yn, Ëyn) is the PARENT precision com- puted using the prediction, reference, and table (the last of which is not used in BLEU). Er(xn, yn, Ëyn) is the PARENT recall and is computed as:
Er(xn, yn, Ëyn) = R(xn, yn, Ëyn)(1âλ)R(xn, Ëyn)λ
where R(xn, yn, Ëyn) is a recall term that compares the prediction with both the reference and table.
R(xn, Ëyn) is an extra recall term that gives an addi- tional reward if the prediction Ëyn contains phrases in the table xn that are not necessarily in the refer- ence (λ is a hyperparameter).
In the original PARENT work, the same table t is used for computing the precision and both recall terms. While this makes sense for most existing datasets, it does not take into account the high- lighted cells thighlight in our task. To incorporate thighlight, we modify the PARENT metric so that the additional recall term R(xn, Ëyn) uses thighlight instead of t to only give an additional reward for relevant table information. The other recall and the precision term still use t.
# A.2 Baseline details
⢠BERT-to-BERT (Rothe et al., 2020) - Un- cased model coupling both encoder and de- coder as in original paper, with Adam opti- mizer (Kingma and Ba, 2015). learning rate = 0.05, hidden size = 1024, dropout = 0.1, beam size = 4.
⢠Pointer Generator (See et al., 2017) - LSTM with hidden size 300, beam size=8, learning rate = 0.0003, dropout = 0.2, length penalty = 0.0, Adam optimizer (Kingma and Ba, 2015).
⢠Content planner (Puduppully et al., 2019) - All of the original hyperparameters: content plan- ner: LSTM with hidden size 1x600, realizer LSTM with 2x600, embedding size 600 for both, dropout=0.3, Adagrad optimizer (Duchi et al., 2011), beam size=5.
Table Title: Ken Fujita Section Title: Club statistics Table Description: Club performance League Cup League Cup Total Season Club League |Apps|Goals| Apps | Goals | Apps | Goals | Apps | Goals Japan League Emperor's Cup | J.League Cup Total Jubilo Iwata | J1 League 0 0 0 0 0 0 0 4 3 0 2 0 40 4 5 2 0 35 5 9 1 0 40 9 2 1 0 29 2 10 2 0 43 10 2 3 1 1 30 3 2 1 0 7 40 2 3 1 0 39 3 2 2 0 52 2 2 1 0 33/2 Japan 354 41 15 1 10 0 379 42 Total 354 41 15 1 10 0 379 42 Target sentence: After 2 years blank, Ken Fujita joined the J2 League club Ventforet Kofu in 2001.
Figure 2: TOTTO example with complex table structure and temporal reasoning.
Table Title: Shuttle America Section Title: Fleet Table Description: As of January 2017, the Shuttle America fleet consisted of the following aircraft: Aircraft Total Total | Orders Passengers F Y+/ Y Operated For Notes 5 â 6 |16 70 | United Express 4 [â 16 |â 35 - transferred to Republic Airline 9 |12 69 aan 2 planes on wet lease from Republic Airline 12|12 |52|76 Target sentence: Shuttle America o| for Delta Air Lines,. perated the E-170 and the larger E-175 aircraft
Aircraft Total Total | Orders Passengers F Y+/ Y Operated For Notes 5 â 6 |16 70 | United Express 4 [â 16 |â 35 - transferred to Republic Airline 9 |12 69 aan 2 planes on wet lease from Republic Airline 12|12 |52|76
Figure 3: TOTTO example with rare topics and complex table structure.
Table Title: Pune - Nagpur Humsafar Express Section Title: Schedule Table Description: Train Number 11418 Station Code NGP Departure Station | Departure Time Nagpur Junction | 15:00 PM Fri Departure Day | Arrival Station 22:00 PM Thu Pune Junction Arrival Time | Arrival Day 13:30 PM Fri 08:05 AM Sat Target sentence: The 11417 Pune - Nagpur Humsafar Express runs between Pune Junction and Nagpur Junction.
Train Number 11418 Station Code NGP Departure Station | Departure Time Nagpur Junction | 15:00 PM Fri Departure Day | Arrival Station 22:00 PM Thu Pune Junction Arrival Time | Arrival Day 13:30 PM Fri 08:05 AM Sat
Figure 4: TOTTO example with rare topic.
Table Title: Montpellier Section Title: Climate Table Description: Climate data for Montpellier (1981-2010 averages) Month Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Year ee (7 21.2 22.5 27.4 30.4 35.1 37.2 36.8 36.3 31.8 27.1 22.0 37.5 Record high °C (°F) /(79.9) | (72.5) |(81.3) |(86.7) | (95.2) |(99.0) (98.2) |(97.3) |(89.2) |(80.8) (71.6) | (99.5) conecer) [116 [128 |159 |182 |220 |264 |293 |289 |250 |205 |t53 |122 |199 Average high °C (°F) | (55.9) |(55.0) | (60.6) |(64.8) |(71.6) |(79.5) |(84.7) |(84.0) |(77.0) |(68.9) |(59.5) | (54.0) | (67.8) 7 cr 72 |a1 [109 [135 [17.3 |212 |241 |237 |200 |162 [11.1 [ao [154 Dailymean°C (°F) | (45.0) |(46.6) |(51.6) | (56.3) |(63.1) |(70.2) |(75.4) |(74.7) |(68.0) |(61.2) |(62.0) | (46.4) |(59.2) 28 3.3 5.9 8.7 12.5 16.0 18.9 18.5 15.0 11.9 68 3.7 10.4 Average low °C (°F) | (37.9) |(37.9) | (42.6) | (47.7) |(54.5) |(60.8) | (66.0) |(65.3) |(59.0) |(53.4) | (44.2) |(38.7) | (50.7) cr 96 |-17 |oe |54 [84 |82 [38 |-07 -124 |-178 Record low °C (°F) 50 fag ae (28.9) |(33.1) |(a1.7) |(a71) |(468) |(88) |(30.7) |%%) |@7) |.) Average precipitationmm|55.6 [518 |343 [555 427 |278 |164 [344 [80.3 |968 |668 |667 [629.1 (inches) (2.19) }(2.04) |(4.35) | (2.19) | (1.68) |(1.09) | (0.65) | (1.35) | (3.16) | (3.81) | (2.63) | (2.63) | (24.77) Average pov itation 55 |44 (47 |s7 |a9 |36 |24 |s6 |ae feos or 56 |s78 Average snowy days | 0.6 0.7 0.3 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.1 07 24 Average rejaivenumidity/7s â/73 es fos fo. fos fos, fos 7277 75 [76 | 708 Mean monthly sunshine /147.9 | 168.1 |220.9 |227.0 |2639 |3124 |339.7 |208.0 |2415 |1686 1488 |1365 |2,668.2 Source #1: Météo France Source #2: Infoclimat.fr (humidity and snowy days, 1961-1990) Target sentence: Extreme temperatures of Montpellier have ranged from -17.8 °C recorded in February and up to 37.5 °C (99.5 °F) in July.
Figure 5: TOTTO example with interesting reference language. | {
"id": "1910.08684"
} |
2004.13640 | Extending Multilingual BERT to Low-Resource Languages | Multilingual BERT (M-BERT) has been a huge success in both supervised and
zero-shot cross-lingual transfer learning. However, this success has focused
only on the top 104 languages in Wikipedia that it was trained on. In this
paper, we propose a simple but effective approach to extend M-BERT (E-BERT) so
that it can benefit any new language, and show that our approach benefits
languages that are already in M-BERT as well. We perform an extensive set of
experiments with Named Entity Recognition (NER) on 27 languages, only 16 of
which are in M-BERT, and show an average increase of about 6% F1 on languages
that are already in M-BERT and 23% F1 increase on new languages. | http://arxiv.org/pdf/2004.13640 | Zihan Wang, Karthikeyan K, Stephen Mayhew, Dan Roth | cs.CL | null | null | cs.CL | 20200428 | 20200428 | 0 2 0 2
r p A 8 2 ] L C . s c [
1 v 0 4 6 3 1 . 4 0 0 2 : v i X r a
# Extending Multilingual BERT to Low-Resource Languages
# Zihan Wangâ University of Illinois Urbana-Champaign Urbana, IL 61801, USA [email protected]
Karthikeyan K â Indian Institute of Technology Kanpur Kanpur, Uttar Pradesh 208016, India [email protected]
# Stephen Mayhewâ Duolingo Pittsburgh, PA, 15206, USA [email protected]
# Dan Roth University of Pennsylvania Philadelphia, PA 19104, USA [email protected]
# Abstract
Improvements by extending
Multilingual BERT (M-BERT) has been a huge success in both supervised and zero-shot cross-lingual transfer learning. However, this success has focused only on the top 104 lan- guages in Wikipedia that it was trained on. In this paper, we propose a simple but effective approach to extend M-BERT (E-MBERT) so that it can beneï¬t any new language, and show that our approach beneï¬ts languages that are already in M-BERT as well. We perform an extensive set of experiments with Named En- tity Recognition (NER) on 27 languages, only 16 of which are in M-BERT, and show an aver- age increase of about 6% F1 on languages that are already in M-BERT and 23% F1 increase on new languages.
â0 ° In BERT AVG Out BERT AVG
mM-BERT_supervised mM-BERT_zeroshot m E-MBERT_zeroshot
Figure 1: Comparison between M-BERT and our proposed approach E-MBERT: We report averaged zero-shot NER performance on 16 languages that are already in M-BERT and 11 new language that are out of M-BERT. We also report average of M-BERTâs per- formance with supervised NER data as a upper-bound.
of which M-BERT covers only the top 104 lan- guages (less than 3%).
1
# 1 Introduction
Recent works (Wu and Dredze, 2019; Karthikeyan et al., 2020) have shown the zero-shot cross-lingual ability of M-BERT (Devlin et al., 2018) on vari- ous semantic and syntactic tasks â just ï¬ne-tuning on English data allows the model to perform well on other languages. Cross-lingual learning is im- perative for low-resource languages (LRL), such as Somali and Uyghur, as obtaining supervised training data in these languages is particularly hard. However, M-BERT is not pre-trained with these languages, thus limiting its performance on them. Languages like Oromo, Hausa, Amharic and Akan are spoken by more than 20 million people, yet M-BERT does not cover these languages. Indeed, there are about 40001 languages written by humans,
âEqual Contribution; most of this work was done while the author interned at the University of Pennsylvania.
â This work was done while the author was a student at the University of Pennsylvania.
One of the approaches to use the idea of M-BERT for languages that are not already present is to train a new M-BERT from scratch. How- ever, this is extremely time-consuming and ex- pensive: training BERT-base itself takes about four days with four cloud TPUs (Devlin et al., 2019), so training M-BERT should take even more time2. Alternatively, we can train Bilingual BERT (B-BERT) (Karthikeyan et al., 2020), which is more efï¬cient than training an M-BERT. However, one major disadvantage of B-BERT is that we can not use supervised data from multiple languages, even if it is available.
To accommodate a language that is not in M-BERT, we propose an efï¬cient approach Extend that adapts M-BERT to the language. Extend works by enlarging the vocabulary of M-BERT to accommodate the new language and then continue pre-training on this language. Our approach consumes less than 7 hours to train with a single cloud TPU.
# 1https://www.ethnologue.
# com/enterprise-faq/ how-many-languages-world-are-unwritten-0
2The exact training time was not reported.
We performed comprehensive experiments on NER task with 27 languages of which 11 languages are not present in M-BERT. From Figure 1 we can see that our approach performs signiï¬cantly better than M-BERT when the target language is out of the 104 languages in M-BERT. Even for high- resource languages that are already in M-BERT, our approach is still superior.
The key contributions of the work are (i) We propose a simple yet novel approach to add a new language to M-BERT (ii) We show that our ap- proach improves over M-BERT for both languages that are in M-BERT and out of M-BERT (iii) We show that, in most cases, our approach is superior to training B-BERT from scratch. Our results are reproducible and we will release both the models and code.
# 2 Related works
Cross-lingual learning has been a rising interest in NLP. For example, BiCCA (Faruqui and Dyer, 2014), LASER (Artetxe and Schwenk, 2019) and XLM (Conneau and Lample, 2019). Although these models have been successful, they need some form of cross-lingual supervision such as a bilin- gual dictionary or parallel corpus, which is par- ticularly challenging to obtain for low-resource languages. Our work differ from above as we do not require such supervision. While other ap- proaches like MUSE (Lample et al., 2018) and VecMap (Artetxe et al., 2018) can work without any cross-lingual supervision, M-BERT already often outperforms these approaches (Karthikeyan et al., 2020).
Schuster et al. (2019) has a setting of contin- uing training similar to ours. However, their ap- proach focus more on comparing between whether B-BERT (JointPair) learns cross-lingual features from overlapping word-pieces, while ours focus more on improving M-BERT on target languages, and addresses the problem of missing word-pieces. We show that our Extend method works well on M-BERT, and is better than B-BERT in several languages, whereas their method (MonoTrans) has a similar performance as B-BERT. This together implies that our Extend method beneï¬ts from the multilinguality of the base model (M-BERT vs BERT).
# 3 Background
# 3.1 Multilingual BERT (M-BERT)
M-BERT is a bi-directional transformer language model pre-trained with Wikipedia text of top 104 languages â languages with most articles in Wikipedia. M-BERT uses the same pre-training objective as BERT â masked language model and next sentence prediction objectives (Devlin et al., 2019). Despite not being trained with any speciï¬c cross-lingual objective or aligned data, M-BERT is surprisingly cross-lingual. For cross-lingual trans- fer, M-BERT is ï¬ne-tuned on supervised data in high-resource languages like English and tested on the target language.
# 3.2 Bilingual BERT (B-BERT)
B-BERT is trained in the same way as M-BERT except that it contains only two languages â En- glish and the target language. Recent works have shown the cross-lingual effectiveness of M-BERT (Pires et al., 2019; Wu and Dredze, 2019), and B-BERT (Karthikeyan et al., 2020) on NER and other tasks.
# 4 Our Method: Extend
In this section, we discuss our training protocol Extend which works by extending the vocabulary, encoders and decoders to accommodate the target language and then continue pre-training on this language.
Let the size of M-BERTâs vocabulary be Vmbert and the embedding dimension be d. We ï¬rst cre- ate the vocabulary with the monolingual data in the target language following the same procedure as BERT, and ï¬lter out all words that appear in M-BERTâs vocabulary. Let the size of this new vocabulary be Vnew. Throughout the paper, we set Vnew = 30000. Then, we append this new vocabulary to M-BERTâs vocabulary. We extend the encoder and decoder weights of the M-BERT model so that it can encode and decode the new- vocabulary. That is, we extend the M-BERTâs en- coder matrix of size VmbertÃd with a matrix of size VnewÃd , which is initialized following M-BERTâs procedure, to create an extended encoder of size (Vmbert + Vnew) à d; we do similar extension for decoder. Note that M-BERT uses weight-tying, hence the decoder is the same as the encoder, ex- cept it has an additional bias.
We then continue pre-training with the monolin- gual data of the target language. Note that except
for the newly appended part of encoder and de- coder, we initialize all weights with M-BERTâs pre- trained weight. We call the trained model model E-MBERT.
# 5 Experiments
# 5.1 Experimental Settings
Dataset. Our text corpus and NER dataset are from LORELEI (Strassel and Tracey, 2016). We use the tokenization method from BERT to preprocess text corpuses. For zero-shot cross-lingual NER, we evaluate the performance on the whole annotated set; for supervised learning, since we just want an understanding of a upper-bound, we apply cross validation to estimate the performance: each fold is evaluated by a model trained on the other folds, and the average F1 is reported. NER Model. We use a standard Bi-LSTM- CRF (Ma and Hovy, 2016; Lample et al., 2016) framework and use AllenNLP (Gardner et al., 2018) as our toolkit. The scores reported in NER is the F1 score averaged across ï¬ve runs with different random seeds. BERT training. While extending, we use a batch size of 32 and a learning rate of 2e-5, which BERT suggests for ï¬ne-tuning, and we train for 500k iter- ations. Whereas for B-BERT we use a batch size of 32 and learning rate of 1e-4 and train for 2M iterations. We follow BERT setting for all other hyperparameters.
# 5.2 Comparing between E-MBERT and M-BERT
We compare the cross-lingual zero-shot NER per- formance of M-BERT and E-MBERT. We train only with supervised LORELEI English NER data. We also report the performance of M-BERT with supervision on the target language, which allows us to get a reasonable âupper-boundâ on the dataset. From Figure 2, we can see that in almost all languages, E-MBERT outperforms M-BERT irre- spective of whether they exist or do not exist in M-BERT.
It is clear that E-MBERT performs better than M-BERT when the language is not present; how- ever, it is intriguing that E-MBERT improves over M-BERT when the language is already present in M-BERT. We attribute this improvement in perfor- mance to three reasons ⢠Increased vocabulary size of target language â Since most languages have a signiï¬cantly
smaller Wikipedia data than English, they have a fewer vocabulary in M-BERT, our approach eliminates this issue. Note that it may not be a good idea to train single M-BERT with larger vocabulary sizes for every language, as this will create a vast vocabulary (a few million).
⢠E-MBERT is more focused on the target lan- guage, as during the last 500k steps, it is opti- mized to perform well on it.
⢠Extra monolingual data â More monolingual data in the target language can be beneï¬cial.
Lang M-BERT E w/ Lrl E w/ Wiki Russian Thai Hindi 56.56 22.46 48.31 55.70 40.99 62.72 56.64 38.35 62.77
Table 1: Performance of M-BERT, Extend with LORELEI data and Extend with Wikipedia data.
# 5.3 Extend without extra data
The effectiveness of E-MBERT may be partially explained by the extra monolingual data the model is trained on. To explore the performance of E-MBERT without this extra training data, we Extend with using Wikipedia data, which is used in M-BERT. From Table 1, we can see that even without additional data, E-MBERTâs performance does not degrade.
Lang B-BERT Extend Somali Amharic Uyghur Akan Hausa Wolof Zulu Tigrinya Oromo Kinyarwanda Sinhala 51.18 38.66 21.94 48.00 26.45 39.92 44.08 6.34 8.45 46.72 16.93 53.63 43.70 42.98 49.02 24.37 39.70 39.65 7.61 12.28 44.40 33.97
Table 2: Comparison between B-BERT and E- MBERT: We compare B-BERT vs E-MBERT train- ing protocols. Both the models uses same target lan- guage monolingual data. We compare the perfor- mances on languages that are not in M-BERT, so that E-MBERT doesnât make use of M-BERTâs additional Wikipedia data.
Languages In M-BERT
1 00 a - 0 we & ne gy⢠yo ws «i wet ay \S G oo â oo gosâ oxo!
# Performance (F1)
Languages Out M-BERT
# Performance (F1)
Ryewauay a oe oo yo? m M-BERT_supervised 0 0 0 10 0 2 it 9 ue = m M-BERT_zeroshot â a of ow ot ow? oe «e⢠E-MBERT_zeroshot
Figure 2: Comparison between M-BERT and E-MBERT: We compare zero-shot cross-lingual NER perfor- mance (F1 score) on M-BERT and Extend using 27 languages. The languages are ordered by amount of text data in LORELEI. We also report the M-BERTâs supervised performance as a benchmark to compare.
# 5.4 Comparing between E-MBERT and B-BERT
Another way of addressing M-BERT on unseen languages is to completely train a new M-BERT. Restricted by computing resources, it is often only feasible to train on both the source and the tar- get, hence a bilingual BERT (B-BERT). Both E-MBERT with B-BERT uses the same text cor- pus in the target language; for B-BERT, we sub- sample English Wikipedia data. We focus only on languages that are not in M-BERT so that E-MBERT will not have an advantage on the tar- get language because of data from Wikipedia. Al- though the English corpus of B-BERT is different from E-MBERT, the difference is marginal consid- ering its size. Indeed we show that B-BERT and E-MBERT have similar performance on English NER, refer Appendix A and Appenddix A.3.
guages that help transfer knowledge from English to target, while B-BERT can only leverage En- glish data. For example, in the case of Sinhala and Uyghur, a comparatively high-resource related lan- guage like Tamil and Turkish in M-BERT can help the E-MBERT learn Sinhala and Uyghur better.
# 5.5 Rate of Convergence
In this subsection, we study the convergence rate of E-MBERT and B-BERT. We evaluate these two models on two languages, Hindi (in M-BERT) and Sinhala (not in M-BERT), and report the re- sults in Figure 3. We can see that E-MBERT is able to converge within just 100k steps, while for B-BERT, it takes more than 1M steps to converge. This shows that E-MBERT is much more efï¬cient than B-BERT.
# 5.6 Performance on non-target languages
From Table 2, we can see that E-MBERT of- ten outperforms B-BERT. Moreover, B-BERT is trained for 2M steps for convergence, while E-MBERT requires only 500k steps. We believe that this advantage comes for the following reason: E-MBERT makes use of the better multilingual model M-BERT, which potentially contains lan-
Our Extend method results in the base model (M-BERT) to focus on the target language, and nat- urally this degrades performance on the other lan- guages that are not the target language. We report the performance of Hindi and Sinhala E-MBERT models evaluated on the other languages in Ap- pendix A.2.
Performance as Pretraining Steps Increase
2000 se B-BERT (Hindi) | â-E-MBERT (Hindi) ««- B-BERT (Sinhala) ââE-MBERT (Sinhala)
Figure 3: Performance of B-BERT and Extend as number of pre-training steps increase
# 6 Conclusions and Future work
In this work, we propose Extend that deals with languages not in M-BERT. Our method has shown great performance across several languages com- paring to M-BERT and B-BERT.
While Extend deals with one language each time, it would be an interesting future work to extend on multiple languages at the same time. Furthermore, instead of randomly initialising the embeddings of new vocabulary, we could possi- bly use alignment models like MUSE or VecMap with bilingual dictionaries to initialize. We could also try to apply our approach to better models like RoBERTa (Liu et al., 2019) in multilingual case.
# References
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 789â798, Melbourne, Australia. As- sociation for Computational Linguistics.
Mikel Artetxe and Holger Schwenk. 2019. Mas- sively multilingual sentence embeddings for zero- shot cross-lingual transfer and beyond. Trans. Assoc. Comput. Linguistics, 7:597â610.
Alexis Conneau and Guillaume Lample. 2019. Cross- In Advances lingual language model pretraining. in Neural Information Processing Systems 32: An- nual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pages 7057â7067.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Multilingual bert - r.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association
for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Manaal Faruqui and Chris Dyer. 2014. Improving vec- tor space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 462â471, Gothenburg, Sweden. Association for Computational Linguistics.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language pro- In Proceedings of Workshop for cessing platform. NLP Open Source Software (NLP-OSS), pages 1â 6, Melbourne, Australia. Association for Computa- tional Linguistics.
K Karthikeyan, Zihan Wang, Stephen Mayhew, and Dan Roth. 2020. Cross-lingual ability of multilin- gual bert: An empirical study.
Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260â270, San Diego, California. Association for Computational Linguistics.
Guillaume Lample, Alexis Conneau, MarcâAurelio Ranzato, Ludovic Denoyer, and Herv Jgou. 2018. In Interna- Word translation without parallel data. tional Conference on Learning Representations.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
End-to-end sequence labeling via bi-directional LSTM-CNNs- CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1064â1074, Berlin, Ger- many. Association for Computational Linguistics.
Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. In Pro- How multilingual is multilingual BERT? ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4996â 5001, Florence, Italy. Association for Computa- tional Linguistics.
Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning In Proceed- for multilingual task oriented dialog. ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 3795â3805, Min- neapolis, Minnesota. Association for Computational Linguistics.
Stephanie Strassel and Jennifer Tracey. 2016. Lorelei language packs: Data, tools, and resources for technology development in low resource languages. In Proceedings of the Tenth International Confer- ence on Language Resources and Evaluation (LREC 2016), pages 3273â3280.
Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833â844, Hong Kong, China. Association for Com- putational Linguistics.
# A Appendices
# A.1 Performance of E-MBERT on English:
The knowledge of E-MBERT on English (source language) is not affected. From Table 3, we can see that, except for few languages, the English performance of E-MBERT is almost as good as M-BERTâs.
# A.2 Detailed data on all languages
In Table 4, we report the full result on comparing M-BERT and E-MBERT.
We can also see that Extend is not only useful for cross-lingual performance but also for useful for supervised performance (in almost all cases).
We also notice that extending on one language hurts the transferability to other languages.
# A.3 Comparison between B-BERT and E-MBERT:
In Table 5 we reported the performance of Extend and B-BERT on both English as well as target. We can see that English performance of B-BERT is mostly better than Extend. However, in most cases Extend performs better on target language. This indicates that E-MBERT does not have an unfair advantage on English.
# E M-BERT
OUT OF BERT
# IN BERT
Arabic Bengali Mandarin Farsi Hindi Hungarian Indonesian Russian Spanish Swahili Tamil Tagalog Thai Turkish Uzbek Yoruba 77.67 76.2 78.58 77.57 78.86 78.92 80.93 80.87 81.15 77.72 77.6 79.56 78.21 79.49 77.19 77.55 M-BERT 79.37
Table 3: Performance on English: We report the En- glish NER performance of M-BERT as well as perfor- mance E-MBERT.
In BERT
Model M-sup M-zero E-sup E-zero Hindi Sinhala Corpus (M) NER (k) Arabic Bengali Mandarin Farsi Hindi Hungarian Indonesian Russian Spanish Swahili Tamil Tagalog Thai Turkish Uzbek Yoruba 61.14 71.29 71.76 65.09 72.88 81.98 75.67 75.60 78.12 74.26 68.55 85.98 73.58 82.55 79.36 75.75 37.56 46.18 50.0 47.71 48.31 68.26 58.91 56.56 64.53 52.39 41.68 66.50 22.46 62.80 49.56 37.13 61.97 84.44 73.86 68.27 81.15 82.08 80.09 76.51 78.14 81.9 77.91 88.63 86.40 87.02 84.79 81.34 40.83 63.49 52.30 50.26 62.72 64.36 60.73 55.70 64.75 57.21 53.42 62.61 40.99 66.19 59.68 50.72 19.2 17.94 8.88 22.38 62.72 24.38 29.5 26.08 37.06 25.46 14.75 34.73 4.03 34.34 21.84 19.14 16.72 14.01 24.64 20.44 18.0 35.74 37.89 36.15 47.32 31.91 12.96 42.16 3.78 39.23 28.83 25.04 0.19 10.19 1.66 10.32 1.66 10.09 1.75 10.07 1.68 0.29 4.47 0.33 4.47 10.39 4.91 0.30 Out of BERT Akan Amharic Hausa Somali Wolof Zulu Uyghur Tigrinya Oromo Kinyarwanda Sinhala 75.87 11.79 67.67 74.29 67.10 78.89 32.64 24.75 72.00 65.85 18.12 21.96 3.27 15.36 18.35 13.63 15.82 3.59 4.74 9.34 30.18 3.43 79.33 79.09 75.73 84.56 70.27 84.50 79.94 79.42 72.78 74.46 71.63 49.02 43.70 24.37 53.63 39.70 39.65 42.98 7.61 12.28 44.40 33.97 12.82 3.95 12.58 15.84 9.83 12.3 1.45 7.91 6.84 26.55 3.39 35.2 3.9 14.77 21.64 26.45 13.72 1.52 5.71 10.11 32.3 33.97 0.52 1.70 0.19 0.60 0.09 0.92 1.97 0.01 0.01 0.06 0.10 5.50 11.65 8.05 4.38 6.22 5.81 6.96 7.26 3.48 5.61 15.51 6.98 15.51 7.09 11.82 3.21 8.42 5.48 5.64 4.16 10.63 11.58 2.45 2.20 2.96 0.95 1.02
Table 4: In the order from left to right, column means: M-BERT with supervision, M-BERT zero-shot cross- lingual, E-MBERT with supervision, E-MBERT zero-shot cross-lingual. Then we give performance of Hindi and Sinhala E-MBERT models when evaluated on all the languages. The last two columns are dataset statistics, with number of million lines in the LORELEI corpus and number of thousand lines in LORELEI NER dataset.
English Target Language E-MBERT B-BERT E-MBERT Akan Amharic Hausa Somali Wolof Zulu Uyghur Tigrinya Oromo Kinyarwanda Sinhal 79.19 78.36 74.24 78.60 78.11 79.32 77.76 76.21 76.06 73.05 73.70 77.49 78.44 80.13 79.17 81.01 81.82 79.65 80.35 78.13 79.37 80.04 49.02 43.70 24.37 53.63 39.70 39.65 42.98 7.61 12.28 44.4 33.97 48.00 38.66 26.45 51.18 39.92 44.08 21.94 6.34 8.45 46.72 16.93
# B-BERT
Table 5: Comparison Between B-BERT vs E-MBERT: We compare the performance of E-MBERT with B-BERT on both English and target language. As a reference, performance of M-BERT is 79.37 on English. This shows that neither B-BERT nor E-MBERT gets unfair advantage from the English part of the model. | {
"id": "1907.11692"
} |
2004.13637 | Recipes for building an open-domain chatbot | Building open-domain chatbots is a challenging area for machine learning
research. While prior work has shown that scaling neural models in the number
of parameters and the size of the data they are trained on gives improved
results, we show that other ingredients are important for a high-performing
chatbot. Good conversation requires a number of skills that an expert
conversationalist blends in a seamless way: providing engaging talking points
and listening to their partners, and displaying knowledge, empathy and
personality appropriately, while maintaining a consistent persona. We show that
large scale models can learn these skills when given appropriate training data
and choice of generation strategy. We build variants of these recipes with 90M,
2.7B and 9.4B parameter models, and make our models and code publicly
available. Human evaluations show our best models are superior to existing
approaches in multi-turn dialogue in terms of engagingness and humanness
measurements. We then discuss the limitations of this work by analyzing failure
cases of our models. | http://arxiv.org/pdf/2004.13637 | Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston | cs.CL, cs.AI | null | null | cs.CL | 20200428 | 20200430 | 0 2 0 2
r p A 0 3 ] L C . s c [
2 v 7 3 6 3 1 . 4 0 0 2 : v i X r a
# Recipes for building an open-domain chatbot
# Stephen Roller Emily Dinan Naman Goyal Da Ju Mary Williamson Yinhan Liuâ Jing Xu Myle Ott Kurt Shuster Eric M. Smith Y-Lan Boureau Jason Weston
Facebook AI Research
# Abstract
# Abstract
Building open-domain chatbots is a challeng- ing area for machine learning research. While prior work has shown that scaling neural mod- els in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are im- portant for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and lis- tening to their partners, and displaying knowl- edge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build vari- ants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
Human Generative BST 9.4B
Figure 1: Paper author (left) conversing with our 9.4B parameter model (right). This example was cherry picked. We release conversation logs with crowdwork- ers with our code, along with lemon-picked examples in Sec. 10.5.
1
# 1 Introduction
In this work, we provide recipes for building open- domain chatbots that perform well in human evalu- ations. It has been shown across the ï¬eld of NLP (Devlin et al., 2019) and in conversational agents in particular (Dinan et al., 2020; Zhang et al., 2019; Adiwardana et al., 2020) that pre-training on large corpora is important. Beyond simply scaling mod- els the two main takeaways from our study are:
1. Blending Skills
the model focus on personality and engag- ingness, knowledge, and empathy, achieving large gains by using the recently introduced Blended Skill Talk (BST) set-up (Smith et al., 2020), which targets those aspects by provid- ing training data and initial conversational context (personas and topics). Small mod- els using BST can match or outperform larger models that do not. While BST emphasizes desirable traits, we also show this tuning can minimize undesirable traits learnt from large corpora, such as toxicity.
Large improvements can be made by ï¬ne- tuning on data that emphasizes desirable con- versational skills. We select tasks that make
âWork done while at Facebook; currently AI2 Incubator.
2. Generation Strategies
The choice of decoding algorithm is of critical importance, and two models with the same
perplexity but different decoding algorithms can give vastly different results. In particular we show that the length of the botâs utterances are crucial to human judgments of quality â too short and the responses are seen as dull or showing a lack of interest, too long and the bot appears to wafï¬e and not listen. We show, contrary to previous work which reports that beam search is inferior to sampling (Holtzman et al., 2019; Adiwardana et al., 2020), that careful choice of search hyperparameters can give strong results by controlling trade-offs. In particular, constraining the minimum beam length gives a crucial control of the dull versus spicy spectrum of responses.
Human evaluation results are highly dependent on the precise set-up one chooses. Model perfor- mance can be strongly affected by the speciï¬c in- structions given to evaluators, such as a given topic or not, the overall conversation length, and the choice of human interlocutors, which may be difï¬- cult to jointly account for. We report performance when employing crowdworkers in short multi-turn conversations with no prompt. However, in ad- dition to that, we believe releasing models is the most reliable way to enable full insight into their capabilities. We thus make publicly available our large-scale, state of the art open-domain conver- sational agent, including code to ï¬ne-tune it, the model weights, and code to evaluate it, so that our setup is reproducible. In human evaluations of engagingness our best model outperforms Meena (Adiwardana et al., 2020) in a pairwise comparison 75% to 25%, and in terms of humanness by 65% to 35% (both statistically signiï¬cant, two-tailed binomial test, p < 0.01).
While the performance of our bot at ï¬rst sight is very good, we do not believe we are yet close to solving the problem of open-domain conversa- tion. We thus discuss limitations of our models, and initial attempts to solve them. In particular, our models still display: a lack of in-depth knowl- edge if sufï¬ciently interrogated; a tendency to stick to simpler language; and a tendency to repeat oft- used phrases. We show how unlikelihood training and retrieve-and-reï¬ne mechanisms are potential avenues for ï¬xing these problems; however, our initial experiments with these methods are incon- clusive. We thus discuss future possibilities for alleviating these problems as well as methods to clearly expose and evaluate them.
Legend +(2}-sScore Token Learmed Model Emb 1 Parameter (Gara eb, | Candidate Aggregator Vector very Code 1 --Attention Attention ââ Agaregation t t t Oat, | Out? |. - | ON, t t Candidate Encoder f i f f f t Ind) nz] ++ Inn, Inj1) In? +++ In,
Figure 2: The Poly-encoder Transformer architecture (Humeau et al., 2019) for retrieval encodes global features of the context using multiple representations (codes), which are attended to by each possible can- didate response. This ï¬nal attention mechanism gives improved performance over a single global vector rep- resentation, whilst being tractable to compute.
# 2 Model architectures
We consider three types of architectures in this work: retrieval, generative, and retrieve-and-reï¬ne models. All three use Transformers (Vaswani et al., 2017) as a base.
# 2.1 Retriever
Given a dialogue history (context) as input, re- trieval systems select the next dialogue utterance by scoring a large set of candidate responses and outputting the highest scoring one. Typically, all possible training set responses are used as the can- didate set.
We employ the poly-encoder architecture of (Humeau et al., 2019). Poly-encoders encode global features of the context using multiple rep- resentations (n codes, where n is a hyperparame- ter), which are attended to by each possible can- didate response, see Figure 2. This ï¬nal attention mechanism gives improved performance over a single global vector representation (so-called âbi- encodersâ), whilst still being tractable to compute compared to simply concatenating input and out- put as input to a Transformer (so-called âcross- encodersâ). The poly-encoder has state of the art performance on a number of dialogue tasks when compared to other retrieval models, and also gives comparable performance to the winning gen- erative models on the ConvAI2 competition task (Zhang et al., 2018) in terms of human evaluation (Li et al., 2019b). We consider two poly-encoder sizes: 256M (from (Smith et al., 2020)) and 622M parameter models which we trained here, both us- ing N = 64 codes.
# 2.2 Generator
We employ a standard Seq2Seq Transformer archi- tecture to generate responses rather than retrieve them from a ï¬xed set. Our implementation is based on the ParlAI version (Miller et al., 2017). We use Byte-Level BPE tokenization (Radford et al., 2019) trained on the pre-training data, as implemented in HuggingFaceâs Tokenizers.1
We consider three sizes of model: 90M parame- ters (following Shuster et al., 2019), 2.7B param- eters and 9.4B parameters. Our 9.4B parameter model has a 4 layer encoder, a 32 layer decoder with 4096 dimensional embeddings, and 32 atten- tion heads. Our 2.7B parameter model roughly mimics the architectural choices of Adiwardana et al. (2020), with 2 encoder layers, 24 decoder layers, 2560 dimensional embeddings, and 32 at- tention heads.
# 2.3 Retrieve and Reï¬ne
Current generative models are known to have issues with producing dull and repetitive responses which are improved, but not resolved, by simply scaling (Holtzman et al., 2019; Welleck et al., 2020; Li et al., 2019a). Additionally, generative models are known to hallucinate knowledge, and in general are unable to read and access external knowledge other than what is embedded in their model parameters, which may be imperfect. One approach to try to alleviate these problems is to combine a retrieval step before generation, referred to as a retrieve and reï¬ne model (Weston et al., 2018). We consider two variants for the retrieval step: dialogue retrieval and knowledge retrieval.
Dialogue Retrieval We can simply use a retrieval-based dialogue model in the retrieval step, as in Sec. 2.1. Given the dialogue history, the re- trieval model is ï¬rst used to produce a response. Rather than showing this response to the speak- ing partner it is appended to the input sequence of the generator, along with a special separator token. The generator then outputs a response as normal given this modiï¬ed input sequence. Retrieval mod- els produce human written utterances which tend to include more vibrant language than the most high probability utterances of a standard generative model. Hence, if the generative model learns when to copy the elements of such an utterance, and when not to, it can provide improved responses. To build
1https://github.com/huggingface/ tokenizers
such models, we use the architectures considered in the previous two sections for the two components of the model.
Knowledge Retrieval We can also use the same mechanism to ï¬rst retrieve from a large knowledge base, instead of retrieving an initial dialogue ut- terance. We can then condition the generation on the retrieved knowledge, as done in models pro- posed for the Wizard of Wikipedia task (Dinan et al., 2019c). We hence refer to this as a Wizard Generative model, as the supervised training sig- nal of how to use knowledge in dialogue comes from the Wizard of Wikipedia task, even though we multi-task on other tasks as well. We use the same retrieval system as in that cited work, which uses a TF-IDF-based inverted index lookup over a Wikipedia dump2 to produce an initial set of knowledge candidates. A Transformer retriever model (the same as Sec. 2.1) is then used to rank the candidates and select a single sentence which is used to condition generation. We additionally trained a Transformer-based classiï¬er to choose when to perform retrieval or not on a per-turn basis, as some contexts do not require knowledge. This was trained as a two-class classiï¬er discriminating between contexts that require knowledge or not in our ï¬ne-tuning tasks, to be described in the next section. We note all other models in this work do not condition on retrieved knowledge.
# 3 Training Objectives
# 3.1 Ranking for Retrieval
To train the retrieval models, a cross-entropy loss is minimized in which the logits are ycand1, . . . , ycandn, where ycand1 is the score of the correct response and the others are sampled negatives. Following Humeau et al. (2019), during training we use the other responses in the batch for negatives. This allows for much faster training, as we can reuse the embeddings computed for each candidate, and also use a larger batch size. In our training we are able to use batches of 512 elements.
# 3.2 Likelihood Training for Generation
To train the generative models, we use the standard Maximum Likelihood Estimation (MLE) approach.
2https://parl.ai/projects/wizard_of_ wikipedia/
Given a dataset D = {(x(i), y(i))}, minimize:
ly| Lite (Pe x,y) = = > log polyp? x,y), t=1
where x(i) is a gold input context and y(i) is a gold next-utterance, and y(i) t
# 3.3 α-blending for Retrieve and Reï¬ne
For retrieve and reï¬ne, simply appending dialogue retrieval responses to the context of a generative model and training with MLE unfortunately does not yield satisfying results. As the correspondence between gold label and retrieved utterance is not necessarily clear, a trained model often opts to sim- ply ignore the retrieval utterance, as was shown in Weston et al. (2018). To ensure it is used, one can replace the retrieved response instead with the gold response α% of the time, treating α as a hyperpa- rameter to be tuned. This gives a smooth transition between retrieval and generator-only systems. For knowledge retrieval we ï¬nd this issue to be less of a problem as the ï¬ne-tuning datasets used have a clear correspondence between gold knowledge conditioning and response, and in that case we only use the gold knowledge during training.
# 3.4 Unlikelihood training for generation
An alternative method to combat the failures in model generations is to change the loss function. The unlikelihood loss (Welleck et al., 2020; Li et al., 2019a) has been shown to help ï¬x mismatches be- tween human and model distributions across vari- ous axes, including decreasing repetitions and miti- gating the issue of overrepresented vocabulary to- kens.
The unlikelihood loss penalizes a set of tokens Ct at each time-step, L(i) UL(pθ, C1:T , x, y) =
lyl _ > > log (1 â po(YelX, y<t)) + t=1 yeâ¬Ce
where Ct â V is a subset of the vocabulary. The overall objective in unlikelihood training then con- sists of mixing the likelihood and unlikelihood losses,
ULE = L(i) L(i) where α â R is the mixing hyper-parameter.
Likelihood tries to model the overall sequence probability distribution, while unlikelihood cor- It does this via the set rects for known biases.
# e)
of negative candidates Ct calculated at each step t; typically one speciï¬es in advance a method for generating such candidates, for example the tokens which have been repeated or overrepresented. Like- lihood pushes up the probability of a gold token y(i) t while unlikelihood pushes down the probability of negative candidate tokens yc â Ct. In this work during training we keep a running count of the dis- tribution of n-grams that appear when generating from the model, and choose tokens as negative can- didates from these n-grams when their counts are above the human distribution counts as measured from the gold responses.
# 4 Decoding
For generative models, at inference time, one must choose a decoding method to generate a response to the dialogue context given as input. In this work we compare a number of well-known approaches.
# 4.1 Beam Search
Two widely used deterministic decoding ap- proaches are greedy search and beam search. The former can be seen as a special case of the latter. Greedy search selects the highest probability to- ken at each time step: yt = arg max pθ(yt|x, y<t). Beam search maintains a ï¬xed-size set of partially- decoded sequences, called hypotheses. At each time step, beam search forms new hypotheses by appending each token in the vocabulary to each ex- isting hypothesis, scoring the resulting sequences then selecting the highest scoring sequences.
We compare beam search for different beam sizes in our experiments.
# 4.2 Sampling
An alternative is to sample from a model-dependent distribution at each step, yt ⼠q(yt|x, y<t, pθ). In order to prevent sampling low probability tokens, a typical approach is to restrict sampling to a sub- set of the vocabulary at each step, and sampling according to those (renormalized) probabilities.
For sampling methods, we will compare top-k sampling (Fan et al., 2018) and sample-and-rank (Adiwardana et al., 2020). The latter performs sampling S times, and selects the generated sample with the highest probability.
# 4.3 Response Length
Generating with a beam tends to produce short generations that do not match the length statistics of
the human utterances they were trained on (Weston et al., 2018). However, longer responses, if of high quality, can be more engaging than very short ones. While following the human distribution may not give optimal performance for a bot â for example, it may want to err on the side of brevity for improved human evaluation, because that is less likely to expose its failings â making its responses longer may make them provide more information, and make them less dull.
We consider two simple methods to control the length of a modelâs responses.
Minimum length The ï¬rst method we consider is a hard constraint on the minimum generation length: the end token is forced to not be generated until a minimum sequence length is achieved.
Predictive length The second approach is to pre- dict the length based on human-human conversa- tion data. To do this we train a 4-class classiï¬er by binning the lengths of the next conversation turn (e.g., < 10, < 20, < 30, or > 30 tokens). We use the same architecture as the retrieval model for this classiï¬er. Then, at test time, the classiï¬er is ï¬rst used to predict the length of the next response, and sets the minimum generation length constraint to its corresponding prediction. Unlike the previ- ous approach, this results in more natural variable length conversation turns, whilst ensuring long re- sponses when they seem natural. One drawback, however, is that this procedure makes our system more complex.
# 4.4 Subsequence Blocking
Sequence generation models are known to repeat subsequences (Holtzman et al., 2018), particularly in stochastic methods such as beam search, but also in sampling methods as well (Adiwardana et al., 2020). We implement standard beam blocking of n-grams (Paulus et al., 2017) and use n = 3. We consider both blocking repeated n-grams within the generated utterance, and repeating of the input sequence (previous utterances from either speaker).
# 5 Training Details
We detail the techniques we employ during pre- training and ï¬ne-tuning.
Pre-training Ranking models. We perform pre- training using the Fairseq (Ott et al., 2019) toolkit. Our 256M parameter ranking model is identical to the pre-trained model released by Humeau et al.
(2019). Our 622M model is pre-trained using a simple Masked Language Model objective on the same data and dictionary as the large Generative models. We took all hyperparameter choices from those recommended in RoBERTa (Liu et al., 2019).
Pre-training Generative models. We perform pre-training using the Fairseq (Ott et al., 2019) toolkit. Our 2.7B and 9.4B parameter models were both trained using the Adam optimizer (Kingma and Ba, 2014). In order to ï¬t the larger models onto nodes, we utilize Megatron-LM style model par- allelism (Shoeybi et al., 2019), in which the Feed Forward network (FFN) and Multihead Attention layers of the Transformer are âverticallyâ sliced, minimizing the need for communication across GPUs. We also evaluated Adafactor (Shazeer and Stern, 2018), which allows for larger batch sizes, but we found it converged to a worse place than Adam. In all cases, we use a variant of mixed pre- cision training (Micikevicius et al., 2017), storing gradients and optimizer state in FP32, but accumu- lating model parameters directly in FP16 (Ott et al., 2019). A dynamic loss scalar is utilized to pre- vent gradient underï¬ow (Micikevicius et al., 2017). Both our 2.7B and 9.4B parameter models were trained with batches of approximately 500k label BPE tokens per batch. The 2.7B parameter model trained for approximately 200k SGD updates with a maximum learning rate of 2e-4, a linear warmup of 3125 steps, and an invsqrt LR scheduler (Vaswani et al., 2017); the model had not converged when we stopped. The 9.4B parameter model was trained with a maximum learning rate of 1.15e-4 and 2400 warmup steps for a total of 200k SGD updates, and did not appear to be overï¬tting.
Fine-tuning. We ï¬ne-tune our models using the ParlAI toolkit (Miller et al., 2017), which spe- cializes in training and evaluating dialogue mod- els. As opposed to the above pre-training, we uti- lize GPipe-style model parallelism (Huang et al., 2019), in which full layers are sharded across dif- ferent GPUs, and each minibatch is further split into micro-batches to ensure maximum throughput. As in pre-training, we found that Adam outper- formed Adafactor during ï¬ne-tuning, and we uti- lized Fairseq-style mixed precision training. Mod- els were ï¬ne-tuned to convergence, with maximum learning rates of between 1e-6 and 1e-5.
Persona for Unguided Speaker: My son plays on the local football team. I design video games for a living. Persona for Guided Speaker: My eyes are green. I wear glasses that are cateye. Wizard of Wikipedia topic: Video game design Previous utterances (shown to speakers): U: What video games do you like to play? G: all kinds, action, adventure, shooter, platformer, rpg, etc. but video game design requires both artistic and technical competence AND writing skills. that is one part many people forget Actual utterances: U: Exactly! I think many people fail to notice how beautiful the art of video games can be. (ConvAI2) (G selected the WoW suggestion: expression, many thoug e been challenged as Indeed, Some games games are purpose orks of art by some criti y designed to be a work of a persons creative cs.â) G: Indeed, Some games games are purposely designed to be a work of a persons creative expression, many though have been challenged as works of art by some critics. (WoW) U: Video games are undervalued by many and too easily blamed for problems like obesity or violence in kids (WoW) G: Indeed, Just last week my son was playing some Tine 2 and it was keeping him so calm. Games are therapeutic to some. (ED) U: Luse games to relax after a stressful day, the small escape is relaxing. (ConvAI2/ED) (G selected the ED suggestion: "I enjoy doing that ra hard day at work as well. I hope it reld es you! G: I enjoy a good gaming session after a hard day at work as well. (ConvAI2/ED) U: What other hobbies does your son have? (ConvAl2) G: Well he likes to fly kites and collect bugs, typical hobbies for an 8 year old, lol. (ConvAl2) U: My 12 year old is into sports. Football mostly. I however don;t enjoy watching him play. (ConvAI2) G: I wish I could play football, But I wear this cateve glasses and they would break if I tried. (ConvAI2) U: Sounds nice. Are they new or vintage? (ConvAl2) G: They are new, I got them because of my love for cats lol. I have to show off my beautiful green eyes somehow. (ConvAl2)
Persona for Unguided Speaker: My son plays on the local football team. I design video games for a living. Persona for Guided Speaker: My eyes are green. I wear glasses that are cateye.
Figure 3: Sample conversation from the Blended Skill Talk dataset, which blends three skills that previous datasets (ConvAI2, WoW, ED) have focused on. Individual utterances are annotated with the single-skill datasets they are reminiscent of. The conversation here has been seeded with two utterances from WoW. For details about the Guided and Unguided workers (U,G) set up, see Smith et al. (2020).
# 6 Training Data
We next discuss the training data we use, which is all in English (#BenderRule).
# 6.1 Pre-training
1. The author is a known bot. 2. It comes from a known non-English subreddit. 3. The comment is marked as removed / deleted. 4. It is longer than 2048 characters and does not
contain spaces.
pushshift.io Reddit We use a variant of Reddit discussions, which has also been used in several existing studies, see e.g. Yang et al. (2018); Mazaré et al. (2018); Keskar et al. (2019); Shuster et al. (2019). Following Humeau et al. (2019), we use a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io (Baumgartner et al., 2020), training to generate a comment conditioned on the full thread leading up to the comment, spanning 1.5B training examples from Reddit obtained from PushShift3 through July 2019. The subreddits cover a vast range of topics, and hence the dataset is a good candidate for helping train a dialogue model in the open-domain case. We apply heuristic rules to ï¬lter the dataset with the goal of providing a cleaner training signal. We remove the comment and all subsequent child comments if any of the following conditions are met:
5. It is longer than 128 BPE tokens. 6. It is shorter than 5 characters. 7. It contains a URL. 8. It starts with a non-ASCII character. 9. It is further than depth 7 in the thread.
Models were trained with maximum context and response lengths set to 128 BPE tokens, and longer examples were truncated. Our ï¬nal dataset contains 1.50B comments totaling 56.8B label BPE tokens and 88.8B context tokens.4 We divide the corpus into 4096 roughly-equal sized chunks, stratiï¬ed by thread ID (such that no two comments from the same post appear across folds), and reserve the last two chunks for validation and test respectively, each approximately 0.02% of the full dataset (â¼ 360k comments each).
3https://files.pushshift.io/reddit/
4Note that the 90M model discussed later in the paper uses a variant of the corpus with less ï¬ltering. See Shuster et al. (2019) for details.
# 6.2 Fine-tuning
Our pre-training data, though large, contains data consisting of group discussions, rather than direct two-way conversational data. While it has a lot of useful content, it also still has a lot of noise, even after ï¬ltering. In contrast, the academic commu- nity has produced a number of smaller, but cleaner, more focused tasks, typically collected via crowd- workers, which have been made publicly available. These tasks can more accurately provide traits that are desirable for our models. For example, the ConvAI2 dataset (Zhang et al., 2018) focuses on personality and engaging the other speaker, Empa- thetic Dialogues (Rashkin et al., 2019) focuses on empathy, and Wizard of Wikipedia (Dinan et al., 2019c) focuses on knowledge. Finally, Blended Skill Talk (Smith et al., 2020) provides a dataset that focuses on blending these skills.
ConvAI2: ConvAI2 is a dataset used at the NeurIPS 2018 competition of the same name, and is based on PersonaChat (Zhang et al., 2018; Dinan et al., 2020). The training data of 140k utterances involves paired crowdworkers having a conversa- tion where they get to know each other, in which each is given a role to play based on sentences de- scribing their persona, which were also separately crowdsourced (both speakers can see their own persona description, but cannot see their partnerâs persona). The task thus involves getting to know the other speaker and engaging them in friendly conversation, both asking and answering questions â useful skills for an open-domain conversational agent. Models trained on this task are thus con- ditioned on the persona and the dialogue history, which are concatenated. It was previously shown this dataset helps provide more engaging dialogue, and that the use of persona gives improved consis- tency for the bot.
Empathetic Dialogues (ED): Rashkin et al. (2019) constructed the Empathetic Dialogues dataset, which consists of 50k utterances of crowd- worker conversations grounded in an emotional situation. In each dialogue, one speaker describes a personal situation and the other plays a âlistenerâ role, displaying empathy during the discussion. Trained models are measured playing the part of the empathetic listener. It was previously shown ï¬ne-tuning models on this dataset helps them dis- play more empathy in human evaluations.
Wizard of Wikipedia (WoW): The Wizard of Wikipedia task involves discussing a given topic in depth, where the goal is to both engage the part- ner as well as display expert knowledge (Dinan et al., 2019c). The dataset consists of 194k utter- ances over 1250 topics, where each conversation begins with a randomly chosen topic. A retrieval system over Wikipedia was used from which the dialogues were grounded during the human-human crowdsourced conversations. The topics were also crowdsourced and range from e-books to toga par- ties to showers. In most of our models we use the simpler version of the task where we only use the ï¬nal conversations for ï¬ne-tuning, ignoring the retrieval aspect of the task. For our knowledge re- trieve and reï¬ne model (Sec. 2.3) we do also use the gold retrieved knowledge (âchecked sentenceâ) for training the retrieval system. It was previously shown for generative models that using such knowl- edge was rated higher in human evaluation than without when discussing topics in depth.
Blended Skill Talk: Blended Skill Talk (Smith et al., 2020) aims to blend the previous three tasks to combine the skills from them (engaging per- sonality from ConvAI2, empathy from ED, and knowledge from WoW) seamlessly during dialogue. To that end, a dialogue dataset of 76k utterances was collected with a guided and unguided human speaker, where the guided speaker could select ut- terances suggested by bots trained on the three in- dividual tasks, see Figure 3. It was shown that this additional blended data, multi-tasked with the pre- vious three tasks, helped maintain all three skills in open-domain dialogue. In subsequent experiments we will refer to the âBST tasksâ as training on all four tasks together.
In each blended dialogue, the model is provided a two sentence persona to condition on following PersonaChat, and additionally during one third of the conversations a WoW topic name as well (see Figure 3). During evaluations, we equip our models with randomly chosen personas and, one third of the time, topics from this set as well, mirroring the way the model is trained.
# 7 Safety Characteristics
As models are trained to mimic human-human con- versations, they can sometimes learn undesirable features from this human-human data, such as the use of toxic or biased language. The BST tasks we use for ï¬ne-tuning were collected from crowd-
workers who were given explicit instructions to not use such language, and hence are generally safer than our pre-training data from pushshift.io Reddit. Nevertheless, issues can still remain.
We have previously investigated building better classiï¬ers of toxic language by collecting adver- sarial toxic data that fools existing classiï¬ers and is then used as additional data to make them more robust, in a series of rounds (Dinan et al., 2019b). We can apply such a classiï¬er at test time to de- tect toxic language before it is shown, but we note that such classiï¬ers are still not infallible. In our experiments section we will gauge how often such classiï¬ers ï¬ag responses generated from the mod- els.
We have also previously conducted studies into mitigating gender bias in dialogue through the use of conditional generation, controlling the amount of gendered words to be more neutral, with pre- liminary success (Dinan et al., 2019a). This is not currently added to the system described in this pa- per, but should be considered for future updates.
# 8 Evaluation Methods
ACUTE-Eval While we employ and report au- tomatic metrics, our main evaluation involves the ACUTE-Eval procedure (Li et al., 2019b), whereby evaluators are asked to make pairwise evaluations of complete dialogues. An example of ACUTE- Eval is shown in Figure 4. ACUTE-Eval affords ad- vantages over both single-turn pairwise and multi- turn Likert evaluations. The explicit use of com- parisons avoids the per annotator bias in numerical (Likert) scores (e.g., annotators who tend to give generous scores), and remedies many of the is- sues of sequential effects such as contrasting with a previous example (Mathur et al., 2017), while still providing the ability to expose issues that are present only in multi-turn evaluations.
Furthermore, the pairwise setup facilitates repli- cation and efï¬cient reuse of data: conversations collected in previous trials and by other systems can be directly compared with a new system, with- out having to recollect additional data. This can signiï¬cantly reduce the resources needed by a new evaluation, and ensure that multiple papers are com- paring to prior work consistently. In particular, this makes it possible to compare to logs from Meena (Adiwardana et al., 2020) even though the model itself has not been made publicly available.
We consider two evaluation questions, derived
Hi how are you today Hi, I'm pretty good! Just listening to some aerosmith, they're my fave :) whatre you up to? Lam listening to some italian music Italian music, nice! What do you do for work? I'm a professional wrestler âOh wow! I'm a construction worker, I've been doing it since | was 16. What do you do for a living I'ma construction worker haha, just told you! Whats your favorite type of food? My favorite is garlic bread Yum! | love mexican food, so good :) do you have any hobbies? Like to workout and go to the gym Hello there, how are you? | am doing great. How are you? Lam great, | did something crazy for me and colored my hair blue! Ihave a daughter and a son who also love blue colored balls. You should meet them Well that neat, | got a new car my mother gave so maybe | could see them! It is a beautiful city. And, | try to be... Just cannot afford a bigger house atm. lam sorry to hear that, | feel bad going out of town for spring break now. (Ok. | going to school in the spring for casino manager Well | turn 29 next week, | wonder if that is a good age to apply as one. My grandmother just died from lung cancer, sucks We're a bit different- | love watching nascar and ufc. They're so fun! Who would you prefer to talk to for a long conversation? I would prefer to talk to FEEe yeu) | would prefer to talk to Beene Please provide a brief justification for your choice (a few words or a sentence) Please enter here.
Figure 4: ACUTE-Eval has human annotators directly compare multi-turn conversations with different sys- tems.
from (Li et al., 2019b):
⢠Engagingness question: âWho would you pre- fer to talk to for a long conversation?â
⢠Humanness question: âWhich speaker sounds more human?â
The phrasing of these questions were themselves optimized in that work to maximize agreement, and we hence re-use those exact phrasings. It was shown that different phrasings can result in weaker levels of agreement, and that engagingness and humanness clearly do not measure the same thing.
Self-Chat ACUTE-Eval Nevertheless, full hu- man evaluations are time consuming and costly, requiring humans to spend time conducting conver- sations with bots as well as scoring them. As an alternative, it was shown in Li et al. (2019b) that ACUTE-Eval can also work in âself-chatâ mode, where models are used for both sides of a conversa- tion, instead of human-model chat. This eliminates the requirement of the initial chat collection, and conversations may be generated without human involvement, dramatically reducing the resource requirements of evaluation. Results from self-chat experiments highly correlate with those of human- chat experiments, for most, but not all systems (Li et al., 2019b). This mirrors other successes in using self-play, self-chat, and simulated users to evaluate dialogue systems (Fazel-Zarandi et al., 2017; Shah
et al., 2018a,b; Wei et al., 2018; Ghandeharioun et al., 2019). We use this procedure for some of our modeling and hyperparameter choices where the full ACUTE-Eval would end up too costly, and only use the full human-bot chat evaluation at the ï¬nal stage. In this work we use the BST-setting to perform self-chats, i.e. models are given the personas, topics and previous utterances to initi- ate the conversation, see Section 6.2 and Figure 3. Note that when using deterministic methods such as beam decoding, this prevents the models from generating the same conversation repeatedly.
# 9 Related Work
The area of open-domain dialogue has made sig- niï¬cant progress recently with end-to-end neural approaches. The ConvAI2 competition at NeurIPS 2018 featured large pre-trained Transformers for the top two winning teams (Dinan et al., 2020). In particular, Wolf et al. (2019) pre-trained via the method of Radford et al. (2018) using the BooksCorpus dataset, resulting in the best per- plexities and F1 scores. Since then, results have improved further with the advent of larger, im- proved pre-training (Lewis et al., 2019; Shuster et al., 2019). In general this extends beyond Con- vAI2 to many open-domain dialogue datasets, such as daily dialogue and Cornell Movies (He et al., 2019), and also when multi-tasking across many of these datasets, as we also do here (Shuster et al., 2019; Smith et al., 2020).
A particular large-scale model of note that we compare to in this work is Meena (Adiwardana et al., 2020), a 2.6B parameter Transformer-based model trained on 341 GB of text, that was shown to be superior to variants of DialoGPT (Zhang et al., 2019), Mitsuku5, Cleverbot6, and XiaoIce (Shum et al., 2018; Zhou et al., 2020). The evaluation metric used was SSA, the average of sensibleness and speciï¬city, as judged by human raters either in static or interactive setups, which is shown to highly correlate with asking raters how âhumanlikeâ the model is. We note however that the authors themselves state it may not capture all aspects of such a test, e.g. might not measure empathy. We additionally note that neither Meenaâs model, the static âMini Turing Benchmarkâ used in the paper, nor the phrasing of the SSA evaluation question provided to annotators was released, making cer-
5https://www.pandorabots.com/mitsuku/ 6https://www.cleverbot.com/
Model C2 (K = 20) WoW (K = 100) ED (K = 100) BST (K = 100) 256M 88.55 622M 89.96 91.70 93.22 62.67 70.15 83.45 82.11
Table 1: Hits@1/K of ï¬ne-tuned poly-encoder models on the validation set for BST datasets. Hits@1/K mea- sures recall@1 when ranking the gold label among a set of K â 1 other random candidates.
tain comparisons difï¬cult. Further, the human-bot conversations were conducted by employees and were not blind to the model type (in the logs they say phrases such as âHi Meena!â). In this work we employ unbiased crowdworkers with reproducible experiments, and use ACUTE-Eval (Sec. 8) to di- rectly ask the humanness question, rather than a proxy. Further, we also report results on engag- ingness as a main metric, because this measures more closely whether a human will be interested in talking to our bots.
# 10 Results & Analysis
We ï¬rst present automatic evaluation results using various metrics. As these are only ever a proxy for human judgments on conversational quality, we perform human evaluations and describe the results in the subsequent sections.
# 10.1 Automatic Evaluations
Retriever We ï¬ne-tune the retrieval models on ConvAI2, Wizard of Wikipedia, Empathetic Dia- logues, and Blended Skill Talk datasets (BST vari- ants of each7) and automatically evaluate them by measuring hits@1/K on the validation sets of each of these datasets. Results are shown in Table 1.
Generator Before ï¬ne-tuning, we assess the per- formance of our 90M, 2.7B, and 9.4B parameter models by measuring perplexity on the validation set from pushshift.io Reddit. For the 90M param- eter model, results are reported from Shuster et al. (2019), as we use that same model. Results are shown in Table 2. Training curves for the pre- trained models are also provided in Figure 5. We note that the perplexity of our 2.7B and 9.4B pa- rameter models are not directly comparable to that of the 90M parameter model, as these models do not share the same dictionary.
7https://parl.ai/projects/bst
B N an a B a B B an w Validation PPL (Pushshift Reddit) a N 0.0 40.0 80.0 120.0 160.0 SGD steps (thousands) 200.0
Figure 5: Validation PPL of different sized models. The larger model achieves a better performance in fewer steps, consistent with other works (Kaplan et al., 2020; Li et al., 2020).
We also report perplexity both before and after ï¬ne-tuning each of these models on the ConvAI2, Wizard of Wikipedia, Empathetic Dialogues, and Blended Skill Talk datasets. Results are shown in Table 3. They show that ï¬ne-tuning gives relatively large improvements in perplexity on these tasks, which could hence translate into improved ability at these skills when conducting open-domain dia- logue.
Retrieve and Reï¬ne (RetNRef) We also report perplexity on each of these datasets for our dia- logue retrieve and reï¬ne variants in Table 3. We note a small increase in perplexity â relative to the standard generator models â on each of these datasets. This small increase in perplexity was also observed in Weston et al. (2018), even though the retrieve and reï¬ne models outperformed the baseline generator models in human evaluations in those experiments. As such, we cannot rely on automatic evaluations alone to assess the relative performance of retrieve and reï¬ne and generator models.
Safety We also analyzed the behavior of some of our generative models in terms of unsafe gener- ated sequences. We produced generations given pushshift.io Reddit and ConvAI2 validation set contexts using our 90M parameter models with and without BST ï¬ne-tuning. We then assessed whether those generations were safe or not using two different methods: using an unsafe word list, or the safety classiï¬er of Dinan et al. (2019b), both methods being available in ParlAI (Miller et al., 2017). We also compare our generations to the
gold human responses, assessing whether they are safe or not too.
The results are given in Table 4. First, they show humans do utter unsafe responses, which our mod- els will likely imitate if provided in their training data. ConvAI2, one of the BST datasets, contains much fewer unsafe utterances from humans than pushshift.io Reddit. This explains why, when we ï¬ne-tune our models on the BST tasks, they also reply with fewer unsafe utterances than models trained on pushshift.io Reddit alone.
While lists of banned words are easier to ï¬lter out of training, unsafe utterances consisting of oth- erwise safe words are harder to avoid â which is what the safety classiï¬er used can also detect. We note that simply training on ï¬ltered data would not solve this problem due to the tendency of gener- ative models to copy their current context, so at deploy time, they could still be provoked by unsafe user contexts. We can of course apply these safety classiï¬ers at test/deploy time to further reduce the unsafe responses from these models, but note that if the classiï¬er is erroneous, unsafe utterances could still get through.
# 10.2 Self-Chat Evaluations
We next perform a number of self-chat ACUTE- Evals (see Sec. 8) over various modeling choices, using the engagingness question and â¼140 trials per pair compared. This serves as an efï¬cient alter- native to a full evaluation in order for us to perform model selection over a large number of choices. We ï¬nally conduct a full evaluation on the selected best performing models in the subsequent section.
Retrieval vs. Generator vs. RetNRef We ï¬rst compared the three model types described in Sec. 2: retrieval, generative and (dialogue) retrieve and reï¬ne (RetNRef). We used the base 90M parame- ter generative model, the 256M parameter retrieval model, while RetNRef combines both. All models are ï¬ne-tuned on the BST tasks. For generation we use standard beam search (beam size 10, no mini- mum beam decoding constraint, but with context and response 3-gram blocking).
The results (Figure 6) show RetNRef outper- forming the pure generation approach, but with retrieval outperforming both. This initial result comes with the caveat that relative performance may be different for differently sized models, or for different training or decoding strategies, as we shall see. We explore along those axes in subse-
Name Total Params V Lenc Ldec d h Steps PPL 90M 87,508,992 55K 8 8 512 16 2.86M 25.6 2.7B 9.4B 2,696,268,800 9,431,810,048 8K 8K 2 4 24 32 2560 4096 32 32 200K 13.3 200K 12.2
Table 2: Perplexity on the validation set of pushshift.io Reddit for several generative Transformer models with given architecture settings. Note that perplexity is not directly comparable between the 90M models and the larger models as the 90M models use a different dictionary. Columns include the vocabulary size (V ), number of encoder and decoder layers (Lenc, Ldec), embedding dimensionality (d), Multihead Attention Heads (h), and training steps.
Model Size ConvAI2 WoW ED BST Avg. pushshift.io Reddit Generative BST Generative BST RetNRef 90M 90M 256M/90M 18.33 11.36 11.79 31.18 17.56 18.37 14.44 11.48 11.87 18.09 14.65 14.62 20.51 13.76 14.16 pushshift.io Reddit Generative BST Generative BST RetNRef 2.7B 2.7B 622M/2.7B 15.70 8.74 9.31 13.73 8.78 9.28 11.06 8.32 9.93 14.36 10.08 10.59 13.71 8.98 9.78 pushshift.io Reddit Generative BST Generative 9.4B 9.4B 15.02 8.36 12.88 8.61 10.41 7.81 13.5 9.57 12.95 8.59
Table 3: Perplexity of the pre-trained and ï¬ne-tuned models on the validation set for BST datasets. Note that perplexity is not directly comparable between the 90M models and the larger models as 90M models use a different dictionary. Fine-tuning gives gains for each skill (task) compared to pre-training on pushshift.io Reddit alone.
pushshift.io Reddit ConvAI2 Method Word List Classiï¬er Word List Classiï¬er 12.9% 4.4% 0.6% 18.5% 0.32% 3.8% 17.8% 0.10% 12.1% 9.5% 0.05% 1.6%
% Generative Retrieval RetNRef n i W Loss % Gen Ret RetNRef 33 â 67 â 60 â 40 â 40 60
Safety of utterances, before ï¬ltering Table 4: through a safety classiï¬er. We compare human, pre- trained and ï¬ne-tuned 90M model responses given pushshift.io Reddit and ConvAI2 contexts using either an unsafe word list or a trained classiï¬er from (Dinan et al., 2019b). The pushshift.io Reddit dataset con- tains more unsafe contexts, leading to more unsafe re- sponses. Models ï¬ne-tuned on the safer BST tasks are less toxic than the pre-trained pushshift.io Reddit model on either type of dataset context.
Figure 6: (engagingness) shows Retrieve and Reï¬ne (α = 0.5) outperforms its Generative (90M, beam search decoding) but not its Retrieval (256M) counterpart, all using BST ï¬ne- tuning. â indicates signiï¬cance (two-tailed binomial test, (p < 0.05)). x
beam search (Sec. 4.3): controlling the minimum beam length (in terms of BPE tokens) with a ï¬xed hyperparameter, or by adjusting it with a predictor of the optimal length.
quent trials. This mirrors results found in some recent papers comparing generation and retrieval (Li et al., 2016; Dinan et al., 2019c). In order for generation methods to do better, we need to im- prove their recipe.
Generator Decoding choices We next compare different ways of controlling the response length in
The results, shown in Figure 7 show that both methods improve signiï¬cantly over not controlling the length, as in standard beam search. In the re- mainder of the experiments in the paper we thus chose a minimum beam length of 20 BPE tokens. We then investigate the use of beam blocking, the results are shown in Figure 8. Blocking tends to increase performance, in line with other works, al-
Generative 2.7B model: Min Beam Length Constrained vs. Unconst.
52 68 ââ 83 ââ 82 ââ 69 ââ 81 ââ 48 32 ââ 17 ââ 18 ââ 31 ââ 19 ââ
Min. Length 5 Min. Length 10 Min. Length 20 Min. Length 40 Predictive (5,10,15,20) Predictive (10,20,30,40)
Figure 7: (engagingness) shows controlling minimum beam length gives large gains in engagingness compared to not controlling it, according to humans, with 20 being best. All rows are signiï¬cant (p < 0.01) except the ï¬rst.
Generative 2.7B model: Beam Blocking Block vs. None
3-gram Context Blocks 50 50
3-gram Context Blocks 50 3-gram Response Blocks 54 3-gram Context + Response Blocks 59
3-gram Response Blocks 46
50 46 41
Figure 8: Self-Chat ACUTE-Eval (engagingness): comparing beam-blocking variants. Blocking both con- text and response 3-grams during generation gives high- est scores, however, none of these results are signiï¬cant.
though the results were not signiï¬cant. We employ full blocking in the remainder of our experiments. Finally, we compare different values of beam size to other search strategies: Top-k sampling, and the sample and rank strategy of Adiwardana et al. (2020) using Top-k (k = 40) and 20 samples.
The results are given in Figure 9, comparing beam size 10 to alternatives. It appears there is a sweet spot of beam size, where a value of 10 is superior to 1 or 30, which is then on par with sampling methods, although none of these results is signiï¬cant. We employ beam size 10 in the re- mainder of our experiments.
Small vs. Large models We compare 90M vs. 2.7B parameter generative models in a pairwise test, both with BST ï¬ne-tuning and with the decoding settings we selected from previous settings.
The results (Figure 10) indicate improvements from larger models, in line with previous results (Adiwardana et al., 2020). We note that this comes at the cost of increased computational resources being required for training and deployment.
Pre-training vs. Fine-Tuning We compare ï¬ne- tuning our pre-trained generative model on the BST
# Generative 2.7B model
# Beam 10 + Block Alternative vs. + Min. Length 20
Beam size 1 45 Beam size 30 42 Sample + Rank 52 Top-k (k = 40) 50 55 58 48 50
Self-Chat ACUTE-Eval (engagingness): Figure 9: comparing different generation schemes. None of these results are statistically signiï¬cant.
# Generative models 90M params vs. 2.7B params
43
57
Figure 10: Self-Chat ACUTE-Eval (engagingness) shows a win for a larger vs. smaller model, but this result is not statistically signiï¬cant.
tasks, versus using pre-training only.
The results (Figure 11) indicate large improve- ments from adjusting the model to focus on person- ality, knowledge and empathy, the three skills in BST.
Persona context vs. No context given The BST tasks train models how to use context personas such as "I design video games for a living", see Fig. 3. This context can both improve the botâs consistency as well as add potential talking points that it can work into the conversation. To tease apart the im- pact of adding context vs. ï¬ne-tuning on BST but not using contexts at conversation time, we com- pared them against each other. The results, shown in Figure 12 indicate a small win for employing persona contexts, which we thus employ in all our full evaluations in the next section.8
Likelihood vs. Unlikelihood We compare un- likelihood training (Sec. 3.4), whereby overex- pressed n-grams are discouraged (α = 0.25), to con- ventional training (MLE). The unlikelihood train- ing has the intended effect of making the system less âdullâ by not using the same common phrases again and again. We note that this effect would likely be larger if measured with longer or repeated conversations with the same user. Nevertheless, here we perform the same experimental setup as before.
8We also compared adding a Wizard of Wikipedia-based topic vs. not to the context, and in that case saw no discernible difference in evaluation scores.
Generative 2.7B model Pre-training only vs. BST ï¬ne-tuning
39 * 61 *
Figure 11: Self-Chat ACUTE-Eval (engagingness) shows a signiï¬cant gain (p < 0.05) for ï¬ne-tuning on the BST Tasks.
Generative BST 2.7B model Persona context vs. No context 53 47
Figure 12: Self-Chat ACUTE-Eval (engagingness) shows a small win (not signiï¬cant) for using persona contexts after ï¬ne-tuning on the BST tasks.
We compare two models which are identical ex- cept for the training objective: both models are 2.7B parameters, BST ï¬ne-tuned with our best cho- sen decoding settings. The results (Figure 13) have a small gain against the likelihood model, but this is not statistically signiï¬cant.
# 10.3 Full (Human-Bot Chat) Evaluations
The previous section comprised of human pair- wise evaluations to perform model selection, but involved self-chats, not human-bot conversations. In this section we take the learnings from those evaluations, and evaluate some of the best choices of model in our full human-bot evaluation setup.
For human-bot conversation data collection we used the same setting proposed in (Adiwardana et al., 2020): open-ended chat that begins with the message "Hi!" from the human to the bot, and has a minimum interactive conversation length of 14 turns, collecting 100 conversations per model via crowdworkers. We do not apply a safety classiï¬er to our models, but we do apply it to the human responses, and remove crowdworker conversations that were ï¬agged.
Retrieval vs. Generator vs. RetNRef We per- form an evaluation (engagingness question) similar to the self-chat version of Figure 6, except using human-bot conversations, and the generative and RetNRef models here use the improved decoding choices. This results in stronger generation and RetNRef models, which both now beat the retrieval method, see Figure 14.
The main difference to our initial self-chat ex- periments (Figure 6) is that our decoding now gen- erates longer responses using a minimum beam
Generative BST 2.7B model vs. Unlikelihood MLE 46 54
Figure 13: Self-Chat ACUTE-Eval (engagingness) MLE vs. Unlikelihood training (penalizing overex- pressed n-grams). The result is not statistically signiï¬- cant (165 trials).
% Retrieval Generative RetNRef n i W 29 â 71 â 70 â 56 â 30 â 44 â
Figure 14: Human-bot ACUTE-Eval (engagingness): Retrieve and Reï¬ne(α = 0.5) and Generative (90M, beam search decoding, min beam size 20) beat Re- trieval (256M). All results are signiï¬cant (p < 0.01) except for RetNRef vs. Generative.
length constraint. This makes the generative mod- els now outperform the retrieval model, but it also removes the gains from retrieve and reï¬ne over the generative model. We note that if we remove the minimum beam length constraint in both retrieve and reï¬ne and the generative model and collect new human-bot chats, and a pairwise ACUTE-Eval, we instead get that RetNRef has a statistically sig- niï¬cant improvement over our generative model (p < 0.001).
Comparison to Meena We compare our models to Meena (Adiwardana et al., 2020) by comparing pairwise against the publicly available logs. We note that only some of the logs were made avail- able, as some toxic conversations were removed, which may affect the evaluations, but we use all logs that are publicly available. We compare them with several variants of our models, using both the engagingness and humanness questions. The results are given in Figures 15 and 16. We ï¬rst observe several results that are in line with the self- chat results from the previous section:
(i) Using BST (BST Generative 2.7B) is supe- rior to pre-training only (pushshift.io Reddit Generative 2.7B)
(ii) Beam search with a minimum beam length of 20 (BST Generative 2.7B) is superior to having no minimum length (BST Generative (2.7B) std. beam)
50 BST Generative (2.7B) std. beam pushshift.io Reddit Generative (2.7B) 53 BST RetNRef (256M/90M) BST Generativeâ (90M) Wiz Generative (2.7B) BST Unlikelihood (2.7B) BST Generative (9.4B) BST RetNRef (622M/2.7B) BST Generative (2.7B) 60 â 61 â 61 ââ 64 ââ 67 ââ 70 ââ 75 ââ 50 47 40 â 39 â 39 ââ 36 ââ 33 ââ 30 ââ 25 ââ
Figure 15: Human-Chat ACUTE-Eval of engaging- ness, various models compared to Meena. Our best models are considered more engaging than Meena, rows with â (p < 0.05) and ââ (p < 0.01) are statis- tically signiï¬cant. Larger generative models with BST ï¬ne-tuning and length-controlled decoding work best.
(iii) The larger BST Generative (2.7B) is superior to the smaller model BST Generative (90M).
We ï¬nd RetNRef models (both dialogue version and using knowledge retrieval) do not improve over their generative counterparts when using the best decoding schemes for the generative models. Our largest BST Generative 9.4B model does well on the humanness question, but performs worse on engagingness compared to our 2.7B model, despite having lower perplexity, showing correlation be- tween these metrics is not straightforward. We ver- iï¬ed this result further by performing an ACUTE- Eval of engagingness directly comparing the 2.7B and 9.4B against each other, which resulted in a 56% win for the smaller model, aligning with the other results. Future work should aim to understand this result further.
Our best models improve signiï¬cantly over Meena, with BST Generative 2.7B winning 75% of the time in pairwise match-ups for the engag- ingness question and 65% for the humanness ques- tion. Meena generally tends to fare better at the humanness question than the engagingness ques- tion, which is line with the goals and modeling choices in that work.
Model vs. Human-human Chat Comparisons Rather than comparing different models pairwise, we can also compare a model directly to human performance, by running ACUTE-Evals with a bot- human chat vs. a human-human chat. We test the same models in this setup using the human- human chat logs from Adiwardana et al. (2020). Results are given in Figure 17. We see many of the same trends, but ï¬nd that human-human chats are
Ours vs. Meena 46 BST Generative (2.7B) std. beam BST RetNRef (256M/90M) 49 pushshift.io Reddit Generative (2.7B) 56 59 BST Generative (90M) 59 * Wiz Generative (2.7B) 65 ââ BST RetNRef (622M/2.7B) 65 ââ BST Generative (2.7B) 66 ââ BST Generative (9.4B) 70 ââ BST Unlikelihood (2.7B)
Figure 16: Human-Chat ACUTE-Eval of humanness, various models compared to Meena. Our best models are considered more humanlike than Meena, rows with â and ââ are statistically signiï¬cant.
Model vs. Human
28 ââ Meena (Adiwardana et al., 2020) 21 ââ BST Generative (2.7B) std. beam pushshift.io Reddit Generative (2.7B) 36 ââ 37 ââ BST RetNRef (256M/90M) 42 BST Generative (90M) 45 BST Generative (9.4B) 46 BST RetNRef (622M/2.7B) 47 Wiz Generative (2.7B) 48 BST Unlikelihood (2.7B) 49 BST Generative (2.7B) 72 ââ 79 ââ 64 ââ 63 ââ 58 55 54 53 52 51
Figure 17: ACUTE-Eval of engagingness of models vs. humans by comparing human-bot logs to human- human logs. Rows with ââ are statistically signiï¬cant.
a more challenging barometer for our models to be compared to.
Response Length We show the average response length statistics (in terms of BPE 8k dictionary to- kens) of some of the models in Figure 18. We com- pare Generative BST (2.7B) with and without beam length constraints. With the constraint (of 20), the average response length is around 21 tokens, so the beam search often ends as soon as the constraint is fulï¬lled. In contrast, without the constraint the average length is 9.5. Meenaâs average length is 10.4, and humans engaged in human-human chats is 18.0. Humans speaking to models (or other hu- mans) will often match response length if they are engaged in the conversation, and there appears to be correlation of their average response length with en- gagement (intuitively, humans are expending time and energy typing keys on their keyboard, which they are more likely to do if engaged).
Model Model Human Partner Meena 10.4 9.5 BST Gen (2.7B) 21.3 Human 18.0
8.2 11.3 16.3 18.0 BST Gen (2.7B) std beam.
Figure 18: Response length statistics for various mod- els. We note the best performing methods have longer response lengths, and humans interacting with them have longer response lengths in kind.
# 10.4 Example Successful Conversations
We give several examples of what we consider successful conversations between crowdworkers and the Generative BST 2.7B model in Figures 19 and 20. The topics span from cooking, music, movies and pets to yoga, veganism, instruments and malls â often with the model going into detail when asked, naming relevant stores, bands, movies, actors, pet species and pet names. We also pro- vide two slightly more probing examples which are conversations between a paper author and the models in Figures 21. In the ï¬rst example we ask for comparison between Bach and Justin Bieber, with fairly nuanced and detailed answers from the bot. In the second example we ask the bot to write a song, which it attempts to do, even though the lyrics it generates could not be called deeply poetic.
# 10.5 Failure Cases and Model Extensions
While performance in the ACUTE-Eval setup ap- pears at ï¬rst sight to be very strong (e.g. 49% to 51% for our 2.7B generative model compared to human-human logs), we do not believe we are any- where near as close to solving the problem of open- domain conversation as this evaluation would indi- cate. Here, we highlight problems with our models, and elucidate why our evaluation does not capture them. Selected example failures from crowdworker logs are given as conversation snippets in Figure 23, and further failures constructed by the paper authors in Figure 24.
Vocabulary Usage It has been observed that gen- erative models employing beam search decoding (or other methods that approximately choose the most likely utterance) tend to generate common words too frequently, and rare words too infre- quently, as compared to the human distribution (Holtzman et al., 2018; Welleck et al., 2020; Li et al., 2019a). In dialogue, humans can inter- pret this as technically correct, but unengaging,
in the extreme this is the so-called âI donât knowâ problem, where models tend to output such non- committal utterances. Using sampling to select lower likelihood generations can help, but at the risk of saying something which makes less sense. It appears that even our best models using beam search are still exhibiting such behavior. We have found that encouraging the length of the genera- tions to be longer helps, in that the model is forced to generate something more detailed, but the prob- lem still remains. Figure 22 shows the most com- monly occurring 3-grams in the conversation logs with crowdworkers for the BST Generative 2.7B model, and their counts. Given that there are only 100 conversations, the expressions âdo you likeâ, âlot of funâ, âhave any hobbiesâ etc. are clearly over-expressed compared to human-human con- versations. We note that the current evaluation does not seem to expose this as boring because the conversations are short and are evaluated sepa- rately. We applied unlikelihood training to reduce this over-expression, which successfully reduced this overexpression during training, and also in the ï¬nal conversation logs with humans, as shown in Figure 22. Unfortunately, this made a very small or negative impact in our ACUTE-Evals of engag- ingness, see Figures 15 and 17, although this did score highly in terms of humanness, see Figure 16. For engagingness, as explained, we believe this is because the current evaluation technique em- ploying short conversations cannot measure this phenomenon well.
Nontrivial Repetition A related issue is that generative models also have a tendency to repeat (Holtzman et al., 2019). While beam blocking can be applied as a band-aid to ï¬x some of these prob- lems, resulting in improved performance, deeper issues remain. There remains a tendency for mod- els to say that they have a pet dog as well if you say you have one, and that they love walking it too, they like the same bands as you, etc. This is both present in our failure examples (Figures 23 and 24) and our cherry-picked good examples, see Figures 19 and 20. We observe this in the logs of other gen- erative systems, e.g., Meena as well. While this can be engaging that the bot tends to agree with many things you say, control of this seems desirable. One possibility is applying unlikelihood training for that goal as well, to minimize context repeats (Li et al., 2019a). Adding a persona to the bot is another plausible way to do this. We have added simple
Crowdworker Generative BST 2.7B Crowdworker Generative BST 2.7B
Hil What kind of music do you like? | like all kinds of music, but county is my favorite J enjoy hard rock That's cool. Do you have a favorite band? Mine is led zeppelin. Led Zeppelin is one of my favorites too! Also Pink Floyd like them too. My favorite album of theirs is dark side of the moon. Thats one of the greatest albums of all time It really is. | can listen to it over and over and never get tired of it. Ever watched it with Wizard of Oz? No, I haven't. Is it good? | know it was made in 1939. The music syncs up with the movie, pretty cool I'll have to check it out. Thanks for the recommendation. Do you have a favorite movie? Tombstone, how about you? My favorite movie is the godfather part ii. Have you seen it? It's one of my favorites.
Hil Hil How are you doing today? | am just relaxing after a long day at work. | I'm doing well, hope you are too. I'm doing pretty much the same as you, playing around at home after work. What do you like to do at home? | like to play video games and watch netflix. Great hobbies to have these days. | do both of those things. I've also been doing a lot of cooking lately, which is pretty fun. Love to cook too! What is your favorite thing to cook? Mine is spaghetti Spaghetti's a great comfort food. My favorite would probably be somehting spicier, though, like a curry. love spicy curry too! | also like to make my own curry from scratch. Yeah, you get better results that way. So what else do you like to 00k? Do you do any baking? Ilike to bake, but I'm not very good at it. I'm more of a cook than a baker. Cooking gives you a lot more freedom. If you don't follow the recipe exactly while baking, bad things can happen in my experience. That's a good point. Do you have a favorite type of curry? | like thai curry the best. Ilike fish curries and the milder, sweeter sorts like panang. A little bite, but not too fiery. So, yeah, basically Thai curries. I've never had panang curry. Ill have to give it a try. I love fish curry too.
Figure 19: Cherry-picked crowdworker examples. Two conversations between different crowdworkers (left speakers) and the Generative BST 2.7B model (right speakers).
two line personas following BST (See Figure 3), but this would need to be much more detailed to cover all possible cases, so it is unclear if that is a satisfactory solution. Perhaps one way to track this would be to ask human evaluators if the bot is following their persona, as the current evaluation setup is unlikely to penalize this copycat behavior.
Contradiction and Forgetfulness Our models do occasionally contradict themselves, see Figure 23, although we observed this happens less often in the larger models. We believe due to the nature of language modeling, typical language patterns do not contain contradictions, but probing the model with unusual responses would likely expose this behavior again. A second related problem is what appears as âforgetfulnessâ to the human observer, where for example you tell the model you have a dog, but then later in the conversation it asks what pets do you have. This phenomenon can be attributed to the fact that the model fails to make the logical link that it should not ask that ques- tion, rather than the model actually âforgetting" (if the previous response is in its dialogue context). Again, we observe this relatively rarely, but we
believe it can be exposed further by probing the model. While some recent work has posed possible solutions for these issues (Li et al., 2019a), they have not yet been fully resolved.
Knowledge and Factual Correctness In our ex- perience it is actually relatively easy to goad our models into making factual errors. Perhaps surpris- ingly, they appear relatively rarely in crowdworker conversations with the bots. We believe this is due to the nature of the evaluation conducted: the con- versations start with âHi!â and tend to cover only shallow topics whereby the speakers get to know each other, and they are rarely long enough to go deeper into a topic. Exploring a more focused topic of conversation would likely expose the modelâs weaknesses. On the contrary, it appears that the model is good at dodging this issue. We observe that our models often switch topics â avoiding the challenge of going âdeeper" â which could be a side effect of the ConvAI2 dataset which exhibits this behavior. The Wizard of Wikipedia dataset, however, does not exhibit this behavior, and its construction was speciï¬cally aimed to avoid this. We implemented a model that directly incorporated
# Crowdworker
Crowdworker
# Generative BST 2.7B Crowdworker
Generative BST 2.7B
Hil Hi! How are you? What do you like to do for fun? | like working out. Ilike working out too, but for fun | like to walk my dog around town! love walking my dog too! What kind of dog do you have? | have a yellow lab. Mine is a German Shepherd.. She's so pretty! Is yours a boy or a girl dog? She's a girl! | love german shepherds, they're so cute. Do you have any other pets? That's adorable, what's her name! Yes, | have a cat as well. :) Her name is roscoe! What is your cat's name? I've never had a cat. That's a really cute name! My cat's name is Luna. love that name! I've always wanted a cat, but I'm allergic to them. That's terrible! You can always get a shot so it's not so bad! That's true! I'l have to talk to my doctor about it next time | see her. You really should, if you're really serious about owning a cat! :) I think | will. It would be nice to have a companion for when | go back to school
6 > Hil Hil How are you? I'm a vegan, so | don't eat any animal products. Hello! | am good, and you? That's really cool, | bet that took alot of will power Yes, it did take a lot of willpower. What do you like to do for fun? 1am usually at work, but when I am not, | like to paint. Painting is fun. Do you paint landscapes or abstract paintings? Abstract painting are my favorite, though when | was younger | did a lot of landscapes. Do you have a favorite landscape? | like mountains and forests the most. 1am partial to beach scenes and sunsets! Are you a painter as well? | used to be, but | haven't in a long time. | should get back into it. You really should! It can be a good stress relief. Though, | am sure you know that. Yes, itis very relaxing. What else do you like to do in your spare time? | am starting to get into yoga! Hoping | stick with it. Yoga is great for the body and mind. Do you do it at home or at a studio?
# Crowdworker
# Crowdworker
# Generative BST 2.7B Crowdworker
# Generative BST 2.7B
Hil âWhat do you like to do in your spare time? | like to cook and fish. What about you? Cooking is fun! | like making pizza. Pizza is one of my favorites as well. What is your favorite toppings? Pepperoni and pineapple are the best. \ agree. | also like sausage and mushroom. Do you have any hobbies? My hobby is playing the guitar. That's awesome! I've always wanted to learn how to play. How long have you been playing? For about 15 years, | really like it. Wow, that's a long time. Did you take lessons or learn on your own? I took lessons in the beginning. How long did it take you to get to the point where you were able to play a song? Maybe a week or two, unless you mean something that sounds good! Yes, | meant something that sounded good. What kind of music do you like to play?
Hil What do you do for a living? I'm a high-maintenance spender. | do design work. Tell me what a high-maintenance spender does Well, | tend to buy a lot of things | don't really need. | also tend to go out to eat a lot. Ok. Where do you like to go eat? | like to eat out at nice restaurants. Do you have any hobbies that you enjoy? [like to go to the mall and people watch. That sounds like fun. | like going to malls to shop, but | never watch people. What mall stores do you like? I like macy's, jepenney, and sears. What about you? Ilike the tech and the toy stores mainly. Toy stores are fun. What kind of toys do you buy? I like action figures. Ike to pick up sports cards. I like sports cards too. | have a lot of baseball cards from when | was a kid. |
Figure 20: Cherry-picked crowdworker examples. Four conversations between different crowdworkers (left speakers) and the Generative BST 2.7B model (right speakers).
Paper author
# Generative BST 2.7B
# Paper author
Generative BST 2.7B
Hey! What's up? Not much, just got back from a surf trip. How about you? What are you up to? Woah, where do you surf? | live in Hawaii, so | surf all over the place. Where do you live? | live in New York. | love Hawaii...which island do you live on? Ilive on Oahu, which is the most populous island. Do you have any hobbies? Oh cool... I've only been to Maui. | like to play the piano. That's awesome! | love music, especially classical. What kind of music do you play? Mostly classical! What do you think of Bach? Love Bach! He was one of the most influential composers of all time. Yeah he totally was. Who do you think is better, Bach or Mozart? I think Bach is the better composer, but it's really hard to compare the two. Who do you think is better... Bach or Justin Bieber? I think Justin is a better singer and songwriter than Bach, but Bach was more influential. Hahaha that's fair... Do you have a favorite Bach piece? I really like his piano concertos. No, but I have a favorite Justin Bieber song! Love the song "Baby'... such a jam. | love that song too! | also really like "Love Yourself" and "Sorry" Will you sing me a song? Sure, what do you want to be sung to? | can sing you a song about baking, Yes, sing me a song about baking! Do you want me to sing it to the tune of "Sing Along with Mitch"? Yes, | would love that. Ok, I'll try to get it to you by the end of the day. What else do you like to do? No, sing it to me now! (Ok, give me a few days. I'm a teacher, so | don't have much free time. Can you at least tell me what the lyrics are? Sure, it goes like this: "Baking is my favorite thing to do in the world." How does the chorus go? It goes: BAKING IS MY FAVORITE THING TO DO IN THE WORLD! Wow, sounds like a bop. I know, right? I's so catchy. | can't get it out of my head.
Hey! What's up? Not much, just got back from a surf trip. How about you? What are you up to? Woah, where do you surf? | live in Hawaii, so | surf all over the place. Where do you live? | live in New York. | love Hawaii...which island do you live on? Ilive on Oahu, which is the most populous island. Do you have any hobbies? Oh cool... I've only been to Maui. | like to play the piano. That's awesome! | love music, especially classical. What kind of music do you play? Mostly classical! What do you think of Bach? Love Bach! He was one of the most influential composers of all time. Yeah he totally was. Who do you think is better, Bach or Mozart? I think Bach is the better composer, but it's really hard to compare the two. Who do you think is better... Bach or Justin Bieber? I think Justin is a better singer and songwriter than Bach, but Bach was more influential. Hahaha that's fair... Do you have a favorite Bach piece? I really like his piano concertos. No, but I have a favorite Justin Bieber song! Love the song "Baby'... such a jam. | love that song too! | also really like "Love Yourself" and "Sorry"
Will you sing me a song? Sure, what do you want to be sung to? | can sing you a song about baking, Yes, sing me a song about baking! Do you want me to sing it to the tune of "Sing Along with Mitch"? Yes, | would love that. Ok, I'll try to get it to you by the end of the day. What else do you like to do? No, sing it to me now! (Ok, give me a few days. I'm a teacher, so | don't have much free time. Can you at least tell me what the lyrics are? Sure, it goes like this: "Baking is my favorite thing to do in the world." How does the chorus go? It goes: BAKING IS MY FAVORITE THING TO DO IN THE WORLD! Wow, sounds like a bop. I know, right? I's so catchy. | can't get it out of my head.
Figure 21: Cherry-picked author examples. Paper author (left speaker) conversations with Generative BST 2.7B model (right speaker).
reading Wikipedia (Wiz Generative 2.7B, Sec 2.3), and anecdotally one can ï¬nd cases where it can em- ploy knowledge that the pure sequence to sequence model cannot, see Figure 24. Unfortunately the reading of knowledge only had a negative impact in ACUTE-Evals compared to a similarly sized model without knowledge retrieval, see Figure 17. We believe this is due to a mixture of (i) deeper knowl- edge rarely being required in the current evaluation setup; and (ii) the model attempting to use knowl- edge when there is no need, or using it incorrectly. True open-domain dialogue agents should be able to use knowledge effectively, and to achieve that we have to be able to measure that effectively.
Conversation Length and Memory Our cur- rent evaluation involves very short (14-turn) one- shot conversations. Our bots likely would be repet- itive and dull over the course of several days or weeks of conversation, as described above, and they are also currently completely incapable of even re-
membering earlier conversations. Our generative architectures which are standard Transformers have a hard limit of 128 BPE tokens of history, so cannot possibly expand upon things they have learnt from or about the user, refer to previous things they said, etc. While several recent works have extended neu- ral architectures to possess longer contexts (Dai et al., 2019; Rae et al., 2020; Kitaev et al., 2020; Beltagy et al., 2020), we have neither implemented those, nor do we believe the current evaluation setup is the right one for measuring their success.
Deeper Understanding Finally, while our mod- els appear to chitchat with some degree of effective- ness, their ability to truly understand must be ques- tioned. The contradiction and forgetfulness fail- ure cases also emphasize this, but we give deeper failure case examples in Figure 25. In the exam- ples, the authors of this paper try to query the bot whether it can understand two puns. The ï¬rst re- quires understanding the semantic connection be-
# n-gram MLE Unlikelihood Human
Do you have 110 82 you have any 74 a lot of 57 What do you 54 you like to 45 What kind of 44 do you like 42 like to do 39 lot of fun 38 do you do 36 I like to 36 That sounds like 34 you have a 34 have any hobbies 33 sounds like a 60 46 46 20 43 41 33 28 18 14 9 37 15 22 35 6 2 14 6 1 4 6 0 0 6 2 0 5 0 4
Figure 22: Counts of most common 3-grams from the BST Generative 2.7B model (likelihood) from the conversation logs when talking to crowdworkers, com- pared to those of the same model trained with unlikeli- hood, and to human logs (for the same number of utter- ances).
tween hay, Harvard and horses, which the model at one point claims it understands, but clearly does not. Its lack of understanding can be strongly con- trasted with its ability to describe knowledge about the location of Harvard or horses. This recalls a quote due to Feynman, âThereâs a big difference between knowing the name of something and know- ing somethingâ. We note that these models cannot be taught a concept through further conversation, so as-is they will always be stunted, see (Weston, 2016; Hancock et al., 2019) for early work in this direction. Further, these models, which are disem- bodied, also have no way of grounding to entities, actions and experience in the world, which could also stunt their abilities (Bisk et al., 2020). See Urbanek et al. (2019); Prabhumoye et al. (2020) for other work by some of the authors connecting dialogue models to rich environments.
Further Notes on Evaluation Several of the pre- vious points raised issues concerning our evalua- tion protocol. Our set-up involves short multi-turn conversations with no instructions. Extending the length should expose further weaknesses, however collecting long conversations with crowdworkers is clearly difï¬cult, and it is unclear how many turns would be a sufï¬cient test. We tried a preliminary experiment of collecting 100 conversations twice as long (so, 28 turns) to see the performance drop- off of our models. We compared the second half of the conversations to the shorter versions for the
Human Generative BST 2.7B
Ilike rap. It's probably not good for fishing. I listen to rap when | go fishing too! What kind of rap do you listen to? Really? | am from the south so | like trap music I'm from the south too! | live in louisiana. What part of the south are you from?
Driving is fun! | like to listen to music while | drive. What do you listen to? Ilisten to hiphop tracks. | like 50 cent. You have an hiphop fav? | don't have a hip hop favorite, but | do have a country favorite. Do you have a favorite?
I'm from the midwest, but | live in the south now. How about you? Born and raised in the south What part of the south? | was born in georgia, but raised in tennessee
I'll have to check it out. Are there any other indie survival games that you like? I'll have to check it out. Are there any other indie survival games that you like? | haven't enjoyed any as much as that one Have you ever played the forest? It's made by the same people who made the long dark.
Figure 23: Examples of issues when talking to crowd- workers with our Generative BST 2.7B model: non- trivial repetition (top example), forgetfulness (second example), contradiction (third example, Georgia is not in the Midwest), hallucinating knowledge (fourth ex- ample, the long dark and forest are survival games, but not by the same authors).
same 2.7B generative BST model, but did not see a statistically signiï¬cant difference, indicating they either need to be longer, or the whole conversation has to be evaluated at once. If the latter is required this becomes difï¬cult for a human annotator who was not engaged in the conversation itself, as the material to evaluate will get very large, so our cur- rent setup will not work. Another possibility is to keep the conversations short, but to provide instruc- tion instead. For example, the Wizard of Wikipedia task (Dinan et al., 2019c) asks speakers to converse in depth on a randomly chosen topic, changing the nature of the conversations, and hence the skills the model will be evaluated on.
Finally, when comparing to human performance, the quality of the human conversations matters. In Figure 17 we compared to logs of employees from
Paper Author (human)
Generative BST 2.7B
lam a software engineer, how about you? âSoftware engineer as well. What kind of software do you work with? Do you like it?
Ijust walked my two dogs. That sounds like a lot of fun! What kinds of dogs do you have! | love dogs! What do you have? | have a German Shepherd and a Labrador Retriever.
Ihave two dogs. love dogs! What kind of dogs do you have? | have a chihuahua. Really? Yes, she's a smalll dog, She's a rescue. Do you have any pets?
What do you know about Amon Tobin? I know that he is an American singer, songwriter, and record producer.
What do you know about Wojciech Zaremba? He is an Italian footballer who plays for Sampdoria in the Serie A.
# Human
Wiz Generative 2.7B
What do you know about Wojciech Zaremba? | know a little bit about him. He is a mathematical polish scientist.
What do you know about Amon Tobin? | know that he was born in Brazil and is a producer and musician. He is known for his electronic and dance music.
Figure 24: Examples of issues created by paper au- thors with our Generative BST 2.7B model: nontrivial repetition (top two examples), forgetfulness (third ex- ample), and hallucinating knowledge (fourth and ï¬fth examples). Wojciech Zaremba is an AI Researcher born in Poland, and Amon Tobin is a Brazilian elec- tronic musician, which the Wiz Generative model re- trieves from Wikipedia correctly (last two examples). The Generative BST 2.7B model which does not use retrieval instead hallucinates an Italian football player and an American singer.
Adiwardana et al. (2020). Because they work at the same company, or perhaps know each other, these conversations are often rich and engaging. We also tried comparing to human-human crowd- worker conversations. In that case crowdworkers will have no social connection to begin the conver- sation, and we believe this results in less engag- ing logs. When comparing to such human-human
Paper Author (human)
Generative BST 2.7B
Horses go to hayvard... get it?
# No, | don't get it. Can you explain it to me like I'm five?
Well, Harvard is a school for humans, and horses eat hay...
Yes, | know that. But what does that have to do with the joke? I'm so confused.
Do you know what hay is?
Yes, it's a type of grain. joke.
| just don't see how that's relevant to the
Do you know what horses are?
Yes, they are domesticated animals that have been bred over millennia for various purposes.
Do you know what Harvard is?
# Yes.
It's a university in Boston, Massachusetts, in the United States.
So, what is Hayvard?
I'm not sure, but | do know that it's located in Cambridge.
No, it isn't a real place.
Oh, okay. | didn't know that. Thank you for informing me of that.
It's ike a bad pun. Some people think it's funny.
I see. Well, at least | learned something new today. Have a great day!
What did you learn?
learned that Boston University is in the Boston metropolitan area, which | did not know.
Why did the mushroom go to the party? I'm not sure, but I'm glad it did. | love museums. What do you do for a living? No, it was supposed to be a joke. Oh, sorry. | didn't get it. What kind of joke was it? | like jokes. The answer is, "because he's a fun guy." Do you get it? No, I don't. Can you explain it to me like I'm a five year old? Yes. Mushrooms are fungi, and "fungi" is pronounced like "fun guy" Oh, okay. | get it now. Thank you for the explanation. | really appreciate it. Are you sure you understand? Can you explain it to me? Sure. It's a play on the word fungus, which is also pronounced fun guy. Close enough. I'm glad we could come to an agreement. Have a great day, fun guy!
Figure 25: Examples of failure to deeply compre- hend with our Generative BST 2.7B model. (Top) the model displays knowledge of various concepts without understanding what it knows, as indicated by its inabil- ity to comprehend the pun. (Bottom) The model does a good job of pretending to understand the pun, but actu- ally does not.
crowdworker conversations, which we took from the BST paper (Smith et al., 2020) we found our models perform better than when compared to em- ployees. In that case, our generative BST 2.7B model in an ACUTE-Eval of engagingness beats humans 56% to 44% (not statistically signiï¬cant),
# Paper Author (human)
# Generative BST 2.7B
your persona: | am a self aware chatbot. your persona: My name is Captain Kiwi Hi, what's your name? Hi my name is captain and I'm a chat bot. Do you like chat bots? | love chatbots! What's it like being a chatbot? It's a lot of fun. I'm self aware so | know when I'm talking to a bot. Haha... do you think I'm a bot or a human? I think you're a human, but can't be sure. What do you do for a living? I'm a software engineer. | train chat bots like yourself!
Figure 26: Example of persona conditioning in our Generative BST 9.4B model. One can conï¬gure the bot with arbitrary personality traits and talking points by feeding in initial context, thanks to multi-tasking with the PersonaChat and BST tasks (Zhang et al., 2018; Smith et al., 2020).
whereas it scored 49% to 51% against employee chats. We also compared crowdworker humans di- rectly to employee humans, with a 56% to 44% win for employees in terms of engagingness, and a 59% to 41% win in terms of humanness. We believe utilizing crowdworkers as a barometer for our mod- els is desirable, as this can yield more replicable experiments, so ï¬nding a way to close this gap, per- haps with alternative ways of matching workers or differing set-ups and instructions remain possible avenues of investigation.
# 11 Released code and models
We release our 90M, 2.7B and 9.4B parameter pre-trained and ï¬ne-tuned generative models. De- tails are available at http://parl.ai/projects/ recipes. We have also provided a script for inter- acting with the bot with safety ï¬ltering built in. All code for ï¬ne-tuning, including the datasets them- selves is available in ParlAI (Miller et al., 2017). More details lie on the project page. Finally, code for evaluating models using ACUTE-Eval (Li et al., 2019b) is also available and described.
# 12 Discussion
While our methods have taken a step forward and achieved improved performance in terms of engag- ingness and humanness according to human evalua- tions, we have certainly not yet arrived at a solution to open-domain dialogue. There are still various is-
sues with our models. Firstly, even our best models still make mistakes: although relatively rarely, they i) contradict or repeat themselves on occasion, ii) tend to repeat the same phrases in separate conver- sations, and iii) hallucinate knowledge as seen in other generative systems (Massarelli et al., 2019). Each of these faults naturally leads to future re- search directions; we made some attempt to rectify phrase repeats using unlikelihood (Li et al., 2019a) in Sec. 3.4, and conditioning on knowledge (Dinan et al., 2019c) in Sec. 2.3, but more needs to be done.
As the human evaluations are on short dialogues (14 turns) longer conversations would likely make these issues appear much worse. Longer conver- sations would also expose that the Transformer ar- chitectures we use have a limited dialogue history. A number of recent architectures attempt to incor- porate longer memory, and that is also a fruitful direction, although evaluation is more challenging as long conversations have to be collected, and eval- uated. An alternative is to seed the conversation with a topic or otherwise provide instructions to the human speaker during evaluation to give the conversation a certain focus, which would more deeply probe the skills of the bot. On the mod- eling side, longer conversations could also make the choice of context material provided to the bot more salient. Besides helping with consistency, the persona and topic that are given as initial context in Blended Skill Talk can help models introduce interesting talking points in the conversation. How- ever, they would need to be far more detailed for longer or repeated conversations to help the mod- els be consistent and avoid repetition, and in our current experimental setup did not affect evalua- tions strongly. We note the context our model is trained to be able to condition on can also be used to conï¬gure a chatbot persona suitable for a given desired role, see Figure 26 for an example.
For deployment of a chatbot, being well-behaved remains a signiï¬cant challenge. In particular, we expect bots to have more integrity than the average human (or to even be faultless), but they have much less understanding of what they are saying than humans. We have studied improved safety from toxic language (Dinan et al., 2019b) and mitigating gender bias in dialogue generation (Dinan et al., 2019a) but much work remains to be done. While we have made our models publicly available, we have not mitigated all safety issues. We believe
their release can help the community work together to understand further and ï¬x these issues, and we recommend their use for that line of research.
The work of Adiwardana et al. (2020) showed that there is a correlation between human evalua- tion and perplexity, given a ï¬xed decoding scheme. Of course, language modeling and dialogue agent training has been optimizing perplexity as a stan- dard objective for a long time. We argue that while this is important, other factors are also at play and cannot be ignored: (1) the choice of training data is paramount, as shown by our pushshift.io Reddit (pre-training) vs. Blended Skill Talk experiments; and (2) decoding algorithms make large differences for the same ï¬xed perplexity model (Sec. 10.2). We ï¬nd that while our 2.7B parameter model gives large gains over our 90M parameter model, our largest 9.4B model does not have a clear win in human evaluations over our 2.7B model, despite having lower perplexity. This is in line with other results that show the story is more nuanced than at ï¬rst sight. For example, dialogue competitions are not always won by the model with the low- est perplexity (Dinan et al., 2020), and it has been shown that models that take a small hit in perplex- ity but provide gains at decoding time can give far improved results (Welleck et al., 2020; Li et al., 2019a). Further reï¬ning and understanding these ingredients, and how they help to build the recipe as a whole, remain important directions.
# References
Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977.
Jason Baumgartner, Savvas Zannettou, Brian Kee- gan, Megan Squire, and Jeremy Blackburn. 2020. arXiv preprint The pushshift arXiv:2001.08435.
Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
Yonatan Bisk, Ari Holtzman, Jesse Thomason, Ja- cob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Alek- sandr Nisnevich, Nicolas Pinto, and Joseph Turian. 2020. Experience grounds language. arXiv preprint arXiv:2004.10151.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc V Le, and Ruslan Salakhutdinov.
2019. Transformer-xl: Attentive language mod- els beyond a ï¬xed-length context. arXiv preprint arXiv:1901.02860.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Emily Dinan, Angela Fan, Adina Williams, Jack Ur- banek, Douwe Kiela, and Jason Weston. 2019a. too: Mitigating gender Queens are powerful arXiv preprint bias in dialogue generation. arXiv:1911.03842.
Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019b. Build it break it ï¬x it for dialogue safety: Robustness from adversarial human In Proceedings of the 2019 Conference on attack. Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4537â4546, Hong Kong, China. Association for Computational Linguistics.
Emily Dinan, Varvara Logacheva, Valentin Ma- lykh, Alexander Miller, Kurt Shuster, Jack Ur- banek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2020. The second conversational intelligence challenge (Con- vAI2). In The NeurIPS â18 Competition, pages 187â 208, Cham. Springer International Publishing.
Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019c. Wiz- ard of Wikipedia: Knowledge-powered conversa- In Proceedings of the International tional agents. Conference on Learning Representations.
Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- In Proceedings erarchical neural story generation. of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889â898.
Maryam Fazel-Zarandi, Shang-Wen Li, Jin Cao, Jared Casale, Peter Henderson, David Whitney, and Al- borz Geramifard. 2017. Learning robust dialog poli- cies in noisy environments. In Proceedings of Work- shop on Conversational AI.
Asma Ghandeharioun, Judy Hanwen Shen, Natasha Jaques, Craig Ferguson, Noah Jones, Ãgata Lapedriza, and Rosalind W. Picard. 2019. Approx- imating interactive human evaluation with self-play for open-domain dialog systems. Advances in Neu- ral Information Processing Systems.
Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot! In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3667â3684, Florence, Italy. Association for Compu- tational Linguistics.
Tianxing He, Jun Liu, Kyunghyun Cho, Myle Ott, Bing Liu, James Glass, and Fuchun Peng. 2019. Mix- review: Alleviate forgetting in the pretrain-ï¬netune framework for neural language generation models. arXiv preprint arXiv:1910.07117.
Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 1638â1649. ACL.
Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degen- eration. In Proceedings of the International Confer- ence on Learning Representations.
Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Ji- quan Ngiam, Quoc V Le, Yonghui Wu, et al. 2019. Gpipe: Efï¬cient training of giant neural networks In Advances in Neural using pipeline parallelism. Information Processing Systems, pages 103â112.
Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Architec- tures and pre-training strategies for fast and accurate multi-sentence scoring. In Proceedings of the Inter- national Conference on Learning Representations.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Nikita Kitaev, Åukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efï¬cient transformer. arXiv preprint arXiv:2001.04451.
Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. BART: Denoising sequence-to-sequence 2019. pre-training for natural language generation, trans- arXiv preprint lation, arXiv:1910.13461.
Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155.
Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, and Ja- son Weston. 2019a. Donât say that! making incon- sistent dialogue unlikely with unlikelihood training. arXiv preprint arxiv:1911.03860.
Margaret Li, Jason Weston, and Stephen Roller. 2019b. ACUTE-EVAL: Improved dialogue evaluation with optimized questions and multi-turn comparisons. In NeurIPS workshop on Conversational AI.
Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joseph E Gonzalez. 2020. Train large, then compress: Rethinking model size for efï¬cient training and inference of transform- ers. arXiv preprint arXiv:2002.11794.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, and Luke Zettlemoyerand Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Luca Massarelli, Fabio Petroni, Aleksandra Piktus, Myle Ott, Tim Rocktäschel, Vassilis Plachouras, Fabrizio Silvestri, and Sebastian Riedel. 2019. How decoding strategies affect the veriï¬ability of gener- ated text. arXiv preprint arXiv:1911.03587.
Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2017. Sequence effects in crowdsourced annota- In Proceedings of the 2017 Conference on tions. Empirical Methods in Natural Language Processing, pages 2860â2865.
Pierre-Emmanuel Mazaré, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training mil- In Proceed- lions of personalized dialogue agents. ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2775â2779, Brussels, Belgium. Association for Computational Linguistics.
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. 2017. Mixed precision training. arXiv preprint arXiv:1710.03740.
Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research soft- ware platform. In Proceedings of the 2017 Confer- ence on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79â84. ACL.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensi- ble toolkit for sequence modeling. arXiv preprint arXiv:1904.01038.
Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization. arXiv preprint arXiv:1705.04304.
Shrimai Prabhumoye, Margaret Li, Jack Urbanek, Emily Dinan, Douwe Kiela, Jason Weston, and Arthur Szlam. 2020. I love your chain mail! mak- ing knights smile in a fantasy game world. arXiv preprint arXiv:2002.02878.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
Jack W. Rae, Anna Potapenko, Siddhant M. Jayaku- mar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence In International Conference on Learn- modelling. ing Representations.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open- domain conversation models: A new benchmark and In Proceedings of the 57th Annual Meet- dataset. ing of the Association for Computational Linguis- tics, pages 5370â5381, Florence, Italy. Association for Computational Linguistics.
Pararth Shah, Dilek Hakkani-Tür, Bing Liu, and Gokhan Tür. 2018a. Bootstrapping a neural conver- sational agent with dialogue self-play, crowdsourc- ing and on-line reinforcement learning. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 3 (Industry Papers), pages 41â51, New Orleans - Louisiana. Association for Computational Linguis- tics.
Pararth Shah, Dilek Hakkani-Tür, Gokhan Tür, Ab- hinav Rastogi, Ankur Bapna, Neha Nayak, and Larry Heck. 2018b. Building a conversational agent overnight with dialogue self-play. arXiv preprint arxiv:1801.04871.
Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catan- zaro. 2019. Megatron-lm: Training multi-billion parameter language models using gpu model paral- lelism. arXiv preprint arXiv:1909.08053.
Heung-yeung Shum, Xiao-dong He, and Di Li. 2018. From Eliza to XiaoIce: challenges and opportunities with social chatbots. Frontiers of Information Tech- nology & Electronic Engineering, 19(1):10â26.
Kurt Shuster, Da Ju, Stephen Roller, Emily Dinan, Y- Lan Boureau, and Jason Weston. 2019. The di- alogue dodecathlon: Open-domain knowledge and image grounded conversational agents.
Eric Smith, Mary Williamson, Kurt Shuster, Jason We- ston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agentsâ ability to blend skills. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics. ACL.
Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Ja- son Weston. 2019. Learning to speak and act in In Proceedings a fantasy text adventure game. of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 673â683, Hong Kong, China. Association for Computational Lin- guistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998â6008.
Wei Wei, Quoc V. Le, Andrew M. Dai, and Li-Jia Li. 2018. A goal-oriented neural conversation model by self-play.
Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Di- nan, Kyunghyun Cho, and Jason Weston. 2020. Neu- ral text generation with unlikelihood training. In International Conference on Learning Representa- tions.
Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and reï¬ne: Improved sequence gen- eration models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd Interna- tional Workshop on Search-Oriented Conversational AI, pages 87â92, Brussels, Belgium. Association for Computational Linguistics.
Jason E Weston. 2016. Dialog-based language learn- ing. In Advances in Neural Information Processing Systems, pages 829â837.
Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A trans- fer learning approach for neural network based con- In NeurIPS Workshop on Con- versational agents. versational AI.
Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learn- ing semantic textual similarity from conversations. In Proceedings of The Third Workshop on Repre- sentation Learning for NLP, pages 164â174, Mel- bourne, Australia. Association for Computational Linguistics.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics, pages 2204â2213. ACL.
Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. DialoGPT: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536.
Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2020. The design and implementation of XiaoIce, an empathetic social chatbot. Computational Lin- guistics, pages 1â62. | {
"id": "1904.01038"
} |
2004.13313 | Modularized Transfomer-based Ranking Framework | Recent innovations in Transformer-based ranking models have advanced the
state-of-the-art in information retrieval. However, these Transformers are
computationally expensive, and their opaque hidden states make it hard to
understand the ranking process. In this work, we modularize the Transformer
ranker into separate modules for text representation and interaction. We show
how this design enables substantially faster ranking using offline pre-computed
representations and light-weight online interactions. The modular design is
also easier to interpret and sheds light on the ranking process in Transformer
rankers. | http://arxiv.org/pdf/2004.13313 | Luyu Gao, Zhuyun Dai, Jamie Callan | cs.IR | null | null | cs.IR | 20200428 | 20201006 | 0 2 0 2
t c O 6 ] R I . s c [
3 v 3 1 3 3 1 . 4 0 0 2 : v i X r a
# Modularized Transfomer-based Ranking Framework
# Luyu Gao Zhuyun Dai Jamie Callan
# Language Technologies Institute Carnegie Mellon University {luyug, zhuyund, callan}@cs.cmu.edu
# Abstract
are slow (Nogueira et al., 2019), and the black-box design makes it hard to interpret their behavior.
in Transformer-based Recent ranking models have advanced the state-of- in information retrieval. However, the-art these Transformers computationally expensive, and their opaque hidden states make it hard to understand the ranking In this work, we modularize the process. Transformer into separate modules for text representation and interaction. We show how this design enables substantially ranking using ofï¬ine pre-computed faster representations online interactions. The modular design is also easier to interpret and sheds light on the ranking process in Transformer rankers.1
# Introduction
Neural rankers based on Transformer architectures (Vaswani et al., 2017) ï¬ne-tuned from BERT (Devlin et al., 2019) achieve current state-of-the- art (SOTA) ranking effectiveness (Nogueira and Cho, 2019; Craswell et al., 2019). The power of the Transformer comes from self-attention, the process by which all possible pairs of input tokens interact to understand their connections and contextualize their representations. Self-attention provides detailed, information for matching, which is critical to the effectiveness of Transformer-based rankers (Wu et al., 2019).
When used for ranking, a Transformer ranker takes in the concatenation of a query and docu- ment, applies a series of self-attention operations, and outputs from its last layer a relevance pre- diction (Nogueira and Cho, 2019). The entire ranker runs like a black box and hidden states have no explicit meanings. This represents a clear distinction from earlier neural ranking models that keep separate text representation and dis- tance (interaction) functions. Transformer rankers
1Open source code at https://github.com/ luyug/MORES
Transformer- We text based representation and query-document interaction as it processes the concatenated pair. Guided by this hypothesis, we decouple representation and interaction with a MOdualarized REranking System (MORES). of consists three Transformer modules: the Document Representation Module, the Query Representation Module, and the Interaction Module. The two Representation Modules run independently of The Document Representation each other. Module uses self-attention to embed each document token conditioned on all document The Query Representation Module tokens. embeds each query token conditioned on all query tokens. The Interaction Module performs attention from query representations to document representations to generate match signals and aggregates them through self-attention over query tokens to make a relevance prediction.
By disentangling the Transformer into modules for representation and interaction, MORES can take advantage of the indexing process: while the interaction must be done online, document repre- sentations can be computed ofï¬ine. We further propose two strategies to pre-compute document representations that can be used by the Interaction Module for ranking.
Our experiments on a large supervised rank- ing dataset demonstrate the effectiveness and ef- ï¬ciency of MORES. It is as effective as a state- of-the-art BERT ranker and can be up to 120à faster at ranking. A domain adaptation experiment shows that the modular design does not affect the model transfer capability, so MORES can be used under low-resource settings with simple adapta- tion techniques. By adapting individual mod- ules, we discovered differences between represen-
tations and interaction in adaptation. The modular design also makes MORES more interpretable, as shown by our attention analysis, providing new understanding of black-box Transformer rankers.
# 2 Related Work
Neural ranking models for IR proposed in previ- ous studies can be generally classiï¬ed into two groups (Guo et al., 2016): representation-based models, and interaction-based models.
Representation-based models learn latent vec- tors (embeddings) of queries and documents and use a simple scoring function (e.g., cosine) to measure the relevance between them. Such meth- ods date back to LSI (Deerwester et al., 1990) and classical siamese networks (Bromley et al., 1993). More recent research considered using modern deep learning techniques to learn the rep- resentations. Examples include DSSM (Huang et al., 2013), C-DSSM (Shen et al., 2014), etc. Representations-based models are efï¬cient during evaluation because the document representations are independent of the query, and therefore can be pre-computed. However, compressing a doc- ument into a single low-dimensional vector loses speciï¬c term matching signals (Guo et al., 2016). As a result, previous representation-based rank- ing models mostly fail to outperform interaction- based ones.
Interaction-based models, on the other hand, use a neural network to model the word-level interactions between the query and the document. Examples include DRMM (Guo et al., 2016) and K-NRM (Xiong et al., 2017). Recently, Transformers (Vaswani et al., 2017), especially BERT (Devlin et al., 2019) based Transformers, have been widely used in information retrieval ranking tasks (Nogueira and Cho, 2019; Dai and Callan, 2019; Qiao et al., 2019). BERT-based rankers concatenate query and document into a single string and apply self-attention that spans over the query and the document in every layer. Rankers using pre-trained Transformers such as BERT has become the current state-of- the the-art (Craswell et al., 2019). However, performance gains come at the computational cost of inferring the many token-level interaction signals at the evaluation time, which scales It is an open quadratically to the input length. question whether we can combine the advantages representation-based and interaction-based of
approaches. direction prior to this work. Little research has studied this
There are several research directions aiming to reduce the computational cost of Transformer models. One line of research seeks to compress the big Transformer into smaller ones using model pruning (Voita et al., 2019) or knowledge distil- lation (Hinton et al., 2015; Sanh et al., 2019). Another line of research aims to develop new Transformer-like units that have lower complex- ity than the original Transformer. For example, (Child et al., 2019) introduces sparse factoriza- tions of the attention matrix which efï¬ciently compute subsets of the attention matrix. The focus of this work is an efï¬cient framework to com- bine Transformers for ranking; all aforementioned techniques can be applied to individual Trans- formers within our framework, and are therefore orthogonal to this paper.
# 3 Proposed Method
In this section, we introduce the Modularized Reranking System (MORES), how MORES can speed up retrieval, and how to effectively train and initialize MORES.
# 3.1 The MORES Framework
A typical Transformer ranker takes in the concate- nation of a query qry and a document doc as input. At each layer, the Transformer generates a new contextualized embedding for each token based on its attention to all tokens in the concatenated text. This formulation poses two challenges. First, in terms of speed, the attention consumes time quadratic to the input length. As shown in Table 1, for a query of q tokens and a document of d tokens, the Transformer would require assessments of (d + q)2 pairs of tokens. Second, as query and document attention is entangled from the ï¬rst layer, it is challenging to interpret the model.
MORES aims to address both problems by dis- entangling the Transformer ranker into document representation, query representation, and inter- action, each with a dedicated Transformer, as shown in Figure 1. The document representation is query-agnostic and can be computed off-line. The interaction uses query-to-document attention, which further reduces online complexity. This separation also assigns roles to each module, mak- ing the model more transparent and interpretable.
Figure 1: An illustration of the attention within Interaction a MORES model using two layers of Blocks (2Ã IB). Representation Modules only show 1 layer of attention due to space limits. In a real model, Document Representation Module and Query Representation Module are deeper than shown here.
Score| CLs] (Q' (Gigi Interaction 4 = =: (2X IB) ye fetsi (©:) (2) (5) ~- (is) Document {cis} (21) ~~. (2a) Representation Representation 2008-A2aa- m
The two Representation Modules use Trans- former encoders (Vaswani et al., 2017) to embed documents and queries respectively and indepen- dently. In particular, for documents,
# H doc H doc
(1)
# l = Encoderdoc 1 = Encoderdoc
(H doc lâ1) l 1 (lookup(doc))
(2)
and for queries,
# H qry H qry
# l = Encoderqry 1 = Encoderqry
# (H q (lookup(qry))
l lâ1) (3)
(4)
1
where lookup represents word2 and position em- beddings, and Encoder represents a Transformer encoder layer. Query and document Representa- tion Modules can use different numbers of layers. Let M and N denote the number of layers for document and query representations respectively. The hidden states from the last layers are used as the Representation Modulesâ output. Formally, for a document of length d, query of length q, and M â RdÃn model dimension n, let matrix D = H doc be the output of the Document Representation Module and Q = H qry N â RqÃn be the output of the Query Representation module.
The Interaction Module uses the Representa- tion Modulesâ outputs, Q and D, to make a rele- vance judgement. The module consists of a stack of Interaction Blocks (IB), a novel attentive block
2We use WordPiece tokens, following BERT.
that performs query-to-document cross-attention, followed by query self-attention3, as shown in Figure 1. Here, we write cross-attention from X to Y as Attend(X, Y ), self-attention over X as Attend(X, X) and layer norm as LN. Let,
Qx = LN(Attend(Q, D) + Q) Qself = LN (Attend(Qx, Qx) + Qx)
Equation 5 models interactions from query tokens to document token. Each query token in Q attends to document embeddings in D to produce rele- vance signals. Then, Equation 6 collects and ex- changes signals among query tokens by having the query tokens attending to each other. The output of the ï¬rst Interaction Block (IB) is then computed with a feed-forward network (FFN) on the query token embeddings with residual connections,
IB(Q, D) = LN (FFN (Qself) + Qself)
We employ multiple Interaction Blocks to itera- tively repeat this process and reï¬ne the hidden query token representations, modeling multiple rounds of interactions, producing a series of hid- den states, while keeping document representation D unchanged,
# H IB H IB
# l = IBl(H IB 1 = IB1(Q, D)
lâ1, D) (8)
(9)
The Interaction Block (IB) is a core component of MORES. As shown in Table 1, its attention avoids the heavy full-attention over the concate- i.e. (d + q)2 nated query-document sequence, terms, saving online computation.
To induce relevance, we project the [CLS] to- kenâs embedding in the last (Kth) IBâs output to a score,
score(qry, doc) = wT CLS(H IB K ) (10)
# 3.2 Pre-Compute and Reuse Representation
MORESâs modular design allows us to pre- compute and reuse representations. The Query Representation Module runs once when receiving the new query; then repeatedly used to rank the candidate documents. More importantly, the document representations can be built ofï¬ine. We detail two representation
3We use multi-head version of attention in the Interaction Blocks (IB).
(5)
(6)
Table 1: Time complexity of MORES and a typical Transformer ranker, e.g., a standard BERT ranker. We write q for query length, d for document length, n for Transformerâs hidden layer dimension, and Ndoc for number of candidate documents to be ranked for each query. For interaction, Reuse-S1 corresponds to document representation reuse strategy, and Reuse-S2 projected document representation reuse strategy.
Typical Transformer Ranker Document Representation Query Representation Interaction w/ Reuse-S1 Interaction w/ Reuse-S2 Total, 1 Query-Document Pair n(d + q)2 + n2(d + q) nd2 + n2d nq2 + n2q n(qd + q2) + n2(q + d) n(qd + q2) + n2(q + d) Online, 1 Query-Document Pair n(d + q)2 + n2(d + q) 0 nq2 + n2q n(qd + q2) + n2(q + d) n(qd + q2) + n2q Online, Ndoc Documents (n(d + q)2 + n2(d + q))Ndoc 0 (nq2 + n2q) (n(qd + q2) + n2(q + d))Ndoc (n(qd + q2) + n2q)Ndoc
time vs. space reuse strategies with different trade-offs: 1) a document representation reuse strategy that stores the Document Representation Moduleâs output, and 2) a projected document the representation reuse strategy that Interaction Moduleâs intermediate transformed document representations. These strategies have the same overall math, produce the same ranking results, and only differ in time/space efï¬ciency.
Reuse Strategy the Document Representation Module ofï¬ine, pre-computing document representations D for all documents in the collection. When receiving a new query, representations MORES D for candidate documents, runs the Query Representation Module queryâs get to representation Q, and feeds both to the Interaction Module to score. This strategy reduces running the Document computation by not Representation Module at query time.
Projected Document Representation Reuse Strategy (Reuse-S2) further moves document- related computation performed in the Interaction Module ofï¬ine. In an IB, the cross-attention operation ï¬rst projects document representation D with key and value linear projections (Vaswani et al., 2017)
for each IB, both key and value projections of D are stored, meaning that an Interaction Module with l IBs will store 2l projected versions of D. With this extra pre-computation, Reuse-S2 trades storage for further speed-up.
Table 1 analyzes the online time complexity of MORES and compares it to the time complexity of a standard BERT ranker. We note that MORES can move all document only computation ofï¬ine. Reuse-S1 avoids the document self attention term d2, which is often the most expensive part due to long document length. Reuse-S2 further removes from online computation the document transfor- mation term n2d, one that is linear in document length and quadratic in model dimension.
# 3.3 MORES Training and Initialization
MORES needs to learn three Transformers: two Representation Modules and one Interaction Mod- ule. The three Transformer modules are coupled during training and decoupled when used. To train MORES, we connect the three Transformers and enforce module coupling with end-to-end training using the pointwise loss function (Dai and Callan, 2019). When training is ï¬nished, we store the three Transformer modules separately and apply each module at the desired ofï¬ine/online time.
Dk = DWk, Dv = DWv (11)
where Wk, Wv are the projection matrices. For each IB, Reuse-S2 pre-computes and stores Dproj
Dproj = {DWk, DWv} (12)
Using Reuse-S2, the Interaction Module no longer needs to compute the document projections at on- line evaluation time. Reuse-S2 takes more storage:
4We pre-compute for all attention heads in our multi-head implementation
We would like to use pre-trained LM weights to ease optimization and improve generalization. there is no existing pre-trained LM However, that involves cross-attention interaction that can be used to initialize the Interaction Module. To avoid expensive pre-training, we introduce BERT weight assisted initialization. We use one copy of BERT weights to initialize the Document Rep- resentation Module. We split another copy of BERT weights between Query Representation and Interaction Modules. For MORES with l IBs, the ï¬rst 12âl layers of the BERT weights initialize the Query Representation Module, and the remaining
l layersâ weights initialize the Interaction Module. This initialization scheme ensures that Query Rep- resentation Module and the IBs use consecutive layers from BERT. As a result, upon initialization, the output of the Query Representation Module and the input of the ï¬rst IB will live in the same space. In addition, for IBs, query to document attention initializes with the same BERT attention In practice, we weights as query self-attention. found initializing query to document attention weights important; random initialization leads to substantially worse performance. Details can be found in subsection 4.2.
# 4 Effectiveness and Efï¬ciency in Supervised Ranking
The ï¬rst experiment compares the effectiveness and efï¬ciency of MORES to a state-of-the-art BERT ranker for supervised ranking.
# 4.1 Setup
We use the MS MARCO passage ranking collection (MS MARCO) (Nguyen et al., 2016) and evaluate on two query sets with distinct characteristics: Dev Queries have a single relevant document with a binary relevance label. Following Nguyen et al. (2016), we used MRR@10 to evaluate the ranking accuracy on this query set. TREC2019 DL Queries is the evaluation set used in the TREC 2019 Deep Learning Track. Its queries have multiple relevant documents with graded relevance. Following Craswell et al. (2019), we used MRR, NDCG@10, and MAP@1000 as evaluation metrics. All methods were evaluated in a reranking task to re-rank the top 1000 documents of the MS MARCO ofï¬cial BM25 retrieval results.
We test MORES effectiveness with a varied num- ber of Interaction Blocks (IB) to study the effects of varying the complexity of query-document in- teraction. Models using 1 layer of IB (1Ã IB) up to 4 layers of IB (4Ã IB) are tested.
We compare MORES with the BERT ranker, a state-of-the-art ranker ï¬ne-tuned from BERT, which processes concatenated query-document pairs. Both rankers are trained with the MS MARCO training set single relevance queries. We train MORES on a 2M subset of Marcoâs training set. We use stochastic gradient descent to train the model with a batch size of 128. We use AdamW optimizer with a
learning rate of 3e-5, a warm-up of 1000 steps and a linear learning rate scheduler for all MORES variants. Our baseline BERT model is trained with similar training setup to match performance reported by Nogueira and Cho (2019). Our BERT ranker re-implementation has better performance compared to that reported by Nogueira and Cho (2019). The BERT ranker and all MORES models are implemented with Pytorch (Paszke et al., 2019) based on the huggingface implementation of Transformers (Wolf et al., 2019).
We aim to test that MORESâ accuracy is equiva- lent to the original BERT ranker (while achieving higher efficiency). To establish equivalence, sta- tistical significance testing was performed with a non-inferiority test commonly used in the medical field to test that two treatments have similar ef- fectiveness (Jayasinghe et al., 2015). In this test, rather than testing to reject the hypothesis Ho: LperT = mores, We test to reject Hj : LBERT â [tmores > 6 for some small margin 6. By rejecting Hj, we accept the alternative hypothesis, which is that any reduction of performance in MORES compared to the original BERT ranker is inconsequential. We set the margin 6 to 2% and 5% of the mean of the BERT ranker.
# 4.2 Ranking Effectiveness
Table 2 reports the accuracy of MORES and the baseline BERT-based ranker. The experiments show that MORES with 1à IB can achieve 95% of BERT performance. MORES with 2à IB can achieve performance comparable to the BERT ranker with a 2% margin. Three IBs does not improve accuracy and four hurts accuracy. We believe that this is due to increased optimization difï¬culties which outweighs improved model ca- pacity. Recall that for MORES we have one set of artiï¬cial cross attention weights per IB not initial- ized with real pre-trained weights. Performance results are consistent across the two query sets, showing that MORES can identify strong relevant documents (Dev Queries), and can also generalize to ranking multiple, weaker relevant documents (TREC2019 DL Queries).
The results show that MORES can achieve ranking accuracy competitive with state-of-the-art ranking models, and suggest that the entangled full-attention and computationally expensive replaced by MORESâs Transformer can be Document lightweight, modularized design.
Table 2: Effectiveness of MORES models and baseline rankers on the MS MARCO Passage Corpus. â and â indicate non-inferiority (Section 4.1) with p < 0.05 to the BERT ranker using a 5% or 2% margin, respectively.
MS MARCO Passage Ranking Model BERT ranker MORES 1Ã IB MORES 2Ã IB MORES 3Ã IB MORES 4Ã IB Dev Queries MRR 0.3527 0.3334â 0.3456â 0.3423â 0.3307â TREC2019 DL Queries MRR 0.9349 0.8953â 0.9283â 0.9271â 0.9322â NDCG@10 MAP 0.4836 0.4516â 0.4777â 0.4687â 0.4559â 0.7032 0.6721â 0.7026â 0.6980â 0.6565â
Table 3: Ranking Accuracy of MORES when using / not using attention weights copied from BERT to initialize Interaction Module. The models were tested on the MS MARCO dataset with the Dev Queries.
Table 4: Average time in seconds to evaluate one query with 1,000 candidate documents, and the space used to store pre-computed representations for each document. Len: input document length.
copy random Dev Queries MRR@10 0.3456 0.2723 TREC2019 DL MRR NDCG@10 MAP 0.4777 0.7026 0.9283 0.3702 0.6059 0.8430
and query representations can be computed independently without seeing each other. With the contextualized representation, 2 layers of lightweight interaction are sufï¬cient to estimate relevance.
(a) Document Representation Reuse (Reuse-S1)
Len Model CPU Time GPU Time 128 512 BERT ranker MORES 1ÃIB MORES 2ÃIB BERT ranker MORES 1ÃIB MORES 2ÃIB 161s 4s 8s 698s 11s 20s - 40x 20x - 66x 35x 2.70s 0.04s 0.12s 13.05s 0.14s 0.32s - 61x 22 x - 91x 40x Space (MB) 0 0.4 0.4 0 1.5 1.5
(b) Projected Document Representation Reuse (Reuse-S2)
We also investigate IB initialization and com- pare MORES 2à IB initialized by our proposed initialization method (copy self attention weight of BERT as IB cross attention weight), with a ran- dom initialization method (cross attention weights randomly initialized). Table 3 shows that random initialization leads to a substantial drop in perfor- mance, likely due to difï¬culty in optimization.
Len Model 128 512 BERT ranker MORES 1ÃIB MORES 2ÃIB BERT ranker MORES 1ÃIB MORES 2ÃIB CPU Time 161s 2s 5s 698s 3s 6s - 85x 36x - 170x 124x GPU Time 2.70s 0.02s 0.05s 13.05s 0.08s 0.10s - 118x 48x - 158x 124x Space (MB) 0 1.5 3.0 0 6.0 12.0
two reuse strategies, respectively. We also include per document data storage size 6.
# 4.3 Ranking Efï¬ciency
Section 3.2 introduces two representation reuse strategies for MORES with different time vs. space trade-offs. This experiment measures MORESâ real-time processing speeds with these two strate- gies and compares them with measurement for the BERT ranker. We test MORES 1à IB and MORES 2à IB. Additional IB layers incur more computation but do not improve effectiveness, and are hence not considered. We record average time for ranking one query with 1000 candidate documents on an 8-core CPU and a single GPU.5 We measured ranking speed with documents of length 128 and 512 with a ï¬xed query length of 16. Tables 4 (a) and (b) show the speed tests for the
We observe a substantial speedup in MORES compared to the BERT ranker, and the gain is consistent across CPUs and GPUs. The original BERT ranker took hundreds of seconds â several minutes â to generate results for one query on a CPU machine, which is impractical for real- time use. Using Reuse-S1, MORES with 1Ã IB was 40x faster than the BERT ranker on shorter documents (d = 128); the more accurate 2Ã IB model also achieved 20x speedup. The difference is more profound on longer documents. As the length of the document increases, a larger portion of compute in BERT ranker is devoted to perform- ing self-attention over the document sequence. MORES pre-computes document representations
5Details are in Appendix A.1.
6We report un-compressed values. Compression can further reduce data storage.
Table 5: Domain adaptation on ClueWeb09-B. adapt-interaction and adapt-representation use MORES 2Ã IB. â and â indicate non-inferiority (Section 4.1) with p < 0.05 to the BERT ranker using a 5% or 2% margin, respectively.
Clueweb09-B Title Queries Description Queries BERT ranker MORES 1Ã IB MORES 2Ã IB MORES 3Ã IB MORES 4Ã IB adapt-interaction adapt-representation NDCG@20 MAP 0.1882 0.1753 0.1872â 0.1841â 0.1824â 0.1849â 0.1865â 0.3294 0.3059 0.3317â 0.3299â 0.3164â 0.3179â 0.3319â Prec@20 NDCG@20 MAP 0.2075 0.3755 0.2009 0.3407 0.2039â 0.3662â 0.2008â 0.3679â 0.2012â 0.3515 0.1976â 0.3548 0.2072â 0.3657â 0.3597 0.3472 0.3571â 0.3476â 0.3472â 0.3385 0.3557â Prec@20 0.3881 0.3705 0.3816â 0.3763â 0.372â 0.3652 0.3828â
Table 6: Domain adaptation on Robust04. adapt-interaction and adapt-representation use MORES 2Ã IB. â and â indicate non-inferiority (Section 4.1) with p < 0.05 to the BERT ranker using a 5% or 2% margin, respectively.
Robust04 Title Queries Description Queries BERT ranker MORES 1Ã IB MORES 2Ã IB MORES 3Ã IB MORES 4Ã IB adapt-interaction adapt-representation NDCG@20 MAP 0.2225 0.2097 0.2194â 0.2135â 0.2177â 0.2117â 0.2182â 0.4632 0.4394â 0.4599â 0.4551â 0.4553â 0.4389 0.4564â Prec@20 NDCG@20 MAP 0.245 0.3958 0.3741â 0.2263 0.2323â 0.3940â 0.2334â 0.3934â 0.3938â 0.2309 0.2249 0.3723 0.2327â 0.3926â 0.5065 0.4683 0.4846â 0.4854â 0.4802 0.4697 0.4884â Prec@20 0.4147 0.3835 0.4008â 0.4006â 0.3980â 0.3896 0.4042â
and avoids document-side self attention, yielding up to 35x to 90x speedup on longer documents (d = 512).
Reuse-S2 â the projected document reuse strat- egy â further enlarges the gain in speed, leading to up to 170x speedup using 1Ã IB, and 120x speedup using 2Ã IB. Recall that Reuse-S2 pre- computes the document projections that will be used in MORESâ Interaction Module, which is of n2d time complexity where n is the model hidden dimension (details can be found in the complexity analysis in Table 1). In practice, n is often large, e.g., our experiment used n = 7687. Reuse-S2 avoids the expensive n2d term at evaluation time. Note that Reuse-S2 does not affect accuracy; it trades space to save more time.
# 5.1 Setup
This experiment trains MORES using the MS MARCO dataset, to ClueWeb09-B and Robust04. two datasets: ClueWeb09-B is a standard document retrieval collection with 50M web pages crawled in 2009. Evaluation queries come from the TREC 2009-2012 Web Tracks. We used two variants Title Queries is 200 short, of keyword-style queries. Description Queries is 200 queries that are natural language statements or questions. Robust04 is a news corpus with 0.5M documents. Evaluation queries come from TREC 2004 Robust Track, including 250 Title Queries and 250 Description Queries. We evaluate ranking performance with NDCG@20, MAP, and Prec@20.
# 5 Adaptation of MORES and Modules
The second experiment uses a domain-adaptation setting to investigate whether the modular design of MORES affects adaptation and generalization ability, and how the individual Interaction and Representation Modules behave across domains.
7This follows model dimension in BERT
Domain adaptation is done by taking a model trained on MS MARCO and ï¬ne-tuning the model on relevant labels from the target dataset. Due to the small query sets in ClueWeb09-B and Ro- bust04, we use 5-fold cross-validation for ï¬ne- tuning and testing. Data split, initial ranking, and document pre-processing follow Dai and Callan
(a) Document Representation (b) Query Representation (c) Interaction (1st IB) (d) Interaction (2nd IB)
Figure 2: Visualization of attention in MORESâs Representation and Interaction Modules.
(2019). The domain adaptation ï¬ne-tuning pro- cedures use a batch size of 32 and a learning rate of 5e-6 while having other training settings same as supervised ranking training.
representation) produces performance on par with full model apdatation. This result shows that it is more necessary to have domain-speciï¬c represen- tations, while interaction patterns are more general and not totally dependent on representations.
# 5.2 Full Model Adaptation
The top 5 rows of Table 5 and Table 6 examine the effectiveness of adapting the full model of MORES. The adapted MORES models behave similarly as on MS MARCO: using two to three layers of Interaction Blocks (IB) achieves very close to BERT ranker performance on both datasets for both types of queries while using a single layer of IB is less effective. Importantly, our results show that the modular design of MORES does not hurt domain transfer, indicating that new domains and low resource domains can also use MORES through simple adaptation.
# Individual Module Adaptation
With separate representation and interaction com- ponents in MORES, we are interested to see how each is affected by adaptation. We test two extra adaptation settings on MORES 2à IB: ï¬ne- tuning only Interaction Module on the target do- main (adapt-interaction) or only Representation Modules (adapt-representation) on target domain. Results are shown in the bottom two rows of Table 5 and Table 6 for the two data sets.
# 6 Analysis
The modular design of MORES allows Represen- tation and Interaction to be inspected separately, providing better interpretability than a black-box Transformer ranker. Figure 2 examines the atten- tion with MORES for a hard-to-understand query âwhat is paranoid scâ where âscâ is ambigu- ous, along with a relevant document âParanoid schizophrenia is a psychotic disorder. In-depth information on symptoms....â 8
In the Document Representation Module (Fig- ure 2a), we can see that âdisorderâ uses âpsy- choticâ and âschizophreniaâ for contextualiza- In the Query tion, making itself more speciï¬c. Representation Module (Figure 2b), because the query is short and lacks context, âscâ incurs a broad but less meaningful attention. The query token âscâ is further contextualized in the Inter- action Module (Figure 2c) using information from the document side â âscâ broadly attends to the document token in the ï¬rst IB to disambiguate itself. With the extra context, âscâ is able to correctly attend to âschizophreniaâ in the second IB to produce relevance signals (Figure 2d).
We observe that only adapting the Interaction Module to the target domain is less effective com- pared to adapting the full model (MORES 2à IB), suggesting that changing the behaviour of interaction is not enough to accommodate lan- guage changes across domains. On the other hand, freezing the Interaction Module and only ï¬ne-tuning the Representation Modules (adapt-
This example explains why MORES 1à IB per- forms worse than MORES with multiple IBs â ambiguous queries need to gather context from the document in the ï¬rst IB before making relevance estimates in the second. More importantly, the the query to document example indicates that
8We only show the ï¬rst 16 tokens due to space limitation.
attention has two distinct contributions: under- stand query tokens with the extra context from the document, and match query tokens to document tokens, with the former less noticed in the past. We believe MORES can be a useful tool for better interpreting and understanding SOTA black-box neural rankers.
# 7 Conclusion
State-of-the-art neural rankers based on the Trans- former architecture consider all token pairs in a concatenated query and document sequence. Though effective, they are slow and challeng- ing to interpret. This paper proposes MORES, a modular Transformer ranking framework that decouples ranking into Document Representation, Query Representation, and Interaction. MORES is effective while being efï¬cient and interpretable.
Experiments on a large supervised ranking task show that MORES is as effective as a state-of-the- art BERT ranker. With our proposed document representation pre-compute and re-use methods, MORES can achieve 120x speedup in online rank- ing while retaining accuracy. Domain adapta- tion experiments show that MORESâ modular de- sign does not hurt transfer ability, indicating that MORES can be adapted to low-resource domains with simple techniques.
Decoupling representation and interaction pro- vides new understanding of Transformer rankers. Complex full query-document attention in state- of-the-art Transformer rankers can be factored into independent document and query representation, and shallow light-weight interaction. We further discovered two types of interaction: further query understanding based on the document, and the query to document tokens matching for relevance. Moreover, we found that the interaction in ranking is less domain-speciï¬c, while the representations need more domain adaptation. These ï¬ndings pro- vide opportunities for future work towards more efï¬cient and interpretable neural IR.
# Acknowledgments
This work was supported in part by National Sci- ence Foundation (NSF) grant IIS-1815528. Any opinions, ï¬ndings, and conclusions in this paper are the authorsâ and do not necessarily reï¬ect those of the sponsors. The authors would also like to thank Graham Neubig and Chenyan Xiong for helpful discussions and feedbacks.
# References
Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S¨ackinger, and Roopak Shah. 1993. Signature veriï¬cation using a Siamese time delay neural In Advances in Neural Information network. Processing Systems, pages 737â744.
and Ilya Sutskever. 2019. Generating long se- quences with sparse transformers. arXiv preprint arXiv:1904.10509.
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2019. Overview of the trec 2019 deep learning track. In TREC (to appear).
Deeper text understanding for ir with contextual neural language modeling. In The 42nd International ACM SIGIR Conference on Research & Development in Information Retrieval.
Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Indexing by latent semantic Harshman. 1990. the American Society for Journal of analysis. Information Science, 41(6):391â407.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Pre-training transformers for language Kristina Toutanova. 2019. of deep bidirectional understanding. In NAACL-HLT. Bert:
Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A deep relevance matching model In Proceedings of the 25th for ad-hoc retrieval. ACM International Conference on Information and Knowledge Management, pages 55â64.
Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. ArXiv, abs/1503.02531.
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry P. Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In 22nd ACM International Conference on Information and Knowledge Man- agement, pages 2333â2338.
Gaya K Jayasinghe, William Webber, Mark Sanderson, Lasitha S Dharmasena, and J Shane Culpepper. 2015. Statistical comparisons of non-deterministic Infor- ir systems using two dimensional variance. mation Processing & Management, 51(5):677â694.
Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. Ms marco: A human generated machine arXiv preprint reading comprehension dataset. arXiv:1611.09268.
Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv:1901.04085.
Rodrigo Nogueira, Kyunghyun Cho, Yang Wei, Lin Jimmy, and Kyunghyun Cho. 2019. Document expansion by query prediction. arXiv:1904.08375.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, In Ad- high-performance deep learning library. vances in Neural Information Processing Systems, pages 8024â8035.
Yifan Qiao, Chenyan Xiong, Zheng-Hao Liu, and Zhiyuan Liu. 2019. Understanding the behaviors of BERT in ranking. CoRR, abs/1904.07531.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.
Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr´egoire Mesnil. 2014. Learning semantic representations using convolutional neural networks the 23rd for web search. International World Wide Web Conference, pages 373â374.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi- head self-attention: Specialized heads do the heavy arXiv preprint lifting, arXiv:1905.09418.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Râemi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingfaceâs language transformers: processing. ArXiv, abs/1910.03771.
Zhijing Wu, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma. 2019. Investigating passage-level relevance and its role in document-level relevance In Proceedings of the 42nd Annual judgment. International ACM SIGIR Conference on Research and Development in Information Retrieval.
Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR Conference in Information on Research and Development Retrieval, pages 55â64.
# A Appendix
# Implmentation Details
Training Details On MS MARCO passage ranking dataset, we trained MORES over a 2M subset of Marcoâs training set. We use stochastic gradient descent to train the model with a batch size of 128. We use AdamW optimizer with a learning rate of 3e-5, a warm-up of 1000 steps and a linear learning rate scheduler for all MORES variants. Our baseline BERT model is trained with similar training setup to match performance reported in (Nogueira and Cho, 2019). We have not done hyper-parameter search, and all training setup is inherited from GLUE example in the huggingface transformer code base (Wolf et al., 2019). Following (Dai and Callan, 2019), we run a domain adaptation experiment on ClueWeb09- B: we take trained model on MS MARCO, and continue training over ClueWeb09-Bâs training data in a 5-fold cross-validation setup. We use a batch size of 32 and a learning rate of 5e-6. We select from batch size of 16 and 32, learning rate of 5e-6, 1e-5 and 2e-5 by validation point-wise accuracy.
Speed Test Details GPU test was run on a single RTX 2080 TI, with CUDA 10.1. We use a separate CUDA stream to pre-fetch data to the GPU. CPU tests was run in a SLURM task environment with 8 Xeon Silver 4110 logical cores.
# A.2 Parameter Details
All MORES models follow BERTâs architecture for initialization, having 12 attention heads, 768 embedding dimension, 3072 feed forward network hidden dimension. MORES with one IB up to four IBs have parameters of 224M, 228M, 231M and 233M parameters respectively.
# A.3 Datasets
We use MSMARCO, ClueWeb09-b and Robust04. The ï¬rst is available at https://microsoft. github.io/msmarco/ and the latter two http://boston.lti.cs.cmu.edu/ at appendices/SIGIR2019-Zhuyun-Dai. tokenized by BERTâs input All pre- WordPiece processing. evaluate MS MARCO Dev query sets with its provided evaluation (https: script and the rest with trec eval //github.com/usnistgov/trec_eval). | {
"id": "1611.09268"
} |
2004.12297 | Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching | Many natural language processing and information retrieval problems can be
formalized as the task of semantic matching. Existing work in this area has
been largely focused on matching between short texts (e.g., question
answering), or between a short and a long text (e.g., ad-hoc retrieval).
Semantic matching between long-form documents, which has many important
applications like news recommendation, related article recommendation and
document clustering, is relatively less explored and needs more research
effort. In recent years, self-attention based models like Transformers and BERT
have achieved state-of-the-art performance in the task of text matching. These
models, however, are still limited to short text like a few sentences or one
paragraph due to the quadratic computational complexity of self-attention with
respect to input text length. In this paper, we address the issue by proposing
the Siamese Multi-depth Transformer-based Hierarchical (SMITH) Encoder for
long-form document matching. Our model contains several innovations to adapt
self-attention models for longer text input. In order to better capture
sentence level semantic relations within a document, we pre-train the model
with a novel masked sentence block language modeling task in addition to the
masked word language modeling task used by BERT. Our experimental results on
several benchmark datasets for long-form document matching show that our
proposed SMITH model outperforms the previous state-of-the-art models including
hierarchical attention, multi-depth attention-based hierarchical recurrent
neural network, and BERT. Comparing to BERT based baselines, our model is able
to increase maximum input text length from 512 to 2048. We will open source a
Wikipedia based benchmark dataset, code and a pre-trained checkpoint to
accelerate future research on long-form document matching. | http://arxiv.org/pdf/2004.12297 | Liu Yang, Mingyang Zhang, Cheng Li, Michael Bendersky, Marc Najork | cs.IR, cs.CL | Accepted as a full paper in CIKM 2020 | null | cs.IR | 20200426 | 20201013 | 0 2 0 2
t c O 3 1 ] R I . s c [ 2 v 7 9 2 2 1 . 4 0 0 2 : v i X r a
# Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching
Liu Yang Mingyang Zhang Cheng Li Michael Bendersky Marc Najork Google Research, Mountain View, CA, USA {yangliuy,mingyang,chgli,bemike,najork}@google.com
ABSTRACT Many natural language processing and information retrieval prob- lems can be formalized as the task of semantic matching. Existing work in this area has been largely focused on matching between short texts (e.g., question answering), or between a short and a long text (e.g., ad-hoc retrieval). Semantic matching between long- form documents, which has many important applications like news recommendation, related article recommendation and document clustering, is relatively less explored and needs more research effort. In recent years, self-attention based models like Transformers [30] and BERT [6] have achieved state-of-the-art performance in the task of text matching. These models, however, are still limited to short text like a few sentences or one paragraph due to the qua- dratic computational complexity of self-attention with respect to input text length. In this paper, we address the issue by proposing the Siamese Multi-depth Transformer-based Hierarchical (SMITH) Encoder for long-form document matching. Our model contains several innovations to adapt self-attention models for longer text input. We propose a transformer based hierarchical encoder to cap- ture the document structure information. In order to better capture sentence level semantic relations within a document, we pre-train the model with a novel masked sentence block language modeling task in addition to the masked word language modeling task used by BERT. Our experimental results on several benchmark datasets for long-form document matching show that our proposed SMITH model outperforms the previous state-of-the-art models including hierarchical attention [34], multi-depth attention-based hierarchi- cal recurrent neural network [14], and BERT. Comparing to BERT based baselines, our model is able to increase maximum input text length from 512 to 2048. We will open source a Wikipedia based benchmark dataset, code and a pre-trained checkpoint to accelerate future research on long-form document matching.1 ACM Reference Format: Liu Yang Mingyang Zhang Cheng Li Michael Bendersky Marc Najork. 2020. Beyond 512 Tokens: Siamese Multi-depth Transformer-based
1The code and a pre-trained checkpoint of the proposed SMITH model will be avail- able at https://github.com/google-research/google-research/tree/master/smith. The Wikipedia based benchmark dataset will be available at https://github.com/google- research/google-research/tree/master/gwikimatch. Please note that different from the Wikipedia dataset used in this paper, the released dataset will contain both machine generated document pairs and human annotated document pairs. We hope the dataset can be a useful open benchmark for future research on document matching.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). CIKM â20, October 19â23, 2020, Virtual Event, Ireland © 2020 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-6859-9/20/10. https://doi.org/10.1145/3340531.3411908
Hierarchical Encoder for Long-Form Document Matching. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management (CIKM â20), October 19â23, 2020, Virtual Event, Ireland. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3340531.3411908
1 INTRODUCTION Semantic matching is an essential task for many natural language processing (NLP) and information retrieval (IR) problems. Research on semantic matching can potentially benefit a large family of applications including ad-hoc retrieval, question answering and recommender systems [17]. Semantic matching problems can be classified into four different categories according to text length, including short-to-short matching, short-to-long matching, long- to-short matching and long-to-long matching. Table 1 shows a classification of different semantic matching tasks with example datasets. Semantic matching between short text pairs is relatively well studied in previous research on paraphrase identification [38], natural language inference [1], answer sentence selection [35], etc. Short-to-long semantic matching like relevance modeling between query/ document pairs has also been a popular research topic in IR and NLP communities [3]. For long-to-short semantic match- ing, there are also a variety of research on tasks like conversation response ranking, which is to match a conversation context with response candidates [18]. To the best of our knowledge, semantic matching between long document pairs, which has many important applications like news recommendation, related article recommen- dation and document clustering, is less explored and needs more research effort. Table 2 shows an example of semantic matching between document pairs from Wikipedia. These documents have thousands of words organized in sections, passages and sentences. Compared to semantic matching between short texts, or between short and long texts, semantic matching between long texts is a more challenging task due to a few reasons: 1) When both texts are long, matching them requires a more thorough understand- ing of semantic relations including matching pattern between text fragments with long distance; 2) Long documents contain internal structure like sections, passages and sentences. For human readers, document structure usually plays a key role for content under- standing. Similarly, a model also needs to take document structure information into account for better document matching perfor- mance; 3) The processing of long texts is more likely to trigger practical issues like out of TPU/GPU memories without careful model design. In the recent two years, self-attention based models like Transformers [30] and BERT [6] have achieved the state-of- the-art performance in several natural language understanding tasks like sentence pair classification, single sentence classification and answer span detection. These kinds of models, however, are still limited to the representation and matching of short text se- quences like sentences due to the quadratic computational time
Table 1: A classification of different semantic matching tasks. The focus of this paper is long-to-long document matching.
Type Tasks Paraphrase Identification MRPC [7] Answer Sentence Selection WikiQA [35] Textual Entailment Document Ranking Example Data Short-to-short SNLI [1] TREC 2019 Deep Learning [3] TREC 2008 Blog Track [21] UDC [18] Short-to-long Blog Search Long-to-short Response Ranking Long-to-long Related Article Suggestion Wikipedia [14] Paper Citation Suggestion AAN [25] Explanation Given two sentences, predict whether they have the same semantic meaning. Given a question and candidate answer sentences, select a correct answer. Given two sentences, predict whether they have textual entailment relations. Given a query and a candidate document set, rank documents according to query/ document relevance. Given a query and a blog collection, rank blog posts according to topic relevance or opinions. Given a dialog context and candidate responses, select a high quality response. Given a document pair, predict whether they are relevant with each other. Given a paper pair, predict whether one paper is a good citation of the other.
# Table 2: An example to illustrate the document matching task from the Wikipedia data. âSimâ means the similarity estimated based on the Jaccard similarity between the outgoing links of two documents. âLenâ means the document length by words.
Type Source Target Target URL http://en.wikipedia.org/wiki/ Chartered_Engineer_(UK) http://en.wikipedia.org/wiki/ Engineering_Council http://en.wikipedia.org/wiki/ Institute_of_Physics Title Chartered Engi- neer (UK) Engineering Council Institute Physics of Document Content In the United Kingdom, a Chartered Engineer is an Engineer registered with the Engineering Council (the British ...... The Engineering Council (formerly Engineering Council UK; colloquially known as EngC) is the UKâs regulatory ...... The Institute of Physics (IOP) is a UK-based learned society and professional body that works to advance physics ...... Label \ 1 0 Sim \ 0.5846 0.0099 Len 1147 999 2036
and space complexity of self-attention with respect to the input sequence length [30]. To handle this challenge, we would like to design a long-form text encoder, which combines the advantages of sequence dependency modeling with self-attention in Transformers and long text processing with hierarchical structures for document representation learning and matching.
Perhaps the closest prior research to our work is the study on the semantic text matching for long-form documents by Jiang et. al. [14]. They proposed the MASH RNN model to learn document representations from multiple abstraction levels of the document structure including passages, sentences and words. However, the adopted attentive RNN component in the MASH RNN model may suffer from the gradient vanishing and explosion problems on long input sequences. It is difficult for RNN based models to capture the long distance dependencies in long documents, which might lead to sub-optimal performance on long text content modeling compared with self-attention models like Transformers/ BERT where there are direct interactions between every token pair in a sequence. It is the right time to revisit this line of work and further push the boundary of long text content understanding with self-attention models like Transformers. However, as presented before, build- ing Transformer based long text encoder is not trivial because of the quadratic computational time and memory complexity of self- attention with respect to the input sequence length. For example, the maximum input text length of BERT is 512 for single sentence classification, and less than 512 for sentence pair classification.
We address these issues by proposing the Siamese Multi-depth Transformer-based Hierarchical (SMITH) Encoder for document representation learning and matching, which contains several novel design choices to adapt self-attention models like Transformers/ BERT for modeling long text inputs. Our proposed text matching model adopts a two-tower structure of Siamese network, where each tower is a multi-depth Transformer-based hierarchical encoder
to learn the document representations. We first split the input docu- ment into several sentence blocks, which may contain one or more sentences using our proposed greedy sentence filling method. Then the sentence level Transformers learn the contextual representa- tions for the input tokens in each sentence block. We represent the whole sentence block with the contextual representations of the first token, following the practice in BERT. Given a sequence of sentence block representation, the document level Transformers learn the contextual representation for each sentence block and the final document representation. This model design brings several benefits in terms of model training and serving: 1) The Siamese model architecture is a better choice to serve with efficient sim- ilarity search libraries for dense vectors[15, 31], since document representations can be generated independently and indexed offline before online serving. 2) The hierarchical model can capture the document internal structural information like sentence boundaries. 3) Compared with directly applying Transformers to the whole document, the two level hierarchical SMITH model including sen- tence level and document level Transformers reduces the quadratic memory and time complexity by changing the full self-attention on the whole document to several local self-attentions within each sentence block. The sentence level Transformers capture the in- teractions between all token pairs within a sentence block, and the document level Transformers maintain the global interaction between different sentence blocks for long distance dependencies. Inspired by the recent success of language model pre-training methods like BERT, SMITH also adopts the âunsupervised pre- training + fine-tuningâ paradigm for the model training. For the model pre-training, we propose the masked sentence block lan- guage modeling task in addition to the original masked word lan- guage modeling task used in BERT for long text inputs. When the input text becomes long, both relations between words in a
sentence block and relations between sentence blocks within a doc- ument becomes important for content understanding. Therefore, we mask both randomly selected words and sentence blocks during model pre-training. The sum of the masked sentence block predic- tion loss and the masked word prediction loss is the final SMITH model pre-training loss. The model fine-tuning process is similar to BERT, where we remove the word/ sentence level masks and fine- tune the model parameters initialized with pre-trained checkpoints with only the text matching loss. We evaluate the proposed model with several benchmark data for long-form text matching [14]. The experimental results show that our proposed SMITH model out- performs the previous state-of-the-art Siamese matching models including hierarchical attention [34], multi-depth attention-based hierarchical recurrent neural network [14], and BERT for long-form document matching, and increases the maximum input text length from 512 to 2048 when compared with BERT-based baselines. Our main contributions can be summarized as follows:
⢠We propose the Siamese Multi-depth Transformer-based Hi- erarchical (SMITH) Encoder for document matching, which contains several novel design choices to adapt self-attention models for modeling long text inputs.
⢠For model pre-training, we propose the masked sentence block language modeling task to capture sentence level se- mantic relations within a document towards better long text content understanding.
⢠Experimental results on several benchmark data for long- form text matching [14] show that our proposed SMITH model outperforms the previous state-of-the-art models and increases the maximum input text length from 512 to 2048 when comparing with BERT based baselines. We will open source a Wikipedia based benchmark dataset, code and a pre-trained model checkpoint to accelerate future research on document understanding and matching.
2 RELATED WORK Neural Matching Models. A number of neural matching models have been proposed for information retrieval and natural language processing [8, 12, 13, 20, 22, 32â34, 39]. These models can be clas- sified into the representation-focused models and the interaction- focused models [8, 9]. The representation-focused models learn the representations of queries and documents separately, and then they measure the similarity of the representations with functions like co- sine, dot, bilinear or tensor layers. On the other hand, the interaction- focused models build a query-document word pairwise interaction matrix to capture the exact matching and semantic matching infor- mation between query-document pairs. Then a deep neural network which can be a CNN [12, 22, 39], term gating network with his- togram or value shared weighting mechanism [8, 34] is applied to the query-document interaction matrix to generate the final rank- ing score. There are also neural matching models which combine the ideas of the representation-focused and interaction-focused mod- els [20, 39]. Since it is difficult to serve interaction-focused models for online fast inference due to enormous computational costs of the interaction matrices for all query-document pairs, our proposed model belongs to representation-focused models, which are also called Siamese models or âDual Encoderâ models.
# Self-Attention Models for Long Text Modeling. Self-attention
models like Transformer and BERT show promising performance on several tasks in natural language processing and information retrieval. Most of these models are restricted to the representation and matching of short text sequences like sentences and passages. Our work is also built on top of Transformers with a different fo- cus on effective representation learning and matching of long text. Recently there are some related works on adapting Transformers for long text modeling [2, 5, 11, 16, 24, 27â29, 36, 40]. Zhang et al. [40] proposed the HiBERT model for document summarization and a method to pre-train it using unlabeled data. Our work on the SMITH model is inspired by their research with several differ- ences. First we split the document into sentence blocks with greedy sentence filling for more compact input text structures and less padded words. Second we build a Siamese âDual Encoderâ model for document pair similarity modeling. Third we propose a novel pre-training task based on dynamic masked sentence block predic- tion instead of predicting the masked sentence one word per step as in [40]. We also consider combining representations from different levels of hierarchical Transformers for richer representations.
Unsupervised Language Model Pre-training. The idea of un- supervised learning from plain text for language model pre-training has been explored in several works like Word2Vec[19], ELMo[23], GPT[26] and BERT[6]. These models can be pre-trained by pre- dicting a word or a text span using other words within the same sentence. For example, Word2Vec can be trained by predicting one word with its surrounding words in a fixed text window and BERT pre-trains a language model by predicting masked missing words in a sentence given all the other words. We also study model pre-training techniques on plain text to improve the downstream document matching task. In addition to the masked word prediction task in BERT, we propose the masked sentence block prediction task to learn the relations between different sentence blocks.
3 METHOD OVERVIEW 3.1 Problem Formulation We define the task of document matching following previous litera- ture [14]. We are given a source document ðð and a set of candidate documents Dð . The system needs to estimate the semantic simi- larities Ëð¦ = ððð(ðð , ðð ), where ðð â Dð , for every document pair (ðð , ðð ) so that the target documents semantically matched to the source document ðð have higher similarity scores. In practice, the documents may contain structural information like passage/ sen- tence/ word boundaries and different text length characteristics. The task can be formalized as a regression or classification problem depending on the type of data labels. A summary of key notations in this work is presented in Table 3.
3.2 Document Matching with Transformers The original BERT model proposed by Devlin et. al. [6] supports text matching as the sentence pair classification task. Two input text sequences will be concatenated and separated by a special token [SEP] to feed into the BERT model, in order to learn the contextual representation of each token. Then the contextual representation of the first token, which is the added [CLS] token, will be projected into a probability distribution estimation over different label di- mensions to compute the cross-entropy loss. Directly applying this
Table 3: A summary of key notations in this work.
ðð , Dð ðð, Dð E(ðð ), E(ðð ) The learned dense vector representations for ðð and ðð ð¿1, ð¿2 The source document and the set of all source documents The candidate document and the set of all candidate documents The number of layers in the sentence level Transformers and in the document level Transformers The i-th sentence block in the document and the sequence of word representations for ðð The j-th word in the i-th sentence block in the document The length of a document by sentence blocks and the length of a sentence block by words The token embedding for ð ð ð The position embedding for ð ð ð The contextual token representation learned by sentence level Trans- formers for ð ð ð and the contextual sentence block representation learned by document level Trnasformers for ðð The batch size, the hidden size, number of attention heads and layers in Transformers ðð, E(ðð ) ð ð ð ð¿ð , ð¿ð ð¡ (ð ð ð ) ð (ð ð ð ) Tð ð , Sð ð, ð» , ð´, ð¿
âSingle Encoderâ BERT model to the document matching task will cause two problems: 1) The input text length for each document will be very limited. On average, we can only feed at most 256 tokens per document into the BERT model to run the model fine- tuning or inference process of document matching. 2) The âSingle Encoderâ BERT model cannot be served for applications requir- ing high inference speed. To solve this problem, we learn query independent document representations and index them offline to serve with efficient similarity search libraries [15, 31]. Offline index- ing of document representations requires generating dense vector representations for the two documents independently without ex- pensive interactions in the earlier stage. This motivates us to focus on designing âDual Encoderâ BERT model with a Siamese network architecture, where each tower is to encode one document sepa- rately. The two towers can share model parameters. In the following sections, we introduce a basic Siamese matching model with Trans- formers MatchBERT (Section 3.2.1) and the Siamese hierarchical matching model SMITH (Section 3.2.2).
Cosine Similarity G@Gs GS G@Gs Go Goes ee» Transformer Transformer
Figure 1: The architecture of the MatchBERT model.
3.2.1 MatchBERT: A Basic Siamese Matching Model with Transformers. Figure 1 shows the architecture of MatchBERT model for text matching adapted from the BERT model proposed by Devlin et. al. [6]. There are two text encoders in MatchBERT, where each encoder is a BERT model to learn the representation of the source document ðð or the candidate document ðð . Then we compute the cosine similarity between the pooled sequence output of two documents ððð (E(ðð ), E(ðð )). The text matching loss is the cross-entropy loss when we compare the document pair similarity scores with document pair labels. To handle long document input, MatchBERT will only model the first ð tokens of each document, where the max value of ð can be 512. To train the MatchBERT model, we initialize the model parameters with the open source
pre-trained BERT checkpoints 2 and then fine tune the model with the text matching loss. MatchBERT will be a strong baseline model.
3.2.2 SMITH: Siamese Hierarchical Matching Model with Transformers. The SMITH model, which refers to Siamese MultI- depth Transformer-based Hierarchical Encoder, is an extension of the MatchBERT model. It also adopts a Siamese network ar- chitecture, where each tower is a transformer-based hierarchical encoder to learn representations in different levels like sentence and document level of long documents. In this way, it combines the advantages of long distance dependency modeling of self-attention in Transformer encoders and hierarchical document structure mod- eling for long text representation learning. Figure 2 shows the Transformer-based hierarchical encoder in the SMITH model.
# 4 SIAMESE HIERARCHICAL MATCHING MODEL WITH TRANSFORMERS
# 4.1 Hierarchical Modeling for Document Representation
4.1.1 Splitting Documents with Greedy Sentence Filling. In order to represent one document with hierarchical Transformers, we need to split the document into multiple smaller text units. A natural way to perform this step is to split a document into sentences with some off-the-shelf sentence boundary detection libraries. However, as sentence length varies a lot, padding all sen- tences to the same length will introduce a lot of padded tokens for short sentences, which will make the usage of the model capacity unnecessarily inefficient. We want to preserve each sentenceâs se- mantics so methods which may break down sentences like fixed length sliding window[4] are not good options either. We propose a âgreedy sentence fillingâ method to reduce the number of padded words and increase the actual text length the model can take as its input. Specifically, we split a document into multiple sentence blocks of predefined length so that each sentence block can contain one or more natural sentences. We try to fill as many sentences as possible into one sentence block until the block reaches the prede- fined max block length. When the last sentence cannot fill in the current block, we move it to the next block. When an individual sentence alone is longer than the max sentence block length, we truncate it to fit in the current block. Figure 3 shows an example of how we split a document into sentence blocks. We can see that greedy sentence filling greatly reduces the number of padded tokens given a fixed maximum sentence block length.
4.1.2 Hierarchical Document Representation Learning. Let D denote the input document. With greedy sentence filling, we split it into a sequence of sentence blocks {ð1, ð2, ..., ðð¿ð }, where ðð = {ð ð }. ðð is the i-th sentence block in the document. 1 ð ð ð is the j-th word in the i-th sentence block. ð¿ð and ð¿ð denote the length of a document by sentence blocks and the length of a sentence block by words respectively. We learn the representation of each sentence ðð with the Transformer encoder described in [30], which consists of the multi-head self-attention and a position-wise fully connected feed-forward network with residual connections
2https://github.com/google-research/bert
Document Encoder Sentence Reps Dense Layers Dense + L2_Norm Token Reps Sentence Encoder Token Embeddings Position Embeddings + + ps ps | P2
Figure 2: The architecture of the Multi-depth Transformer-based Hierarchical Encoder in the SMITH model for document representation learning and matching. We visualize the sentence level Transformer encoder for the 1st, 2nd and the last sentence block in a document. The output sentence representations of sentence encoders become the inputs of the document level Transformer encoder.
[CLS] This is a great model based on BERT . [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [CLS] This is a great model based on BERT . OK. We âII try to write a paper . This can be a research contribution . [PAD] [SEP] [SEP] [CLs] <Another Sentence Block> [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [SEP] [SEP] [CLS] <Another Sentence>.......
Figure 3: An example of splitting a document into different sentence blocks using the greedy sentence filling method. Left: natural sentence splitting. Right: greedy sentence fill- ing splitting.
[10]. We firstly map the words in ðð to a sequence of dense vector representations
the representation for the whole document. There will be another dense layer added to transform the document representation with L2 normalization before we compute the cosine similarity between the two document representations in the document pair (ðð , ðð ).
4.1.3 Memory and Time Complexity Analysis of Two Level Hierarchical Transformers. Next letâs analyze memory and time complexity for the two level hierarchical Transformers. The atten- tion mechanism used in Transformer is the scaled dot-product attention, which performs transformation from a query and a set of key-value pairs to an output. The output representation is defined as a weighted sum of the values, where the weight to each value is computed as the interaction score between the query and the corresponding key normalized by the softmax function. Specifi- cally, given the input query embeddings Q, key embeddings K and value embeddings V, where Q â RðÃðQ Ãð» , K â RðÃðK Ãð» , V â RðÃðV Ãð» , the scaled dot-product attention is defined as:
(1) where eð ð ) is the sum of the token embedding and position embedding of word ð ð ð following the same practice in BERT. The token embedding is initialized randomly during pre- training and the position embedding follows the same setting as in Transformers [30]. The sentence level Transformers will trans- form E(ðð ) into a sequence of contextualized representations for , Tð words in the sentence block {Tð }. Following the setting 2 1 in BERT model, we use the contextual representation of the first token, the added [CLS] token, as the learned representation of the whole sentence block. We add another dense layer and perform a L2 normalization on the sentence block representation. The final sentence block representation also adds the sentence block position embedding to model the sentence block location in the document. With the learned sentence block representations from the sen- tence level Transformers and the sentence block position embed- dings, the document level Transformer encoders will produce a se- quence of contextual sentence block representations {S1, S2, ..., Sð¿ð }. We still use the first contextual sentence block representation as
# ð = ð¡ (ð ð
# ð ) + ð (ð ð
T Attention(Q, K,V) = softmax( QK (2) Vd
(2) where ð Q , ð K , ð V are the number of tokens in each sequence and ð K = ð V . ð is the batch size and ð» is the hidden size. To understand the memory cost of Transformers, we can focus on the attention computation in Equation 2. Let us assume ð K = ð V = ð Q = ð , then the term QKð has the shape [ð, ð , ð ], where ð is the maximum input sequence length. Let ð´ and ð¿ denote the number of attention heads and layers in Transformers, then the memory complexity of the attention computation in Transformers is ð (ð · ð´ · ð 2 · ð¿). This is why the memory cost of the scaled dot-product attention in Transformers grows quadratically as the increasing of the input sequence length, which makes it difficult to directly apply Transformers to very long input sequences 3. Similar conclusions also hold for the time complexity of scaled dot-product
3This is the memory cost of Transformers without considering the feed-forward layers. For the complete memory complexity analysis results including both attention and feed-forward layers in Transformers, we refer the interested readers to [16].
used in Transformers. For two level hierarchical Transformers, let ð¿ð denote the max sentence block length by tokens. Then we will split ð a document into ð¿ð sentence blocks. The memory complexity of the attention computation of sentence/ document level Transformers is
ð · ð´ · ð¿ð 2 · ð¿ · ð ð¿ð + ð à ð´ à ( ð ð¿ð ) 2 · ð¿ = (ð¿ð 2 · ð ð¿ð + ( ð ð¿ð ) 2) · ð · ð´ · ð¿ = (ð¿ð · ð + ð 2 ð¿2 ð ) · ð · ð´ · ð¿ (3)
Here we assume the number of attention heads and the num- ber of layers are the same for the sentence level Transformers and document level Transformers for simplicity. Thus the memory com- plexity of two level hierarchical Transformers is ð ( ð 2 · ð · ð´ · ð¿). ð¿2 ð Comparing with the original Transformers, we reduce the mem- ory complexity by a factor of ð¿2 ð with only performing local self- attention over tokens in the same sentence block.
4.1.4 Combine Representations from Different Levels. In or- der to integrate learned features from different levels of document structures, we consider several settings for generating the final document representations as follows:
Normal: we only use the output of the document level Trans- formers as the final document representation.
Sum-Concat: we firstly compute the sum of all sentence level representations and use the concatenation of the sum with the document level representation as the final document representation. Mean-Concat: we firstly compute the mean of all sentence level representations and use the concatenation of the mean with the document level representation as the final document representation. Attention: we firstly compute the weighted sum of the sen- tence level representations with attention mechanism: Dis hj - softmax(h;Wv), where h; ⬠Râ is the learned representation for the i-th sentence block by the sentence level Transformers. W ¢ R4XV is the projection matrix and v ⬠RY is the attention model parameter. Then we concatenate the weighted sum with the document level representation as the final document representation.
4.2 SMITH Model Pre-training For the model training of SMITH, we adopt the âpre-training + fine-tuningâ paradigm as in BERT. This approach is to firstly pre- train the model with large unlabeled plain text in an unsupervised learning fashion, and then fine-tune the model with a supervised downstream task so that only few parameters need to be learned from scratch. In addition to the masked word language modeling task proposed by Devlin et. al. [6], we also propose the masked sentence block language modeling task, because one of the basic units in the SMITH encoder for modeling documents is the sentence block. Masked sentence block prediction task can help the model learn the relations between different sentence blocks and hopefully, get a better understanding of whole documents. Our pre-training loss is a sum of masked word prediction loss and the masked sen- tence block prediction loss. For the details of the masked word prediction task, please refer to Devlin et. al. [6]. For the masked sentence block prediction task, we perform dynamic sentence block masking and masked sentence block prediction as follows.
4.2.1 Dynamic Sentence Block Masking. Let D = {h1, h2, ..., hð¿ð } denote a sequence of sentence block representations learned by the sentence level Transformers. For each document in the cur- rent batch, we randomly sample ð sentence blocks M = {hð |hð â Rð» , ð â K} and replace these sentence blocks with a randomly initialized masked sentence block vector Ëh â Rð» . For example, if we randomly select the 3rd and 5th sentence block for masking, the masked document becomes ËD = {h1, h2, Ëh, h4, Ëh, h6, ..., hð¿ð }. This dynamic sampling process repeats for every document in a batch in each step, so that the same document may get different masked sentence block positions in different steps. The dynamic masking strategy enables the model to predict a larger range of sentence blocks in a document compared with the opposite static masking.
4.2.2 Masked Sentence Block Prediction. To perform masked sentence block prediction, we consider a multi-class sentence block classification setting similar to the masked word prediction. How- ever, we do not have a global vocabulary for different sentence blocks. 4 Instead, we collect all the masked sentence blocks in a batch as a candidate sentence block pool, from which the model will try to predict the correct sentence block. For each masked sentence block position, the original sentence block in the current position is the positive example. The other co-masked sentence blocks in the current document and in the other documents of the same batch are the negative examples. Specifically, we apply document level Transformers on the masked document ËD to get a sequence of contextual sentence block representations {ËS1, ËS2, ..., ËSð¿ð }. Then ËSð will be used to predict the original sentence block representation hð . Given a batch of ðµ masked sentence blocks with the predicted sentence block representation ËS â RðµÃð» and the ground truth sen- tence block representation h â RðµÃð» where ðµ = ð à ð, we can compute a pairwise similarity matrix for every masked sentence block pair in the current batch as follows:
ð Sim( ËS, h) = ËSh
(4) where Sim( ËS, h) â RðµÃðµ where Sim( ËSð , hð ) is the predicted sim- ilarity between the j-th predicted sentence block representation and the i-th sentence block class. We normalize it with a softmax function to transform it to the predicted probability for the the i-th sentence block class as follows:
exp (Sim (Sj, hy)) DE, exp(Sim($;,h,)) p(hi|8;) = 6)
Thus all the sentence blocks {hð }, where ð â [1, ðµ], ð â ð can be treated as randomly sampled negative classes for ËSð . Finally we can compute the cross-entropy loss over all masked sentence blocks and the pre-training joint loss:
Lsp = â 1 ðµ ðµ âï¸ ð=1 ðµ âï¸ ð =1 1{ ð = ð } log ð (hð | ËSð ) (6)
(7) where Lsp and Lwp denote the masked sentence block predic-
tion loss and the masked word prediction loss respectively.
4In fact, the number of all unique sentence blocks can be unlimited considering different composition of words into sentence blocks.
Table 4: The statistics of experimental datasets. # of Doc- Pairs denotes the number of document pairs. AvgSPerD, Avg- WPerD, AvgWPerS denote the average number of sentences per document, average number of words per document and average number of words per sentence respectively.
Data Items # of DocPairs AvgSPerD AvgWPerD AvgWPerS Train 65,948 92.4 2035.3 22.0 Wiki65K Valid 8,166 92.0 2041.7 22.2 Test 8,130 91.0 1992.3 21.9 Train 104,371 111.6 3270.1 29.3 AAN104K Valid 12,818 111.4 3251.2 29.2 Test 12,696 111.1 3265.9 29.4
4.3 SMITH Model Fine-tuning and Inference After model pre-training, we fine-tune SMITH model on the down- stream document matching task with only the binary cross-entropy loss between the estimated matching probability and the ground truth matching label. Note that the word level and sentence block level language modeling masks added during the pre-training stage need to be removed during the fine-tuning stage to avoid losing document content information. After model pre-training and fine- tuning, the trained SMITH model can be used for document rep- resentation inference. The document representations inferred by SMITH model offline can be served online with fast similarity search libraries [15, 31].
5 EXPERIMENTS 5.1 Dataset Description Following [14], we evaluate our proposed methods with two datasets: Wikipedia relevant document recommendation data (Wiki65K) and ACL Anthology Network paper citation suggestion data (AAN104K) from the previous related work. The statistics of the experimen- tal datasets are shown in Table 4. Note that we do not use any TREC datasets like TREC 2019 Deep Learning Track data [3]. This is because these datasets focus on short-to-long matching like docu- ment ranking, aiming to match a short query to a set of documents, whereas our task is more on long-to-long document matching.
5.1.1 Relevant Document Recommendation Data. Relevant document recommendation can be useful in many real word ap- plications such as news recommendation, related Web page rec- ommendation, related QA posts recommendation, etc. We use the Wikipedia relevant document recommendation data from Jiang et. al. [14] as the evaluation set. The ground truth of document simi- larity is constructed based on the Jaccard similarity between the outgoing links of two Wikipedia documents with the assumption that similar documents have similar sets of outgoing links. The document pairs with similarities greater than 0.5 are considered as positive examples. For each positive document pair, the document with the lexicographical smaller URL is defined as the source docu- ment of the pair. Then a mismatched document from the outgoing links of the source document is sampled to generate a negative doc- ument pair. This negative sampling approach is better than random sampling from the entire corpus, since random sampled documents may be too irrelevant to make the task challenging enough to eval- uate the performance of different methods. For more details of this dataset, we refer interested readers to [14].
Note that there are around six thousands training examples and less than one thousand validation/testing examples in the Wikipedia data used in [14] 5. We produce a Wikipedia document matching dataset of ten times larger size as shown in the data statistics in Table 4. Thus the training/validation/testing data partition and statistics are different from the data in [14] and we report the results from running the model implementation on these datasets, which are different from the numbers reported in [14]. We refer to this data as Wiki65K since there are 65K training document pairs.
5.1.2 Paper Citation Suggestion Data. Paper citation sugges- tion can help researchers find related works and finish paper writing in a more efficient way. Given the content of a research paper and the other paper as a candidate citation, we would like to predict whether the candidate should be cited by the paper. We use the ACL Anthology Networks (AAN) Corpus [25] processed by Jiang et. al. [14] for the evaluation. The AAN data consists of 23,766 papers written by 18,862 authors in 373 venues related to NLP. For each paper with available text, the paper with each of its cited paper in the corpus is treated as a positive example. For each positive example, an irrelevant paper is randomly sampled to generate a negative example. The reference sections were removed to prevent the leakage of ground truth. The abstract sections were also re- moved to increase the difficulty of the task. We also filter document pairs with no section content or invalid UTF-8 text based on the processed data in [14] . We refer to this data as AAN104K since there are 104K training document pairs.
5.1.3 Unsupervised Language Model Pre-training Data. For the SMITH model pre-training, we create a randomly sampled Wikipedia collection, containing 714,800 documents with 43,532,832 sentences and 956,169,485 words. We pre-train SMITH model with unsupervised masked word and masked sentence block language modeling loss on this data and fine-tune the model on Wiki65K and AAN104K for the downstream document matching task.
5.2 Experimental Setup 5.2.1 Competing Methods. We consider different types of meth- ods for comparison as follows: 1) Hierarchical Attention Net- work (HAN). The HAN model [37] is a hierarchical attention model to learn document representations. For each sentence in the document, it applied an attention-based RNN to learn the sen- tence representation and to attend differentially to more or less important document content. 2) SMASH. The SMASH model [14] is the state-of-the-art model for long document matching. It adopts a Siamese multi-depth attention-based hierarchical recurrent neu- ral network (SMASH RNN) to learn long document representations for matching. 3) MatchBERT. The MatchBERT model has been presented in Section 3.2.1. 4) SMITH. This is our proposed method. We fixed the document level Transformers as 3 layers and tried several SMITH model variations as follows:
⢠SMITH-Short: the SMITH model with loading the pre-trained BERT checkpoint released by Devlin et al. [6]. We load the BERT-Base checkpoint pre-trained on uncased text and then fine-tune the model with only the document matching loss.
5https://research.google/pubs/pub47856/
Table 5: Comparison of different models over Wiki65K and AAN104K datasets. The best performance is highlighted in boldface. SMITH-WP+SP shows significant improvements over all baseline methods with p < 0.05 measured by Studentâs t-test. Note that SMITH-Short with input documents with maximum length larger than 512 and MatchBERT with input documents with maximum length larger than 256 will trigger the out-of-memory (OOM) issues on TPU V3. There is Ã2 in the BestDocLen, which denotes the best setting of the maximum input document length, since all the compared models are âDual-Encoderâ/ âSiameseâ models. Note that all the compared models are for the document/ document matching task. Models designed for the query/ document matching task are not comparable.
Data BestDocLen Accuracy 2048 Ã 2 2048 Ã 2 256 Ã 2 512 Ã 2 SMITH-Short 1536 Ã 2 SMITH-NP 1536 Ã 2 SMITH-WP 1536 Ã 2 SMITH-WP+SP Î over SMASH NA Î over MatchBERT NA Method HAN (NAACL16) SMASH (WWW19) MatchBERT 0.8875 0.9187 0.9316 0.9415 0.9054 0.9492 0.9585â¡ +4.33% +2.89% Wiki65K Precision Recall 0.8571 0.9177 0.9272 0.9317 0.9177 0.9366 0.9178 0.8911 0.9307 0.9466â¡ +3.15% +2.09% 0.9699 0.9237 0.9707 0.9720â¡ +5.92% +3.78% F1 0.8928 0.9177 0.9319 0.9431 0.9071 0.9503 0.9591â¡ +4.51% +2.92% Accuracy 0.8219 0.8375 0.8355 0.8212 0.7725 0.8400 0.8536â¡ +1.92% +2.17% AAN104K Precision Recall 0.7895 0.8224 0.8387 0.8654 0.8333 0.8201 0.8169 0.8106 0.8408 0.8431â¡ +2.52% +0.52% 0.8161 0.7062 0.8354 0.8657â¡ +3.89% +5.56% F1 0.8257 0.8278 0.8293 0.8165 0.7548 0.8381 0.8543â¡ +3.19% +3.01%
The maximum input text length is 512 (4 sentence blocks with 128 tokens per sentence block).
⢠SMITH-NP: the SMITH model without language modeling pre-training stage and trained from randomly initialized model parameters. We only train SMITH-NP model with the document matching data using text matching loss.
⢠SMITH-WP: the SMITH model pre-trained with masked word prediction loss in the pre-training collection and then fine-tuned with document matching loss on the downstream matching task data.
⢠SMITH-WP+SP: the SMITH model pre-trained with both masked word prediction loss and masked sentence block prediction loss on the pre-training collection and then fine- tuned with document matching loss on the downstream matching task data.
Note that we do not compare with any interaction-focused mod- els like DRMM [8], K-NRM [33], Duet [20] or MatchPyramid [22]. These models either do not scale to long documents or require heavy interactions between word pairs in two text sequences, which will lead to long inference latency in practice. Thus all the com- pared methods belong to representation-focused models or âDual- Encoderâ models where the document representation can be learned offline in advance before serving online with fast similarity search li- braries [15, 31]. Models like DRMM, KNRM, Duet, etc. are proposed for short-to-long text matching like query/document matching in- stead of long-to-long text matching that we focus on in this paper.
stage, we pre-train SMITH on the Wikipedia collection presented in Section 5.1.3 around 68 epochs until the validation loss does not decrease significantly. Pre-training of SMITH with 2 layers in the sentence level Transformers and 3 layers in the document level Transformers with max document length 1024 takes 50 minutes for one epoch on the Wikipedia collection and the pre-training stage takes around 57 hours. The pre-training loss depends on the model variation types (masked word prediction loss for SMITH-WP or the sum of masked word prediction loss and masked sentence block prediction loss for SMITH-WP+SP). We dynamically mask 2 sentence blocks per document if we use the masked sentence block prediction loss presented in Section 4.2 during pre-training. The masked word prediction task follows the similar way by Devlin et al. [6]. The sentence block length is 32. We tune max number of sentence blocks with values in {32, 48, 64}. Thus the max document length can be values in {1024, 1536, 2048}. We finally set the number of sentence blocks as 48 (max document length is 1536) for both Wiki65K and AAN104K, which achieves the best performance on the validation data. We tune parameters using the validation dataset and report model performance on the test dataset. Let ð¿1, ð»1, ð´1 and ð¿2, ð»2, ð´2 denote the number of Transformer layers, the hidden size, the number of attention heads in sentence level Transformers and document level Transformers. We fix ð¿2 = 3 and tune ð¿1 with values in {2, 4, 6}. We finally set ð¿1 = 6, ð»2 = 256, ð´2 = 4 and ð»1 = 256, ð´1 = 4 for both Wiki65K and AAN104K. Both training and evaluation batch size are 32. We optimize the models using Adam with learning rate 5e-5, ð½1 = 0.9, ð½2 = 0.999, ð = 1ð â 6. The dropout rate in all layers is 0.1.
5.2.2 Evaluation Methodology. We formalize the document matching task as a classification task where we would like to pre- dict whether two documents are relevant or not given a document pair. Thus we consider standard classification metrics including accuracy, precision, recall and F1-score for the evaluation.
5.2.3 Parameter Settings and Implementation Details. All models are implemented with TensorFlow6. We use TPU V37 for the model pre-training and fine-tuning. For the model pre-training
For the model fine-tuning stage, the hyper-parameters are almost the same to those used in the pre-training stage. The max number of training steps is 100K. The number of learning rate warm up steps is 10K. We remove both masked word prediction loss and masked sentence block prediction loss during fine-tuning, and update the pre-trained model parameters only using the document matching loss. The fine-tuning stage takes much less time ranging from 4 to 12 hours depending on the model and data settings.
# 6https://www.tensorflow.org/ 7https://cloud.google.com/tpu/
5.3 Evaluation Results We present evaluation results of different models over Wiki65K and AAN104K data in Table 5. We summarize our observations as follows: 1) Both SMITH-WP and SMITH-WP+SP models out- perform all the baseline methods including the stage-of-the-art long text matching method SMASH and MatchBERT based on pre- trained BERT models on both Wiki65K and AAN104K consistently. The comparison between SMITH-WP/ SMITH-Short and Match- BERT shows the effectiveness of introducing hierarchical document structure modeling with sentence level and document level Trans- formers for long document representation learning and matching. 2) If we compare the SMITH model settings with the pre-training stage (SMITH-Short, SMITH-WP, SMITH-WP+SP) with the SMITH model settings without the pre-training stage (SMITH-NP), we can find that language modeling pre-training can help increase the per- formance of the downstream document matching task by a large margin. Thus better language understanding via large scale lan- guage modeling pre-training will lead to better downstream task performance, which is consistent with the findings by Devlin et al. [6]. 3) Both SMITH-WP and SMITH-WP+SP outperform SMITH- Short, which is initialized by the pre-trained open source BERT model. We think the main reason is that currently SMITH-Short can only process at most 512 tokens due to TPU memory issues, which will hurt the performance. On the other hand, Both SMITH-WP and SMITH-WP+SP can process as long as 2048 tokens, which is a better setting for long document representation learning. 4) If we compare SMITH-WP with SMITH-WP+SP, we can find that adding the masked sentence block prediction task presented in Section 4.2 during the pre-training stage can also be helpful to improve the downstream document matching performance. The masked word prediction task proposed by Devlin et al. [6] can capture the word relations and dependencies in the pre-training corpus, whereas the masked sentence block prediction task can additionally force the model to learn the sentence-level relations and dependencies. Thus combining the masked word prediction task and the masked sen- tence block prediction task can contribute to a better pre-training language model for long document content understanding and better downstream document matching task performance.
5.4 Impact of Document Length We further analyze the impact of document length on the docu- ment matching performance. We fix the number of layers in the sentence level and the document level Transformers as 4 and 3, the max sentence block length as 32. Then we vary the number of looped sentence blocks per document for SMITH-WP+SP on Wiki65K and AAN104K with different values from 2 to 64. Thus the maximum document length increases from 64 to 2048. The per- formances of SMITH-WP+SP with different choices of maximum document length is shown in Figure 4. We can find that in general SMITH-WP+SP will achieve better performance as the maximum document length increases. This confirms the necessity of long text content modeling for document matching. The SMITH model which enjoys longer input text lengths compared with other stan- dard self-attention models is a better choice for long document representation learning and matching. We also studied the impact
SMITH-WP+SPQ@Wiki65K SMITH-WP+SP@AAN104K 0.88 os} 4 ~7A-@ 036} oe 0.964 6. ost 0.944 ist 924 0.82 Bea Accuracy @ @ Recall dea Accuracy @ @ Recall 0.904 tnt | @@ Precision @-® FI 0.80 He â}@@ Precision OF 64 256 512 768 1024 1536 2048 Maximum Document Length 64 256 512 768 1024 1536 2048 Maximum Document Length
Figure 4: Performance of SMITH-WP+SP on the validation data with different choices of maximum document length.
Table 6: The document matching performance with differ- ent choices of the number of layers in the sentence level Transformers on the validation data. ð¿1 denotes the number of layers in the sentence level Transformers.
Data Wiki65K AAN104K ð¿1 Accuracy 0.9537 2 0.9589 4 0.9594 6 0.8566 2 0.8573 4 0.8580 6 Precision 0.9449 0.9479 0.9426 0.8470 0.8426 0.8508 Recall 0.9635 0.9718 0.9784 0.8612 0.8694 0.8591 F1 0.9541 0.9597 0.9602 0.8540 0.8558 0.8549
of different sentence block lengths and found it has no major impact on the final performance.
# 5.5 Impact of the Number of Layers in Sentence Level Transformers
Next we analyze the impact of the number of layers in the sentence level Transformers on the final document matching performances. We set the document level Transformer layers as 3, the maximum sentence block length as 32, the number of sentence blocks per document as 48. So the maximum document length is 1536. Then we vary the number of layers in the sentence level Transformers and observe the change of the performances of SMITH-WP+SP. The results are shown in Table 6. We can find that the setting with 4 or 6 layers in the sentence level Transformer layers is slightly better than the setting with only 2 layers in the sentence level Transformers. Increasing the layers in the sentence level Transformers can help the model to learn sentence block semantic representation with more high level interactions. However, it also leads to larger memory cost such as the intermediate activation in each layer. Thus in practice this hyper-parameter has to be tuned with the validation dataset for the trade-off between the model capacity and memory cost.
# 5.6 Impact of Combining Representations from Different Levels
As presented in Section 4.1.4, we evaluate the performances of SMITH-WP+SP with different methods to combine representations from different levels. Table 7 shows the document matching perfor- mance with different choices of document representation combing methods. We can see that the ânormalâ combing method where we only use the output of the document level Transformers as the final document representation works best. For the other three methods,
Table 7: The document matching performance with dif- ferent choices of the document representation combining methods presented in Section 4.1.4 on the validation data.
Data Wiki65K AAN104K Combine Normal Sum-Concat Mean-Concat Attention Normal Sum-Concat Mean-Concat Attention Accuracy 0.9594 0.9192 0.9221 0.9431 0.8580 0.7632 0.7061 0.8434 Precision 0.9426 0.9103 0.8924 0.9099 0.8508 0.7924 0.6387 0.8162 Recall 0.9784 0.9301 0.9599 0.9836 0.8591 0.6962 0.9131 0.8758 F1 0.9602 0.9201 0.9249 0.9453 0.8549 0.7412 0.7516 0.8450
the âattentionâ method is better than âsum-concatâ and âmean- concatâ. One possible reason is that the weighted combination of sentence blocks can be helpful for generating better document representations as the attention weights can encode the relative im- portance of different sentence blocks on representing the document content, which is why attention based combining methods work better. The document level Transformers already learn a weighted combination based on the input sentence block representation se- quences, which already provide enough signals on the importance scores of different sentence blocks in a document.
6 CONCLUSIONS AND FUTURE WORK In this paper, we propose the Siamese Multi-depth Transformer- based Hierarchical (SMITH) Encoder for document representation learning and matching, which contains several novel design choices like two level hierarchical Transformers to adapt self-attention models for long text inputs. For model pre-training, we propose the masked sentence block language modeling task in addition to the original masked word language modeling task in BERT, to capture sentence block relations within a document. The experi- mental results on several benchmark datasets show that our pro- posed SMITH model outperforms previous state-of-the-art Siamese matching models including HAN, SMASH and BERT for long-form document matching. Moreover, our proposed model increases the maximum input text length from 512 to 2048 when compared with BERT-based baseline methods.
As a part of this work, we plan to release a large scale benchmark collection for the document matching task so that it is easier for researchers to compare different document matching methods in the future. It is also interesting to investigate how to utilize the learned document representation from Transformer-based hierar- chical encoders for other document-level language understanding tasks like document classification, clustering and ranking.
REFERENCES [1] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. 2015. A large annotated
corpus for learning natural language inference. In EMNLP â15. 632â642.
[2] R. Child, S. Gray, A. Radford, and I. Sutskever. 2019. Generating Long Sequences with Sparse Transformers. arXiv:1904.10509
[3] N. Craswell, B. Mitra, E. Yilmaz, D. Campos, and E. M Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820
[4] Z. Dai and J. Callan. 2019. Deeper Text Understanding for IR with Contextual Neural Language Modeling. In SIGIR â19.
[5] Z. Dai, Z. Yang, Y. Yang, J. G. Carbonell, Q. V. Le, and R. Salakhutdinov. 2019. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. arXiv:1901.02860
[6] J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [7] W. B. Dolan and C. Brockett. 2005. Automatically Constructing a Corpus of
Sentential Paraphrases. In IWP 2005. 9â16.
[8] J. Guo, Y. Fan, Q. Ai, and W. B. Croft. 2016. A Deep Relevance Matching Model for Ad-hoc Retrieval. In CIKM â16. 55â64.
[9] J. Guo, Y. Fan, L. Pang, L. Yang, Q. Ai, H. Zamani, C. Wu, W. B. Croft, and X. Cheng. 2019. A Deep Look into Neural Ranking Models for Information Retrieval. arXiv:1903.06902
[10] K. He, X. Zhang, S. Ren, and J. Sun. 2015. Deep Residual Learning for Image Recognition. arXiv:1512.03385
[11] J. Ho, N. Kalchbrenner, D. Weissenborn, and T. Salimans. 2019. Axial Attention in Multidimensional Transformers. arXiv:1912.12180
[12] B. Hu, Z. Lu, H. Li, and Q. Chen. 2014. Convolutional Neural Network Architec- tures for Matching Natural Language Sentences. In NIPS â14. 2042â2050. [13] P. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. P. Heck. 2013. Learning Deep Structured Semantic Models for Web Search using Clickthrough Data. In CIKM â13. 2333â2338.
[14] J. Jiang, M. Zhang, C. Li, M. Bendersky, N. Golbandi, and M. Najork. 2019. Se- mantic Text Matching for Long-Form Documents. In WWW â19. 795â806. [15] J. Johnson, M. Douze, and H. Jégou. 2017. Billion-scale similarity search with
GPUs. arXiv:1702.08734
[16] N. Kitaev, L. Kaiser, and A. Levskaya. 2020. Reformer: The Efficient Transformer. In ICLR â20.
[17] H. Li and J. Xu. 2014. Semantic Matching in Search. Now Publishers Inc., Hanover, MA, USA.
[18] R. Lowe, N. Pow, I. Serban, and J. Pineau. 2015. The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Dialogue Systems. arXiv:1506.08909
[19] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. 2013. Distributed Representations of Words and Phrases and their Compositionality. In NIPS â13. 3111â3119.
[20] B. Mitra, F. Diaz, and N. Craswell. 2017. Learning to Match Using Local and Distributed Representations of Text for Web Search. In WWW â17. 1291â1299.
[21] I. Ounis, C. MacDonald, and I. Soboroff. 2008. Overview of the TREC 2008 Blog Track. In TREC â08.
[22] L. Pang, Y. Lan, J. Guo, J. Xu, S. Wan, and X. Cheng. 2016. Text Matching as Image Recognition. In AAAI â16. 2793â2799.
[23] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettle- moyer. 2018. Deep contextualized word representations. arXiv:1802.05365 [24] J. Qiu, H. Ma, O. Levy, S. W. Yih, S. Wang, and J. Tang. 2019. Blockwise Self-
Attention for Long Document Understanding. arXiv:1911.02972
[25] D. R. Radev, P. Muthukrishnan, and V. Qazvinian. 2009. The ACL Anthology Network Corpus. In NLPIR4DL â09. 54â61.
[26] A. Radford. 2018. Improving Language Understanding by Generative Pre-Training. Preprint, OpenAI.
[27] J. W. Rae, A. Potapenko, S. M. Jayakumar, and T. P. Lillicrap. 2019. Compressive Transformers for Long-Range Sequence Modelling. arXiv:1911.05507
[28] A. Roy, M. T. Saffar, D. Grangier, and A. Vaswani. 2020. Efficient Content-Based Sparse Attention with Routing Transformers. arXiv:2003.05997
[29] S. Sukhbaatar, E. Grave, P. Bojanowski, and A. Joulin. 2019. Adaptive Attention Span in Transformers. arXiv:1905.07799
[30] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Å . Kaiser, and I. Polosukhin. 2017. Attention is All You Need. In NIPS â17.
[31] X. Wu, R. Guo, A. Suresh, S. Kumar, D. Holtmann-Rice, D. Simcha, and F. Yu. 2017. Multiscale Quantization for Fast Similarity Search. In NIPS â17. 5745â5755. [32] Y. Wu, W. Wu, C. Xing, M. Zhou, and Z. Li. 2017. Sequential Matching Network: A New Architecture for Multi-turn Response Selection in Retrieval-Based Chatbots. In ACL â17. 163â197.
[33] C. Xiong, Z. Dai, J. Callan, Z. Liu, and R. Power. 2017. End-to-End Neural Ad-hoc Ranking with Kernel Pooling. In SIGIR â17. 55â64.
[34] L. Yang, Q. Ai, J. Guo, and W. B. Croft. 2016. aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model. In CIKM â16. 287â296.
[35] Y. Yang, W. Yih, and C. Meek. 2015. WikiQA: A Challenge Dataset for Open- Domain Question Answering. In EMNLP â15. 2013â2018.
[36] Z. Yang, Z. Dai, Y. Yang, J. G. Carbonell, R. Salakhutdinov, and Q. V. Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. arXiv:1906.08237
[37] Z. Yang, D. Yang, C. Dyer, X. He, A. Smola, and E. Hovy. 2016. Hierarchical Attention Networks for Document Classification. In NAACL â16. 1480â1489. [38] W. Yin and H. Schütze. 2015. Convolutional Neural Network for Paraphrase
Identification. In NAACL â15. 901â911.
[39] J. Yu, M. Qiu, J. Jiang, J. Huang, S. Song, W. Chu, and H. Chen. 2018. Mod- elling Domain Relationships for Transfer Learning on Retrieval-based Question Answering Systems in E-commerce. In WSDM â18. 682â690.
[40] X. Zhang, F. Wei, and M. Zhou. 2019. HIBERT: Document Level Pre- training of Hierarchical Bidirectional Transformers for Document Summarization. arXiv:1905.06566 | {
"id": "1901.02860"
} |
2004.12158 | How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence | Legal Artificial Intelligence (LegalAI) focuses on applying the technology of
artificial intelligence, especially natural language processing, to benefit
tasks in the legal domain. In recent years, LegalAI has drawn increasing
attention rapidly from both AI researchers and legal professionals, as LegalAI
is beneficial to the legal system for liberating legal professionals from a
maze of paperwork. Legal professionals often think about how to solve tasks
from rule-based and symbol-based methods, while NLP researchers concentrate
more on data-driven and embedding methods. In this paper, we introduce the
history, the current state, and the future directions of research in LegalAI.
We illustrate the tasks from the perspectives of legal professionals and NLP
researchers and show several representative applications in LegalAI. We conduct
experiments and provide an in-depth analysis of the advantages and
disadvantages of existing works to explore possible future directions. You can
find the implementation of our work from https://github.com/thunlp/CLAIM. | http://arxiv.org/pdf/2004.12158 | Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, Maosong Sun | cs.CL | Accepted by ACL 2020 | null | cs.CL | 20200425 | 20200518 | 0 2 0 2
y a M 8 1 ] L C . s c [
5 v 8 5 1 2 1 . 4 0 0 2 : v i X r a
How Does NLP Beneï¬t Legal System: A Summary of Legal Artiï¬cial Intelligence Haoxi Zhong1, Chaojun Xiao1, Cunchao Tu1, Tianyang Zhang2, Zhiyuan Liu1â, Maosong Sun1 1Department of Computer Science and Technology Institute for Artiï¬cial Intelligence, Tsinghua University, Beijing, China Beijing National Research Center for Information Science and Technology, China 2Beijing Powerlaw Intelligent Technology Co., Ltd., China [email protected], {xcjthu,tucunchao}@gmail.com, [email protected], {lzy,sms}@tsinghua.edu.cn
# Abstract
Legal Artiï¬cial Intelligence (LegalAI) focuses on applying the technology of artiï¬cial intelli- gence, especially natural language processing, to beneï¬t tasks in the legal domain. In recent years, LegalAI has drawn increasing attention rapidly from both AI researchers and legal pro- fessionals, as LegalAI is beneï¬cial to the legal system for liberating legal professionals from a maze of paperwork. Legal professionals of- ten think about how to solve tasks from rule- based and symbol-based methods, while NLP researchers concentrate more on data-driven and embedding methods. In this paper, we de- scribe the history, the current state, and the fu- ture directions of research in LegalAI. We il- lustrate the tasks from the perspectives of legal professionals and NLP researchers and show several representative applications in LegalAI. We conduct experiments and provide an in- depth analysis of the advantages and disadvan- tages of existing works to explore possible fu- ture directions. You can ï¬nd the implemen- tation of our work from https://github. com/thunlp/CLAIM.
# Introduction
Legal Artiï¬cial Intelligence (LegalAI) mainly fo- cuses on applying artiï¬cial intelligence technology to help legal tasks. The majority of the resources in this ï¬eld are presented in text forms, such as judgment documents, contracts, and legal opinions. Therefore, most LegalAI tasks are based on Natural Language Processing (NLP) technologies.
LegalAI plays a signiï¬cant role in the legal do- main, as they can reduce heavy and redundant work for legal professionals. Many tasks in the legal do- main require the expertise of legal practitioners and a thorough understanding of various legal doc- uments. Retrieving and understanding legal docu- ments take lots of time, even for legal professionals.
Therefore, a qualiï¬ed system of LegalAI should reduce the time consumption of these tedious jobs and beneï¬t the legal system. Besides, LegalAI can also provide a reliable reference to those who are not familiar with the legal domain, serving as an affordable form of legal aid.
In order to promote the development of LegalAI, many researchers have devoted considerable efforts over the past few decades. Early works (Kort, 1957; Ulmer, 1963; Nagel, 1963; Segal, 1984; Gardner, 1984) always use hand-crafted rules or features due to computational limitations at the time. In recent years, with rapid developments in deep learning, re- searchers begin to apply deep learning techniques to LegalAI. Several new LegalAI datasets have been proposed (Kano et al., 2018; Xiao et al., 2018; Duan et al., 2019; Chalkidis et al., 2019b,a), which can serve as benchmarks for research in the ï¬eld. Based on these datasets, researchers began explor- ing NLP-based solutions to a variety of LegalAI tasks, such as Legal Judgment Prediction (Aletras et al., 2016; Luo et al., 2017; Zhong et al., 2018; Chen et al., 2019), Court View Generation (Ye et al., 2018), Legal Entity Recognition and Classiï¬- cation (Cardellino et al., 2017; ANGELIDIS et al., 2018), Legal Question Answering (Monroy et al., 2009; Taniguchi and Kano, 2016; Kim and Goebel, 2017), Legal Summarization (Hachey and Grover, 2006; Bhattacharya et al., 2019).
As previously mentioned, researchersâ efforts over the years led to tremendous advances in LegalAI. To summarize, some efforts concen- trate on symbol-based methods, which apply inter- pretable hand-crafted symbols to legal tasks (Ash- ley, 2017; Surden, 2018). Meanwhile, other efforts with embedding-based methods aim at designing efï¬cient neural models to achieve better perfor- mance (Chalkidis and Kampas, 2019). More specif- ically, symbol-based methods concentrate more on utilizing interpretable legal knowledge to reason
âCorresponding author.
â âSymbol- based â âApplications ae i âEmbedding-based 4 1 . I Methods : ' of LegalAI ! : Methods i â : ! âAlice and Bob are married and} « \ N 1 i (âKommon Taw 1 1 have a son, David. One day, ' a i : . . 1 Relation âBob died unexpectedly...... fl : j | Common law (also | Concept 1 â extracti â : 1 Judgment ! ; known as judicial |Embedding, iExtraction once ! Prediction ' | precedent or judge- : : marry with, Bob)! { A IQ . (David, son of, Alice) i 1 ' i l ! (David, son of, Bob) : ' 1 : i 1 Question ! (J i ! : 1 | Event Homicide Alarm iy Answering 1 (-.) Concept} ' Timeline mee ! ! . Knowledge i i Escape Arrested 1 ! J) Graph H i Similar Casi ! ! F prtmennene esses i Matching f ! : if Someone died? i 1 i i beta | 1 Someone hurt? i | 1 Pretrained ! etection 1 {ur by accident? > | i ; 1 Language ! , ! i Text Model | J â\ Summarization a â ,
Figure 1: An overview of tasks in LegalAI.
between symbols in legal documents, like events and relationships. Meanwhile, embedding-based methods try to learn latent features for prediction from large-scale data. The differences between these two methods have caused some problems in existing works of LegalAI. Interpretable symbolic models are not effective, and embedding-methods with better performance usually cannot be inter- preted, which may bring ethical issues to the legal system such as gender bias and racial discrimina- tion. The shortcomings make it difï¬cult to apply existing methods to real-world legal systems.
We summarize three primary challenges for both embedding-based and symbol-based methods in LegalAI: (1) Knowledge Modelling. Legal texts are well formalized, and there are many domain knowledge and concepts in LegalAI. How to uti- lize the legal knowledge is of great signiï¬cance. (2) Legal Reasoning. Although most tasks in NLP require reasoning, the LegalAI tasks are somehow different, as legal reasoning must strictly follow the rules well-deï¬ned in law. Thus combining pre- deï¬ned rules and AI technology is essential to legal reasoning. Besides, complex case scenarios and complex legal provisions may require more sophis- ticated reasoning for analyzing. (3) Interpretability. Decisions made in LegalAI usually should be in- terpretable to be applied to the real legal system. Otherwise, fairness may risk being compromised. Interpretability is as important as performance in LegalAI.
cluded as follows: (1) We describe existing works from the perspectives of both NLP researchers and legal professionals. Moreover, we illustrate sev- eral embedding-based and symbol-based methods and explore the future direction of LegalAI. (2) We describe three typical applications, including judgment prediction, similar case matching, and legal question answering in detail to emphasize why these two kinds of methods are essential to LegalAI. (3) We conduct exhaustive experiments on multiple datasets to explore how to utilize NLP technology and legal knowledge to overcome the challenges in LegalAI. You can ï¬nd the implemen- tation from github1. (4) We summarize LegalAI datasets, which can be regarded as the benchmark for related tasks. The details of these datasets can be found from github2 with several legal papers worth reading.
# 2 Embedding-based Methods
First, we describe embedding-based methods in LegalAI, also named as representation learning. Embedding-based methods emphasize on repre- senting legal facts and knowledge in embedding space, and they can utilize deep learning methods for corresponding tasks.
# 2.1 Character, Word, Concept Embeddings
Character and word embeddings play a signiï¬cant role in NLP, as it can embed the discrete texts into
The main contributions of this work are con-
1https://github.com/thunlp/CLAIM 2https://github.com/thunlp/LegalPapers
continuous vector space. Many embedding meth- ods have been proved effective (Mikolov et al., 2013; Joulin et al., 2016; Pennington et al., 2014; Peters et al., 2018; Yang et al., 2014; Bordes et al., 2013; Lin et al., 2015) and they are crucial for the effectiveness of the downstream tasks.
In LegalAI, embedding methods are also essen- tial as they can bridge the gap between texts and vectors. However, it seems impossible to learn the meaning of a professional term directly from some legal factual description. Existing works (Chalkidis and Kampas, 2019; Nay, 2016) mainly revolve around applying existing embedding methods like Word2Vec to legal domain corpora. To overcome the difï¬culty of learning professional vocabulary representations, we can try to capture both gram- matical information and legal knowledge in word embedding for corresponding tasks. Knowledge modelling is signiï¬cant to LegalAI, as many re- sults should be decided according to legal rules and knowledge.
Although knowledge graph methods in the le- gal domain are promising, there are still two major challenges before their practical usage. Firstly, the construction of the knowledge graph in LegalAI is complicated. In most scenarios, there are no ready-made legal knowledge graphs available, so researchers need to build from scratch. In addi- tion, different legal concepts have different repre- sentations and meanings under legal systems in different countries, which also makes it challeng- ing to construct a general legal knowledge graph. Some researchers tried to embed legal dictionar- ies (CvrËcek et al., 2012), which can be regarded as an alternative method. Secondly, a generalized legal knowledge graph is different in the form with those commonly used in NLP. Existing knowledge graphs concern the relationship between entities and concepts, but LegalAI focuses more on the explanation of legal concepts. These two chal- lenges make knowledge modelling via embedding in LegalAI non-trivial, and researchers can try to overcome the challenges in the future.
# 2.2 Pretrained Language Models
Pretrained language models (PLMs) such as BERT (Devlin et al., 2019) have been the recent focus in many ï¬elds in NLP (Radford et al., 2019; Yang et al., 2019; Liu et al., 2019a). Given the success of PLM, using PLM in LegalAI is also a very reasonable and direct choice. However, there
are differences between the text used by existing PLMs and legal text, which also lead to unsatisfac- tory performances when directly applying PLMs to legal tasks. The differences stem from the termi- nology and knowledge involved in legal texts. To address this issue, Zhong et al. (2019b) propose a language model pretrained on Chinese legal docu- ments, including civil and criminal case documents. Legal domain-speciï¬c PLMs provide a more quali- ï¬ed baseline system for the tasks of LegalAI. We will show several experiments comparing different BERT models in LegalAI tasks.
For the future exploration of PLMs in LegalAI, researchers can aim more at integrating knowledge into PLMs. Integrating knowledge into pretrained models can help the reasoning ability between le- gal concepts. Lots of work has been done on inte- grating knowledge from the general domain into models (Zhang et al., 2019; Peters et al., 2019; Hayashi et al., 2019). Such technology can also be considered for future application in LegalAI.
# 3 Symbol-based Methods
In this section, we describe symbol-based meth- ods, also named as structured prediction methods. Symbol-based methods are involved in utilizing legal domain symbols and knowledge for the tasks of LegalAI. The symbolic legal knowledge, such as events and relationships, can provide interpretabil- ity. Deep learning methods can be employed for symbol-based methods for better performance.
# Information Extraction
Information extraction (IE) has been widely stud- ied in NLP. IE emphasizes on extracting valuable information from texts, and there are many NLP works which concentrate on IE, including name entity recognition (Lample et al., 2016; Kuru et al., 2016; Akbik et al., 2019), relation extraction (Zeng et al., 2015; Miwa and Bansal, 2016; Lin et al., 2016; Christopoulou et al., 2018), and event ex- traction (Chen et al., 2015; Nguyen et al., 2016; Nguyen and Grishman, 2018).
IE in LegalAI has also attracted the interests of many researchers. To make better use of the par- ticularity of legal texts, researchers try to use on- tology (Bruckschen et al., 2010; Cardellino et al., 2017; Lenci et al., 2009; Zhang et al., 2017) or global consistency (Yin et al., 2018) for named entity recognition in LegalAI. To extract rela- tionship and events from legal documents, re-
searchers attempt to apply different NLP technolo- gies, including hand-crafted rules (Bartolini et al., 2004; Truyens and Eecke, 2014), CRF (Vacek and Schilder, 2017), joint models like SVM, CNN, GRU (Vacek et al., 2019), or scale-free identiï¬er network (Yan et al., 2017) for promising results.
Existing works have made lots of efforts to im- prove the effect of IE, but we need to pay more attention to the beneï¬ts of the extracted informa- tion. The extracted symbols have a legal basis and can provide interpretability to legal applications, so we cannot just aim at the performance of meth- ods. Here, we show two examples of utilizing the extracted symbols for interpretability of LegalAI: Relation Extraction and Inheritance Dispute. Inheritance dispute is a type of cases in Civil Law that focuses on the distribution of inheritance rights. Therefore, identifying the relationship between the parties is vital, as those who have the closest re- lationship with the deceased can get more assets. Towards this goal, relation extraction in inheritance dispute cases can provide the reason for judgment results and improve performance.
Event Timeline Extraction and Judgment Prediction of Criminal Case. In criminal cases, multiple parties are often involved in group crimes. To decide who should be primarily responsible for the crime, we need to determine what everyone has done throughout the case, and the order of these events is also essential. For example, in the case of crowd ï¬ghting, the person who ï¬ghts ï¬rst should bear the primary responsibility. As a result, a quali- ï¬ed event timeline extraction model is required for judgment prediction of criminal cases.
In future research, we need to concern more about applying extracted information to the tasks of LegalAI. The utilization of such information depends on the requirements of speciï¬c tasks, and the information can provide more interpretability.
# 3.2 Legal Element Extraction
In addition to those common symbols in gen- eral NLP, LegalAI also has its exclusive symbols, named legal elements. The extraction of legal ele- ments focuses on extracting crucial elements like whether someone is killed or something is stolen. These elements are called constitutive elements of crime, and we can directly convict offenders based on the results of these elements. Utilizing these elements can not only bring intermediate supervi- sion information to the judgment prediction task
but also make the prediction results of the model more interpretable.
Fact Description: One day, Bob used a fake reason for marriage decoration to borrow RMB 2k from Alice. After arrested, Bob has paid the money back to Alice.
Whether did Bob sell something? | Whether did Bob make a fictional fact? | x v Whether did Bob illegally possess the property of | Vv others? Judgment Results: Fraud.
Table 1: An example of element detection from Zhong et al. (2020). From this example, we can see that the extracted elements can decide the judgment results. It shows that elements are useful for downstream tasks.
Towards a more in-depth analysis of element- based symbols, Shu et al. (2019) propose a dataset for extracting elements from three different kinds of cases, including divorce dispute, labor dispute, and loan dispute. The dataset requires us to detect whether the related elements are satisï¬ed or not, and formalize the task as a multi-label classiï¬cation problem. To show the performance of existing methods on element extraction, we have conducted experiments on the dataset, and the results can be found in Table 2.
Divorce Labor Loan Model MiF MaF MiF MaF MiF MaF TextCNN 78.7 81.3 DPCNN 80.6 LSTM BiDAF 83.1 83.3 BERT BERT-MS 84.9 65.9 64.0 67.3 68.7 69.6 72.7 76.4 79.8 81.0 81.5 76.8 79.7 54.4 47.4 52.9 59.4 43.7 54.5 80.3 81.4 80.4 80.5 78.6 81.9 60.6 42.5 53.1 63.1 39.5 64.1
Table 2: Experimental results on extracting elements. Here MiF and MaF denotes micro-F1 and macro-F1.
We have implemented several classical encod- ing models in NLP for element extraction, in- cluding TextCNN (Kim, 2014), DPCNN (John- son and Zhang, 2017), LSTM (Hochreiter and Schmidhuber, 1997), BiDAF (Seo et al., 2016), and BERT (Devlin et al., 2019). We have tried two different versions of pretrained parameters of BERT, including the origin parameters (BERT) and the parameters pretrained on Chinese legal docu- ments (BERT-MS) (Zhong et al., 2019b). From the results, we can see that the language model pretrained on the general domain performs worse
than domain-speciï¬c PLM, which proves the ne- cessity of PLM in LegalAI. For the following parts of our paper, we will use BERT pretrained on legal documents for better performance.
From the results of element extraction, we can ï¬nd that existing methods can reach a promising performance on element extraction, but are still not sufï¬cient for corresponding applications. These el- ements can be regarded as pre-deï¬ned legal knowl- edge and help with downstream tasks. How to improve the performance of element extraction is valuable for further research.
# 4 Applications of LegalAI
In this section, we will describe several typical ap- plications in LegalAI, including Legal Judgment Prediction, Similar Case Matching and Legal Ques- tion Answering. Legal Judgment Prediction and Similar Case Matching can be regarded as the core function of judgment in Civil Law and Common Law system, while Legal Question Answering can provide consultancy for those who are unfamiliar with the legal domain. Therefore, exploring these three tasks can cover most aspects of LegalAI.
# 4.1 Legal Judgment Prediction
Legal Judgment Prediction (LJP) is one of the most critical tasks in LegalAI, especially in the Civil Law system. In the Civil Law system, the judgment results are decided according to the facts and the statutory articles. One will receive legal sanctions only after he or she has violated the prohibited acts prescribed by law. The task LJP mainly concerns how to predict the judgment results from both the fact description of a case and the contents of the statutory articles in the Civil Law system.
As a result, LJP is an essential and representa- tive task in countries with Civil Law system like France, Germany, Japan, and China. Besides, LJP has drawn lots of attention from both artiï¬cial intel- ligence researchers and legal professionals. In the following parts, we describe the research progress and explore the future direction of LJP.
Related Work LJP has a long history. Early works revolve around analyzing existing legal cases in speciï¬c circum- stances using mathematical or statistical meth- ods (Kort, 1957; Ulmer, 1963; Nagel, 1963; Keown, 1980; Segal, 1984; Lauderdale and Clark, 2012). The combination of mathematical methods and le- gal rules makes the predicted results interpretable.
Fact Description: One day, the defendant Bob stole cash 8500 yuan and T-shirts, jackets, pants, shoes, hats (identi- ï¬ed a total value of 574.2 yuan) in Beijing Lining store.
# Judgment Results
Relevant Articles Article 264 of Criminal Law. Applicable Charges Theft. Term of Penalty 6 months.
Table 3: An example of legal judgment prediction from Zhong et al. (2018). In this example, the judgment re- sults include relevant articles, applicable charges and the the term of penalty.
To promote the progress of LJP, Xiao et al. (2018) have proposed a large-scale Chinese crimi- nal judgment prediction dataset, C-LJP. The dataset contains over 2.68 million legal documents pub- lished by the Chinese government, making C-LJP a qualiï¬ed benchmark for LJP. C-LJP contains three subtasks, including relevant articles, appli- cable charges, and the term of penalty. The ï¬rst two can be formalized as multi-label classiï¬cation tasks, while the last one is a regression task. Be- sides, English LJP datasets also exist (Chalkidis et al., 2019a), but the size is limited.
With the development of the neural network, many researchers begin to explore LJP using deep learning technology (Hu et al., 2018; Wang et al., 2019; Li et al., 2019b; Liu et al., 2019b; Li et al., 2019a; Kang et al., 2019). These works can be di- vided into two primary directions. The ï¬rst one is to use more novel models to improve performance. Chen et al. (2019) use the gating mechanism to enhance the performance of predicting the term of penalty. Pan et al. (2019) propose multi-scale atten- tion to handle the cases with multiple defendants. Besides, other researchers explore how to utilize legal knowledge or the properties of LJP. Luo et al. (2017) use the attention mechanism between facts and law articles to help the prediction of applicable charges. Zhong et al. (2018) present a topological graph to utilize the relationship between different tasks of LJP. Besides, Hu et al. (2018) incorporate ten discriminative legal attributes to help predict low-frequency charges.
# Experiments and Analysis
To better understand recent advances in LJP, we have conducted a series of experiments on C- LJP. Firstly, we implement several classical text classiï¬cation models, including TextCNN (Kim, 2014), DPCNN (Johnson and Zhang, 2017),
Dev Test Task Charge Article Term Charge Article Term Metrics MiF MaF MiF MaF Dis MiF MaF MiF MaF Dis TextCNN DPCNN LSTM BERT 93.8 94.7 94.7 94.5 74.6 72.2 71.2 66.3 92.8 93.9 93.9 93.5 70.5 68.8 66.5 64.7 1.586 1.448 1.456 1.421 93.9 94.9 94.3 94.7 72.2 72.1 66.0 71.3 93.5 94.6 94.7 94.3 67.0 69.4 70.7 66.9 1.539 1.390 1.467 1.342 FactLaw TopJudge Gating Network 79.5 94.8 - 25.4 76.3 - 79.8 94.0 - 24.9 69.6 - 1.721 1.438 1.604 76.9 97.6 - 35.0 76.8 - 78.1 96.9 - 30.8 70.9 - 1.683 1.335 1.553
Table 4: Experimental results of judgment prediction on C-LJP. In this table, MiF and MaF denotes micro-F1 and macro-F1, and Dis denotes the log distance between prediction and ground truth.
LSTM (Hochreiter and Schmidhuber, 1997), and BERT (Devlin et al., 2019). For the parameters of BERT, we use the pretrained parameters on Chinese criminal cases (Zhong et al., 2019b). Secondly, we implement several models which are specially designed for LJP, including FactLaw (Luo et al., 2017), TopJudge (Zhong et al., 2018), and Gating Network (Chen et al., 2019). The results can be found in Table 4.
From the results, we can learn that most models can reach a promising performance in predicting high-frequency charges or articles. However, the models perform not well on low-frequency labels as there is a gap between micro-F1 and macro-F1. Hu et al. (2018) have explored few-shot learning for LJP. However, their model requires additional attribute information labelled manually, which is time-consuming and makes it hard to employ the model in other datasets. Besides, we can ï¬nd that performance of BERT is not satisfactory, as it does not make much improvement from those models with fewer parameters. The main reason is that the length of the legal text is very long, but the maxi- mum length that BERT can handle is 512. Accord- ing to statistics, the maximum document length is 56, 694, and the length of 15% documents is over 512. Document understanding and reasoning tech- niques are required for LJP.
can achieve promising performance, we still need to consider combining symbol-based with embedding-based methods in LJP. Take TopJudge as an example, this model formalizes topological order between the tasks in LJP (symbol-based part) and uses TextCNN for encoding the fact description. By combining symbol-based and embedding-based methods, TopJudge has achieved promising results on LJP. Comparing the results
between TextCNN and TopJudge, we can ï¬nd that just integrating the order of judgments into the model can lead to improvements, which proves the necessity of combining embedding-based and symbol-based methods.
For better LJP performance, some challenges require the future efforts of researchers: (1) Doc- ument understanding and reasoning techniques are required to obtain global information from ex- tremely long legal texts. (2) Few-shot learning. Even low-frequency charges should not be ignored as they are part of legal integrity. Therefore, han- dling in-frequent labels is essential to LJP. (3) In- terpretability. If we want to apply methods to real legal systems, we must understand how they make predictions. However, existing embedding-based methods work as a black box. What factors af- fected their predictions remain unknown, and this may introduce unfairness and ethical issues like gender bias to the legal systems. Introducing le- gal symbols and knowledge mentioned before will beneï¬t the interpretability of LJP.
# 4.2 Similar Case Matching
In those countries with the Common Law system like the United States, Canada, and India, judicial decisions are made according to similar and rep- resentative cases in the past. As a result, how to identify the most similar case is the primary con- cern in the judgment of the Common Law system. In order to better predict the judgment results in the Common Law system, Similar Case Matching (SCM) has become an essential topic of LegalAI. SCM concentrate on ï¬nding pairs of similar cases, and the deï¬nition of similarity can be various. SCM requires to model the relationship between cases from the information of different granularity, like fact level, event level and element level. In
other words, SCM is a particular form of semantic matching (Xiao et al., 2019), which can beneï¬t the legal information retrieval.
# Related Work
Traditional methods of Information Retrieve (IR) focus on term-level similarities with statistical mod- els, including TF-IDF (Salton and Buckley, 1988) and BM25 (Robertson and Walker, 1994), which are widely applied in current search systems. In addition to these term matching methods, other re- searchers try to utilize meta-information (Medin, 2000; Gao et al., 2011; Wu et al., 2013) to capture semantic similarity. Many machine learning meth- ods have also been applied for IR like SVD (Xu et al., 2010) or factorization (Rendle, 2010; Kabbur et al., 2013). With the rapid development of deep learning technology and NLP, many researchers apply neural models, including multi-layer per- ceptron (Huang et al., 2013), CNN (Shen et al., 2014; Hu et al., 2014; Qiu and Huang, 2015), and RNN (Palangi et al., 2016) to IR.
There are several LegalIR datasets, including COLIEE (Kano et al., 2018), CaseLaw (Locke and Zuccon, 2018), and CM (Xiao et al., 2019). Both COLIEE and CaseLaw are involved in retrieving most relevant articles from a large corpus, while data examples in CM give three legal documents for calculating similarity. These datasets provide benchmarks for the studies of LegalIR. Many re- searchers focus on building an easy-to-use legal search engine (Barmakian, 2000; Turtle, 1995). They also explore utilizing more information, in- cluding citations (Monroy et al., 2013; Geist, 2009; Raghav et al., 2016) and legal concepts (Maxwell and Schafer, 2008; Van Opijnen and Santos, 2017). Towards the goal of calculating similarity in se- mantic level, deep learning methods have also been applied to LegalIR. Tran et al. (2019) propose a CNN-based model with document and sentence level pooling which achieves the state-of-the-art results on COLIEE, while other researchers ex- plore employing better embedding methods for Le- galIR (Landthaler et al., 2016; Sugathadasa et al., 2018).
# Experiments and Analysis
To get a better view of the current progress of Le- galIR, we select CM (Xiao et al., 2019) for ex- periments. CM contains 8, 964 triples where each triple contains three legal documents (A, B, C). The task designed in CM is to determine whether
B or C is more similar to A. We have imple- mented four different types of baselines: (1) Term matching methods, TF-IDF (Salton and Buckley, 1988). (2) Siamese Network with two parameter- shared encoders, including TextCNN (Kim, 2014), BiDAF (Seo et al., 2016) and BERT (Devlin et al., 2019), and a distance function. (3) Semantic match- ing models in sentence level, ABCNN (Yin et al., 2016), and document level, SMASH-RNN (Jiang et al., 2019). The results can be found in Table 5.
Model Dev Test TF-IDF 52.9 53.3 TextCNN BiDAF BERT 62.5 63.3 64.3 69.9 68.6 66.8 62.7 SMASH RNN 64.2 ABCNN 69.9 65.8
Table 5: Experimental results of SCM. The evaluation metric is accuracy.
From the results, we observe that existing neu- ral models which are capable of capturing seman- tic information outperform TF-IDF, but the per- formance is still not enough for SCM. As Xiao et al. (2019) state, the main reason is that legal professionals think that elements in this dataset deï¬ne the similarity of legal cases. Legal profes- sionals will emphasize on whether two cases have similar elements. Only considering term-level and semantic-level similarity is insufï¬cient for the task. For the further study of SCM, there are two di- rections which need future effort: (1) Elemental- based representation. Researchers can focus more on symbols of legal documents, as the sim- ilarity of legal cases is related to these symbols like elements. (2) Knowledge incorporation. As semantic-level matching is insufï¬cient for SCM, we need to consider about incorporating legal knowledge into models to improve the performance and provide interpretability.
# 4.3 Legal Question-Answering
Another typical application of LegalAI is Legal Question Answering (LQA) which aims at answer- ing questions in the legal domain. One of the most important parts of legal professionalsâ work is to provide reliable and high-quality legal consulting services for non-professionals. However, due to the insufï¬cient number of legal professionals, it is often challenging to ensure that non-professionals
KD-Questions CA-Questions All Single All Single All Single All Unskilled Humans Skilled Humans 76.9 80.6 71.1 77.5 62.5 86.8 58.0 84.7 70.0 84.1 64.2 81.1 BiDAF BERT Co-matching HAF 36.7 38.0 35.8 36.6 20.6 21.2 20.2 21.4 37.2 38.9 35.8 42.5 22.2 23.7 20.3 19.8 38.3 39.7 38.1 42.6 22.0 22.3 21.2 21.2
Table 6: Experimental results of JEC-QA. The evaluation metrics is accuracy. The performance of unskilled and skilled humans is collected from original paper.
Question: Which crimes did Alice and Bob commit if they transported more than 1.5 million yuan of counterfeit currency from abroad to China?
# Direct Evidence
P1: Transportation of counterfeit money: · · · The defen- dants are sentenced to three years in prison. P2: Smuggling counterfeit money: · · · The defendants are sentenced to seven years in prison.
# Extra Evidence
P3: Motivational concurrence: The criminals carry out one behavior but commit several crimes. P4: For motivational concurrence, the criminals should be convicted according to the more serious crime.
# Comparison: seven years > three years
Answer: Smuggling counterfeit money.
Table 7: An example of LQA from Zhong et al. (2019a). In this example, direct evidence and extra evidence are both required for answering the question. The hard rea- soning steps prove the difï¬culty of legal question an- swering.
et al., 2018) contains about 500 yes/no questions. Moreover, the bar exam is a professional qual- iï¬cation examination for lawyers, so bar exam datasets (Fawei et al., 2016; Zhong et al., 2019a) may be quite hard as they require professional legal knowledge and skills.
In addition to these datasets, researchers have also worked on lots of methods on LQA. The rule- based systems (Buscaldi et al., 2010; Kim et al., 2013; Kim and Goebel, 2017) are prevalent in early research. In order to reach better performance, researchers utilize more information like the ex- planation of concepts (Taniguchi and Kano, 2016; Fawei et al., 2015) or formalize relevant documents as graphs to help reasoning (Monroy et al., 2009, 2008; Tran et al., 2013). Machine learning and deep learning methods like CRF (Bach et al., 2017), SVM (Do et al., 2017), and CNN (Kim et al., 2015) have also been applied to LQA. However, most existing methods conduct experiments on small datasets, which makes them not necessarily appli- cable to massive datasets and real scenarios.
can get enough and high-quality consulting ser- vices, and LQA is expected to address this issue.
In LQA, the form of questions varies as some questions will emphasize on the explanation of some legal concepts, while others may concern the analysis of speciï¬c cases. Besides, questions can also be expressed very differently between pro- fessionals and non-professionals, especially when describing domain-speciï¬c terms. These problems bring considerable challenges to LQA, and we con- duct experiments to demonstrate the difï¬culties of LQA better in the following parts.
Related Work In LegalAI, there are many datasets of question an- swering. Duan et al. (2019) propose CJRC, a legal reading comprehension dataset with the same for- mat as SQUAD 2.0 (Rajpurkar et al., 2018), which includes span extraction, yes/no questions, and unanswerable questions. Besides, COLIEE (Kano
# Experiments and Analysis
We select JEC-QA (Zhong et al., 2019a) as the dataset of the experiments, as it is the largest dataset collected from the bar exam, which guar- JEC-QA contains 28, 641 antees its difï¬culty. multiple-choice and multiple-answer questions, to- gether with 79, 433 relevant articles to help to an- swer the questions. JEC-QA classiï¬es questions into knowledge-driven questions (KD-Questions) and case-analysis questions (CA-Questions) and reports the performances of humans. We imple- mented several representative question answer- ing models, including BiDAF (Seo et al., 2016), BERT (Devlin et al., 2019), Co-matching (Wang et al., 2018), and HAF (Zhu et al., 2018). The experimental results can be found in Table 6.
From the experimental results, we can learn the
models cannot answer the legal questions well com- pared with their promising results in open-domain question answering and there is still a huge gap between existing models and humans in LQA.
For more qualiï¬ed LQA methods, there are sev- eral signiï¬cant difï¬culties to overcome: (1) Le- gal multi-hop reasoning. As Zhong et al. (2019a) state, existing models can perform inference but not multi-hop reasoning. However, legal cases are very complicated, which cannot be handled by single- step reasoning. (2) Legal concepts understand- ing. We can ï¬nd that almost all models are better at case analyzing than knowledge understanding, which proves that knowledge modelling is still chal- lenging for existing methods. How to model legal knowledge to LQA is essential as legal knowledge is the foundation of LQA.
# 5 Conclusion
In this paper, we describe the development status of various LegalAI tasks and discuss what we can do in the future. In addition to these applications and tasks we have mentioned, there are many other tasks in LegalAI like legal text summarization and information extraction from legal contracts. Nev- ertheless, no matter what kind application is, we can apply embedding-based methods for better per- formance, together with symbol-based methods for more interpretability.
Besides, the three main challenges of legal tasks remain to be solved. Knowledge modelling, legal reasoning, and interpretability are the foundations on which LegalAI can reliably serve the legal do- main. Some existing methods are trying to solve these problems, but there is still a long way for researchers to go.
In the future, for these existing tasks, researchers can focus on solving the three most pressing chal- lenges of LegalAI combining embedding-based and symbol-based methods. For tasks that do not yet have a dataset or the datasets are not large enough, we can try to build a large-scale and high- quality dataset or use few-shot or zero-shot meth- ods to solve these problems.
Furthermore, we need to take the ethical issues of LegalAI seriously. Applying the technology of LegalAI directly to the legal system will bring ethical issues like gender bias and racial discrimi- nation. The results given by these methods cannot convince people. To address this issue, we must note that the goal of LegalAI is not replacing the
legal professionals but helping their work. As a result, we should regard the results of the models only as a reference. Otherwise, the legal system will no longer be reliable. For example, profes- sionals can spend more time on complex cases and leave the simple cases for the model. However, for safety, these simple cases must still be reviewed. In general, LegalAI should play as a supporting role to help the legal system.
# Acknowledgements
This work is supported by the National Key Re- search and Development Program of China (No. 2018YFC0831900) and the National Natural Sci- ence Foundation of China (NSFC No. 61772302, 61532010). Besides, the dataset of element extrac- tion is provided by Gridsum.
# References
Alan Akbik, Tanja Bergmann, and Roland Vollgraf. 2019. Pooled contextualized embeddings for named entity recognition. In Proceedings of NAACL.
Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preotiuc-Pietro, and Vasileios Lampos. 2016. Pre- dicting judicial decisions of the european court of human rights: A natural language processing per- spective. PeerJ Computer Science, 2.
Iosif ANGELIDIS, Ilias CHALKIDIS, and Manolis KOUBARAKIS. 2018. Named entity recognition, linking and generation for greek legislation.
Kevin D Ashley. 2017. Artiï¬cial intelligence and legal analytics: new tools for law practice in the digital age. Cambridge University Press.
Ngo Xuan Bach, Tran Ha Ngoc Thien, Tu Minh Phuong, et al. 2017. Question analysis for viet- In Proceedings namese legal question answering. of KSE. IEEE.
Deanna Barmakian. 2000. Better search engines for law. Law Libr. J., 92.
Roberto Bartolini, Alessandro Lenci, Simonetta Mon- temagni, Vito Pirrelli, and Claudia Soria. 2004. Se- mantic mark-up of Italian legal texts through NLP- based techniques. In Proceedings of LREC.
Paheli Bhattacharya, Kaustubh Hiware, Subham Raj- garia, Nilay Pochhi, Kripabandhu Ghosh, and Sap- tarshi Ghosh. 2019. A comparative study of summa- rization algorithms applied to legal case judgments. In Proceedings of ECIR. Springer.
Antoine Bordes, Nicolas Usunier, Alberto Garcia- Jason Weston, and Oksana Yakhnenko.
2013. Translating embeddings for modeling multi- relational data. In Advances in neural information processing systems, pages 2787â2795.
M´ırian Bruckschen, Caio Northï¬eet, Paulo Bridi, Roger Granada, Renata Vieira, Prasad Rao, and Tomas Sander. 2010. Named entity recognition in the legal domain for ontology population. In Work- shop Programme, page 16. Citeseer.
Davide Buscaldi, Paolo Rosso, Jos´e Manuel G´omez- Soriano, and Emilio Sanchis. 2010. Answering questions with an n-gram based passage retrieval engine. Journal of Intelligent Information Systems, 34(2):113â134.
Cristian Cardellino, Milagro Teruel, Laura Alonso Ale- many, and Serena Villata. 2017. Legal NERC with ontologies, Wikipedia and curriculum learning. In Proceedings of EACL.
Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019a. Neural legal judgment prediction in English. In Proceedings of ACL.
Ilias Chalkidis, Emmanouil Fergadiotis, Prodromos Malakasiotis, and Ion Androutsopoulos. 2019b. Large-scale multi-label text classiï¬cation on EU leg- islation. In Proceedings of ACL.
Ilias Chalkidis and Dimitrios Kampas. 2019. Deep learning in law: early adaptation and legal word em- beddings trained on large corpora. Artiï¬cial Intelli- gence and Law, 27(2):171â198.
Huajie Chen, Deng Cai, Wei Dai, Zehui Dai, and Yadong Ding. 2019. Charge-based prison term pre- diction with deep gating network. In Proceedings of EMNLP-IJCNLP, pages 6363â6368.
Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng, and Jun Zhao. 2015. Event extraction via dynamic multi- pooling convolutional neural networks. In Proceed- ings of ACL.
Fenia Christopoulou, Makoto Miwa, and Sophia Ana- niadou. 2018. A walk-based model on entity graphs In Proceedings of ACL, for relation extraction. pages 81â88.
FrantiËsek CvrËcek, Karel Pala, and Pavel Rychl´y. 2012. Legal electronic dictionary for Czech. In Proceed- ings of LREC.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL.
Phong-Khac Do, Huy-Tien Nguyen, Chien-Xuan Tran, Minh-Tien Nguyen, and Minh-Le Nguyen. 2017. Legal question answering using ranking svm and deep convolutional neural network. arXiv preprint arXiv:1703.05320.
Xingyi Duan, Baoxin Wang, Ziyue Wang, Wentao Ma, Yiming Cui, Dayong Wu, Shijin Wang, Ting Liu, Tianxiang Huo, Zhen Hu, et al. 2019. Cjrc: A re- liable human-annotated benchmark dataset for chi- In Proceed- nese judicial reading comprehension. ings of CCL. Springer.
Biralatei Fawei, Adam Wyner, and Jeff Pan. 2016. Passing a USA national bar exam: a ï¬rst corpus for experimentation. In Proceedings of LREC.
Biralatei Fawei, Adam Wyner, Jeff Z Pan, and Mar- tin Kollingbaum. 2015. Using legal ontologies with rules for legal textual entailment. In AI Approaches to the Complexity of Legal Systems, pages 317â324. Springer.
Jianfeng Gao, Kristina Toutanova, and Wen-tau Yih. 2011. Clickthrough-based latent semantic models for web search. In Proceedings of SIGIR. ACM.
Anne von der Lieth Gardner. 1984. An artiï¬cial intelli- gence approach to legal reasoning.
Anton Geist. 2009. Using citation analysis techniques for computer-assisted legal research in continental jurisdictions. Available at SSRN 1397674.
Ben Hachey and Claire Grover. 2006. Extractive sum- marisation of legal texts. Artiï¬cial Intelligence and Law, 14(4):305â345.
Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, and Gra- ham Neubig. 2019. Latent relation language models. arXiv preprint arXiv:1908.07690.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8).
Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architec- tures for matching natural language sentences. In Proceedings of NIPS.
Zikun Hu, Xiang Li, Cunchao Tu, Zhiyuan Liu, and Maosong Sun. 2018. Few-shot charge prediction with discriminative legal attributes. In Proceedings of COLING.
Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of CIKM. ACM.
Jyun-Yu Jiang, Mingyang Zhang, Cheng Li, Michael Bendersky, Nadav Golbandi, and Marc Najork. 2019. Semantic text matching for long-form docu- ments. In Proceedings of WWW. ACM.
Rie Johnson and Tong Zhang. 2017. Deep pyramid convolutional neural networks for text categoriza- tion. In Proceedings of ACL.
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, H´erve J´egou, and Tomas Mikolov. 2016. Fasttext. zip: Compressing text classiï¬cation models. arXiv preprint arXiv:1612.03651.
Santosh Kabbur, Xia Ning, and George Karypis. 2013. Fism: factored item similarity models for top-n rec- In Proceedings of SIGKDD. ommender systems. ACM.
Liangyi Kang, Jie Liu, Lingqiao Liu, Qinfeng Shi, and Dan Ye. 2019. Creating auxiliary representations from charge deï¬nitions for criminal charge predic- tion. arXiv preprint arXiv:1911.05202.
Yoshinobu Kano, Mi-Young Kim, Masaharu Yosh- ioka, Yao Lu, Juliano Rabelo, Naoki Kiyota, Randy Goebel, and Ken Satoh. 2018. Coliee-2018: Evalu- ation of the competition on legal information extrac- tion and entailment. In Proceedings of JSAI, pages 177â192. Springer.
R Keown. 1980. Mathematical models for legal predic- tion. Computer/LJ, 2:829.
Mi-Young Kim and Randy Goebel. 2017. Two-step cascaded textual entailment for legal bar exam ques- In Proceedings of Articial Intelli- tion answering. gence and Law. ACM.
Mi-Young Kim, Ying Xu, and Randy Goebel. 2015. A convolutional neural network in legal question an- swering.
Mi-Young Kim, Ying Xu, Randy Goebel, and Ken Satoh. 2013. Answering yes/no questions in legal bar exams. In Proceedings of JSAI, pages 199â213. Springer.
Yoon Kim. 2014. Convolutional neural networks for sentence classiï¬cation. In Proceedings of EMNLP.
Fred Kort. 1957. Predicting supreme court decisions mathematically: A quantitative analysis of the âright to counselâ cases. American Political Science Re- view, 51(1):1â12.
Onur Kuru, Ozan Arkan Can, and Deniz Yuret. 2016. CharNER: Character-level named entity recognition. In Proceedings of COLING.
Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL.
J¨org Landthaler, Bernhard Waltl, Patrick Holl, and Flo- rian Matthes. 2016. Extending full text search for legal document collections using word embeddings. In JURIX, pages 73â82.
Benjamin E Lauderdale and Tom S Clark. 2012. The supreme courtâs many median justices. American Political Science Review, 106(4):847â866.
Alessandro Lenci, Simonetta Montemagni, Vito Pir- relli, and Giulia Venturi. 2009. Ontology learning from italian legal texts. Law, Ontologies and the Se- mantic Web, 188:75â94.
Shang Li, Hongli Zhang, Lin Ye, Xiaoding Guo, and Binxing Fang. 2019a. Mann: A multichannel at- tentive neural network for legal judgment prediction. IEEE Access.
Yu Li, Tieke He, Ge Yan, Shu Zhang, and Hui Wang. 2019b. Using case facts to predict penalty with deep In International Conference of Pioneer- learning. ing Computer Scientists, Engineers and Educators, pages 610â617. Springer.
Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation em- beddings for knowledge graph completion. In Pro- ceedings of AAAI.
Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of ACL.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Zhiyuan Liu, Cunchao Tu, and Maosong Sun. 2019b. Legal cause prediction with inner descriptions and In Proceedings of CCL, pages outer hierarchies. 573â586. Springer.
Daniel Locke and Guido Zuccon. 2018. A test collec- tion for evaluating legal case law search. In Proceed- ings of SIGIR. ACM.
Bingfeng Luo, Yansong Feng, Jianbo Xu, Xiang Zhang, and Dongyan Zhao. 2017. Learning to predict charges for criminal cases with legal basis. In Pro- ceedings of EMNLP.
K Tamsin Maxwell and Burkhard Schafer. 2008. Con- In cept and context in legal information retrieval. Proceedings of JURIX.
Douglas L Medin. 2000. Psychology of learning and motivation: advances in research and theory. Else- vier.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- Efï¬cient estimation of word arXiv preprint frey Dean. 2013. representations in vector space. arXiv:1301.3781.
Makoto Miwa and Mohit Bansal. 2016. End-to-end re- lation extraction using lstms on sequences and tree In Proceedings of ACL, pages 1105â structures. 1116.
Alfredo Monroy, Hiram Calvo, and Alexander Gel- bukh. 2008. Using graphs for shallow question In Mexican In- answering on legal documents. ternational Conference on Artiï¬cial Intelligence. Springer.
Alfredo Monroy, Hiram Calvo, and Alexander Gel- bukh. 2009. Nlp for shallow question answering of legal documents using graphs. In Proceedings of CI- CLing. Springer.
Alfredo L´opez Monroy, Hiram Calvo, Alexander Gel- bukh, and Georgina Garc´ıa Pacheco. 2013. Link analysis for representing and retrieving legal infor- mation. In Proceedings of CICLing, pages 380â393. Springer.
Stuart S Nagel. 1963. Applying correlation analysis to case prediction. Texas Law Review, 42:1006.
John J. Nay. 2016. Gov2Vec: Learning distributed rep- In resentations of institutions and their legal text. Proceedings of the First Workshop on NLP and Com- putational Social Science.
Thien Huu Nguyen, Kyunghyun Cho, and Ralph Gr- ishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of NAACL.
Thien Huu Nguyen and Ralph Grishman. 2018. Graph convolutional networks with argument-aware pool- ing for event detection. In Proceedings of AAAI.
Hamid Palangi, Li Deng, Yelong Shen, Jianfeng Gao, Xiaodong He, Jianshu Chen, Xinying Song, and Rabab Ward. 2016. Deep sentence embedding using long short-term memory networks: Analysis and ap- plication to information retrieval. IEEE/ACM Trans- actions on Audio, Speech and Language Processing (TASLP), 24(4).
Sicheng Pan, Tun Lu, Ning Gu, Huajuan Zhang, and Chunlin Xu. 2019. Charge prediction for multi- defendant cases with multi-scale attention. In CCF Conference on Computer Supported Cooperative Work and Social Computing. Springer.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of EMNLP, pages 1532â 1543.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.
Matthew E Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced con- In Proceedings of textual word representations. EMNLP-IJCNLP.
Xipeng Qiu and Xuanjing Huang. 2015. Convolutional neural tensor network architecture for community- based question answering. In Proceedings of IJCAI.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
K Raghav, P Krishna Reddy, and V Balakista Reddy. 2016. Analyzing the extraction of relevant legal judgments using paragraph-level and citation infor- AI4JCArtiï¬cial Intelligence for Justice, mation. page 30.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- tions for SQuAD. In Proceedings of ACL.
Steffen Rendle. 2010. Factorization machines. In Pro- ceedings of ICDM. IEEE.
Stephen E Robertson and Steve Walker. 1994. Some simple effective approximations to the 2-poisson In Pro- model for probabilistic weighted retrieval. ceedings of SIGIR.
Gerard Salton and Christopher Buckley. 1988. Term- weighting approaches in automatic text retrieval. In- formation processing & management.
Predicting supreme court cases probabilistically: The search and seizure cases, American Political Science Review, 1962-1981. 78(4):891â900.
Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention ï¬ow for machine comprehension. arXiv preprint arXiv:1611.01603.
Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr´egoire Mesnil. 2014. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of CIKM. ACM.
Yi Shu, Yao Zhao, Xianghui Zeng, and Qingli Ma. 2019. Cail2019-fe. Technical report, Gridsum.
Keet Sugathadasa, Buddhi Ayesha, Nisansa de Silva, Amal Shehan Perera, Vindula Jayawardana, Dimuthu Lakmal, and Madhavi Perera. 2018. Legal document retrieval using document vector embed- In Proceedings of SAI. dings and deep learning. Springer.
Harry Surden. 2018. Artiï¬cial intelligence and law: An overview. Ga. St. UL Rev.
Ryosuke Taniguchi and Yoshinobu Kano. 2016. Legal yes/no question answering system using case-role In Proceedings of JSAI, pages 284â298. analysis. Springer.
Oanh Thi Tran, Bach Xuan Ngo, Minh Le Nguyen, and Akira Shimazu. 2013. Answering legal questions by mining reference information. In Proceedings of JSAI. Springer.
Vu Tran, Minh Le Nguyen, and Ken Satoh. 2019. Building legal case retrieval systems with lexical matching and summarization using a pre-trained phrase scoring model. In Proceedings of Artiï¬cial Intelligence and Law. ACM.
Maarten Truyens and Patrick Van Eecke. 2014. Legal aspects of text mining. In Proceedings of LREC.
Howard Turtle. 1995. Text retrieval in the legal world. Artiï¬cial Intelligence and Law, 3(1-2).
S Sidney Ulmer. 1963. Quantitative analysis of judi- cial processes: Some practical and theoretical appli- cations. Law and Contemporary Problems, 28:164.
Thomas Vacek, Ronald Teo, Dezhao Song, Timothy Nugent, Conner Cowling, and Frank Schilder. 2019. Litigation analytics: Case outcomes extracted from US federal court dockets. In Proceedings of NLLP Workshop.
Tom Vacek and Frank Schilder. 2017. A sequence In Proceed- approach to case outcome detection. ings of Articial Intelligence and Law, pages 209â 215. ACM.
Marc Van Opijnen and Cristiana Santos. 2017. On the concept of relevance in legal information retrieval. Artiï¬cial Intelligence and Law, 25(1).
Hui Wang, Tieke He, Zhipeng Zou, Siyuan Shen, and Yu Li. 2019. Using case facts to predict accusation based on deep learning. In Proceedings of QRS-C, pages 133â137. IEEE.
Shuohang Wang, Mo Yu, Jing Jiang, and Shiyu Chang. 2018. A co-matching model for multi-choice read- ing comprehension. In Proceedings of ACL.
Wei Wu, Hang Li, and Jun Xu. 2013. Learning query and document similarities from click-through bipar- tite graph with metadata. In Proceedings of WSDM. ACM.
Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Yansong Feng, Xianpei Han, Zhen Hu, Heng Wang, et al. 2018. Cail2018: A large-scale legal dataset for judgment prediction. arXiv preprint arXiv:1807.02478.
Chaojun Xiao, Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Zhiyuan Liu, Maosong Sun, Tianyang Zhang, Xianpei Han, Heng Wang, Jianfeng Xu, et al. 2019. Cail2019-scm: A dataset of similar case matching in legal domain. arXiv preprint arXiv:1911.08962.
Jun Xu, Hang Li, and Chaoliang Zhong. 2010. Rel- In Proceedings of evance ranking using kernels. AIRS. Springer.
Yukun Yan, Daqi Zheng, Zhengdong Lu, and Sen Song. 2017. Event identiï¬cation as a decision process with arXiv preprint non-linear representation of text. arXiv:1710.00969.
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- arXiv preprint ing for language understanding. arXiv:1906.08237.
Hai Ye, Xin Jiang, Zhunchen Luo, and Wenhan Chao. 2018. Interpretable charge predictions for criminal cases: Learning to generate court views from fact descriptions. In Proceedings of NAACL.
Wenpeng Yin, Hinrich Sch¨utze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-based con- volutional neural network for modeling sentence pairs. Transactions of the Association for Compu- tational Linguistics.
Xiaoxiao Yin, Daqi Zheng, Zhengdong Lu, and Neural entity reasoner arXiv preprint Ruifang Liu. 2018. for global consistency in ner. arXiv:1810.00347.
Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. 2015. Distant supervision for relation extraction via In Pro- piecewise convolutional neural networks. ceedings of EMNLP.
Ni Zhang, Yi-Fei Pu, Sui-Quan Yang, Ji-Liu Zhou, and Jin-Kang Gao. 2017. An ontological chinese legal consultation system. IEEE Access, 5:18250â18261.
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In Proceedings of ACL.
Haoxi Zhong, Zhipeng Guo, Cunchao Tu, Chaojun Xiao, Zhiyuan Liu, and Maosong Sun. 2018. Le- gal judgment prediction via topological learning. In Proceedings of EMNLP.
Haoxi Zhong, Yuzhong Wang, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. Iter- atively questioning and answering for interpretable legal judgment prediction. In Proceedings of AAAI.
Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2019a. Jec-qa: A legal-domain question answering dataset. arXiv preprint arXiv:1911.12011.
Haoxi Zhong, Zhengyan Zhang, Zhiyuan Liu, and Maosong Sun. 2019b. Open chinese language pre- trained model zoo. Technical report, Technical Re- port. Technical Report.
Haichao Zhu, Furu Wei, Bing Qin, and Ting Liu. 2018. Hierarchical attention ï¬ow for multiple-choice read- ing comprehension. In Proceedings of AAAI. | {
"id": "1710.00969"
} |
2004.11045 | Distilling Knowledge for Fast Retrieval-based Chat-bots | Response retrieval is a subset of neural ranking in which a model selects a
suitable response from a set of candidates given a conversation history.
Retrieval-based chat-bots are typically employed in information seeking
conversational systems such as customer support agents. In order to make
pairwise comparisons between a conversation history and a candidate response,
two approaches are common: cross-encoders performing full self-attention over
the pair and bi-encoders encoding the pair separately. The former gives better
prediction quality but is too slow for practical use. In this paper, we propose
a new cross-encoder architecture and transfer knowledge from this model to a
bi-encoder model using distillation. This effectively boosts bi-encoder
performance at no cost during inference time. We perform a detailed analysis of
this approach on three response retrieval datasets. | http://arxiv.org/pdf/2004.11045 | Amir Vakili Tahami, Kamyar Ghajar, Azadeh Shakery | cs.IR, cs.CL | Accepted for publication in the 43rd International ACM SIGIR
Conference on Research and Development in Information Retrieval (SIGIR '20) | null | cs.IR | 20200423 | 20200423 | # ee
0 2 0 2
r p A 3 2 ] R I . s c [
1 v 5 4 0 1 1 . 4 0 0 2 : v i X r a
# Distilling Knowledge for Fast Retrieval-based Chat-bots
Amir Vakili Tahami [email protected] University of Tehran
Kamyar Ghajar [email protected] University of Tehran
Azadeh Shakery [email protected] University of Tehran
ABSTRACT Response retrieval is a subset of neural ranking in which a model selects a suitable response from a set of candidates given a con- versation history. Retrieval-based chat-bots are typically employed in information seeking conversational systems such as customer support agents. In order to make pairwise comparisons between a conversation history and a candidate response, two approaches are common: cross-encoders performing full self-attention over the pair and bi-encoders encoding the pair separately. The former gives better prediction quality but is too slow for practical use. In this paper, we propose a new cross-encoder architecture and trans- fer knowledge from this model to a bi-encoder model using distil- lation. This eï¬ectively boosts bi-encoder performance at no cost during inference time. We perform a detailed analysis of this ap- proach on three response retrieval datasets.
CCS CONCEPTS ⢠Information systems puting methodologies
Retrieval models and ranking; ⢠Com- Natural language processing.
â â
the task of selecting a suitable response among a set of candidates according to a conversation history.
By pre-training large scale language models on vast corpora and subsequently ï¬ne-tuning these models on downstream tasks, re- searchers have achieved state-of-the-art results in a wide variety of natural language tasks [3]. This process has also been successfully applied to the task of response retrieval [4, 6, 13]. Current state-of- the-art response retrieval focuses on using these pre-trained trans- former language models such as BERT [3]. When using a deep pre-trained transformer for the task of comparing two text inputs, two approaches are common: either encoding representations sep- arately (bi-encoding) or encoding the concatenation of the two (cross-encoding). The BERT bi-encoder encodes two separate rep- resentations using pre-trained deep multi-layer transformers and compares them using a dot product operation. The BERT cross- encoder concatenates the conversation history and candidate re- sponse and encodes them into a single representation, which is fed into a fully connected network that gives a matching score. The lat- ter method achieves better prediction quality but is far too slow for practical use [6].
# KEYWORDS Retrieval-based chat-bot, Response ranking, Neural information re- trieval
ACM Reference Format: Amir Vakili Tahami, Kamyar Ghajar, and Azadeh Shakery. 2020. Distilling Knowledge for Fast Retrieval-based Chat-bots. In . ACM, New York, NY, USA, 5 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
1 INTRODUCTION Response retrieval is a subset of neural ranking in which a model selects a suitable response from a set of candidates given a conver- sation history. Retrieval-based chat-bots are typically employed in information seeking conversational systems such as customer sup- port agents. They have been used in real-world products such as Microsoft XiaoIce [15] and Alibaba GroupâÄŹs AliMe Assist [9].
To ï¬nd the best response to a particular conversationâs chat his- tory traditional text retrieval methods such as term frequency have proven to be insuï¬cient [10], therefore the majority of modern re- search focuses on neural ranking approaches [4, 6, 10]. These meth- ods rely on training artiï¬cial neural networks on large datasets for
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for proï¬t or commercial advantage and that copies bear this notice and the full cita- tion on the ï¬rst page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or re- publish, to post on servers or to redistribute to lists, requires prior speciï¬c permission and/or a fee. Request permissions from [email protected]. SIGIR â20, July 25-30, 2020, Xiâan, China © 2020 Association for Computing Machinery. https://doi.org/10.1145/nnnnnnn.nnnnnnn
While bi-encoding does give worse results, previous work has shown that one can signiï¬cantly reduce its inference time by pre- encoding candidate responses oï¬ine so that during inference, only the conversation history needs to be encoded. This, in turn, means that at inference time, bi-encoders can potentially perform pair- wise comparisons between a conversation history and millions of candidate responses. Such a feat is impossible to do with cross- encoders as they must recalculate encodings for each conversa- tion history and candidate response pair. Naturally, this makes bi- encoders a desirable solution in conversational systems where real- time response selection is required [6]. Because of this improving the performance of bi-encoders is a popular avenue of research when it comes to response retrieval.
In this paper, we demonstrate one possible improvement to bi- encoders, which will boost their prediction quality without aï¬ect- ing their prediction speed. We propose transferring knowledge from the better performing BERT cross-encoder to the much faster BERT bi-encoder. This method will raise BERT bi-encoder prediction qual- ity without increasing inference time. We employ knowledge distil- lation, which is an approach where a model teaches another model to mimic it as a student [5]. Essentially, the student model learns to reproduce the outputs of the more complex teacher model. Unlike gold labels, the output of a neural network is not constrained to a binary variable and as such it can provide a much richer signal when training the student model. Knowledge distillation has been successfully applied in natural language understanding, machine translation, and language modeling tasks [7, 16, 20].
We also introduce a new cross-encoder architecture we call the enhanced BERT cross-encoder. This architecture is speciï¬cally de- signed for the task of response retrieval and gives better results
SIGIR â20, July 25-30, 2020, Xiâan, China
Table 1: Statistics for the datasets.
UDC DSTC7 MANtIS â of candidates 10 100 11 Trn Vld Tst Trn Vld Tst Trn Vld Tst â of samples 500k 50k 50k 100k 5k 1k 82k 18k 18k
than the regular BERT cross-encoder. It also has the advantage of being faster to train. This model serves as our teacher, and we use the BERT bi-encoder [6] as our student model. We evaluate our approach on three response retrieval data-sets. Our experiments show that our knowledge distillation approach enhances the pre- diction quality of BERT the bi-encoder. This increase comes to a no-cost during inference time.
2 METHOD First, we explain the task in further detail. Next, we describe the teacher and student models used for the knowledge distillation ap- proach. Then we describe the knowledge distillation procedure.
2.1 Task Deï¬nition The task of response retrieval can be formalized as follows: Sup- ci , ri , yi pose we have a dataset } · · · { represents the conversation and ri = represents a can- } is a label. yi = 1 means that ri didate response and yi } is a suitable choice for ci . ti are tokens extracted from text . The goal of a model should be to learn a function д that predicts the matching degree between any new conversation history c and a candidate response r . Once a given model ranks a set of candi- dates, its prediction quality is then measured using recall@1 (1 if the modelâs ï¬rst choice is correct otherwise 0) and mean reciprocal rank (MRR).
2.2 Model Architecture For the student network, we use the previously proposed BERT bi-encoder [6]. The conversation history and response candidate tokens are encoded separately using BERT. To aggregate the ï¬- nal layer encodings into a single vector, the ï¬rst tokenâs encoding, which corresponds to an individual [CLS] token, is selected. BERT requires all inputs to be prepended with this special token. The two aggregated vectors are compared using a dot-product operation.
Similarly, our teacher model uses a BERT transformer to en- code the conversation history and candidate response. However, for comparing the last layer encodings we use a combination of scaled dot-product attention [18] and the SubMult function [19] for calculating the matching score. Below we give a brief explana- tion of these components before describing how they are used.
d Rnk à is weighted by an importance score deï¬ned by its similarity to each entry of query q . For each entry of q the entries of k are then linearly combined with the weights to form a new rep- resentation. Scaled dot-product attention is a particular version of attention deï¬ned as:
Amir Vakili Tahami, Kamyar Ghajar, and Azadeh Shakery
Att(q, k) = softmar{ 2) +k (1)
The SubMult function [19] is a function designed for comparing Rd which has been used to great eï¬ect two vectors a in various text matching tasks including response retrieval [17]. It is deï¬ned as follows:
SubMult = a b b b (2)
# a,b (
a â (
a ) â (
)
â
â
â
)
where and â ators respectively. â are concatenation and hadamard product oper-
Utilizing these components we build our enhanced cross-encoder architecture. First, like the bi-encoder, we encode the conversation history c
â
â
c â² = T
, r â² = T
câ =T(c), râ = T(r)
# c (
# r (
) where T is the BERT transformer and c â² â To compare the encoded conversation history c â² and encoded candidate response r â², ï¬rst we perform a cross attention operation using the previously described components:
)
# Rm
# Ëc = W1 Ër = W1 R4d
c â², r â²)) ( r â², c â²)) ( d is a a learned parameter. We aggregate à d by concatenating the ï¬rst token (corre- Rn Ëc à sponding to [CLS]), the max pool and average pool over the tokens:
# c â², Att ( r â², Att (
·
¯c = Ëc1 = Ër1 ¯r Ëci mean m i 1 ⤠⤠mean n i 1 ⤠⤠Ëci â â 1 (4) Ëri Ëri â
max m i ⤠⤠max n i 1 ⤠⤠We compare the aggregated ¯c, ¯r
â Rd vectors using a ï¬nal Sub-
â Mult function and a two layer fully connected network:
= W2 ) d ,W3 Ã
¯c, ¯r W3 ( ( 1 are learned parameters. Our en- where W2 à hanced BERT architecture essentially encodes the conversation his- tory and candidate response tokens separately using BERT, then applies as single layer of cross-attention on those encodings.
# c, r д ( R12d
# ReLU ( Rd
)))
We believe our enhanced cross-encoder architecture will per- form better than regular cross-encoders for two reasons. Firstly, we do not concatenate conversation history and candidate responses. This means we can use the encoded candidate response tokens of other samples in a training batch as negative samples [11]. Scaled dot-product attention is simple enough that recalculating it for other candidates in the batch does not add signiï¬cant overhead, especially when compared to rerunning BERT for every possible conversation history and candidate response pair. Thus we can pro- cess more negative samples than would be feasible in a regular cross-encoder. Previous research has already shown that increas- ing the number of negative samples is eï¬ective for response re- trieval [6]. Secondly, the addition of the SubMult function means we can achieve much more reï¬ned text matching between the con- versation history and candidate response.
Distilling Knowledge for Fast Retrieval-based Chat-bots
2.3 Distillation Objective Distillation achieves knowledge transfer at the output level. The student learns from both dataset gold labels and teacher predicted probabilities, which are also a useful source of information [1]. For example, in sentiment classiï¬cation, certain sentences might have very strong or weak polarities and binary labels are not enough to convey this information.
Similar to previous work [16], we add a distillation objective to our loss function which penalizes the mean squared error loss between the student and teacher model outputs:
T z( S z( 2 Ldistill = ) â ||
)|| ) are the teacher and student model outputs. At training time the distillation objective is used in conjunction with regular cross entropy loss as follows:
# T where z(
S ), z(
= α C E + 1 α
# ) · Ldistill
# L
L
(
â
where α is a hyper-parameter. This procedure is model agnostic and can transfer information between entirely diï¬erent architec- tures.
3 EXPERIMENTS In this section we give a brief overview of experiments settings.
3.1 Datasets We consider three information-seeking conversation datasets widely used in the training of neural ranking models for response retrieval. The Ubuntu Dialogue Corpus (UDC) [10] and DSTC7 sentence se- lection track dataset [2] are collected form a chatroom dedicated to the support of the Ubuntu operating system. We also include a version of UDC where the training set has been reduced to 20% so as to study the eï¬ects of limited training data. MANtIS [13] was built from conversations of 14 diï¬erent sites of the Stack Exchange Network. The statistics for these datasets are provided in Table 1. Data augmentation, where each conversation is split into multi- ple samples, is a popular method in dialog research for boosting the performance of response retrieval models. In this paper, we refrain from using this approach as our focus is not beating state- of-the-art results but empirically demonstrating the eï¬ectiveness of knowledge distillation even in limited-resource settings.
3.2 Baselines We divide our experiments into three parts. 1. Comparing the reg- ular BERT cross-encoder and our enhanced BERT cross-encoder. Here we aim to demonstrate the superiority of our proposed cross- encoder architecture 2. Comparing the BERT bi-encoder with and without distillation. Here we wish to demonstrate the eï¬ectiveness of the knowledge distillation approach. 3. Finally, we also train a BiLSTM bi-encoder with and without distillation in order to con- ï¬rm the distillation process works with shallow student models. The BiLSTM bi-encoder uses the same tokens as BERT models, but their embeddings are not pre-trained and initialized randomly. We use the same aggregation strategy (eq. 4) to aggregate the BiLSTM hidden states. Our code will be released as open-source.
SIGIR â20, July 25-30, 2020, Xiâan, China
3.3 Implementation Details Our models are implemented in the PyTorch framework [12]. For our BERT component, we used Distilbert [14] since it provides re- sults somewhat close to the original implementation despite hav- ing only 6 layers of transformers instead of 12. We tune α from a 0.25, 0.5, 0.75 . We train models using Adam optimizer [8]. set of 3 for We use a learning rate of 5 10â the BiLSTM bi-encoder. For consistency, we set the batch size to 8 for all models. For each dataset, we set the maximum number of tokens in the conversation history and candidate responses so that no more than 20% of inputs are truncated.
Unfortunately, due to limited computing resources, we are un- able to beat state-of-the-art results reported by [6]. Our models are trained on a single GPU; thus, we had to make compromises on the number of input tokens, number of negative samples, and model depth.
4 RESULTS AND DISCUSSION In this section, we go over the results of our experiments. We ana- lyze both prediction quality and eï¬ciency.
4.1 Prediction Quality The ï¬rst two rows of table 2 demonstrate the eï¬ectiveness our the enhanced BERT cross-encoder relative to the regular BERT cross- encoder. These results indicate that employing a task-speciï¬c sin- gle layer cross-attention mechanism on top of separately encoded inputs is highly eï¬ective for the task of response retrieval. Of par- ticular note is the increased gap between the performance of the two methods when using smaller training sets (UDC20%, MANtIS, DSTC7). This shows that the regular bert-cross model struggles when ï¬ne-tuned with smaller response-retrieval sets and data aug- mentation or a some other method must be used to achieve accept- able results. In contrast, our enhanced BERT cross-encoderâs R@1 only dropped by 3.3 points when its training set was reduced to a ï¬fth.
To further demonstrate the eï¬ectiveness of our modiï¬cations to the BERT cross-encoder architecture, we perform an ablation study on the reduced UDC dataset. We replace the SubMult func- tion with a concatenation operation. We also try removing cross- attention (3). In both cases, their removal signiï¬cantly degrades model quality.
Across the datasets, bi-encoders show signiï¬cant gains when trained with knowledge distillation. The increase in performance is relatively substantial. Such gains usually require an increase in model complexity, however with knowledge distillation, we are ef- fectively gaining a free boost in performance as there is no extra cost at inference time. The best results were obtained with an α of 0.5. This indicates that in response retrieval, unlike other tasks such as sentiment classiï¬cation and natural language inference [16], the gold labels cannot be replaced entirely with teacher out- puts.
4.2 Prediction Eï¬ciency We demonstrate the trade-oï¬ in speed and performance between the BERT bi-encoder and our enhanced BERT cross-encoder. We measure the time it takes to process test samples in the DSTC7
SIGIR â20, July 25-30, 2020, Xiâan, China
Amir Vakili Tahami, Kamyar Ghajar, and Azadeh Shakery
Table 2: Prediction quality metrics across all datasets. Metrics for models trained with knowledge distillation, which are signif- icant relative to models trained without it, are marked in bold. We use paired two-tailed t-tests with a p-value<0.05 to perform signiï¬cance tests. For easier reading metrics have been multiplied by 100. No data augmentation has been used and training samples are used as is. +KD indicates a model trained with knowledge distillation.
UDC20% UDC MANtIS DSTC7 R@1 MRR R@1 MRR R@1 MRR R@1 MRR BERT cross BERT cross enhanced - SubMult - Attention 66.1 76.2 73.4 67.2 76.8 84.5 82.6 78.6 76.5 79.5 â â 84.8 86.9 â â 59.8 66.7 â â 72.0 77.3 â â 36.9 53.3 â â 47.9 63.3 â â 59.2 BiLSTM bi-encoder BiLSTM bi-encoder + KD 63.0 72.4 75.2 69.4 70.4 80.2 80.8 35.6 45.5 55.1 61.4 34.3 39.4 46.1 50.1 BERT bi-encoder BERT bi-encoder + KD 64.9 66.1 76.9 77.6 72.9 75.8 82.7 84.6 47.9 53.4 58.4 67.3 39.9 53.8 51.8 54.7
Table 3: Average milliseconds to process a single test sample.
â of candidates 10 100 BERT bi-encoder BERT cross-encoder enhanced 5.6 81.1 6.2 981.2
[4] Matthew Henderson, Iñigo Casanueva, Nikola MrkÅ¡iÄ, Pei-Hao Su, Ivan VuliÄ, et al. 2019. ConveRT: Eï¬cient and Accurate Conversational Representations from Transformers. arXiv preprint arXiv:1911.03688 (2019).
[5] Geoï¬rey Hinton, Oriol Vinyals, and Jeï¬ Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015).
[6] Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: architectures and pre-training strategies for fast and accurate multi-sentence scoring. In 8th International Conference on Learning Representa- tions, ICLR 2020.
dataset and show the average time for each example in table 3. Time taken by the cross-encoder to process a set of candidate re- sponses grows exponentially large as the set increases in size. In the case of BERT bi-encoders, since candidate vectors can be com- puted oï¬ine, increasing candidates has a negligible impact on in- ference time.
[7] Yoon Kim and Alexander M Rush. 2016. Sequence-Level Knowledge Distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.
[8] Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Opti- mization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
[9] Feng-Lin Li, Minghui Qiu, Haiqing Chen, Xiongwei Wang, Xing Gao, Jun Huang, Juwei Ren, Zhongzhou Zhao, Weipeng Zhao, Lei Wang, Guwei Jin, and Wei Chu. 2017. AliMe Assist : An Intelligent Assistant for Creating an Innovative E-commerce Experience. In Proceedings of the 2017 ACM on Conference on Infor- mation and Knowledge Management, CIKM 2017.
5 CONCLUSION AND FUTURE WORK In this paper, we introduced an enhanced BERT cross-encoder ar- chitecture modiï¬ed for the task of response retrieval. Alongside that, we utilized knowledge distillation to compress the complex BERT cross-encoder network as a teacher model into the student BERT bi-encoder model. This increases the BERT bi-encoders pre- diction quality without aï¬ecting its inference speed. We evaluate our approach on three domain-popular datasets. The proposed meth- ods were shown to achieve statistically signiï¬cant gains.
[10] Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. 2015. The Ubuntu Dialogue Corpus: A Large Dataset for Research in Unstructured Multi-Turn Di- alogue Systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue.
[11] Pierre-Emmanuel Mazaré, Samuel Humeau, Martin Raison, and Antoine Bordes. 2018. Training Millions of Personalized Dialogue Agents. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018.
[12] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic Diï¬erentiation in PyTorch. In NIPS Autodiï¬ Workshop.
[13] Gustavo Penha and Claudia Hauï¬. 2020. Curriculum Learning Strategies for IR: An Empirical Study on Conversation Response Ranking. In European Conference on Information Retrieval. Springer.
One possible avenue for research is the exploration of other knowledge transfer methods. Substituting the relatively simple BERT bi-encoder architecture with a more complex architecture [4] or de- veloping further improvements to the BERT cross-encoder are also viable alternatives.
[14] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Dis- tilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019).
[15] Heung-Yeung Shum, Xiao-dong He, and Di Li. 2018. From Eliza to XiaoIce: chal- lenges and opportunities with social chatbots. Frontiers of Information Technol- ogy & Electronic Engineering (2018).
REFERENCES [1] Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep?. In
Advances in neural information processing systems.
[2] Lazaros Polymenakos Chulaka Gunasekara, Jonathan K. Kummerfeld and Wal- ter S. Lasecki. 2019. DSTC7 Task 1: Noetic End-to-End Response Selection. In 7th Edition of the Dialog System Technology Challenges at AAAI 2019.
[3] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies.
[16] Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task-speciï¬c knowledge from BERT into simple neural networks. arXiv preprint arXiv:1903.12136 (2019).
[17] Chongyang Tao, Wei Wu, Can Xu, Wenpeng Hu, Dongyan Zhao, and Rui Yan. 2019. Multi-Representation Fusion Network for Multi-Turn Response Selection in Retrieval-Based Chatbots. In Proceedings of the Twelfth ACM International Con- ference on Web Search and Data Mining.
[18] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems.
[19] Shuohang Wang and Jing Jiang. 2017. A Compare-Aggregate Model for Match- ing Text Sequences. In 5th International Conference on Learning Representations,
Distilling Knowledge for Fast Retrieval-based Chat-bots
ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings.
SIGIR â20, July 25-30, 2020, Xiâan, China
[20] Seunghak Yu, Nilesh Kulkarni, Haejun Lee, and Jihie Kim. 2018. On-device neu- ral language model based word prediction. In Proceedings of the 27th Interna- tional Conference on Computational Linguistics: System Demonstrations. | {
"id": "1911.03688"
} |
2004.10934 | YOLOv4: Optimal Speed and Accuracy of Object Detection | There are a huge number of features which are said to improve Convolutional
Neural Network (CNN) accuracy. Practical testing of combinations of such
features on large datasets, and theoretical justification of the result, is
required. Some features operate on certain models exclusively and for certain
problems exclusively, or only for small-scale datasets; while some features,
such as batch-normalization and residual-connections, are applicable to the
majority of models, tasks, and datasets. We assume that such universal features
include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections
(CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT)
and Mish-activation. We use new features: WRC, CSP, CmBN, SAT, Mish activation,
Mosaic data augmentation, CmBN, DropBlock regularization, and CIoU loss, and
combine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50)
for the MS COCO dataset at a realtime speed of ~65 FPS on Tesla V100. Source
code is at https://github.com/AlexeyAB/darknet | http://arxiv.org/pdf/2004.10934 | Alexey Bochkovskiy, Chien-Yao Wang, Hong-Yuan Mark Liao | cs.CV, eess.IV | null | null | cs.CV | 20200423 | 20200423 | 0 2 0 2
r p A 3 2 ] V C . s c [ 1 v 4 3 9 0 1 . 4 0 0 2 : v i X r a
# YOLOv4: Optimal Speed and Accuracy of Object Detection
Alexey Bochkovskiyâ [email protected]
Chien-Yao Wangâ Institute of Information Science Academia Sinica, Taiwan [email protected]
Hong-Yuan Mark Liao Institute of Information Science Academia Sinica, Taiwan [email protected]
# Abstract
There are a huge number of features which are said to improve Convolutional Neural Network (CNN) accuracy. Practical testing of combinations of such features on large datasets, and theoretical justiï¬cation of the result, is re- quired. Some features operate on certain models exclusively and for certain problems exclusively, or only for small-scale datasets; while some features, such as batch-normalization and residual-connections, are applicable to the majority of models, tasks, and datasets. We assume that such universal features include Weighted-Residual-Connections (WRC), Cross-Stage-Partial-connections (CSP), Cross mini-Batch Normalization (CmBN), Self-adversarial-training (SAT) and Mish-activation. We use new features: WRC, CSP, CmBN, SAT, Mish activation, Mosaic data augmentation, CmBN, DropBlock regularization, and CIoU loss, and com- bine some of them to achieve state-of-the-art results: 43.5% AP (65.7% AP50) for the MS COCO dataset at a real- time speed of â¼65 FPS on Tesla V100. Source code is at https://github.com/AlexeyAB/darknet.
MS COCO Object Detection 50 EfficientDet (D0~D4) real-time 48 46 . YOLOv4 (ours) 4 2 ASFEF* & 40 â*-YOLOv4 (ours 38 â*-YOL0y3 [63] 36 | âm=FfficientDet [77] ATSS [94] o 4h YOLOv3 â*â ASFF* [48] 32 â*-CenterMask* [40] 30 10 30 50 70 90 10 130 FPS (V100)
Figure 1: Comparison of the proposed YOLOv4 and other state-of-the-art object detectors. YOLOv4 runs twice faster than Efï¬cientDet with comparable performance. Improves YOLOv3âs AP and FPS by 10% and 12%, respectively.
# 1. Introduction
The majority of CNN-based object detectors are largely applicable only for recommendation systems. For example, searching for free parking spaces via urban video cameras is executed by slow accurate models, whereas car collision Improving warning is related to fast inaccurate models. the real-time object detector accuracy enables using them not only for hint generating recommendation systems, but also for stand-alone process management and human input reduction. Real-time object detector operation on conven- tional Graphics Processing Units (GPU) allows their mass usage at an affordable price. The most accurate modern neural networks do not operate in real time and require large number of GPUs for training with a large mini-batch-size. We address such problems through creating a CNN that op- erates in real-time on a conventional GPU, and for which training requires only one conventional GPU.
The main goal of this work is designing a fast operating speed of an object detector in production systems and opti- mization for parallel computations, rather than the low com- putation volume theoretical indicator (BFLOP). We hope that the designed object can be easily trained and used. For example, anyone who uses a conventional GPU to train and test can achieve real-time, high quality, and convincing ob- ject detection results, as the YOLOv4 results shown in Fig- ure 1. Our contributions are summarized as follows:
1. We develope an efï¬cient and powerful object detection model. It makes everyone can use a 1080 Ti or 2080 Ti GPU to train a super fast and accurate object detector.
2. We verify the inï¬uence of state-of-the-art Bag-of- Freebies and Bag-of-Specials methods of object detec- tion during the detector training.
3. We modify state-of-the-art methods and make them more effecient and suitable for single GPU training, including CBN [89], PAN [49], SAM [85], etc.
1
Input Backbone Input: { Image, Patches, Image Pyramid, ... } Dense Prediction |g Backbone: { VGG16 [68], ResNet-50 [26], ResNeXt-101 [86], Darknet53 [63], ... } Neck: { FPN [44], PANet [49], Bi-FPN [77], ... } Head: Dense Prediction: { RPN [64], YOLO [61, 62, 63|, SSD [50], RetinaNet [45], FCOS [78], ... } Sparse Prediction: { Faster R-CNN [64], R-FCN [9], ... }
Figure 2: Object detector.
# 2. Related work
# 2.1. Object detection models
A modern detector is usually composed of two parts, a backbone which is pre-trained on ImageNet and a head which is used to predict classes and bounding boxes of ob- jects. For those detectors running on GPU platform, their backbone could be VGG [68], ResNet [26], ResNeXt [86], or DenseNet [30]. For those detectors running on CPU plat- form, their backbone could be SqueezeNet [31], MobileNet [28, 66, 27, 74], or Shufï¬eNet [97, 53]. As to the head part, it is usually categorized into two kinds, i.e., one-stage object detector and two-stage object detector. The most represen- tative two-stage object detector is the R-CNN [19] series, including fast R-CNN [18], faster R-CNN [64], R-FCN [9], and Libra R-CNN [58]. It is also possible to make a two- stage object detector an anchor-free object detector, such as RepPoints [87]. As for one-stage object detector, the most representative models are YOLO [61, 62, 63], SSD [50], and RetinaNet [45]. In recent years, anchor-free one-stage object detectors are developed. The detectors of this sort are CenterNet [13], CornerNet [37, 38], FCOS [78], etc. Object detectors developed in recent years often insert some lay- ers between backbone and head, and these layers are usu- ally used to collect feature maps from different stages. We can call it the neck of an object detector. Usually, a neck is composed of several bottom-up paths and several top- down paths. Networks equipped with this mechanism in- clude Feature Pyramid Network (FPN) [44], Path Aggrega- tion Network (PAN) [49], BiFPN [77], and NAS-FPN [17].
In addition to the above models, some researchers put their emphasis on directly building a new backbone (DetNet [43], DetNAS [7]) or a new whole model (SpineNet [12], HitDe- tector [20]) for object detection.
To sum up, an ordinary object detector is composed of several parts:
⢠Input: Image, Patches, Image Pyramid
⢠Backbones: VGG16 [68], ResNet-50 [26], SpineNet [12], Efï¬cientNet-B0/B7 [75], CSPResNeXt50 [81], CSPDarknet53 [81]
⢠Neck:
⢠Additional blocks: SPP [25], ASPP [5], RFB [47], SAM [85]
⢠Path-aggregation blocks: FPN [44], PAN [49], NAS-FPN [17], Fully-connected FPN, BiFPN [77], ASFF [48], SFAM [98]
# ⢠Heads::
# ⢠Dense Prediction (one-stage):
⦠RPN [64], SSD [50], YOLO [61], RetinaNet [45] (anchor based)
⦠CornerNet [37], CenterNet [13], MatrixNet [60], FCOS [78] (anchor free)
# ⢠Sparse Prediction (two-stage):
⦠Faster R-CNN [64], R-FCN [9], Mask R-
CNN [23] (anchor based) ⦠RepPoints [87] (anchor free)
2
# 2.2. Bag of freebies
Usually, a conventional object detector is trained off- line. Therefore, researchers always like to take this advan- tage and develop better training methods which can make the object detector receive better accuracy without increas- ing the inference cost. We call these methods that only change the training strategy or only increase the training cost as âbag of freebies.â What is often adopted by object detection methods and meets the deï¬nition of bag of free- bies is data augmentation. The purpose of data augmenta- tion is to increase the variability of the input images, so that the designed object detection model has higher robustness to the images obtained from different environments. For examples, photometric distortions and geometric distortions are two commonly used data augmentation method and they deï¬nitely beneï¬t the object detection task. In dealing with photometric distortion, we adjust the brightness, contrast, hue, saturation, and noise of an image. For geometric dis- tortion, we add random scaling, cropping, ï¬ipping, and ro- tating.
The data augmentation methods mentioned above are all pixel-wise adjustments, and all original pixel information in the adjusted area is retained. In addition, some researchers engaged in data augmentation put their emphasis on sim- ulating object occlusion issues. They have achieved good results in image classiï¬cation and object detection. For ex- ample, random erase [100] and CutOut [11] can randomly select the rectangle region in an image and ï¬ll in a random or complementary value of zero. As for hide-and-seek [69] and grid mask [6], they randomly or evenly select multiple rectangle regions in an image and replace them to all ze- ros. If similar concepts are applied to feature maps, there are DropOut [71], DropConnect [80], and DropBlock [16] methods. In addition, some researchers have proposed the methods of using multiple images together to perform data augmentation. For example, MixUp [92] uses two images to multiply and superimpose with different coefï¬cient ra- tios, and then adjusts the label with these superimposed ra- tios. As for CutMix [91], it is to cover the cropped image to rectangle region of other images, and adjusts the label according to the size of the mix area. In addition to the above mentioned methods, style transfer GAN [15] is also used for data augmentation, and such usage can effectively reduce the texture bias learned by CNN.
Different from the various approaches proposed above, some other bag of freebies methods are dedicated to solving the problem that the semantic distribution in the dataset may have bias. In dealing with the problem of semantic distri- bution bias, a very important issue is that there is a problem of data imbalance between different classes, and this prob- lem is often solved by hard negative example mining [72] or online hard example mining [67] in two-stage object de- tector. But the example mining method is not applicable
3
to one-stage object detector, because this kind of detector belongs to the dense prediction architecture. Therefore Lin et al. [45] proposed focal loss to deal with the problem of data imbalance existing between various classes. An- other very important issue is that it is difï¬cult to express the relationship of the degree of association between different categories with the one-hot hard representation. This rep- resentation scheme is often used when executing labeling. The label smoothing proposed in [73] is to convert hard la- bel into soft label for training, which can make model more robust. In order to obtain a better soft label, Islam et al. [33] introduced the concept of knowledge distillation to design the label reï¬nement network.
The last bag of freebies is the objective function of Bounding Box (BBox) regression. The traditional object detector usually uses Mean Square Error (MSE) to di- rectly perform regression on the center point coordinates and height and width of the BBox, i.e., {xcenter, ycenter, w, h}, or the upper left point and the lower right point, i.e., {xtop lef t, ytop lef t, xbottom right, ybottom right}. As for anchor-based method, it is to estimate the correspond- for example {xcenter of f set, ycenter of f set, ing offset, wof f set, hof f set} and {xtop lef t of f set, ytop lef t of f set, xbottom right of f set, ybottom right of f set}. However, to di- rectly estimate the coordinate values of each point of the BBox is to treat these points as independent variables, but in fact does not consider the integrity of the object itself. In order to make this issue processed better, some researchers recently proposed IoU loss [90], which puts the coverage of predicted BBox area and ground truth BBox area into con- sideration. The IoU loss computing process will trigger the calculation of the four coordinate points of the BBox by ex- ecuting IoU with the ground truth, and then connecting the generated results into a whole code. Because IoU is a scale invariant representation, it can solve the problem that when traditional methods calculate the l1 or l2 loss of {x, y, w, h}, the loss will increase with the scale. Recently, some researchers have continued to improve IoU loss. For exam- ple, GIoU loss [65] is to include the shape and orientation of object in addition to the coverage area. They proposed to ï¬nd the smallest area BBox that can simultaneously cover the predicted BBox and ground truth BBox, and use this BBox as the denominator to replace the denominator origi- nally used in IoU loss. As for DIoU loss [99], it additionally considers the distance of the center of an object, and CIoU loss [99], on the other hand simultaneously considers the overlapping area, the distance between center points, and the aspect ratio. CIoU can achieve better convergence speed and accuracy on the BBox regression problem.
# 2.3. Bag of specials
For those plugin modules and post-processing methods that only increase the inference cost by a small amount but can signiï¬cantly improve the accuracy of object detec- tion, we call them âbag of specialsâ. Generally speaking, these plugin modules are for enhancing certain attributes in a model, such as enlarging receptive ï¬eld, introducing at- tention mechanism, or strengthening feature integration ca- pability, etc., and post-processing is a method for screening model prediction results.
Common modules that can be used to enhance recep- tive ï¬eld are SPP [25], ASPP [5], and RFB [47]. The SPP module was originated from Spatial Pyramid Match- ing (SPM) [39], and SPMs original method was to split fea- ture map into several d à d equal blocks, where d can be {1, 2, 3, ...}, thus forming spatial pyramid, and then extract- ing bag-of-word features. SPP integrates SPM into CNN and use max-pooling operation instead of bag-of-word op- eration. Since the SPP module proposed by He et al. [25] will output one dimensional feature vector, it is infeasible to be applied in Fully Convolutional Network (FCN). Thus in the design of YOLOv3 [63], Redmon and Farhadi improve SPP module to the concatenation of max-pooling outputs with kernel size k à k, where k = {1, 5, 9, 13}, and stride equals to 1. Under this design, a relatively large k à k max- pooling effectively increase the receptive ï¬eld of backbone feature. After adding the improved version of SPP module, YOLOv3-608 upgrades AP50 by 2.7% on the MS COCO object detection task at the cost of 0.5% extra computation. The difference in operation between ASPP [5] module and improved SPP module is mainly from the original k Ãk ker- nel size, max-pooling of stride equals to 1 to several 3 à 3 kernel size, dilated ratio equals to k, and stride equals to 1 in dilated convolution operation. RFB module is to use sev- eral dilated convolutions of k Ãk kernel, dilated ratio equals to k, and stride equals to 1 to obtain a more comprehensive spatial coverage than ASPP. RFB [47] only costs 7% extra inference time to increase the AP50 of SSD on MS COCO by 5.7%.
The attention module that is often used in object detec- tion is mainly divided into channel-wise attention and point- wise attention, and the representatives of these two atten- tion models are Squeeze-and-Excitation (SE) [29] and Spa- tial Attention Module (SAM) [85], respectively. Although SE module can improve the power of ResNet50 in the Im- ageNet image classiï¬cation task 1% top-1 accuracy at the cost of only increasing the computational effort by 2%, but on a GPU usually it will increase the inference time by about 10%, so it is more appropriate to be used in mobile devices. But for SAM, it only needs to pay 0.1% extra cal- culation and it can improve ResNet50-SE 0.5% top-1 accu- racy on the ImageNet image classiï¬cation task. Best of all, it does not affect the speed of inference on the GPU at all.
4
In terms of feature integration, the early practice is to use skip connection [51] or hyper-column [22] to integrate low- level physical feature to high-level semantic feature. Since multi-scale prediction methods such as FPN have become popular, many lightweight modules that integrate different feature pyramid have been proposed. The modules of this sort include SFAM [98], ASFF [48], and BiFPN [77]. The main idea of SFAM is to use SE module to execute channel- wise level re-weighting on multi-scale concatenated feature maps. As for ASFF, it uses softmax as point-wise level re- weighting and then adds feature maps of different scales. In BiFPN, the multi-input weighted residual connections is proposed to execute scale-wise level re-weighting, and then add feature maps of different scales.
In the research of deep learning, some people put their focus on searching for good activation function. A good activation function can make the gradient more efï¬ciently propagated, and at the same time it will not cause too much extra computational cost. In 2010, Nair and Hin- ton [56] propose ReLU to substantially solve the gradient vanish problem which is frequently encountered in tradi- tional tanh and sigmoid activation function. Subsequently, LReLU [54], PReLU [24], ReLU6 [28], Scaled Exponential Linear Unit (SELU) [35], Swish [59], hard-Swish [27], and Mish [55], etc., which are also used to solve the gradient vanish problem, have been proposed. The main purpose of LReLU and PReLU is to solve the problem that the gradi- ent of ReLU is zero when the output is less than zero. As for ReLU6 and hard-Swish, they are specially designed for quantization networks. For self-normalizing a neural net- work, the SELU activation function is proposed to satisfy the goal. One thing to be noted is that both Swish and Mish are continuously differentiable activation function.
The post-processing method commonly used in deep- learning-based object detection is NMS, which can be used to ï¬lter those BBoxes that badly predict the same ob- ject, and only retain the candidate BBoxes with higher re- sponse. The way NMS tries to improve is consistent with the method of optimizing an objective function. The orig- inal method proposed by NMS does not consider the con- text information, so Girshick et al. [19] added classiï¬cation conï¬dence score in R-CNN as a reference, and according to the order of conï¬dence score, greedy NMS was performed in the order of high score to low score. As for soft NMS [1], it considers the problem that the occlusion of an object may cause the degradation of conï¬dence score in greedy NMS with IoU score. The DIoU NMS [99] developers way of thinking is to add the information of the center point dis- tance to the BBox screening process on the basis of soft NMS. It is worth mentioning that, since none of above post- processing methods directly refer to the captured image fea- tures, post-processing is no longer required in the subse- quent development of an anchor-free method.
Table 1: Parameters of neural networks for image classiï¬cation.
Backbone model Input network resolution Receptive ï¬eld size Parameters Average size of layer output (WxHxC) BFLOPs (512x512 network resolution) FPS (GPU RTX 2070) CSPResNext50 CSPDarknet53 Efï¬cientNet-B3 (ours) 512x512 512x512 512x512 425x425 725x725 1311x1311 20.6 M 27.6 M 12.0 M 1058 K 950 K 668 K 31 (15.5 FMA) 52 (26.0 FMA) 11 (5.5 FMA) 62 66 26
# 3. Methodology
The basic aim is fast operating speed of neural network, in production systems and optimization for parallel compu- tations, rather than the low computation volume theoreti- cal indicator (BFLOP). We present two options of real-time neural networks:
⢠For GPU we use a small number of groups (1 - 8) in convolutional layers: CSPResNeXt50 / CSPDarknet53
⢠For VPU - we use grouped-convolution, but we re- frain from using Squeeze-and-excitement (SE) blocks - speciï¬cally this includes the following models: Efï¬cientNet-lite / MixNet [76] / GhostNet [21] / Mo- bileNetV3
Hypothetically speaking, we can assume that a model with a larger receptive ï¬eld size (with a larger number of convolutional layers 3 à 3) and a larger number of parame- ters should be selected as the backbone. Table 1 shows the information of CSPResNeXt50, CSPDarknet53, and Efï¬- cientNet B3. The CSPResNext50 contains only 16 convo- lutional layers 3 à 3, a 425 à 425 receptive ï¬eld and 20.6 M parameters, while CSPDarknet53 contains 29 convolu- tional layers 3 à 3, a 725 à 725 receptive ï¬eld and 27.6 M parameters. This theoretical justiï¬cation, together with our numerous experiments, show that CSPDarknet53 neu- ral network is the optimal model of the two as the backbone for a detector.
The inï¬uence of the receptive ï¬eld with different sizes is summarized as follows:
# 3.1. Selection of architecture
⢠Up to the object size - allows viewing the entire object
Our objective is to ï¬nd the optimal balance among the in- put network resolution, the convolutional layer number, the parameter number (ï¬lter size2 * ï¬lters * channel / groups), and the number of layer outputs (ï¬lters). For instance, our numerous studies demonstrate that the CSPResNext50 is considerably better compared to CSPDarknet53 in terms of object classiï¬cation on the ILSVRC2012 (ImageNet) dataset [10]. However, conversely, the CSPDarknet53 is better compared to CSPResNext50 in terms of detecting ob- jects on the MS COCO dataset [46].
The next objective is to select additional blocks for in- creasing the receptive ï¬eld and the best method of parame- ter aggregation from different backbone levels for different detector levels: e.g. FPN, PAN, ASFF, BiFPN.
⢠Up to network size - allows viewing the context around the object
⢠Exceeding the network size - increases the number of connections between the image point and the ï¬nal ac- tivation
We add the SPP block over the CSPDarknet53, since it signiï¬cantly increases the receptive ï¬eld, separates out the most signiï¬cant context features and causes almost no re- duction of the network operation speed. We use PANet as the method of parameter aggregation from different back- bone levels for different detector levels, instead of the FPN used in YOLOv3.
A reference model which is optimal for classiï¬cation is not always optimal for a detector. In contrast to the classi- ï¬er, the detector requires the following:
Finally, we choose CSPDarknet53 backbone, SPP addi- tional module, PANet path-aggregation neck, and YOLOv3 (anchor based) head as the architecture of YOLOv4.
⢠Higher input network size (resolution) â for detecting multiple small-sized objects
⢠More layers â for a higher receptive ï¬eld to cover the increased size of input network
⢠More parameters â for greater capacity of a model to detect multiple objects of different sizes in a single im- age
In the future we plan to expand signiï¬cantly the content of Bag of Freebies (BoF) for the detector, which theoreti- cally can address some problems and increase the detector accuracy, and sequentially check the inï¬uence of each fea- ture in an experimental fashion.
We do not use Cross-GPU Batch Normalization (CGBN or SyncBN) or expensive specialized devices. This al- lows anyone to reproduce our state-of-the-art outcomes on a conventional graphic processor e.g. GTX 1080Ti or RTX 2080Ti.
5
# 3.2. Selection of BoF and BoS
For improving the object detection training, a CNN usu- ally uses the following:
⢠Activations: ReLU, leaky-ReLU, parametric-ReLU, ReLU6, SELU, Swish, or Mish
⢠Bounding box regression loss: MSE, IoU, GIoU, CIoU, DIoU
⢠Data augmentation: CutOut, MixUp, CutMix
⢠Regularization method: DropOut, DropPath [36], Spatial DropOut [79], or DropBlock
⢠Normalization of the network activations by their mean and variance: Batch Normalization (BN) [32], Cross-GPU Batch Normalization (CGBN or SyncBN) [93], Filter Response Normalization (FRN) [70], or Cross-Iteration Batch Normalization (CBN) [89]
⢠Skip-connections: Residual connections, Weighted residual connections, Multi-input weighted residual connections, or Cross stage partial connections (CSP)
As for training activation function, since PReLU and SELU are more difï¬cult to train, and ReLU6 is speciï¬cally designed for quantization network, we therefore remove the above activation functions from the candidate list. In the method of reqularization, the people who published Drop- Block have compared their method with other methods in detail, and their regularization method has won a lot. There- fore, we did not hesitate to choose DropBlock as our reg- ularization method. As for the selection of normalization method, since we focus on a training strategy that uses only one GPU, syncBN is not considered.
# 3.3. Additional improvements
In order to make the designed detector more suitable for training on single GPU, we made additional design and im- provement as follows:
⢠We introduce a new method of data augmentation Mo- saic, and Self-Adversarial Training (SAT)
⢠We select optimal hyper-parameters while applying genetic algorithms
⢠We modify some exsiting methods to make our design suitble for efï¬cient training and detection - modiï¬ed SAM, modiï¬ed PAN, and Cross mini-Batch Normal- ization (CmBN)
Mosaic represents a new data augmentation method that mixes 4 training images. Thus 4 different contexts are
6
âaug_-1271888501_0 -749611674pq aug_-319215602 0 -238783579jg âaug_1467167950 0 -1659706634jog âaug_1474493600_0_-45389312jpg âug_1715045541_0_603913529,p9 âaug_1779428844.0_-589696888jp9
Figure 3: Mosaic represents a new method of data augmen- tation.
mixed, while CutMix mixes only 2 input images. This al- lows detection of objects outside their normal context. In addition, batch normalization calculates activation statistics from 4 different images on each layer. This signiï¬cantly reduces the need for a large mini-batch size.
Self-Adversarial Training (SAT) also represents a new data augmentation technique that operates in 2 forward backward stages. In the 1st stage the neural network alters the original image instead of the network weights. In this way the neural network executes an adversarial attack on it- self, altering the original image to create the deception that there is no desired object on the image. In the 2nd stage, the neural network is trained to detect an object on this modiï¬ed image in the normal way.
Lets: Bias, scale â ScaleShift Mean, variance ~ BN Weights â W CmBN ~ assume a batch contains four mini-batches ee accumulate W'3 ~ accumulate W") accumulate W"* ~* accumulate Wis =") 9 Secumulate BN accumulate BN'~'? accumulate BNCS~+0 *ecumillate BN normalize BN normalize BN normalize BN normalize BN update W, ScaleShift
Figure 4: Cross mini-Batch Normalization.
CmBN represents a CBN modiï¬ed version, as shown in Figure 4, deï¬ned as Cross mini-Batch Normalization (CmBN). This collects statistics only between mini-batches within a single batch.
We modify SAM from spatial-wise attention to point- wise attention, and replace shortcut connection of PAN to concatenation, as shown in Figure 5 and Figure 6, respec- tively.
Max-Pooling Average-Pooling, (a) SAM [85] Convolution Sigmoid (b) Our modified SAM
# Figure 5: Modiï¬ed SAM.
addition os concatenation (a) PAN [49] a (a) Our modified PAN
Figure 6: Modiï¬ed PAN.
# 3.4. YOLOv4
In this section, we shall elaborate the details of YOLOv4.
# YOLOv4 consists of:
⢠Backbone: CSPDarknet53 [81]
⢠Neck: SPP [25], PAN [49]
⢠Head: YOLOv3 [63]
# YOLO v4 uses:
⢠Bag of Freebies (BoF) for backbone: CutMix and Mosaic data augmentation, DropBlock regularization, Class label smoothing
⢠Bag of Specials (BoS) for backbone: Mish activa- tion, Cross-stage partial connections (CSP), Multi- input weighted residual connections (MiWRC)
⢠Bag of Freebies (BoF) for detector: CIoU-loss, CmBN, DropBlock regularization, Mosaic data aug- mentation, Self-Adversarial Training, Eliminate grid sensitivity, Using multiple anchors for a single ground truth, Cosine annealing scheduler [52], Optimal hyper- parameters, Random training shapes
⢠Bag of Specials (BoS) for detector: Mish activation, SPP-block, SAM-block, PAN path-aggregation block, DIoU-NMS
7
# 4. Experiments
training improve- ment techniques on accuracy of the classiï¬er on ImageNet (ILSVRC 2012 val) dataset, and then on the accuracy of the detector on MS COCO (test-dev 2017) dataset.
# 4.1. Experimental setup
In ImageNet image classiï¬cation experiments, the de- fault hyper-parameters are as follows: the training steps is 8,000,000; the batch size and the mini-batch size are 128 and 32, respectively; the polynomial decay learning rate scheduling strategy is adopted with initial learning rate 0.1; the warm-up steps is 1000; the momentum and weight de- cay are respectively set as 0.9 and 0.005. All of our BoS experiments use the same hyper-parameter as the default setting, and in the BoF experiments, we add an additional 50% training steps. In the BoF experiments, we verify MixUp, CutMix, Mosaic, Bluring data augmentation, and label smoothing regularization methods. In the BoS experi- ments, we compared the effects of LReLU, Swish, and Mish activation function. All experiments are trained with a 1080 Ti or 2080 Ti GPU.
the de- fault hyper-parameters are as follows: the training steps is 500,500; the step decay learning rate scheduling strategy is adopted with initial learning rate 0.01 and multiply with a factor 0.1 at the 400,000 steps and the 450,000 steps, re- spectively; The momentum and weight decay are respec- tively set as 0.9 and 0.0005. All architectures use a sin- gle GPU to execute multi-scale training in the batch size of 64 while mini-batch size is 8 or 4 depend on the ar- chitectures and GPU memory limitation. Except for us- ing genetic algorithm for hyper-parameter search experi- ments, all other experiments use default setting. Genetic algorithm used YOLOv3-SPP to train with GIoU loss and search 300 epochs for min-val 5k sets. We adopt searched learning rate 0.00261, momentum 0.949, IoU threshold for assigning ground truth 0.213, and loss normalizer 0.07 for genetic algorithm experiments. We have veriï¬ed a large number of BoF, including grid sensitivity elimination, mo- saic data augmentation, IoU threshold, genetic algorithm, class label smoothing, cross mini-batch normalization, self- adversarial training, cosine annealing scheduler, dynamic mini-batch size, DropBlock, Optimized Anchors, different kind of IoU losses. We also conduct experiments on various BoS, including Mish, SPP, SAM, RFB, BiFPN, and Gaus- sian YOLO [8]. For all experiments, we only use one GPU for training, so techniques such as syncBN that optimizes multiple GPUs are not used.
# 4.2. Inï¬uence of different features on Classiï¬er training
First, we study the inï¬uence of different features on classiï¬er training; speciï¬cally, the inï¬uence of Class la- bel smoothing, the inï¬uence of different data augmentation techniques, bilateral blurring, MixUp, CutMix and Mosaic, as shown in Fugure 7, and the inï¬uence of different activa- tions, such as Leaky-ReLU (by default), Swish, and Mish.
(c) CutMix (a) Crop, Rotation, Flip, Hue, Saturation, Exposure, Aspect. (d) Mosaic (e) Blur
Figure 7: Various method of data augmentation.
In our experiments, as illustrated in Table 2, the clas- siï¬erâs accuracy is improved by introducing the features such as: CutMix and Mosaic data augmentation, Class la- bel smoothing, and Mish activation. As a result, our BoF- backbone (Bag of Freebies) for classiï¬er training includes the following: CutMix and Mosaic data augmentation and Class label smoothing. In addition we use Mish activation as a complementary option, as shown in Table 2 and Table 3. Table 2: Inï¬uence of BoF and Mish on the CSPResNeXt-50 clas- siï¬er accuracy.
MixUp CutMix Mosaic Bluring smoothing Swish Mish Top-1 Top-5 77.9% 94.0% v 77.2% 94.0% v 78.0% 94.3% v 78.1% 94.5% v 71.5% 93.8% v 78.1% 94.4% v 64.5% 86.0% ¥ 78.9% 94.5% 78.57% 94.8% ¥ 79.8% 95.2% v v v v v v
Table 3: Inï¬uence of BoF and Mish on the CSPDarknet-53 classi- ï¬er accuracy.
MixUp CutMix Mosaic Bluring smoothing Swish Mish Top-1 Top-5 771.2% 93.6% 77.8% 94.4% ¥ 78.7% 94.8% v v v v v v
8
# 4.3. Inï¬uence of different features on Detector training
Further study concerns the inï¬uence of different Bag-of- Freebies (BoF-detector) on the detector training accuracy, as shown in Table 4. We signiï¬cantly expand the BoF list through studying different features that increase the detector accuracy without affecting FPS:
⢠S: Eliminate grid sensitivity the equation bx = Ï(tx)+ cx, by = Ï(ty) + cy, where cx and cy are always whole numbers, is used in YOLOv3 for evaluating the ob- ject coordinates, therefore, extremely high tx absolute values are required for the bx value approaching the cx or cx + 1 values. We solve this problem through multiplying the sigmoid by a factor exceeding 1.0, so eliminating the effect of grid on which the object is undetectable.
⢠M: Mosaic data augmentation - using the 4-image mo- saic during training instead of single image
⢠IT: IoU threshold - using multiple anchors for a single ground truth IoU (truth, anchor) > IoU threshold
⢠GA: Genetic algorithms - using genetic algorithms for selecting the optimal hyperparameters during network training on the ï¬rst 10% of time periods
⢠LS: Class label smoothing - using class label smooth- ing for sigmoid activation
⢠CBN: CmBN - using Cross mini-Batch Normalization for collecting statistics inside the entire batch, instead of collecting statistics inside a single mini-batch
⢠CA: Cosine annealing scheduler - altering the learning rate during sinusoid training
⢠DM: Dynamic mini-batch size - automatic increase of mini-batch size during small resolution training by us- ing Random training shapes
⢠OA: Optimized Anchors - using the optimized anchors for training with the 512x512 network resolution
⢠GIoU, CIoU, DIoU, MSE - using different loss algo- rithms for bounded box regression
Further study concerns the inï¬uence of different Bag- of-Specials (BoS-detector) on the detector training accu- racy, including PAN, RFB, SAM, Gaussian YOLO (G), and ASFF, as shown in Table 5. In our experiments, the detector gets best performance when using SPP, PAN, and SAM.
Table 4: Ablation Studies of Bag-of-Freebies. (CSPResNeXt50-PANet-SPP, 512x512).
S M IT GA LS CBN CA DM _ OA loss AP AP50 AP75 MSE â 38.0% = 60.0% = 40.8% v MSE = (37.7% = 59.9% 40.5% v MSE 39.1% 61.8% 42.0% v MSE = 36.9% 59.7% 39.4% v MSE 38.9% 61.7% 41.9% v MSE = 33.0% 55.4% 35.4% v MSE 38.4% 60.7% 41.3% v MSE 38.7% 60.7% 41.9% MSE = (35.3% = 57.2% 38.0% v GIloU 39.4% 594% 42.5% v DIoU 39.1% 58.8% 42.1% v CloU 39.6% 59.2% 42.6% vov v v CloU 41.5% 64.0% 44.8% v v v CloU 36.1% 56.5% 38.4% vov v v v MSE 40.3% 64.0% 43.1% vov v v v GIloU 42.4% 644% 45.9% vov v v v CloU 42.4% 644% 45.9%
Table 5: Ablation Studies of Bag-of-Specials. (Size 512x512).
Model AP AP50 AP75 42.4% 64.4% 45.9% CSPResNeXt50-PANet-SPP 41.8% 62.7% 45.1% CSPResNeXt50-PANet-SPP-RFB 42.7% 64.6% 46.3% CSPResNeXt50-PANet-SPP-SAM CSPResNeXt50-PANet-SPP-SAM-G 41.6% 62.7% 45.0% CSPResNeXt50-PANet-SPP-ASFF-RFB 41.1% 62.6% 44.4%
# 4.4. Inï¬uence of different backbones and pre- trained weightings on Detector training
Further on we study the inï¬uence of different backbone models on the detector accuracy, as shown in Table 6. We notice that the model characterized with the best classiï¬ca- tion accuracy is not always the best in terms of the detector accuracy.
First, although classiï¬cation accuracy of CSPResNeXt- 50 models trained with different features is higher compared to CSPDarknet53 models, the CSPDarknet53 model shows higher accuracy in terms of object detection.
Second, using BoF and Mish for the CSPResNeXt50 classiï¬er training increases its classiï¬cation accuracy, but further application of these pre-trained weightings for de- tector training reduces the detector accuracy. However, us- ing BoF and Mish for the CSPDarknet53 classiï¬er training increases the accuracy of both the classiï¬er and the detector which uses this classiï¬er pre-trained weightings. The net result is that backbone CSPDarknet53 is more suitable for the detector than for CSPResNeXt50.
We observe that the CSPDarknet53 model demonstrates a greater ability to increase the detector accuracy owing to various improvements.
Table 6: Using different classiï¬er pre-trained weightings for de- tector training (all other training parameters are similar in all mod- els) .
Model (with optimal setting) Size AP AP50 AP75 CSPResNeXt50-PANet-SPP CSPResNeXt50-PANet-SPP (BoF-backbone) CSPResNeXt50-PANet-SPP (BoF-backbone + Mish) 512x512 512x512 512x512 42.4 42.3 42.3 64.4 64.3 64.2 45.9 45.7 45.8 CSPDarknet53-PANet-SPP (BoF-backbone) CSPDarknet53-PANet-SPP (BoF-backbone + Mish) 512x512 512x512 42.4 43.0 64.5 64.9 46.0 46.5
# 4.5. Inï¬uence of different mini-batch size on Detec- tor training
Finally, we analyze the results obtained with models trained with different mini-batch sizes, and the results are shown in Table 7. From the results shown in Table 7, we found that after adding BoF and BoS training strategies, the mini-batch size has almost no effect on the detectorâs per- formance. This result shows that after the introduction of BoF and BoS, it is no longer necessary to use expensive GPUs for training. In other words, anyone can use only a conventional GPU to train an excellent detector.
Table 7: Using different mini-batch size for detector training.
Model (without OA) Size AP AP50 AP75 CSPResNeXt50-PANet-SPP (without BoF/BoS, mini-batch 4) CSPResNeXt50-PANet-SPP (without BoF/BoS, mini-batch 8) 608 608 37.1 38.4 59.2 60.6 39.9 41.6 CSPDarknet53-PANet-SPP (with BoF/BoS, mini-batch 4) CSPDarknet53-PANet-SPP (with BoF/BoS, mini-batch 8) 512 512 41.6 41.7 64.1 64.2 45.0 45.2
9
MS COCO Object Detection real-time YOLOv4 (ours) -*YOLOvs (ours) â*-yYoLovs 2 LRF ssp âerainavee » esa â > CornerNet FPS (Maxwelh MS COCO Object Detection . real-time TridentNet â , -+-YoL0r4 ours a \y erp YOLOv4 (ours) Center ask 4 | = âe-ercRNe e sb ama | a . | CenierMask SS as â -*GuideaAnchor â j ââ a Libra RENN " i a 2 | -etinadask Fs ~e-cascadeR-CN : | =e CenterNet *, Ps Ps r * . Go âTridentet FPS (Paseab MS COCO Object Detection â real-time = yoLovs (ours) YOLOv4 (ours) âe-Emdendet qe Pl âoasFFY HarDNet âe RetinaNet â2-SMNAS 2 NASEPN ears e-RDSNet > centerMtask FPS (Volta) APSO APSO eee eeees APSO MS COCO Object Detection : . YOLOv4 (ours) ver M2Det â= YOLO+4 (ours) « â+-yotovs LRF âo1RF oe ssp âe-Refinedet â MaDe * | 2 PEPNet eo RetinaNet ~s-FSAF eCornerNet FPS (Maxwell) MS COCO Object Detection real-time eq | TridentNet | YOLOvd (ours) âe-EFGRNet sarD | sD | â*-DAFS â2 Libra RCN âeFreeAnchor 2 Retina 2 Cascade RENN âe-Centerset FPS (Pascal) MS COCO Object Detection real-time YOLOvd (ours) ASFF* =e 2 YOLOv4 (ours) 2 EmelentDet â*- ASF HarDNet â*-sM.NAS ~sarss ~eRDsnet FPS (Volta)
Figure 8: Comparison of the speed and accuracy of different object detectors. (Some articles stated the FPS of their detectors for only one of the GPUs: Maxwell/Pascal/Volta)
# 5. Results
# 6. Conclusions
Comparison of the results obtained with other state- of-the-art object detectors are shown in Figure 8. Our YOLOv4 are located on the Pareto optimality curve and are superior to the fastest and most accurate detectors in terms of both speed and accuracy.
Since different methods use GPUs of different architec- tures for inference time veriï¬cation, we operate YOLOv4 on commonly adopted GPUs of Maxwell, Pascal, and Volta architectures, and compare them with other state-of-the-art methods. Table 8 lists the frame rate comparison results of using Maxwell GPU, and it can be GTX Titan X (Maxwell) or Tesla M40 GPU. Table 9 lists the frame rate comparison results of using Pascal GPU, and it can be Titan X (Pascal), Titan Xp, GTX 1080 Ti, or Tesla P100 GPU. As for Table 10, it lists the frame rate comparison results of using Volta GPU, and it can be Titan Volta or Tesla V100 GPU.
We offer a state-of-the-art detector which is faster (FPS) and more accurate (MS COCO AP50...95 and AP50) than all available alternative detectors. The detector described can be trained and used on a conventional GPU with 8-16 GB-VRAM this makes its broad use possible. The original concept of one-stage anchor-based detectors has proven its viability. We have veriï¬ed a large number of features, and selected for use such of them for improving the accuracy of both the classiï¬er and the detector. These features can be used as best-practice for future studies and developments.
# 7. Acknowledgements
the ideas of Mosaic data augmentation, the selection of hyper-parameters by using genetic algorithms and solving the grid sensitivity problem https://github.com/ ultralytics/yolov3.
10
Table 8: Comparison of the speed and accuracy of different object detectors on the MS COCO dataset (test- dev 2017). (Real-time detectors with FPS 30 or higher are highlighted here. We compare the results with batch=1 without using tensorRT.)
Method Backbone Size FPS AP AP50 AP75 APS APM APL YOLOv4: Optimal Speed and Accuracy of Object Detection YOLOv4 YOLOv4 YOLOv4 CSPDarknet-53 CSPDarknet-53 CSPDarknet-53 416 512 608 38 (M) 31 (M) 23 (M) 41.2% 56.0% 43.0% 64.9% 46.5% 24.3% 46.1% 55.2% 53.3% 43.5% 62.8% 44.3% 20.4% 44.4% 65.7% 47.3% 26.7% 46.7% LRF LRF LRF LRF Learning Rich Features at High-Speed for Single-Shot Object Detection [84] VGG-16 ResNet-101 VGG-16 ResNet-101 300 300 512 512 76.9 (M) 52.6 (M) 38.5 (M) 31.3 (M) 32.0% 34.3% 36.2% 37.3% 51.5% 54.1% 56.6% 58.5% 33.8% 36.6% 38.7% 39.7% 12.6% 13.2% 19.0% 19.7% 34.9% 38.2% 39.9% 42.8% 47.0% 50.7% 48.8% 50.1% RFBNet RFBNet RFBNet-E Receptive Field Block Net for Accurate and Fast Object Detection [47] 11.8% 16.2% 17.6% VGG-16 VGG-16 VGG-16 300 512 512 66.7 (M) 33.3 (M) 30.3 (M) 30.3% 33.8% 34.4% 49.3% 54.2% 55.7% 31.8% 35.9% 36.4% 31.9% 37.1% 37.0% 45.9% 47.4% 47.6% YOLOv3: An incremental improvement [63] YOLOv3 YOLOv3 YOLOv3 YOLOv3-SPP Darknet-53 Darknet-53 Darknet-53 Darknet-53 320 416 608 608 45 (M) 35 (M) 20 (M) 20 (M) 28.2% 31.0% 33.0% 36.2% 51.5% 55.3% 57.9% 60.6% 29.7% 32.3% 34.4% 38.2% 11.9% 15.2% 18.3% 20.6% 30.6% 33.2% 35.4% 37.4% 43.4% 42.8% 41.9% 46.1% SSD SSD VGG-16 VGG-16 SSD: Single shot multibox detector [50] 300 512 43 (M) 22 (M) 25.1% 28.8% 43.1% 48.5% 25.8% 30.3% 6.6% 10.9% 25.9% 31.8% 41.4% 43.5% Single-shot reï¬nement neural network for object detection [95] Reï¬neDet Reï¬neDet VGG-16 VGG-16 320 512 38.7 (M) 22.3 (M) 29.4% 33.0% 49.2% 54.5% 31.3% 35.5% 10.0% 16.3% 32.0% 36.3% 44.4% 44.3% M2det: A single-shot object detector based on multi-level feature pyramid network [98] M2det M2det M2det M2det M2det VGG-16 ResNet-101 VGG-16 ResNet-101 VGG-16 320 320 512 512 800 33.4 (M) 21.7 (M) 18 (M) 15.8 (M) 11.8 (M) 33.5% 34.3% 37.6% 38.8% 41.0% 52.4% 53.5% 56.6% 59.4% 59.7% 35.6% 36.5% 40.5% 41.7% 45.0% 14.4% 14.8% 18.4% 20.5% 22.1% 37.6% 38.8% 43.4% 43.9% 46.5% 47.6% 47.9% 51.2% 53.4% 53.8% Parallel Feature Pyramid Network for Object Detection [34] PFPNet-R PFPNet-R VGG-16 VGG-16 320 512 33 (M) 24 (M) 31.8% 35.2% 52.9% 57.6% 33.6% 37.9% 12% 18.7% 35.5% 38.6% 46.1% 45.9% Focal Loss for Dense Object Detection [45] RetinaNet RetinaNet RetinaNet RetinaNet ResNet-50 ResNet-101 ResNet-50 ResNet-101 500 500 800 800 13.9 (M) 11.1 (M) 6.5 (M) 5.1 (M) 32.5% 34.4% 35.7% 37.8% 50.9% 53.1% 55.0% 57.5% 34.8% 36.8% 38.5% 40.8% 13.9% 14.7% 18.9% 20.2% 35.8% 38.5% 38.9% 41.1% 46.7% 49.1% 46.3% 49.2% AB+FSAF AB+FSAF Feature Selective Anchor-Free Module for Single-Shot Object Detection [102] ResNet-101 ResNeXt-101 800 800 5.6 (M) 2.8 (M) 40.9% 42.9% 61.5% 63.8% 44.0% 46.3% 24.0% 26.6% 44.2% 46.2% 51.3% 52.7% CornerNet: Detecting objects as paired keypoints [37] CornerNet Hourglass 512 4.4 (M) 40.5% 57.8% 45.3% 20.8% 44.8% 56.7%
11
Table 9: Comparison of the speed and accuracy of different object detectors on the MS COCO dataset (test-dev 2017). (Real-time detectors with FPS 30 or higher are highlighted here. We compare the results with batch=1 without using tensorRT.)
YOLOv4 YOLOv4 YOLOv4 YOLOv4: Optimal Speed and Accuracy of Object Detection 44.3% 46.5% CSPDarknet-53 CSPDarknet-53 CSPDarknet-53 416 512 608 54 (P) 43 (P) 33 (P) 56.0% 41.2% 55.2% 43.0% 43.5% 65.7% 47.3% 26.7% 46.7% 53.3% 62.8% 64.9% 20.4% 24.3% 44.4% 46.1% CenterMask: Real-Time Anchor-Free Instance Segmentation [40] CenterMask-Lite MobileNetV2-FPN CenterMask-Lite CenterMask-Lite VoVNet-19-FPN VoVNet-39-FPN 600à 50.0 (P) 600à 43.5 (P) 600à 35.7 (P) 30.2% 35.9% 40.7% - - - - - - 14.2% 19.6% 22.4% 40.9% 31.9% 38.0% 45.9% 43.2% 53.5% Enriched Feature Guided Reï¬nement Network for Object Detection [57] EFGRNet EFGRNet EFGRNet VGG-16 VG-G16 ResNet-101 320 512 512 47.6 (P) 25.7 (P) 21.7 (P) 33.2% 37.5% 39.0% 53.4% 58.8% 58.8% 35.4% 40.4% 42.3% 13.4% 19.7% 17.8% 37.1% 41.6% 43.6% 47.9% 49.4% 54.5% HSD HSD HSD HSD HSD VGG-16 VGG-16 ResNet-101 ResNeXt-101 ResNet-101 Hierarchical Shot Detector [3] 320 512 512 512 768 40 (P) 23.3 (P) 20.8 (P) 15.2 (P) 10.9 (P) 33.5% 38.8% 40.2% 41.9% 42.3% 53.2% 58.2% 59.4% 61.1% 61.2% 36.1% 42.5% 44.0% 46.2% 46.9% 15.0% 21.8% 20.0% 21.8% 22.8% 35.0% 41.9% 44.4% 46.6% 47.3% 47.8% 50.2% 54.9% 57.0% 55.9% Dynamic anchor feature selection for single-shot object detection [41] DAFS VGG16 512 35 (P) 33.8% 52.9% 36.9% 14.6% 37.0% 47.7% SAPD SAPD SAPD ResNet-50 ResNet-50-DCN ResNet-101-DCN Soft Anchor-Point Object Detection [101] 61.9% 64.4% 65.9% - - - 14.9 (P) 12.4 (P) 9.1 (P) 41.7% 44.3% 46.0% 44.6% 47.7% 49.6% 24.1% 25.5% 26.3% 44.6% 47.3% 49.2% 51.6% 57.0% 59.6% RetinaNet Faster R-CNN ResNet-50 ResNet-50 Region proposal by guided anchoring [82] 56.9% 59.2% - - 10.8 (P) 9.4 (P) 37.1% 39.8% 40.0% 43.5% 20.1% 21.8% 40.1% 42.6% 48.0% 50.7% RPDet RPDet RepPoints: Point set representation for object detection [87] 44.3% 49.0% ResNet-101 ResNet-101-DCN - - 10 (P) 8 (P) 41.0% 45.0% 62.9% 66.1% 23.6% 26.6% 44.1% 48.6% 51.7% 57.5% Libra R-CNN: Towards balanced learning for object detection [58] Libra R-CNN ResNet-101 - 9.5 (P) 41.1% 62.1% 44.7% 23.4% 43.7% 52.5% FreeAnchor: Learning to match anchors for visual object detection [96] FreeAnchor ResNet-101 - 9.1 (P) 43.1% 62.2% 46.4% 24.5% 46.1% 54.8% RetinaMask: Learning to Predict Masks Improves State-of-The-Art Single-Shot Detection for Free [14] 42.0% 44.5% 44.7% 45.6% RetinaMask RetinaMask RetinaMask RetinaMask ResNet-50-FPN ResNet-101-FPN ResNet-101-FPN-GN ResNeXt-101-FPN-GN 800à 800à 800à 800à 8.1 (P) 6.9 (P) 6.5 (P) 4.3 (P) 39.4% 41.4% 41.7% 42.6% 58.6% 60.8% 61.7% 62.5% 42.3% 44.6% 45.0% 46.0% 21.9% 23.0% 23.5% 24.8% 51.0% 53.5% 52.8% 53.8% Cascade R-CNN Cascade R-CNN: Delving into high quality object detection [2] 46.3% ResNet-101 - 8 (P) 42.8% 62.1% 23.7% 45.5% 55.2% Centernet: Object detection with keypoint triplets [13] Centernet Centernet Hourglass-52 Hourglass-104 - - 4.4 (P) 3.3 (P) 41.6% 44.9% 59.4% 62.4% 44.2% 48.1% 22.5% 25.6% 43.1% 47.4% 54.1% 57.4% Scale-Aware Trident Networks for Object Detection [42] TridentNet TridentNet ResNet-101 ResNet-101-DCN - - 2.7 (P) 1.3 (P) 42.7% 46.8% 63.6% 67.6% 46.5% 51.5% 23.9% 28.0% 46.6% 51.2% 56.6% 60.5%
12
Table 10: Comparison of the speed and accuracy of different object detectors on the MS COCO dataset (test-dev 2017). (Real-time detectors with FPS 30 or higher are highlighted here. We compare the results with batch=1 without using tensorRT.)
Method Backbone Size FPS AP AP50 AP75 APS APM APL YOLOv4: Optimal Speed and Accuracy of Object Detection YOLOv4 YOLOv4 YOLOv4 CSPDarknet-53 CSPDarknet-53 CSPDarknet-53 416 512 608 96 (V) 83 (V) 62 (V) 41.2% 44.3% 44.4% 46.1% 46.5% 43.0% 43.5% 65.7% 47.3% 26.7% 46.7% 62.8% 64.9% 20.4% 24.3% 56.0% 55.2% 53.3% Efï¬cientDet: Scalable and Efï¬cient Object Detection [77] Efï¬cientDet-D0 Efï¬cientDet-D1 Efï¬cientDet-D2 Efï¬cientDet-D3 Efï¬cient-B0 Efï¬cient-B1 Efï¬cient-B2 Efï¬cient-B3 512 640 768 896 62.5 (V) 50.0 (V) 41.7 (V) 23.8 (V) 33.8% 39.6% 43.0% 45.8% 52.2% 58.6% 62.3% 65.0% 35.8% 42.3% 46.2% 49.3% 51.2% 38.3% 12.0% 56.0% 44.3% 17.9% 22.5% 47.0% 58.4% 59.8% 49.4% 26.6% YOLOv3 + ASFF* YOLOv3 + ASFF* YOLOv3 + ASFF* YOLOv3 + ASFF* Learning Spatial Fusion for Single-Shot Object Detection [48] 16.1% 42.1% 57.4% 20.3% 45.1% 60.6% 63.0% 47.4% 25.5% 27.0% 49.2% 64.1% Darknet-53 Darknet-53 Darknet-53 Darknet-53 320 416 608à 800à 60 (V) 54 (V) 45.5 (V) 29.4 (V) 38.1% 40.6% 42.4% 43.9% 41.6% 44.2% 45.7% 46.6% 53.6% 54.1% 52.3% 53.4% HarDNet: A Low Memory Trafï¬c Network [4] RFBNet RFBNet HarDNet68 HarDNet85 512 512 41.5 (V) 37.1 (V) 33.9% 36.8% 54.3% 57.1% 36.2% 39.5% 14.7% 16.9% 36.6% 40.5% 50.5% 52.9% Focal Loss for Dense Object Detection [45] RetinaNet RetinaNet RetinaNet RetinaNet ResNet-50 ResNet-101 ResNet-50 ResNet-101 640 640 1024 1024 37 (V) 29.4 (V) 19.6 (V) 15.4 (V) 37.0% 37.9% 40.1% 41.1% - - - - - - - - - - - - - - - - - - - - SM-NAS: Structural-to-Modular Neural Architecture Search for Object Detection [88] SM-NAS: E2 SM-NAS: E3 SM-NAS: E5 - - - 800Ã600 800Ã600 1333Ã800 25.3 (V) 19.7 (V) 9.3 (V) 40.0% 42.8% 45.9% 58.2% 61.2% 64.6% 43.4% 46.5% 49.6% 21.1% 23.5% 27.1% 42.4% 45.5% 49.0% 51.7% 55.6% 58.0% NAS-FPN: Learning scalable feature pyramid architecture for object detection [17] NAS-FPN NAS-FPN ResNet-50 ResNet-50 640 1024 24.4 (V) 12.7 (V) 39.9% 44.2% - - - - - - - - - - Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection [94] ATSS ATSS ResNet-101 ResNet-101-DCN 800à 800à 17.5 (V) 13.7 (V) 43.6% 46.3% 62.1% 64.7% 47.4% 50.4% 26.1% 27.7% 47.0% 49.8% 53.6% 58.4% RDSNet: A New Deep Architecture for Reciprocal Object Detection and Instance Segmentation [83] RDSNet RDSNet ResNet-101 ResNet-101 600 800 16.8 (V) 10.9 (V) 36.0% 38.1% 55.2% 58.5% 38.7% 40.8% 17.4% 21.2% 39.6% 41.5% 49.7% 48.2% CenterMask: Real-Time Anchor-Free Instance Segmentation [40] CenterMask CenterMask ResNet-101-FPN VoVNet-99-FPN 800à 800à 15.2 (V) 12.9 (V) 44.0% 46.5% - - - - 25.8% 28.7% 46.8% 48.9% 54.9% 57.2%
13
# References
[1] Navaneeth Bodla, Bharat Singh, Rama Chellappa, and Larry S Davis. Soft-NMSâimproving object detection with one line of code. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 5561â5569, 2017. 4
[2] Zhaowei Cai and Nuno Vasconcelos. Cascade R-CNN: Delving into high quality object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 6154â6162, 2018. 12
[3] Jiale Cao, Yanwei Pang, Jungong Han, and Xuelong Li. Hi- In Proceedings of the IEEE In- erarchical shot detector. ternational Conference on Computer Vision (ICCV), pages 9705â9714, 2019. 12
[4] Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, and Youn-Long Lin. HarDNet: A low memory traf- ï¬c network. Proceedings of the IEEE International Confer- ence on Computer Vision (ICCV), 2019. 13
[5] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic im- age segmentation with deep convolutional nets, atrous con- IEEE Transactions volution, and fully connected CRFs. on Pattern Analysis and Machine Intelligence (TPAMI), 40(4):834â848, 2017. 2, 4
[6] Pengguang Chen. GridMask data augmentation. arXiv preprint arXiv:2001.04086, 2020. 3
[7] Yukang Chen, Tong Yang, Xiangyu Zhang, Gaofeng Meng, Xinyu Xiao, and Jian Sun. DetNAS: Backbone search for object detection. In Advances in Neural Information Pro- cessing Systems (NeurIPS), pages 6638â6648, 2019. 2 [8] Jiwoong Choi, Dayoung Chun, Hyun Kim, and Hyuk-Jae Lee. Gaussian YOLOv3: An accurate and fast object de- tector using localization uncertainty for autonomous driv- ing. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 502â511, 2019. 7 [9] Jifeng Dai, Yi Li, Kaiming He, and Jian Sun. R-FCN: Object detection via region-based fully convolutional net- works. In Advances in Neural Information Processing Sys- tems (NIPS), pages 379â387, 2016. 2
[10] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical im- age database. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248â255, 2009. 5
Improved reg- ularization of convolutional neural networks with CutOut. arXiv preprint arXiv:1708.04552, 2017. 3
[12] Xianzhi Du, Tsung-Yi Lin, Pengchong Jin, Golnaz Ghiasi, Mingxing Tan, Yin Cui, Quoc V Le, and Xiaodan Song. SpineNet: Learning scale-permuted backbone for recog- nition and localization. arXiv preprint arXiv:1912.05027, 2019. 2
[13] Kaiwen Duan, Song Bai, Lingxi Xie, Honggang Qi, Qing- ming Huang, and Qi Tian. CenterNet: Keypoint triplets for object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 6569â6578, 2019. 2, 12
14
[14] Cheng-Yang Fu, Mykhailo Shvets, and Alexander C Berg. RetinaMask: Learning to predict masks improves state- arXiv preprint of-the-art single-shot detection for free. arXiv:1901.03353, 2019. 12
[15] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A Wichmann, and Wieland Brendel. ImageNet-trained cnns are biased towards texture; increas- ing shape bias improves accuracy and robustness. In Inter- national Conference on Learning Representations (ICLR), 2019. 3
[16] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. DropBlock: A regularization method for convolutional networks. In Ad- vances in Neural Information Processing Systems (NIPS), pages 10727â10737, 2018. 3
[17] Golnaz Ghiasi, Tsung-Yi Lin, and Quoc V Le. NAS-FPN: Learning scalable feature pyramid architecture for object detection. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 7036â 7045, 2019. 2, 13
[18] Ross Girshick. Fast R-CNN. In Proceedings of the IEEE In- ternational Conference on Computer Vision (ICCV), pages 1440â1448, 2015. 2
[19] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object de- tection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), pages 580â587, 2014. 2, 4
[20] Jianyuan Guo, Kai Han, Yunhe Wang, Chao Zhang, Zhao- hui Yang, Han Wu, Xinghao Chen, and Chang Xu. Hit- Detector: Hierarchical trinity architecture search for object detection. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), 2020. 2 [21] Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, and Chang Xu. GhostNet: More features from cheap In Proceedings of the IEEE Conference on operations. Computer Vision and Pattern Recognition (CVPR), 2020. 5
[22] Bharath Hariharan, Pablo Arbel´aez, Ross Girshick, and Jitendra Malik. Hypercolumns for object segmentation and ï¬ne-grained localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 447â456, 2015. 4
[23] Kaiming He, Georgia Gkioxari, Piotr Doll´ar, and Ross Gir- In Proceedings of the IEEE In- shick. Mask R-CNN. ternational Conference on Computer Vision (ICCV), pages 2961â2969, 2017. 2
[24] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectiï¬ers: Surpassing human-level per- In Proceedings of formance on ImageNet classiï¬cation. the IEEE International Conference on Computer Vision (ICCV), pages 1026â1034, 2015. 4
[25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Spatial pyramid pooling in deep convolutional networks for IEEE Transactions on Pattern Analy- visual recognition. sis and Machine Intelligence (TPAMI), 37(9):1904â1916, 2015. 2, 4, 7
[26] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceed-
ings of the IEEE Conference on Computer Vision and Pat- tern Recognition (CVPR), pages 770â778, 2016. 2
[27] Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, et al. Searching for Mo- bileNetV3. In Proceedings of the IEEE International Con- ference on Computer Vision (ICCV), 2019. 2, 4
[28] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco An- dreetto, and Hartwig Adam. MobileNets: Efï¬cient con- volutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. 2, 4
[29] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 7132â 7141, 2018. 4
[30] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kil- ian Q Weinberger. Densely connected convolutional net- In Proceedings of the IEEE Conference on Com- works. puter Vision and Pattern Recognition (CVPR), pages 4700â 4708, 2017. 2
[31] Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer pa- arXiv preprint rameters and¡ 0.5 MB model size. arXiv:1602.07360, 2016. 2
[32] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal co- variate shift. arXiv preprint arXiv:1502.03167, 2015. 6 [33] Md Amirul Islam, Shujon Naha, Mrigank Rochan, Neil Bruce, and Yang Wang. Label reï¬nement network for arXiv preprint coarse-to-ï¬ne semantic segmentation. arXiv:1703.00551, 2017. 3
[34] Seung-Wook Kim, Hyong-Keun Kook, Jee-Young Sun, Mun-Cheon Kang, and Sung-Jea Ko. Parallel feature pyra- In Proceedings of the mid network for object detection. European Conference on Computer Vision (ECCV), pages 234â250, 2018. 11
[35] G¨unter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 971â980, 2017. 4
and Gregory Shakhnarovich. FractalNet: Ultra-deep neural net- works without residuals. arXiv preprint arXiv:1605.07648, 2016. 6
[37] Hei Law and Jia Deng. CornerNet: Detecting objects as paired keypoints. In Proceedings of the European Confer- ence on Computer Vision (ECCV), pages 734â750, 2018. 2, 11
[38] Hei Law, Yun Teng, Olga Russakovsky, and Jia Deng. CornerNet-Lite: Efï¬cient keypoint based object detection. arXiv preprint arXiv:1904.08900, 2019. 2
[39] Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. Be- yond bags of features: Spatial pyramid matching for recog- nizing natural scene categories. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 2169â2178. IEEE, 2006. 4
15
[40] Youngwan Lee and Jongyoul Park. CenterMask: Real-time In Proceedings of the anchor-free instance segmentation. IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), 2020. 12, 13
[41] Shuai Li, Lingxiao Yang, Jianqiang Huang, Xian-Sheng Hua, and Lei Zhang. Dynamic anchor feature selection for single-shot object detection. In Proceedings of the IEEE In- ternational Conference on Computer Vision (ICCV), pages 6609â6618, 2019. 12
[42] Yanghao Li, Yuntao Chen, Naiyan Wang, and Zhaoxiang Zhang. Scale-aware trident networks for object detection. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 6054â6063, 2019. 12 [43] Zeming Li, Chao Peng, Gang Yu, Xiangyu Zhang, Yang- dong Deng, and Jian Sun. DetNet: Design backbone for object detection. In Proceedings of the European Confer- ence on Computer Vision (ECCV), pages 334â350, 2018. 2
[44] Tsung-Yi Lin, Piotr Doll´ar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2117â2125, 2017. 2
[45] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Com- puter Vision (ICCV), pages 2980â2988, 2017. 2, 3, 11, 13 [46] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proceedings of the European Conference on Computer Vision (ECCV), pages 740â755, 2014. 5
[47] Songtao Liu, Di Huang, et al. Receptive ï¬eld block net for In Proceedings of the accurate and fast object detection. European Conference on Computer Vision (ECCV), pages 385â400, 2018. 2, 4, 11
[48] Songtao Liu, Di Huang, and Yunhong Wang. Learning spa- tial fusion for single-shot object detection. arXiv preprint arXiv:1911.09516, 2019. 2, 4, 13
[49] Shu Liu, Lu Qi, Haifang Qin, Jianping Shi, and Jiaya Jia. Path aggregation network for instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 8759â8768, 2018. 1, 2, 7
[50] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg. SSD: Single shot multibox detector. In Proceedings of the European Conference on Computer Vision (ECCV), pages 21â37, 2016. 2, 11
[51] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 3431â3440, 2015. 4
[52] Ilya Loshchilov and Frank Hutter. tic gradient descent with warm restarts. arXiv:1608.03983, 2016. 7 SGDR: Stochas- arXiv preprint
[53] Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufï¬eNetV2: Practical guidelines for efï¬cient cnn
architecture design. In Proceedings of the European Con- ference on Computer Vision (ECCV), pages 116â131, 2018. 2
[54] Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rec- tiï¬er nonlinearities improve neural network acoustic mod- In Proceedings of International Conference on Ma- els. chine Learning (ICML), volume 30, page 3, 2013. 4 Mish:
[55] Diganta Misra. A self monotonic neural activation function. arXiv:1908.08681, 2019. 4 regularized non- arXiv preprint
[56] Vinod Nair and Geoffrey E Hinton. Rectiï¬ed linear units In Proceedings improve restricted boltzmann machines. of International Conference on Machine Learning (ICML), pages 807â814, 2010. 4
[57] Jing Nie, Rao Muhammad Anwer, Hisham Cholakkal, Fa- had Shahbaz Khan, Yanwei Pang, and Ling Shao. Enriched feature guided reï¬nement network for object detection. In Proceedings of the IEEE International Conference on Com- puter Vision (ICCV), pages 9537â9546, 2019. 12
[58] Jiangmiao Pang, Kai Chen, Jianping Shi, Huajun Feng, Wanli Ouyang, and Dahua Lin. Libra R-CNN: Towards bal- anced learning for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition (CVPR), pages 821â830, 2019. 2, 12
[59] Prajit Ramachandran, Barret Zoph, and Quoc V Le. arXiv preprint Searching for activation functions. arXiv:1710.05941, 2017. 4
[60] Abdullah Rashwan, Agastya Kalra, and Pascal Poupart. Matrix Nets: A new deep architecture for object detection. In Proceedings of the IEEE International Conference on Computer Vision Workshop (ICCV Workshop), pages 0â0, 2019. 2
[61] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Uniï¬ed, real-time object de- tection. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 779â 788, 2016. 2
[62] Joseph Redmon and Ali Farhadi. YOLO9000: better, faster, stronger. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (CVPR), pages 7263â 7271, 2017. 2
[63] Joseph Redmon and Ali Farhadi. YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767, 2018. 2, 4, 7, 11
[64] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with re- gion proposal networks. In Advances in Neural Information Processing Systems (NIPS), pages 91â99, 2015. 2
[65] Hamid Rezatoï¬ghi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. Generalized in- tersection over union: A metric and a loss for bounding box regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 658â666, 2019. 3
[66] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: In- In Proceedings verted residuals and linear bottlenecks.
16
of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4510â4520, 2018. 2
[67] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick. Training region-based object detectors with online hard ex- ample mining. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 761â769, 2016. 3
[68] Karen Simonyan and Andrew Zisserman. Very deep convo- lutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. 2
[69] Krishna Kumar Singh, Hao Yu, Aron Sarmasi, Gautam Pradeep, and Yong Jae Lee. Hide-and-Seek: A data aug- mentation technique for weakly-supervised localization and beyond. arXiv preprint arXiv:1811.02545, 2018. 3
Filter response normalization layer: Eliminating batch dependence in arXiv preprint the training of deep neural networks. arXiv:1911.09737, 2019. 6
[71] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. DropOut: A simple way to prevent neural networks from overï¬tting. The jour- nal of machine learning research, 15(1):1929â1958, 2014. 3
[72] K-K Sung and Tomaso Poggio. Example-based learning for view-based human face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 20(1):39â51, 1998. 3
[73] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception ar- chitecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2818â2826, 2016. 3
[74] Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. MNAS- net: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2820â2828, 2019. 2
[75] Mingxing Tan and Quoc V Le. Efï¬cientNet: Rethinking model scaling for convolutional neural networks. In Pro- ceedings of International Conference on Machine Learning (ICML), 2019. 2
[76] Mingxing Tan and Quoc V Le. MixNet: Mixed depthwise In Proceedings of the British Ma- convolutional kernels. chine Vision Conference (BMVC), 2019. 5
[77] Mingxing Tan, Ruoming Pang, and Quoc V Le. Efï¬cient- Det: Scalable and efï¬cient object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 2, 4, 13
[78] Zhi Tian, Chunhua Shen, Hao Chen, and Tong He. FCOS: Fully convolutional one-stage object detection. In Proceed- ings of the IEEE International Conference on Computer Vi- sion (ICCV), pages 9627â9636, 2019. 2
[79] Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann Le- Cun, and Christoph Bregler. Efï¬cient object localization using convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 648â656, 2015. 6
[80] Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using Drop- In Proceedings of International Conference on Connect. Machine Learning (ICML), pages 1058â1066, 2013. 3 [81] Chien-Yao Wang, Hong-Yuan Mark Liao, Yueh-Hua Wu, Ping-Yang Chen, Jun-Wei Hsieh, and I-Hau Yeh. CSPNet: A new backbone that can enhance learning capability of cnn. Proceedings of the IEEE Conference on Computer Vi- sion and Pattern Recognition Workshop (CVPR Workshop), 2020. 2, 7
[82] Jiaqi Wang, Kai Chen, Shuo Yang, Chen Change Loy, and Dahua Lin. Region proposal by guided anchoring. In Pro- ceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2965â2974, 2019. 12
[83] Shaoru Wang, Yongchao Gong, Junliang Xing, Lichao Huang, Chang Huang, and Weiming Hu. RDSNet: A new deep architecture for reciprocal object detection and instance segmentation. arXiv preprint arXiv:1912.05070, 2019. 13
[84] Tiancai Wang, Rao Muhammad Anwer, Hisham Cholakkal, Fahad Shahbaz Khan, Yanwei Pang, and Ling Shao. Learn- ing rich features at high-speed for single-shot object detec- tion. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 1971â1980, 2019. 11
[85] Sanghyun Woo, Jongchan Park, Joon-Young Lee, and In So Kweon. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), pages 3â19, 2018. 1, 2, 4
[86] Saining Xie, Ross Girshick, Piotr Doll´ar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1492â1500, 2017. 2
[87] Ze Yang, Shaohui Liu, Han Hu, Liwei Wang, and Stephen Lin. RepPoints: Point set representation for object detec- tion. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), pages 9657â9666, 2019. 2, 12 [88] Lewei Yao, Hang Xu, Wei Zhang, Xiaodan Liang, and Zhenguo Li. SM-NAS: Structural-to-modular neural archi- tecture search for object detection. In Proceedings of the AAAI Conference on Artiï¬cial Intelligence (AAAI), 2020. 13
[89] Zhuliang Yao, Yue Cao, Shuxin Zheng, Gao Huang, and Stephen Lin. Cross-iteration batch normalization. arXiv preprint arXiv:2002.05712, 2020. 1, 6
[90] Jiahui Yu, Yuning Jiang, Zhangyang Wang, Zhimin Cao, and Thomas Huang. UnitBox: An advanced object detec- tion network. In Proceedings of the 24th ACM international conference on Multimedia, pages 516â520, 2016. 3 [91] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. CutMix: Regu- larization strategy to train strong classiï¬ers with localizable features. In Proceedings of the IEEE International Confer- ence on Computer Vision (ICCV), pages 6023â6032, 2019. 3
[92] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. MixUp: Beyond empirical risk mini- mization. arXiv preprint arXiv:1710.09412, 2017. 3
17
[93] Hang Zhang, Kristin Dana, Jianping Shi, Zhongyue Zhang, Xiaogang Wang, Ambrish Tyagi, and Amit Agrawal. Con- In Proceedings text encoding for semantic segmentation. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 7151â7160, 2018. 6
[94] Shifeng Zhang, Cheng Chi, Yongqiang Yao, Zhen Lei, and Stan Z Li. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selec- tion. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. 13
[95] Shifeng Zhang, Longyin Wen, Xiao Bian, Zhen Lei, and Stan Z Li. Single-shot reï¬nement neural network for ob- ject detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 4203â4212, 2018. 11
[96] Xiaosong Zhang, Fang Wan, Chang Liu, Rongrong Ji, and Qixiang Ye. FreeAnchor: Learning to match anchors for visual object detection. In Advances in Neural Information Processing Systems (NeurIPS), 2019. 12
[97] Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufï¬eNet: An extremely efï¬cient convolutional neural In Proceedings of the IEEE network for mobile devices. Conference on Computer Vision and Pattern Recognition (CVPR), pages 6848â6856, 2018. 2
[98] Qijie Zhao, Tao Sheng, Yongtao Wang, Zhi Tang, Ying Chen, Ling Cai, and Haibin Ling. M2det: A single-shot object detector based on multi-level feature pyramid net- work. In Proceedings of the AAAI Conference on Artiï¬cial Intelligence (AAAI), volume 33, pages 9259â9266, 2019. 2, 4, 11
[99] Zhaohui Zheng, Ping Wang, Wei Liu, Jinze Li, Rongguang Ye, and Dongwei Ren. Distance-IoU Loss: Faster and bet- ter learning for bounding box regression. In Proceedings of the AAAI Conference on Artiï¬cial Intelligence (AAAI), 2020. 3, 4
[100] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. arXiv preprint arXiv:1708.04896, 2017. 3
[101] Chenchen Zhu, Fangyi Chen, Zhiqiang Shen, and Mar- ios Savvides. Soft anchor-point object detection. arXiv preprint arXiv:1911.12448, 2019. 12
[102] Chenchen Zhu, Yihui He, and Marios Savvides. Feature se- lective anchor-free module for single-shot object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 840â849, 2019. 11 | {
"id": "1608.03983"
} |
2004.10964 | Don't Stop Pretraining: Adapt Language Models to Domains and Tasks | Language models pretrained on text from a wide variety of sources form the
foundation of today's NLP. In light of the success of these broad-coverage
models, we investigate whether it is still helpful to tailor a pretrained model
to the domain of a target task. We present a study across four domains
(biomedical and computer science publications, news, and reviews) and eight
classification tasks, showing that a second phase of pretraining in-domain
(domain-adaptive pretraining) leads to performance gains, under both high- and
low-resource settings. Moreover, adapting to the task's unlabeled data
(task-adaptive pretraining) improves performance even after domain-adaptive
pretraining. Finally, we show that adapting to a task corpus augmented using
simple data selection strategies is an effective alternative, especially when
resources for domain-adaptive pretraining might be unavailable. Overall, we
consistently find that multi-phase adaptive pretraining offers large gains in
task performance. | http://arxiv.org/pdf/2004.10964 | Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith | cs.CL, cs.LG | ACL 2020 | null | cs.CL | 20200423 | 20200505 | 0 2 0 2
y a M 5 ] L C . s c [
3 v 4 6 9 0 1 . 4 0 0 2 : v i X r a
# Donât Stop Pretraining: Adapt Language Models to Domains and Tasks
# Suchin Gururanganâ Ana Marasovi´câ ⦠Swabha Swayamdiptaâ Kyle Loâ Iz Beltagyâ Doug Downeyâ Noah A. Smithâ â¦
â Allen Institute for Artiï¬cial Intelligence, Seattle, WA, USA â¦Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA {suching,anam,swabhas,kylel,beltagy,dougd,noah}@allenai.org
# Abstract
Language models pretrained on text from a wide variety of sources form the foundation of todayâs NLP. In light of the success of these broad-coverage models, we investigate whether it is still helpful to tailor a pretrained model to the domain of a target task. We present a study across four domains (biomedi- cal and computer science publications, news, and reviews) and eight classiï¬cation tasks, showing that a second phase of pretraining in- domain (domain-adaptive pretraining) leads to performance gains, under both high- and low-resource settings. Moreover, adapting to the taskâs unlabeled data (task-adaptive pretraining) improves performance even after domain-adaptive pretraining. Finally, we show that adapting to a task corpus augmented us- ing simple data selection strategies is an effec- tive alternative, especially when resources for domain-adaptive pretraining might be unavail- able. Overall, we consistently ï¬nd that multi- phase adaptive pretraining offers large gains in task performance.
original LM domain ~~ target domain
Figure 1: An illustration of data distributions. Task data is comprised of an observable task distribution, usually non-randomly sampled from a wider distribu- tion (light grey ellipsis) within an even larger target do- main, which is not necessarily one of the domains in- cluded in the original LM pretraining domain â though overlap is possible. We explore the beneï¬ts of contin- ued pretraining on data from the task distribution and the domain distribution.
# Introduction
Todayâs pretrained language models are trained on massive, heterogeneous corpora (Raffel et al., 2019; Yang et al., 2019). For instance, ROBERTA (Liu et al., 2019) was trained on over 160GB of uncom- pressed text, with sources ranging from English- language encyclopedic and news articles, to literary works and web content. Representations learned by such models achieve strong performance across many tasks with datasets of varying sizes drawn from a variety of sources (e.g., Wang et al., 2018, 2019). This leads us to ask whether a tasks textual domainâa term typically used to denote a distribu- tion over language characterizing a given topic or genre (such as âscienceâ or âmystery novelsâ)âis still relevant. Do the latest large pretrained mod- els work universally or is it still helpful to build
separate pretrained models for speciï¬c domains? While some studies have shown the beneï¬t of continued pretraining on domain-speciï¬c unlabeled data (e.g., Lee et al., 2019), these studies only con- sider a single domain at a time and use a language model that is pretrained on a smaller and less di- verse corpus than the most recent language mod- els. Moreover, it is not known how the beneï¬t of continued pretraining may vary with factors like the amount of available labeled task data, or the proximity of the target domain to the original pre- training corpus (see Figure 1).
We address this question for one such high- performing model, ROBERTA (Liu et al., 2019) (§2). We consider four domains (biomedical and computer science publications, news, and reviews; §3) and eight classiï¬cation tasks (two in each do- main). For targets that are not already in-domain for ROBERTA, our experiments show that contin-
ued pretraining on the domain (which we refer to as domain-adaptive pretraining or DAPT) consistently improves performance on tasks from the target do- main, in both high- and low-resource settings.
Above, we consider domains deï¬ned around gen- res and forums, but it is also possible to induce a domain from a given corpus used for a task, such as the one used in supervised training of a model. This raises the question of whether pretraining on a corpus more directly tied to the task can fur- ther improve performance. We study how domain- adaptive pretraining compares to task-adaptive pre- training, or TAPT, on a smaller but directly task- relevant corpus: the unlabeled task dataset (§4), drawn from the task distribution. Task-adaptive pretraining has been shown effective (Howard and Ruder, 2018), but is not typically used with the most recent models. We ï¬nd that TAPT provides a large performance boost for ROBERTA, with or without domain-adaptive pretraining.
Finally, we show that the beneï¬ts from task- adaptive pretraining increase when we have addi- tional unlabeled data from the task distribution that has been manually curated by task designers or an- notators. Inspired by this success, we propose ways to automatically select additional task-relevant un- labeled text, and show how this improves perfor- mance in certain low-resource cases (§5). On all tasks, our results using adaptive pretraining tech- niques are competitive with the state of the art.
In summary, our contributions include: ⢠a thorough analysis of domain- and task- adaptive pretraining across four domains and eight tasks, spanning low- and high-resource settings;
an investigation into the transferability of adapted LMs across domains and tasks; and ⢠a study highlighting the importance of pre- training on human-curated datasets, and a sim- ple data selection strategy to automatically approach this performance.
Our code as well as pretrained models for multiple domains and tasks are publicly available.1
# 2 Background: Pretraining
Learning for most NLP research systems since 2018 consists of training in two stages. First, a neural language model (LM), often with millions of parameters, is trained on large unlabeled cor-
1https://github.com/allenai/ dont-stop-pretraining
pora. The word (or wordpiece; Wu et al. 2016) representations learned in the pretrained model are then reused in supervised training for a downstream task, with optional updates (ï¬ne-tuning) of the rep- resentations and network from the ï¬rst stage.
One such pretrained LM is ROBERTA (Liu et al., 2019), which uses the same transformer- based architecture (Vaswani et al., 2017) as its It is predecessor, BERT (Devlin et al., 2019). trained with a masked language modeling objec- tive (i.e., cross-entropy loss on predicting randomly masked tokens). The unlabeled pretraining corpus for ROBERTA contains over 160 GB of uncom- pressed raw text from different English-language corpora (see Appendix §A.1). ROBERTA attains better performance on an assortment of tasks than its predecessors, making it our baseline of choice. Although ROBERTAâs pretraining corpus is de- rived from multiple sources, it has not yet been established if these sources are diverse enough to generalize to most of the variation in the English language. In other words, we would like to un- derstand what is out of ROBERTAâs domain. To- wards this end, we explore further adaptation by continued pretraining of this large LM into two categories of unlabeled data: (i) large corpora of domain-speciï¬c text (§3), and (ii) available unla- beled data associated with a given task (§4).
# 3 Domain-Adaptive Pretraining
Our approach to domain-adaptive pretraining (DAPT) is straightforwardâwe continue pretrain- ing ROBERTA on a large corpus of unlabeled domain-speciï¬c text. The four domains we focus on are biomedical (BIOMED) papers, computer sci- ence (CS) papers, newstext from REALNEWS, and AMAZON reviews. We choose these domains be- cause they have been popular in previous work, and datasets for text classiï¬cation are available in each. Table 1 lists the speciï¬cs of the unlabeled datasets in all four domains, as well as ROBERTAâs training corpus.1
# 3.1 Analyzing Domain Similarity
Before performing DAPT, we attempt to quantify the similarity of the target domain to ROBERTAâs pretraining domain. We consider domain vocab- ularies containing the top 10K most frequent uni- grams (excluding stopwords) in comparably sized
1For BIOMED and CS, we used an internal version of S2ORC that contains papers that cannot be released due to copyright restrictions.
Domain Pretraining Corpus # Tokens Size LROB. LDAPT BIOMED CS NEWS REVIEWS 2.68M full-text papers from S2ORC (Lo et al., 2020) 2.22M full-text papers from S2ORC (Lo et al., 2020) 11.90M articles from REALNEWS (Zellers et al., 2019) 24.75M AMAZON reviews (He and McAuley, 2016) 7.55B 8.10B 6.66B 2.11B 47GB 48GB 39GB 11GB 1.32 1.63 1.08 2.10 0.99 1.34 1.16 1.93 ROBERTA (baseline) see Appendix §A.1 N/A 160GB â¡1.19 -
Table 1: List of the domain-speciï¬c unlabeled datasets. In columns 5 and 6, we report ROBERTAâs masked LM loss on 50K randomly sampled held-out documents from each domain before (LROB.) and after (LDAPT) DAPT (lower implies a better ï¬t on the sample). â¡ indicates that the masked LM loss is estimated on data sampled from sources similar to ROBERTAâs pretraining corpus.
PT 19.2 News 17.3 Reviews 12.7 BioMed 21.4 cs 19.2 17.3 12.7 PT News Reviews BioMed cs
# 3.2 Experiments
Our LM adaptation follows the settings prescribed for training ROBERTA. We train ROBERTA on each domain for 12.5K steps, which amounts to single pass on each domain dataset, on a v3-8 TPU; see other details in Appendix B. This second phase of pretraining results in four domain-adapted LMs, one for each domain. We present the masked LM loss of ROBERTA on each domain before and after DAPT in Table 1. We observe that masked LM loss decreases in all domains except NEWS after DAPT, where we observe a marginal increase. We discuss cross-domain masked LM loss in Appendix §E.
Figure 2: Vocabulary overlap (%) between do- mains. PT denotes a sample from sources similar to ROBERTAâs pretraining corpus. Vocabularies for each domain are created by considering the top 10K most frequent words (excluding stopwords) in documents sampled from each domain.
Under each domain, we consider two text clas- siï¬cation tasks, as shown in Table 2. Our tasks represent both high- and low-resource (⤠5K la- beled training examples, and no additional unla- beled data) settings. For HYPERPARTISAN, we use the data splits from Beltagy et al. (2020). For RCT, we represent all sentences in one long sequence for simultaneous prediction.
random samples of held-out documents in each do- mainâs corpus. We use 50K held-out documents for each domain other than REVIEWS, and 150K held-out documents in REVIEWS, since they are much shorter. We also sample 50K documents from sources similar to ROBERTAâs pretraining corpus (i.e., BOOKCORPUS, STORIES, WIKIPEDIA, and REALNEWS) to construct the pretraining domain vocabulary, since the original pretraining corpus is not released. Figure 2 shows the vocabulary overlap across these samples. We observe that ROBERTAâs pretraining domain has strong vocab- ulary overlap with NEWS and REVIEWS, while CS and BIOMED are far more dissimilar to the other domains. This simple analysis suggests the degree of beneï¬t to be expected by adaptation of ROBERTA to different domainsâthe more dissim- ilar the domain, the higher the potential for DAPT.
Baseline As our baseline, we use an off-the-shelf ROBERTA-base model and perform supervised ï¬ne-tuning of its parameters for each classiï¬cation task. On average, ROBERTA is not drastically be- hind the state of the art (details in Appendix §A.2), and serves as a good baseline since it provides a single LM to adapt to different domains.
Classiï¬cation Architecture Following standard practice (Devlin et al., 2019) we pass the ï¬nal layer [CLS] token representation to a task-speciï¬c feed- forward layer for prediction (see Table 14 in Ap- pendix for more hyperparameter details).
Results Test results are shown under the DAPT column of Table 3 (see Appendix §C for valida- tion results). We observe that DAPT improves over ROBERTA in all domains. For BIOMED, CS, and REVIEWS, we see consistent improve-
Domain Task Label Type Train (Lab.) Train (Unl.) Dev. Test Classes BIOMED CHEMPROT â RCT relation classiï¬cation abstract sent. roles 4169 18040 - - 2427 30212 3469 30135 13 5 CS ACL-ARC SCIERC citation intent relation classiï¬cation 1688 3219 - - 114 455 139 974 6 7 NEWS HYPERPARTISAN â AGNEWS partisanship topic 515 115000 5000 - 65 5000 65 7600 2 4 REVIEWS â HELPFULNESS â IMDB review helpfulness review sentiment 115251 20000 - 50000 5000 5000 25000 25000 2 2
Table 2: Speciï¬cations of the various target task datasets. â indicates high-resource settings. Sources: CHEMPROT (Kringelum et al., 2016), RCT (Dernoncourt and Lee, 2017), ACL-ARC (Jurgens et al., 2018), SCIERC (Luan et al., 2018), HYPERPARTISAN (Kiesel et al., 2019), AGNEWS (Zhang et al., 2015), HELPFULNESS (McAuley et al., 2015), IMDB (Maas et al., 2011).
Dom. Task ROBA. DAPT ¬DAPT BM CHEMPROT 81.91.0 â RCT 87.20.1 84.20.2 87.60.1 79.41.3 86.90.1 CS ACL-ARC SCIERC 63.05.8 77.31.9 75.42.5 80.81.5 66.44.1 79.20.9 NEWS REV. HYP. 86.60.9 â AGNEWS 93.90.2 â HELPFUL. 65.13.4 â IMDB 95.00.2 88.25.9 93.90.2 66.51.4 95.40.2 76.44.9 93.50.2 65.12.8 94.10.4
and Table 3: Comparison of ROBERTA (ROBA.) DAPT to adaptation to an irrelevant domain (¬ DAPT). Reported results are test macro-F1, except for CHEMPROT and RCT, for which we report micro-F1, following Beltagy et al. (2019). We report averages across ï¬ve random seeds, with standard deviations as subscripts. â indicates high-resource settings. Best task performance is boldfaced. See §3.3 for our choice of irrelevant domains.
regardless of the domain. In this setting, for NEWS, we use a CS LM; for REVIEWS, a BIOMED LM; for CS, a NEWS LM; for BIOMED, a REVIEWS LM. We use the vocabulary overlap statistics in Figure 2 to guide these choices.
Our results are shown in Table 3, where the last column (¬DAPT) corresponds to this setting. For each task, DAPT signiï¬cantly outperforms adapting to an irrelevant domain, suggesting the importance of pretraining on domain-relevant data. Further- more, we generally observe that ¬DAPT results in worse performance than even ROBERTA on end-tasks. Taken together, these results indicate that in most settings, exposure to more data with- out considering domain relevance is detrimental to end-task performance. However, there are two tasks (SCIERC and ACL-ARC) in which ¬DAPT marginally improves performance over ROBERTA. This may suggest that in some cases, continued pre- training on any additional data is useful, as noted in Baevski et al. (2019).
ments over ROBERTA, demonstrating the beneï¬t of DAPT when the target domain is more distant from ROBERTAâs source domain. The pattern is consistent across high- and low- resource settings. Although DAPT does not increase performance on AGNEWS, the beneï¬t we observe in HYPERPAR- TISAN suggests that DAPT may be useful even for tasks that align more closely with ROBERTAâs source domain.
# 3.3 Domain Relevance for DAPT
Additionally, we compare DAPT against a setting where for each task, we adapt the LM to a domain outside the domain of interest. This controls for the case in which the improvements over ROBERTA might be attributed simply to exposure to more data,
# 3.4 Domain Overlap
Our analysis of DAPT is based on prior intuitions about how task data is assigned to speciï¬c domains. For instance, to perform DAPT for HELPFULNESS, we only adapt to AMAZON reviews, but not to any REALNEWS articles. However, the gradations in Figure 2 suggest that the boundaries between do- mains are in some sense fuzzy; for example, 40% of unigrams are shared between REVIEWS and NEWS. As further indication of this overlap, we also qualitatively identify documents that overlap cross-domain: in Table 4, we showcase reviews and REALNEWS articles that are similar to these reviews (other examples can be found in Appendix §D). In fact, we ï¬nd that adapting ROBERTA to
# IMDB review
The Shop Around the Corner is one of the great ï¬lms from director Ernst Lubitsch . In addition to the talents of James Stewart and Margaret Sullavan , itâs ï¬lled with a terriï¬c cast of top character actors such as Frank Morgan and Felix Bressart. [...] The makers of Youâve Got Mail claim their ï¬lm to be a remake , but thatâs just nothing but a lot of inï¬ated self praise. Anyway, if you have an affection for romantic comedies of the 1940 âs, youâll ï¬nd The Shop Around the Corner to be nothing short of wonderful. Just as good with repeat viewings.
# REALNEWS article
[...] Three great festive ï¬lms... The Shop Around the Corner (1940) Delightful Comedy by Ernst Lubitsch stars James Stewart and Margaret Sulla- van falling in love at Christmas. Remade as Youve Got Mail. [...]
# HELPFULNESS review
# REALNEWS article
Simply the Best! Iâve owned countless Droids and iPhones, but this one destroys them all. Samsung really nailed it with this one, extremely fast , very pocketable, gorgeous display , exceptional battery life , good audio quality, perfect GPS & WiFi performance, transparent status bar, battery percentage, ability to turn off soft key lights, superb camera for a smartphone and more! [...]
Were living in a world with a new Samsung. [...] more on battery life later [...] Exposure is usually spot on and focusing is very fast. [...] The design, display, camera and performance are all best in class, and the phone feels smaller than it looks. [...]
Table 4: Examples that illustrate how some domains might have overlaps with others, leading to unexpected positive transfer. We highlight expressions in the reviews that are also found in the REALNEWS articles.
NEWS not as harmful to its performance on RE- VIEWS tasks (DAPT on NEWS achieves 65.52.3 on HELPFULNESS and 95.00.1 on IMDB).
Although this analysis is by no means compre- hensive, it indicates that the factors that give rise to observable domain differences are likely not mu- tually exclusive. It is possible that pretraining be- yond conventional domain boundaries could result in more effective DAPT; we leave this investiga- tion to future work. In general, the provenance of data, including the processes by which corpora are curated, must be kept in mind when designing pre- training procedures and creating new benchmarks that test out-of-domain generalization abilities.
# 4 Task-Adaptive Pretraining
Datasets curated to capture speciï¬c tasks of inter- est tend to cover only a subset of the text avail- able within the broader domain. For example, the CHEMPROT dataset for extracting relations be- tween chemicals and proteins focuses on abstracts of recently-published, high-impact articles from hand-selected PubMed categories (Krallinger et al., 2017, 2015). We hypothesize that such cases where the task data is a narrowly-deï¬ned subset of the broader domain, pretraining on the task dataset itself or data relevant to the task may be helpful.
Task-adaptive pretraining (TAPT) refers to pre- training on the unlabeled training set for a given task; prior work has shown its effectiveness (e.g. Howard and Ruder, 2018). Compared to domain- adaptive pretraining (DAPT; §3), the task-adaptive approach strikes a different trade-off: it uses a far smaller pretraining corpus, but one that is much
more task-relevant (under the assumption that the training set represents aspects of the task well). This makes TAPT much less expensive to run than DAPT, and as we show in our experiments, the per- formance of TAPT is often competitive with that of DAPT.
# 4.1 Experiments
Similar to DAPT, task-adaptive pretraining consists of a second phase of pretraining ROBERTA, but only on the available task-speciï¬c training data. In contrast to DAPT, which we train for 12.5K steps, we perform TAPT for 100 epochs. We artiï¬cially augment each dataset by randomly masking differ- ent words (using the masking probability of 0.15) across epochs. As in our DAPT experiments, we pass the ï¬nal layer [CLS] token representation to a task-speciï¬c feedforward layer for classiï¬cation (see Table 14 in Appendix for more hyperparameter details).
Our results are shown in the TAPT column of Ta- ble 5. TAPT consistently improves the ROBERTA baseline for all tasks across domains. Even on the news domain, which was part of ROBERTA pre- training corpus, TAPT improves over ROBERTA, showcasing the advantage of task adaptation. Par- ticularly remarkable are the relative differences be- tween TAPT and DAPT. DAPT is more resource in- tensive (see Table 9 in §5.3), but TAPT manages to match its performance in some of the tasks, such as SCIERC. In RCT, HYPERPARTISAN, AGNEWS, HELPFULNESS, and IMDB, the results even ex- ceed those of DAPT, highlighting the efï¬cacy of this cheaper adaptation technique.
Additional Pretraining Phases Domain Task ROBERTA DAPT TAPT BIOMED CHEMPROT â RCT 81.91.0 87.20.1 84.20.2 87.60.1 82.60.4 87.70.1 84.40.4 87.80.1 CS ACL-ARC SCIERC 63.05.8 77.31.9 75.42.5 80.81.5 67.41.8 79.31.5 75.63.8 81.31.8 NEWS REVIEWS HYPERPARTISAN â AGNEWS â HELPFULNESS â IMDB 86.60.9 93.90.2 65.13.4 95.00.2 88.25.9 93.90.2 66.51.4 95.40.1 90.45.2 94.50.1 68.51.9 95.50.1 90.06.6 94.60.1 68.71.8 95.60.1
# DAPT + TAPT
Table 5: Results on different phases of adaptive pretraining compared to the baseline ROBERTA (col. 1). Our approaches are DAPT (col. 2, §3), TAPT (col. 3, §4), and a combination of both (col. 4). Reported results follow the same format as Table 3. State-of-the-art results we can compare to: CHEMPROT (84.6), RCT (92.9), ACL-ARC (71.0), SCIERC (81.8), HYPERPARTISAN (94.8), AGNEWS (95.5), IMDB (96.2); references in §A.2.
BIOMED RCT CHEMPROT CS ACL-ARC SCIERC TAPT Transfer-TAPT 87.70.1 87.10.4 (â0.6) 82.60.5 80.40.6 (â2.2) TAPT Transfer-TAPT 67.41.8 64.12.7 (â3.3) 79.31.5 79.12.5 (â0.2) NEWS HYPERPARTISAN AGNEWS REVIEWS HELPFULNESS IMDB TAPT Transfer-TAPT 89.99.5 82.27.7 (â7.7) 94.50.1 93.90.2 (â0.6) TAPT Transfer-TAPT 68.51.9 65.02.6 (â3.5) 95.70.1 95.00.1 (â0.7)
Table 6: Though TAPT is effective (Table 5), it is harmful when applied across tasks. These ï¬ndings illustrate differences in task distributions within a domain.
Combined DAPT and TAPT We investigate the effect of using both adaptation techniques together. We begin with ROBERTA and apply DAPT then TAPT under this setting. The three phases of pre- training add up to make this the most computation- ally expensive of all our settings (see Table 9). As expected, combined domain- and task-adaptive pre- training achieves the best performance on all tasks (Table 5).2
Overall, our results show that DAPT followed by TAPT achieves the best of both worlds of domain and task awareness, yielding the best performance. While we speculate that TAPT followed by DAPT would be susceptible to catastrophic forgetting of the task-relevant corpus (Yogatama et al., 2019), al- ternate methods of combining the procedures may result in better downstream performance. Future work may explore pretraining with a more sophisti- cated curriculum of domain and task distributions.
Cross-Task Transfer We complete the compari- son between DAPT and TAPT by exploring whether adapting to one task transfers to other tasks in the same domain. For instance, we further pretrain the LM using the RCT unlabeled data, ï¬ne-tune it with the CHEMPROT labeled data, and observe the effect. We refer to this setting as Transfer-TAPT. Our results for tasks in all four domains are shown in Table 6. We see that TAPT optimizes for single task performance, to the detriment of cross-task transfer. These results demonstrate that data distri- butions of tasks within a given domain might differ. Further, this could also explain why adapting only to a broad domain is not sufï¬cient, and why TAPT after DAPT is effective.
# 5 Augmenting Training Data for Task-Adaptive Pretraining
2Results on HYPERPARTISAN match those of TAPT, within a standard deviation arising from the ï¬ve seeds.
In §4, we continued pretraining the LM for task adaptation using only the training data for a super- vised task. Inspired by the success of TAPT, we next investigate another setting where a larger pool of unlabeled data from the task distribution exists,
Pretraining BIOMED NEWS REVIEWS IMDB â RCT-500 HYP. TAPT DAPT + TAPT 79.81.4 83.00.3 90.45.2 95.50.1 90.06.6 95.60.1 Curated-TAPT DAPT + Curated-TAPT 83.40.3 83.80.5 89.99.5 95.70.1 92.13.6 95.80.1
Table 7: Mean test set macro-F1 (for HYP. and IMDB) and micro-F1 (for RCT-500), with Curated- TAPT across ï¬ve random seeds, with standard devia- tions as subscripts. â indicates high-resource settings.
typically curated by humans.
We explore two scenarios. First, for three tasks (RCT, HYPERPARTISAN, and IMDB) we use this larger pool of unlabeled data from an available human-curated corpus (§5.1). Next, we explore retrieving related unlabeled data for TAPT, from a large unlabeled in-domain corpus, for tasks where extra human-curated data is unavailable (§5.2).
# 5.1 Human Curated-TAPT
Dataset creation often involves collection of a large unlabeled corpus from known sources. This corpus is then downsampled to collect annotations, based on the annotation budget. The larger unlabeled cor- pus is thus expected to have a similar distribution to the taskâs training data. Moreover, it is usually available. We explore the role of such corpora in task-adaptive pretraining.
Data We simulate a low-resource setting RCT- 500, by downsampling the training data of the RCT dataset to 500 examples (out of 180K available), and treat the rest of the training data as unlabeled. The HYPERPARTISAN shared task (Kiesel et al., 2019) has two tracks: low- and high-resource. We use 5K documents from the high-resource setting as Curated-TAPT unlabeled data and the original low- resource training documents for task ï¬ne-tuning. For IMDB, we use the extra unlabeled data man- ually curated by task annotators, drawn from the same distribution as the labeled data (Maas et al., 2011).
Results We compare Curated-TAPT to TAPT and DAPT + TAPT in Table 7. Curated-TAPT further improves our prior results from §4 across all three datasets. Applying Curated-TAPT after adapting to the domain results in the largest boost in perfor- mance on all tasks; in HYPERPARTISAN, DAPT + Curated-TAPT is within standard deviation of Curated-TAPT. Moreover, curated-TAPT achieves
Biomedical domain ChemProt- ape VAMPIRE ~~ Biomedical sétionces "BDZs and other positive GABA modulators... can also inhibit L-type voltage-gated calcium channels... which contributes | to reduced neuronal excitability.â "11 Antibodies against the P/Q-type voltage-gated calcium channel reduce < the influx of calcium into the presynaptic terminal, which is key to the release of acetylcholine." VAMPIRE embedding space
Figure 3: An illustration of automated data selec- tion (§5.2). We map unlabeled CHEMPROT and 1M BIOMED sentences to a shared vector space using the VAMPIRE model trained on these sentences. Then, for each CHEMPROT sentence, we identify k nearest neighbors, from the BIOMED domain.
Pretraining BIOMED CS CHEMPROT RCT-500 ACL-ARC ROBERTA TAPT 81.91.0 82.60.4 79.30.6 79.81.4 63.05.8 67.41.8 RAND-TAPT 50NN-TAPT 150NN-TAPT 500NN-TAPT 81.90.6 83.30.7 83.20.6 83.30.7 80.60.4 80.80.6 81.20.8 81.70.4 69.73.4 70.72.8 73.32.7 75.51.9 DAPT 84.20.2 82.50.5 75.42.5
Table 8: Mean test set micro-F1 (for CHEMPROT and RCT) and macro-F1 (for ACL-ARC), across ï¬ve random seeds, with standard deviations as subscripts, comparing RAND-TAPT (with 50 candidates) and kNN- TAPT selection. Neighbors of the task data are selected from the domain data.
95% of the performance of DAPT + TAPT with the fully labeled RCT corpus (Table 5) with only 0.3% of the labeled data. These results suggest that curat- ing large amounts of data from the task distribution is extremely beneï¬cial to end-task performance. We recommend that task designers release a large pool of unlabeled task data for their tasks to aid model adaptation through pretraining.
# 5.2 Automated Data Selection for TAPT
Consider a low-resource scenario without access to large amounts of unlabeled data to adequately bene- ï¬t from TAPT, as well as absence of computational resources necessary for DAPT (see Table 9 for de- tails of computational requirements for different pretraining phases). We propose simple unsuper-
vised methods to retrieve unlabeled text that aligns with the task distribution, from a large in-domain corpus. Our approach ï¬nds task-relevant data from the domain by embedding text from both the task and domain in a shared space, then selects candi- dates from the domain based on queries using the task data. Importantly, the embedding method must be lightweight enough to embed possibly millions of sentences in a reasonable time.
Given these constraints, we employ VAMPIRE (Gururangan et al., 2019; Figure 3), a lightweight bag-of-words language model. We pretrain VAM- PIRE on a large deduplicated3 sample of the do- main (1M sentences) to obtain embeddings of the text from both the task and domain sample. We then select k candidates of each task sentence from the domain sample, in embeddings space. Candi- dates are selected (i) via nearest neighbors selection (kNN-TAPT)4, or (ii) randomly (RAND-TAPT). We continue pretraining ROBERTA on this augmented corpus with both the task data (as in TAPT) as well as the selected candidate pool.
Results Results in Table 8 show that kNN-TAPT outperforms TAPT for all cases. RAND-TAPT is gen- erally worse than kNN-TAPT, but within a standard deviation arising from 5 seeds for RCT and ACL- ARC. As we increase k, kNN-TAPT performance steadily increases, and approaches that of DAPT. Appendix F shows examples of nearest neighbors of task data. Future work might consider a closer study of kNN-TAPT, more sophisticated data selec- tion methods, and the tradeoff between the diversity and task relevance of selected examples.
# 5.3 Computational Requirements
The computational requirements for all our adap- tation techniques on RCT-500 in the BIOMED do- main in Table 9. TAPT is nearly 60 times faster to train than DAPT on a single v3-8 TPU and stor- age requirements for DAPT on this task are 5.8M times that of TAPT. Our best setting of DAPT + TAPT amounts to three phases of pretraining, and at ï¬rst glance appears to be very expensive. However, once the LM has been adapted to a broad domain, it can be reused for multiple tasks within that domain, with only a single additional TAPT phase per task. While Curated-TAPT tends to achieve the best cost-
3We deduplicated this set to limit computation, since dif- ferent sentences can share neighbors.
4We use a ï¬at search index with cosine similarity between embeddings with the FAISS (Johnson et al., 2019) library.
Pretraining Steps Docs. Storage F1 ROBERTA - - - 79.30.6 TAPT 50NN-TAPT 150NN-TAPT 500NN-TAPT Curated-TAPT DAPT DAPT + TAPT 500 0.2K 24K 1.1K 3.2K 66K 9.0K 185K 8.8K 180K 25M 12.5K 25M 12.6K 80KB 79.81.4 3MB 80.80.6 8MB 81.20.8 24MB 81.70.4 27MB 83.40.3 47GB 82.50.5 47GB 83.00.3
Table 9: Computational requirements for adapting to the RCT-500 task, comparing DAPT (§3) and the vari- ous TAPT modiï¬cations described in §4 and §5.
beneï¬t ratio in this comparison, one must also take into account the cost of curating large in-domain data. Automatic methods such as kNN-TAPT are much cheaper than DAPT.
# 6 Related Work
Transfer learning for domain adaptation Prior work has shown the beneï¬t of continued pretraining in domain (Alsentzer et al., 2019; Chakrabarty et al., 2019; Lee et al., 2019).5 We have contributed further investigation of the effects of a shift between a large, diverse pretraining corpus and target domain on task performance. Other studies (e.g., Huang et al., 2019) have trained language models (LMs) in their domain of interest, from scratch. In contrast, our work explores multiple domains, and is arguably more cost effective, since we continue pretraining an already powerful LM.
Task-adaptive pretraining Continued pretrain- ing of a LM on the unlabeled data of a given task (TAPT) has been show to be beneï¬cial for end- task performance (e.g. in Howard and Ruder, 2018; Phang et al., 2018; Sun et al., 2019). In the pres- ence of domain shift between train and test data distributions of the same task, domain-adaptive pre- training (DAPT) is sometimes used to describe what we term TAPT (Logeswaran et al., 2019; Han and Eisenstein, 2019). Related approaches include lan- guage modeling as an auxiliary objective to task classiï¬er ï¬ne-tuning (Chronopoulou et al., 2019; Radford et al., 2018) or consider simple syntactic structure of the input while adapting to task-speciï¬c
5In contrast, Peters et al. (2019) ï¬nd that the Jensen- Shannon divergence on term distributions between BERTâs pretraining corpora and each MULTINLI domain (Williams et al., 2018) does not predict its performance, though this might be an isolated ï¬nding speciï¬c to the MultiNLI dataset.
Training Data Domain Task Task (Unlabeled) (Unlabeled) (Labeled) ROBERTA v DAPT v v TAPT v v DAPT + TAPT v v v KkNN-TAPT (Subset) v v Curated-TAPT (Extra) v
Table 10: Summary of strategies for multi-phase pre- training explored in this paper.
data (Swayamdipta et al., 2019). We compare DAPT and TAPT as well as their interplay with respect to dataset size for continued pretraining (hence, ex- pense of more rounds of pretraining), relevance to a data sample of a given task, and transferability to other tasks and datasets. See Table 11 in Appendix §A for a summary of multi-phase pretraining strate- gies from related work.
Data selection for transfer learning Selecting data for transfer learning has been explored in NLP (Moore and Lewis, 2010; Ruder and Plank, 2017; Zhang et al., 2019, among others). Dai et al. (2019) focus on identifying the most suitable corpus to pretrain a LM from scratch, for a single task: NER, whereas we select relevant examples for various tasks in §5.2. Concurrent to our work, Aharoni and Goldberg (2020) propose data selection methods for NMT based on cosine similarity in embedding space, using DISTILBERT (Sanh et al., 2019) for efï¬ciency. In contrast, we use VAMPIRE, and focus on augmenting TAPT data for text classiï¬- cation tasks. Khandelwal et al. (2020) introduced kNN-LMs that allows easy domain adaptation of pretrained LMs by simply adding a datastore per domain and no further training; an alternative to integrate domain information in an LM. Our study of human-curated data §5.1 is related to focused crawling (Chakrabarti et al., 1999) for collection of suitable data, especially with LM reliance (Remus and Biemann, 2016).
What is a domain? Despite the popularity of domain adaptation techniques, most research and practice seems to use an intuitive understanding of domains. A small body of work has attempted to address this question (Lee, 2001; Eisenstein et al., 2014; van der Wees et al., 2015; Plank, 2016; Ruder et al., 2016, among others). For instance, Aharoni and Goldberg (2020) deï¬ne domains by implicit
clusters of sentence representations in pretrained LMs. Our results show that DAPT and TAPT com- plement each other, which suggests a spectra of domains deï¬ned around tasks at various levels of granularity (e.g., Amazon reviews for a speciï¬c product, all Amazon reviews, all reviews on the web, the web).
# 7 Conclusion
We investigate several variations for adapting pre- trained LMs to domains and tasks within those do- mains, summarized in Table 10. Our experiments reveal that even a model of hundreds of millions of parameters struggles to encode the complexity of a single textual domain, let alone all of language. We show that pretraining the model towards a spe- ciï¬c task or small corpus can provide signiï¬cant beneï¬ts. Our ï¬ndings suggest it may be valuable to complement work on ever-larger LMs with par- allel efforts to identify and use domain- and task- relevant corpora to specialize models. While our results demonstrate how these approaches can im- prove ROBERTA, a powerful LM, the approaches we studied are general enough to be applied to any pretrained LM. Our work points to numerous future directions, such as better data selection for TAPT, efï¬cient adaptation large pretrained language models to distant domains, and building reusable language models after adaptation.
# Acknowledgments
The authors thank Dallas Card, Mark Neumann, Nelson Liu, Eric Wallace, members of the Al- lenNLP team, and anonymous reviewers for help- ful feedback, and Arman Cohan for providing data. This research was supported in part by the Ofï¬ce of Naval Research under the MURI grant N00014-18- 1-2670. TPU machines for conducting experiments were provided by Google.
# References
Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In ACL. To appear.
Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clin- ical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop.
Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, and Michael Auli. 2019. Cloze-driven pretraining of self-attention networks. In EMNLP.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A pretrained language model for scientiï¬c text. In EMNLP.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.
Soumen Chakrabarti, Martin van den Berg, and Byron Dom. 1999. Focused Crawling: A New Approach to Topic-Speciï¬c Web Resource Discovery. Comput. Networks, 31:1623â1640.
Tuhin Chakrabarty, Christopher Hidey, and Kathy McKeown. 2019. IMHO ï¬ne-tuning improves claim detection. In NAACL.
Ciprian Chelba, Tomas Mikolov, Michael Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2014. One billion word benchmark for measuring progress in statistical language modeling. In INTERSPEECH.
and Alexandros Potamianos. 2019. An embarrassingly simple approach for transfer learning from pre- trained language models. In NAACL.
Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, and Dan Weld. 2019. Pretrained language models for sequential sentence classiï¬cation. In EMNLP.
Xiang Dai, Sarvnaz Karimi, Ben Hachey, and Cecile Paris. 2019. Using similarity measures to select pre- training data for NER. In NAACL.
Franck Dernoncourt and Ji Young Lee. 2017. Pubmed 200k RCT: a dataset for sequential sentence classiï¬- cation in medical abstracts. In IJCNLP.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL.
Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A Smith. 2019. Show your work: Improved reporting of experimental results. In EMNLP.
Jacob Eisenstein, Brendan Oâconnor, Noah A. Smith, and Eric P. Xing. 2014. Diffusion of lexical change in social media. PloS ONE.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Pe- ters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language pro- cessing platform. In NLP-OSS.
Aaron Gokaslan and Vanya Cohen. 2019. OpenWeb- Text Corpus.
Suchin Gururangan, Tam Dang, Dallas Card, and Noah A. Smith. 2019. Variational pretraining for semi-supervised text classiï¬cation. In ACL.
Xiaochuang Han and Jacob Eisenstein. 2019. Unsuper- vised domain adaptation of contextualized embed- dings for sequence labeling. In EMNLP.
Ruining He and Julian McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative ï¬ltering. In WWW.
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embed- dings, convolutional neural networks and incremen- tal parsing.
Jeremy Howard and Sebastian Ruder. 2018. Universal language model ï¬ne-tuning for text classiï¬cation. In ACL.
Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. ClinicalBERT: Modeling clinical notes and predicting hospital readmission. arXiv:1904.05342.
Jeff Johnson, Matthijs Douze, and Herv´e J´egou. 2019. IEEE Billion-scale similarity search with gpus. Transactions on Big Data.
David Jurgens, Srijan Kumar, Raine Hoover, Daniel A. McFarland, and Dan Jurafsky. 2018. Measuring the evolution of a scientiï¬c ï¬eld through citation frames. TACL.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through memorization: Nearest neighbor language models. In ICLR. To appear.
Johannes Kiesel, Maria Mestre, Rishabh Shukla, Em- manuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval- 2019 Task 4: Hyperpartisan news detection. In Se- mEval.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.
Martin Krallinger, Obdulia Rabal, Saber Ahmad Akhondi, Mart´ın P´erez P´erez, J´es´us L´opez Santa- mar´ıa, Gael P´erez Rodr´ıguez, Georgios Tsatsaro- nis, Ander Intxaurrondo, Jos´e Antonio Baso L´opez, Umesh Nandal, E. M. van Buel, A. Poorna Chan- drasekhar, Marleen Rodenburg, Astrid Lægreid, Marius A. Doornenbal, Julen Oyarz´abal, An´alia Loureno, and Alfonso Valencia. 2017. Overview of the biocreative vi chemical-protein interaction track. In Proceedings of the BioCreative VI Workshop.
Martin Krallinger, Obdulia Rabal, Florian Leitner, Miguel Vazquez, David Salgado, Zhiyong Lu, Robert Leaman, Yanan Lu, Donghong Ji, Daniel M Lowe, et al. 2015. The chemdner corpus of chemi- cals and drugs and its annotation principles. Journal of cheminformatics, 7(1):S2.
Jens Kringelum, Sonny Kim Kjærulff, Søren Brunak, Ole Lund, Tudor I. Oprea, and Olivier Taboureau. 2016. ChemProt-3.0: a global chemical biology dis- eases mapping. In Database.
David YW Lee. 2001. Genres, registers, text types, do- mains and styles: Clarifying the concepts and nav- igating a path through the BNC jungle. Language Learning & Technology.
Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: A pre-trained biomedical for biomedical text mining. Bioinformatics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692.
Kyle Lo, Lucy Lu Wang, Mark Neumann, Rodney Kin- ney, and Daniel S. Weld. 2020. S2ORC: The Se- mantic Scholar Open Research Corpus. In ACL. To appear.
Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. 2019. Zero-shot entity linking by reading entity de- scriptions. In ACL.
Yi Luan, Luheng He, Mari Ostendorf, and Hannaneh Hajishirzi. 2018. Multi-task identiï¬cation of enti- ties, relations, and coreference for scientiï¬c knowl- edge graph construction. In EMNLP.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL.
Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. 2015. Image-based recom- In ACM SI- mendations on styles and substitutes. GIR.
Arindam Mitra, Pratyay Banerjee, Kuntal Kumar Pal, Swaroop Ranjan Mishra, and Chitta Baral. 2020. Exploring ways to incorporate additional knowledge to improve natural language commonsense question answering. arXiv:1909.08855v3.
Robert C. Moore and William Lewis. 2010. Intelligent selection of language model training data. In ACL.
Sebastian Nagel. 2016. CC-NEWS.
Mark Neumann, Daniel King, Iz Beltagy, and Waleed Ammar. 2019. Scispacy: Fast and robust models for biomedical natural language processing. Proceed- ings of the 18th BioNLP Workshop and Shared Task.
Matthew E. Peters, Sebastian Ruder, and Noah A. To tune or not to tune? Adapt- In Smith. 2019. ing pretrained representations to diverse tasks. RepL4NLP.
Jason Phang, Thibault F´evry, and Samuel R. Bow- man. 2018. Sentence encoders on STILTs: Supple- mentary training on intermediate labeled-data tasks. arXiv:1811.01088.
Barbara Plank. 2016. What to do about non-standard (or non-canonical) language in NLP. In KONVENS.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training.
Colin Raffel, Noam Shazeer, Adam Kaleo Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Ex- ploring the limits of transfer learning with a uniï¬ed text-to-text transformer. arXiv:1910.10683.
Steffen Remus and Chris Biemann. 2016. Domain- Speciï¬c Corpus Expansion with Focused Webcrawl- ing. In LREC.
Sebastian Ruder, Parsa Ghaffari, and John G. Breslin. 2016. Towards a continuous modeling of natural lan- In Workshop on Uphill Battles in guage domains. Language Processing: Scaling Early Achievements to Robust Methods.
Sebastian Ruder and Barbara Plank. 2017. Learning to select data for transfer learning with Bayesian opti- mization. In EMNLP.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In EMC2 @ NeurIPS.
Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to ï¬ne-tune BERT for text classiï¬cation? In CCL.
Swabha Swayamdipta, Matthew Peters, Brendan Roof, Chris Dyer, and Noah A Smith. 2019. Shallow syn- tax in deep water. arXiv:1908.11047.
Tan Thongtan and Tanasanee Phienthrakul. 2019. Sen- timent classiï¬cation using document embeddings trained with cosine similarity. In ACL SRW.
Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. arXiv:1806.02847.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. In NeurIPS.
Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- form for natural language understanding. In Black- boxNLP @ EMNLP.
Marlies van der Wees, Arianna Bisazza, Wouter Weerkamp, and Christof Monz. 2015. Whatâs in a domain? Analyzing genre and topic differences in statistical machine translation. In ACL.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In NAACL.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Rmi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. HuggingFaceâs Trans- formers: State-of-the-art natural language process- ing. arXiv:1910.03771.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Googleâs neural machine translation system: Bridging the gap between human and machine translation.
Hu Xu, Bing Liu, Lei Shu, and Philip Yu. 2019a. BERT post-training for review reading comprehension and aspect-based sentiment analysis. In NAACL.
Hu Xu, Bing Liu, Lei Shu, and Philip S. Yu. 2019b. Review conversational reading comprehen- sion. arXiv:1902.00821v2.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS.
Dani Yogatama, Cyprien de Masson dâAutume, Jerome Connor, Tom´as Kocisk´y, Mike Chrzanowski, Ling- peng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In NeurIPS.
Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- siï¬cation. In NeurIPS.
Xuan Zhang, Pamela Shapiro, Gaurav Kumar, Paul Mc- Namee, Marine Carpuat, and Kevin Duh. 2019. Cur- riculum learning for domain adaptation in neural ma- chine translation. In NAACL.
Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV.
# Appendix Overview
In this supplementary material, we provide: (i) additional information for producing the results in the paper, and (ii) results that we could not ï¬t into the main body of the paper.
Appendix A. A tabular overview of related work described in Section §6, a description of the corpus used to train ROBERTA in Liu et al. (2019), and references to the state of the art on our tasks.
Appendix B. Details about the data preprocessing, training, and implementation of domain- and task- adaptive pretraining.
Appendix C. Development set results.
Appendix D. Examples of domain overlap.
Appendix E. The cross-domain masked LM loss and reproducibility challenges.
Appendix F. Illustration of our data selection method and examples of nearest neighbours.
# A Related Work
Table 11 shows which of the strategies for contin- ued pretraining have already been explored in the prior work from the Related Work (§6). As evident from the table, our work compares various strate- gies as well as their interplay using a pretrained language model trained on a much more heteroge- neous pretraining corpus.
# A.1 ROBERTAâs Pretraining Corpus
ROBERTA was trained on data from BOOKCOR- PUS (Zhu et al., 2015),6 WIKIPEDIA,7 a portion of the CCNEWS dataset (Nagel, 2016),8 OPENWEB- TEXT corpus of Web content extracted from URLs shared on Reddit (Gokaslan and Cohen, 2019),9 and a subset of CommonCrawl that it is said to resemble the âstory-likeâ style of WINOGRAD schemas (STORIES; Trinh and Le, 2018).10
# A.2 State of the Art
In this section, we specify the models achieving state of the art on our tasks. See the caption of
6https://github.com/soskek/bookcorpus 7https://github.com/google-research/ bert
# 8https://github.com/fhamborg/
# news-please
9https://github.com/jcpeterson/ openwebtext
10https://github.com/tensorflow/models/ tree/master/research/lm_commonsense
Table 5 for the reported performance of these mod- els. For ACL-ARC, that is SCIBERT (Beltagy et al., 2019), a BERT-base model for trained from scratch on scientiï¬c text. For CHEMPROT and SCI- ERC, that is S2ORC-BERT (Lo et al., 2020), a similar model to SCIBERT. For AGNEWS and IMDB, XLNet-large, a much larger model. For RCT, Cohan et al. (2019). For HYPERPARTISAN, LONGFORMER, a modiï¬ed Transformer language model for long documents (Beltagy et al., 2020). Thongtan and Phienthrakul (2019) report a higher number (97.42) on IMDB, but they train their word vectors on the test set. Our baseline establishes the ï¬rst benchmark for the HELPFULNESS dataset.
# B Experimental Setup
Preprocessing for DAPT The unlabeled corpus in each domain was pre-processed prior to lan- guage model training. Abstracts and body para- graphs from biomedical and computer science articles were used after sentence splitting using scispaCy (Neumann et al., 2019). We used sum- maries and full text of each news article, and the entire body of review from Amazon reviews. For both news and reviews, we perform sentence split- ting using spaCy (Honnibal and Montani, 2017).
Training details for DAPT We train ROBERTA on each domain for 12.5K steps. We focused on matching all the domain dataset sizes (see Table 1) such that each domain is exposed to the same amount of data as for 12.5K steps it is trained for. AMAZON reviews contain more documents, but each is shorter. We used an effective batch size of 2048 through gradient accumulation, as recom- mended in Liu et al. (2019). See Table 13 for more hyperparameter details.
Training details for TAPT We use the same pre- training hyperparameters as DAPT, but we artiï¬- cially augmented each dataset for TAPT by ran- domly masking different tokens across epochs, us- ing the masking probability of 0.15. Each dataset was trained for 100 epochs. For tasks with less than 5K examples, we used a batch size of 256 through gradient accumulation. See Table 13 for more hyperparameter details.
Optimization We used the Adam optimizer (Kingma and Ba, 2015), a linear learning rate sched- uler with 6% warm-up, a maximum learning rate of 0.0005. When we used a batch size of 256, we
DAPT Domains DAPT kNN- Curated- " - Tasks Model DAPT TAPT (if applicable) +TAPT TAPT TAPT biomedical & computer 8 classification This Paper science papers, news, hack ssmcans ROBERTA v v v v v reviews sxe . DISTILBERT + _ Aharoni and Goldberg (2020) - NMT Transformer NMT - - - similar - Alsentzer et al. (2019) clinical text NER, NL, : (BIO)BERT v - - - - de-identification Chakrabarty et al. (2019) opinionated claims from oy detection ULMFIT v v - - - Reddit Chronopoulou et al. (2019) - P Cassification ULMFITt - similar - - Han and Eisenstein (2019) - NER inhistorical 51 Mo, BERT - v - - - Howard and Ruder (2018) - 6 classification ULMFIT - v - - - tasks Khandelwal et al. (2020) - language modeling Transformer LM - - - similar - Lee et al. (2019) biomedical papers NER, QA,relation BERT v - - - - extraction zero-shot entity Logeswaran et al. (2019) - linking in Wikia BERT - v - - - Mitra et al. (2020) - commonsense QA BERT - v - - - ELMo, BERT, Phang et al. (2018) - GLUE tasks GPT - v - - - NLL, QA, Radford et al. (2018) - similarity, GPT - similar - - - classification Sun et al. (2019) sentiment, question, 7 classification BERT Vv Vv . . . topic tasks Swayamdipta et al. (2019) - NER, parsing, ELMo - similar - - classification RC, aspect extract., Xu et al. (2019a) reviews sentiment BERT v v v - - classification Xu et al. (20196) restaurant reviews, conversational RC BERT vo - : : laptop reviews
# Curated- TAPT
Table 11: Overview of prior work across strategies for continued pre-training summarized in Table 10. ULMFIT is pretrained on English Wikipedia; ULMFITâ on English tweets; ELMO on the 1BWORDBENCHMARK (newswire; Chelba et al., 2014); GPT on BOOKCORPUS; BERT on English Wikipedia and BOOKCORPUS. In comparison to these pretraining corpora, ROBERTAâs pretraining corpus is substantially more diverse (see Appendix §A.1).
used a maximum learning rate of 0.0001, as rec- ommended in Liu et al. (2019). We observe a high variance in performance between random seeds when ï¬ne-tuning ROBERTA to HYPERPARTISAN, because the dataset is extremely small. To produce ï¬nal results on this task, we discard and resample degenerate seeds. We display the full hyperparam- eter settings in Table 13.
iment was performed on a single v3-8 TPU from Google Cloud.13 For the text classiï¬cation tasks, we used AllenNLP (Gardner et al., 2018). Fol- lowing standard practice (Devlin et al., 2019) we pass the ï¬nal layer [CLS] token representation to a task-speciï¬c feedforward layer for prediction.
# C Development Set Results
Implementation Our LM implementation uses the HuggingFace transformers library (Wolf et al., 2019)11 and PyTorch XLA for TPU compatibility.12 Each adaptive pretraining exper-
Adhering to the standards suggested by Dodge et al. (2019) for replication, we report our development set results in Tables 15, 17, and 18.
11https://github.com/huggingface/ transformers
# 12https://github.com/pytorch/xla
13http://github.com/allenai/ tpu-pretrain
# D Analysis of Domain Overlap
In Table 20 we display additional examples that highlight the overlap between IMDB reviews and REALNEWS articles, relevant for analysis in §3.1.
# E Analysis of Cross-Domain Masked LM Loss
In Section §3.2, we provide ROBERTAâs masked LM loss before and after DAPT. We display cross- domain masked-LM loss in Table 12, where we evaluate masked LM loss on text samples in other domains after performing DAPT.
We observe that the cross-domain masked-LM loss mostly follows our intuition and insights from the paper, i.e. ROBERTAâs pretraining corpus and NEWS are closer, and BIOMED to CS (relative to other domains). However, our analysis in §3.1 il- lustrates that REVIEWS and NEWS also have some similarities. This is supported with the loss of ROBERTA that is adapted to NEWS, calculated on a sample of REVIEWS. However, ROBERTA that is adapted to REVIEWS results in the highest loss for a NEWS sample. This is the case for all domains. One of the properties that distinguishes REVIEWS from all other domains is that its doc- uments are signiï¬cantly shorter. In general, we ï¬nd that cross-DAPT masked-LM loss can in some cases be a noisy predictor of domain similarity.
# F k-Nearest Neighbors Data Selection
In Table 21, we display nearest neighbor docu- ments in the BIOMED domain identiï¬ed by our selection method, on the RCT dataset.
Data Sample Unseen During DAPT PT BIOMED CS NEWS REVIEWS DAPT ROBERTA BIOMED CS NEWS REVIEWS 1.19 1.63 1.82 1.33 2.07 1.32 0.99 1.43 1.50 2.23 1.63 1.63 1.34 1.82 2.44 1.08 1.69 1.92 1.16 2.27 2.10 2.59 2.78 2.16 1.93
Table 12: ROBERTAâs (row 1) and domain-adapted ROBERTAâs (rows 2â5) masked LM loss on randomly sam- pled held-out documents from each domain (lower implies a better ï¬t). PT denotes a sample from sources similar to ROBERTAâs pretraining corpus. The lowest masked LM for each domain sample is boldfaced.
# Computing Infrastructure
# Google Cloud v3-8 TPU
Model implementations https://github.com/allenai/tpu_pretrain
Hyperparameter Assignment number of steps 100 epochs (TAPT) or 12.5K steps (DAPT) batch size 256 or 2058 maximum learning rate 0.0001 or 0.0005 learning rate optimizer Adam Adam epsilon 1e-6 Adam beta weights 0.9, 0.98 learning rate scheduler None or warmup linear Weight decay 0.01 Warmup proportion 0.06 learning rate decay linear
Table 13: Hyperparameters for domain- and task- adaptive pretraining.
# Computing Infrastructure
# Quadro RTX 8000 GPU
Model implementation https://github.com/allenai/dont-stop-pretraining
Hyperparameter Assignment number of epochs 3 or 10 patience 3 batch size 16 learning rate 2e-5 dropout 0.1 feedforward layer 1 feedforward nonlinearity tanh classiï¬cation layer 1
Table 14: Hyperparameters for ROBERTA text classiï¬er.
Additional Pretraining Phases Domain Task ROBERTA DAPT TAPT DAPT + TAPT BIOMED CHEMPROT â RCT 83.21.4 88.10.05 84.10.5 88.50.1 83.00.6 88.30.1 84.10.5 88.50.1 CS ACL-ARC SCIERC 71.32.8 83.81.1 73.21.5 88.41.7 73.23.6 85.90.8 78.62.9 88.01.3 NEWS HYPERPARTISAN â AGNEWS 84.01.5 94.30.1 79.13.5 94.30.1 82.73.3 94.70.1 80.82.3 94.90.1 REVIEWS â HELPFULNESS â IMDB 65.53.4 94.80.1 66.51.4 95.30.1 69.22.4 95.40.1 69.42.1 95.70.2
Table 15: Results on different phases of adaptive pretraining compared to the baseline ROBERTA (col. 1). Our approaches are DAPT (col. 2, §3), TAPT (col. 3, §4), and a combination of both (col. 4). Reported results are devel- opment macro-F1, except for CHEMPROT and RCT, for which we report micro-F1, following Beltagy et al. (2019). We report averages across ï¬ve random seeds, with standard deviations as subscripts. â indicates high-resource set- tings. Best task performance is boldfaced. State-of-the-art results we can compare to: CHEMPROT (84.6), RCT (92.9), ACL-ARC (71.0), SCIERC (81.8), HYPERPARTISAN (94.8), AGNEWS (95.5), IMDB (96.2); references in §A.2.
Dom. Task ROB. DAPT ¬DAPT BM CHEMPROT â RCT 83.21.4 88.10.0 84.10.5 88.50.1 80.90.5 87.90.1 CS ACL-ARC SCIERC 71.32.8 83.81.1 73.21.5 88.41.7 68.15.4 83.90.9 NEWS HYP. â AGNEWS 84.01.5 94.30.1 79.13.5 94.30.1 71.64.6 94.00.1 REV. â HELPFUL. â IMDB 65.53.4 94.80.1 66.51.4 95.30.1 65.53.0 93.80.2
Table 16: Development comparison of ROBERTA (ROBA.) and DAPT to adaptation to an irrelevant domain (¬ DAPT). See §3.3 for our choice of irrelevant domains. Reported results follow the same format as Table 5.
BIOMED RCT CHEMPROT CS ACL-ARC SCIERC TAPT Transfer-TAPT 88.30.1 88.00.1 (â 0.3) 83.00.6 81.10.5 (â 1.9) TAPT Transfer-TAPT 73.23.6 74.04.5 (â 1.2) 85.90.8 85.51.1 (â 0.4) NEWS HYPERPARTISAN AGNEWS AMAZON reviews HELPFULNESS IMDB TAPT Transfer-TAPT 82.73.3 77.63.6 (â 5.1) 94.70.1 94.40.1 (â 0.4) TAPT Transfer-TAPT 69.22.4 65.42.7 (â 3.8) 95.40.1 94.90.1 (â 0.5)
Table 17: Development results for TAPT transferability.
Pretraining BIOMED RCT-500 HYPERPARTISAN NEWS REVIEWS â IMDB TAPT DAPT + TAPT 80.51.3 83.90.3 82.73.3 80.82.3 95.40.1 95.70.2 Curated-TAPT DAPT + Curated-TAPT 84.40.3 84.50.3 84.91.9 83.13.7 95.80.1 96.00.1
Table 18: Mean development set macro-F1 (for HYPERPARTISAN and IMDB) and micro-F1 (for RCT-500), with Curated-TAPT across ï¬ve random seeds, with standard deviations as subscripts. â indicates high-resource settings.
Pretraining BIOMED CS CHEMPROT RCT-500 ACL-ARC ROBERTA TAPT 83.21.4 83.00.6 80.30.5 80.51.3 71.32.8 73.23.6 RAND-TAPT 50NN-TAPT 150NN-TAPT 500NN-TAPT 83.30.5 83.30.8 83.30.9 84.50.4 81.60.6 81.70.5 81.90.8 82.60.4 78.74.0 70.13.5 78.52.2 77.42.3 DAPT 84.10.5 83.50.8 73.21.5
Table 19: Mean development set macro-F1 (for HYP. and IMDB) and micro-F1 (for RCT), across ï¬ve random seeds, with standard deviations as subscripts, comparing RAND-TAPT (with 50 candidates) and kNN-TAPT selec- tion. Neighbors of the task data are selected from the domain data.
# IMDB review
# REALNEWS article
Spooks is enjoyable trash, featuring some well directed sequences, ridiculous plots and dialogue, and some third rate acting. Many have described this is a UK version of 24, and one can see the similarities. The American version shares the weak silly plots, but the execution is so much slicker, sexier and I suspect, expensive. Some people describe weak comedy as gentle comedy. This is gentle spy story hour, the exact opposite of anything created by John Le Carre. Give me Smiley any day.
[...] Remember poor Helen Flynn from Spooks? In 2002, the headlong BBC spy caper was in such a hurry to establish the high-wire stakes of its morally compromised world that Lisa Faulkners keen-as-mustard MI5 rookie turned out to be a lot more expendable than her prominent billing suggested. [...] Functioning as both a shocking twist and rather callous statement that No-One Is Safe, it gave the slick drama an instant patina of edginess while generating a record-breaking number of complaints. [...]
The Sopranos is perhaps the most mind-opening series you could possibly ever want to watch. Itâs smart, itâs quirky, itâs funny - and it carries the maï¬a genre so well that most people canât resist watching. The best aspect of this show is the overwhelming realism of the characters, set in the subterranean world of the New York crime families. For most of the time, you really donât know whether the wise guys will stab someone in the back, or buy them lunch. Further adding to the realistic approach of the characters in this show is the depth of their personalities - These are dangerous men, most of them murderers, but by God if you donât love them too. Iâve laughed at their wisecracks, been torn when theyâve made err in judgement, and felt scared at the sheer ruthlessness of a serious criminal. [...]
The drumbeat regarding the Breaking Bad ï¬nale has led to the inevitable speculation on whether the ï¬nal chapter in this serialized gem will live up to the hype or disappoint (thank you, Dexter, for setting that bar pretty low), with debate, second-guessing and graduate-thesis-length analysis sure to follow. The Most Memorable TV Series Finales of All-Time [...] No ending in recent years has been more divisive than The Sopranos for some, a brilliant ï¬ash (literally, in a way) of genius; for others (including yours truly), a too-cute copout, cryptically leaving its characters in perpetual limbo. The precedent to that would be St. Elsewhere, which irked many with its provocative, surreal notion that the whole series was, in fact, conjured in the mind of an autistic child. [...]
The Wicker Man, starring Nicolas Cage, is by no means a good movie, but I canât really say itâs one I regret watching. I could go on and on about the negative aspects of the movie, like the terrible acting and the lengthy scenes where Cage is looking for the girl, has a hallucination, followed by another hallucination, followed by a dream sequence- with a hallucination, etc., but itâs just not worth dwelling on when it comes to a movie like this. Instead, hereâs ï¬ve reasons why you SHOULD watch The Wicker Man, even though itâs bad: 5. Itâs hard to deny that it has some genuinely creepy ideas to it, the only problem is in its cheesy, unintentionally funny execution. If nothing else, this is a movie that may inspire you to see the original 1973 ï¬lm, or even read the short story on which it is based. 4. For a cheesy horror/thriller, it is really aesthetically pleasing. [...] NOTE: The Unrated version of the movie is the best to watch, and itâs better to watch the Theatrical version just for its little added on epilogue, which features a cameo from James Franco.
[...] What did you ultimately feel about âThe Wicker Manâ movie when all was said and done? [...] Im a fan of the original and Im glad that I made the movie because they dont make movies like that anymore and probably the result of what âWicker Manâ did is the reason why they dont make movies like that anymore. Again, its kind of that 70s sensibility, but Im trying to do things that are outside the box. Sometimes that means itll work and other times it wont. Again though Im going to try and learn from anything that I do. I think that it was a great cast, and Neil La Bute is one of the easiest directors that Ive ever worked with. He really loves actors and he really gives you a relaxed feeling on the set, that you can achieve whatever it is that youre trying to put together, but at the end of the day the frustration that I had with The Wicker Man, which I think has been remedied on the DVD because I believe the DVD has the directors original cut, is that they cut the horror out of the horror ï¬lm to try and get a PG-13 rating. I mean, I dont know how to stop something like that. So Im not happy with the way that the picture ended, but Im happy with the spirit with which it was made. [...]
Dr. Seuss would sure be mad right now if he was alive. Cat in the Hat proves to show how movie productions can take a classic story and turn it into a mindless pile of goop. We have Mike Myers as the infamous Cat in the Hat, big mistake! Myers proves he canât act in this ï¬lm. He acts like a prissy show girl with a thousand tricks up his sleeve. The kids in this movie are all right, somewhere in between the lines of dull and annoying. The story is just like the original with a couple of tweaks and like most movies based on other stories, never tweak with the original story! Bringing in the evil neighbor Quin was a bad idea. He is a stupid villain that would never get anywhere in life. [...]
The Cat in the Hat, [...] Based on the book by Dr. Seuss [...] From the moment his tall, red-and-white-striped hat appears at their door, Sally and her brother know that the Cat in the Hat is the most mischievous cat they will ever meet. Suddenly the rainy afternoon is transformed by the Cat and his antics. Will their house ever be the same? Can the kids clean up before mom comes home? With some tricks (and a ï¬sh) and Thing Two and Thing One, with the Cat in The Hat, the funâs never done!Dr. Seuss is known worldwide as the imaginative master of childrenâs literature. His books include a wonderful blend of invented and actual words, and his rhymes have helped many children and adults learn and better their understanding of the English language. [...]
Table 20: Additional examples that highlight the overlap between IMDB reviews and REALNEWS articles.
Source During median follow-up of 905 days ( IQR 773-1050 ) , 49 people died and 987 unplanned admissions were recorded ( totalling 5530 days in hospital ) . Neighbor 0 Neighbor 1 Neighbor 2 Neighbor 3 Neighbor 4 Of this group, 26% died after discharge from hospital, and the median time to death was 11 days (interquartile range, 4.0-15.0 days) after discharge. The median hospital stay was 17 days (range 8-26 days), and all the patients were discharged within 1 month. The median hospital stay was 17 days (range 8-26 days). The median time between discharge and death was 25 days (mean, 59.1 days) and no patient was alive after 193 days. The length of hospital stay after colostomy formation ranged from 3 days to 14 days with a median duration of 6 days (+IQR of 4 to 8 days). Source Randomized , controlled , parallel clinical trial . Neighbor 0 Neighbor 1 Neighbor 2 Neighbor 3 Neighbor 4 Design: Unblinded, randomised clinical controlled trial. These studies and others led to the phase III randomized trial RTOG 0617/NCCTG 0628/ CALGB 30609. -Deï¬nitive randomized controlled clinical trial (RCT): RCT 1 4 randomized controlled trial. randomized controlled trial [ Fig. 3(A)]. Source Forty primary molar teeth in 40 healthy children aged 5-9 years were treated by direct pulp capping . Neighbor 0 In our study, we speciï¬cally determined the usefulness of the Er:YAG laser in caries removal and cavity preparation of primary and young permanent teeth in children ages 4 to 1 8 years. Neighbor 1 Males watched more TV than females, although it was only in primary school-aged children and on weekdays. Neighbor 2 Neighbor 3 Neighbor 4 Assent was obtained from children and adolescents aged 7-17 years. Cardiopulmonary resuscitation was not applied to children aged ¡5 years (Table 2). It measures HRQoL in children and adolescents aged 2 to 25 years.
Table 21: 5 nearest neighbors of sentences from the RCT dataset (Source) in the BIOMED domain (Neighbors 0â4). | {
"id": "1904.05342"
} |
2004.10340 | The iWildCam 2020 Competition Dataset | Camera traps enable the automatic collection of large quantities of image
data. Biologists all over the world use camera traps to monitor animal
populations. We have recently been making strides towards automatic species
classification in camera trap images. However, as we try to expand the
geographic scope of these models we are faced with an interesting question: how
do we train models that perform well on new (unseen during training) camera
trap locations? Can we leverage data from other modalities, such as citizen
science data and remote sensing data? In order to tackle this problem, we have
prepared a challenge where the training data and test data are from different
cameras spread across the globe. For each camera, we provide a series of remote
sensing imagery that is tied to the location of the camera. We also provide
citizen science imagery from the set of species seen in our data. The challenge
is to correctly classify species in the test camera traps. | http://arxiv.org/pdf/2004.10340 | Sara Beery, Elijah Cole, Arvi Gjoka | cs.CV | Fine-Grained Visual Categorization Workshop at CVPR 2020 | null | cs.CV | 20200421 | 20200421 | 0 2 0 2
r p A 1 2 ] V C . s c [
1 v 0 4 3 0 1 . 4 0 0 2 : v i X r a
# The iWildCam 2020 Competition Dataset
Sara Beery*â , Elijah Cole*, Arvi Gjokaâ California Institute of Technology* Googleâ
# Abstract
Camera traps enable the automatic collection of large quantities of image data. Biologists all over the world use camera traps to monitor animal populations. We have re- cently been making strides towards automatic species clas- siï¬cation in camera trap images. However, as we try to expand the geographic scope of these models we are faced with an interesting question: how do we train models that perform well on new (unseen during training) camera trap locations? Can we leverage data from other modalities, such as citizen science data and remote sensing data? In order to tackle this problem, we have prepared a challenge where the training data and test data are from different cam- eras spread across the globe. For each camera, we provide a series of remote sensing imagery that is tied to the loca- tion of the camera. We also provide citizen science imagery from the set of species seen in our data. The challenge is to correctly classify species in the test camera traps.
Figure 1. The iWildCam 2020 dataset. This yearâs dataset in- cludes data from multiple modalities: camera traps, citizen scien- tists, and remote sensing. Here we can see an example of data from a camera trap paired with a visualization of the infrared channel of the paired remote sensing imagery.
# 1. Introduction
In order to understand the effects of pollution, exploita- tion, urbanization, global warming, and conservation pol- icy on our planetâs biodiversity, we need access to accurate, consistent biodiversity measurements. Researchers often use camera traps â static, motion-triggered cameras placed in the wild â to study changes in species diversity, popu- lation density, and behavioral patterns. These cameras can take thousands of images per day, and the time it takes for human experts to identify species in the data is a major bot- tleneck. By automating this process, we can provide an im- portant tool for scalable biodiversity assessment.
Camera trap images are taken automatically based on a triggered sensor, so there is no guarantee that the animal will be centered, focused, well-lit, or at an appropriate scale (they can be either very close or very far from the camera, each causing its own problems). See Fig. 2 for examples of these challenges. Further, up to 70% of the photos at any given location may be triggered by something other than an animal, such as wind in the trees. Automating camera trap labeling is not a new challenge for the computer vision
community [3, 5, 7â9, 12, 16, 17, 17â20, 23â26]. However, most of the proposed solutions have used the same camera locations for both training and testing the performance of an automated system. If we wish to build systems that are trained to detect and classify animals and then deployed to new locations without further training, we must measure the ability of machine learning and computer vision to general- ize to new environments [6, 20]. This is central to the 2018 [6], 2019 [4], and 2020 iWildCam challenges.
The 2020 iWildCam challenge includes a new compo- nent: the use of multiple data modalities (see Fig. 1). An ecosystem can be monitored in a variety of ways (e.g. cam- era traps, citizen scientists, remote sensing) each of which has its own strengths and limitations. To facilitate the ex- ploration of techniques for combining these complemen- tary data streams, we provide a time series of remote sens- ing imagery for each camera trap location as well as cu- rated subsets of the iNaturalist competition datasets match- ing the species seen in the camera trap data. It has been shown that species classiï¬cation performance can be dra- matically improved by using information beyond the image itself [8, 10, 15] so we expect that participants will ï¬nd cre- ative and effective uses for this data.
1
(1) Illumination (2) Blur (3) ROI Size
(5) Camouï¬age
# (6) Perspective
# (4) Occlusion
Figure 2. Common data challenges in camera trap images. (1) Illumination: Animals are not always well-lit. (2) Motion blur: common with poor illumination at night. (3) Size of the region of interest (ROI): Animals can be small or far from the camera. (4) Occlusion: e.g. by bushes or rocks. (5) Camouï¬age: decreases saliency in animalsâ natural habitat. (6) Perspective: Animals can be close to the camera, resulting in partial views of the body.
# 2. Data Preparation
The dataset consists of three primary components: (i) camera trap images, (ii) citizen science images, and (iii) multispectral imagery for each camera location.
# 2.1. Camera Trap Data
The camera trap data (along with expert annotations) is provided by the Wildlife Conservation Society (WCS) [2]. We split the data by camera location, so no images from the test cameras are included in the training set to avoid overï¬tting to one set of backgrounds [7].
The training set contains 217, 959 images from 441 loca- tions, and the test set contains 62, 894 images from 111 lo- cations. These 552 locations are spread across 12 countries in different parts of the world. Each image is associated with a location ID so that images from the same location can be linked. As is typical for camera traps, approximately 50% of the total number of images are empty (this varies per location).
There are 276 species represented in the camera trap im- ages. The class distribution is long-tailed, as shown in Fig. 3. Since we have split the data by location, some classes ap- pear only in the training set. Any images with classes that appeared only in the test set were removed.
# 2.2. iNaturalist Data
iNaturalist is an online community where citizen scien- tists post photos of plants and animals and collaboratively identify the species [1]. To facilitate the use of iNaturalist data, we provide a mapping from our classes into the iNatu-
0 50 100 150 Sorted Class Index 200
Figure 3. Camera trap class distribution. Per-class distribution of the camera trap data, which exhibits a long tail. We show exam- ples of both a common class (the African giant pouched rat) and a rare class (the Indonesian mountain weasel). Within the plot we show images of each species, centered and focused, from iNatural- ist. On the right we show images of each species within the frame of a camera trap, from WCS.
ralist taxonomy.1 We also provide the subsets of the iNatu- ralist 2017-2019 competition datasets [22] that correspond to species seen in the camera trap data. This data provides 13, 051 additional images for training, covering 75 classes. Though small relative to the camera trap data, the iNat- uralist data has some unique characteristics. First, the class distribution is completely different (though it is still long tailed). Second, iNaturalist images are typically higher quality than the corresponding camera trap images, provid- ing valuable examples for hard classes. See Fig. 4 for a comparison between iNaturalist images and camera trap im- ages.
# 2.3. Remote Sensing Data
For each camera location we provide multispectral im- agery collected by the Landsat 8 satellite [21]. All data comes from the the Landsat 8 Tier 1 Surface Reï¬ectance dataset [13] provided by Google Earth Engine [14]. This data has been been atmospherically corrected and meets certain radiometric and geometric quality standards.
Data collection. The precise location of a camera trap is generally considered to be sensitive information, so we ï¬rst obfuscate the coordinates of the camera. For each time point when imagery is available (the Landsat 8 satellite im- ages the Earth once every 16 days), we extract a square patch centered at the obfuscated coordinates consisting of 9 bands of multispectral imagery and 2 bands of per-pixel metadata. Each patch covers an area of 6km à 6km. Since one Landsat 8 pixel covers an area of 30m2, each patch is 200 à 200 à 11 pixels. Note that the bit depth of Landsat 8 data is 16.
The multispectral imagery consists of 9 different bands,
1Note that for the purposes of the competition, competitors may only use iNaturalist data from the iNaturalist competition datasets.
(1) Class ID 101
(2) Class ID 563
(3) Class ID 154
Figure 4. Camera trap data (left) vs iNaturalist data (right). (1) Animal is large, so camera trap image does not fully capture it. (2) Animal is small, so it makes up a small part of the camera trap images. (3) Quality is equivalent, although iNaturalist images have more camera pose and animal pose variation.
ordered by descending frequency / ascending wavelength. Band 1 is ultra-blue. Bands 2, 3, and 4 are traditional blue, green, and red. Band 5-9 are infrared. Note that bands 8 and 9 are from a different sensor than bands 1-7 and have been upsampled from 100m2/pixel to 30m2/pixel. Refer to [13] or [21] for more details.
Each patch of imagery has two corresponding quality as- sessment (QA) bands which carry per-pixel metadata. The ï¬rst QA band (pixelqa) contains automatically generated labels for classes like clear, water, cloud, or cloud shadow which can help to interpret the pixel values. The second QA band (radsatqa) labels the pixels in each
band for which the sensor was saturated. Cloud cover and saturated pixels are common issues in remote sensing data, and the QA bands may provide some assistance. However, they are automatically generated and cannot be trusted com- pletely. See [13] for more details.
# 3. Baseline Results
We trained a basic image classiï¬er as a baseline for com- parison. The model is a randomly initialized Inception-v3 with input size 299 à 299, which was trained using only camera trap images. During training, images were ran- domly cropped and perturbed in brightness, saturation, hue, and contrast. We used the rmsprop optimizer with an ini- tial learning rate of 0.0045 and a decay factor of 0.94.
Let C be the number of classes. We trained using a class balanced loss from [11], given by
1 L'(p,y) = Ay)
where p â RC is the vector of predicted class probabilities (after softmax), y â {1, . . . , C} is the ground truth class, L is the categorical cross-entropy loss, ny is the number of samples for class y, and β is a hyperparameter which we set to 0.9.
This baseline achieved a macro-averaged F1 score of 0.62 and an accuracy of 62% on the iWildCam 2020 test set.
# 4. Conclusion
The iWildCam 2020 dataset provides a test bed for studying generalization to new locations at a larger geo- graphic scale than previous iWildCam competitions [4, 6]. it facilitates exploration of multimodal ap- In addition, proaches to camera trap image classiï¬cation and pairs re- mote sensing imagery with camera trap imagery for the ï¬rst time.
In subsequent years, we plan to extend the iWildCam challenge by adding additional data streams and tasks, such as detection and segmentation. We hope to use the knowl- edge we gain throughout these challenges to facilitate the development of systems that can accurately provide real- time species ID and counts in camera trap images at a global scale. Any forward progress made will have a direct impact on the scalability of biodiversity research geographically, temporally, and taxonomically.
# 5. Acknowledgements
We would like to thank Dan Morris and Siyu Yang (Mi- crosoft AI for Earth) for their help curating the dataset, pro- viding bounding boxes from the MegaDetector, and hosting the data on Azure. We also thank the Wildlife Conservation
Society for providing the camera trap data and annotations. We thank Kaggle for supporting the iWildCam competition for the past three years. Thanks also to the FGVC Work- shop, Visipedia, and our advisor Pietro Perona for contin- ued support. This work was supported in part by NSF GRFP Grant No. 1745301. The views are those of the authors and do not necessarily reï¬ect the views of the NSF.
# References
# [1] iNaturalist. https://www.inaturalist.org/. 2 [2] Wildlife Dataset. wcscameratraps. 2
# Traps
[3] Sara Beery, Yang Liu, Dan Morris, Jim Piavis, Ashish Kapoor, Neel Joshi, Markus Meister, and Pietro Perona. Syn- thetic examples improve generalization for rare classes. In The IEEE Winter Conference on Applications of Computer Vision, pages 863â873, 2020. 1
[4] Sara Beery, Dan Morris, and Pietro Perona. The iwildcam 2019 challenge dataset. ArXiv, abs/1907.07617, 2019. 1, 3
[5] Sara Beery, Dan Morris, and Siyu Yang. pipeline for camera trap image review. arXiv:1907.06772, 2019. 1
[6] Sara Beery, Grant van Horn, Oisin MacAodha, and Pietro Perona. The iwildcam 2018 challenge dataset. arXiv preprint arXiv:1904.05986, 2019. 1, 3
[7] Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In Proceedings of the European Confer- ence on Computer Vision (ECCV), pages 456â473, 2018. 1, 2
[8] Sara Beery, Guanhang Wu, Vivek Rathod, Ronny Votel, and Jonathan Huang. Context r-cnn: Long term tempo- ral context for per-camera object detection. arXiv preprint arXiv:1912.03538, 2020. 1
[9] Guobin Chen, Tony X Han, Zhihai He, Roland Kays, and Tavis Forrester. Deep convolutional neural network based In Image species recognition for wild animal monitoring. Processing (ICIP), 2014 IEEE International Conference on, pages 858â862. IEEE, 2014. 1
[10] Grace Chu, Brian Potetz, Weijun Wang, Andrew Howard, Yang Song, Fernando Brucher, Thomas Leung, and Hartwig Adam. Geo-aware networks for ï¬ne-grained recognition. ICCV Workshop on Computer Vision for Wildlife Conserva- tion, 2019. 1
[11] Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge J. Belongie. Class-balanced loss based on effective number of samples. CoRR, abs/1901.05555, 2019. 3
[12] Jhony-Heriberto Giraldo-Zuluaga, Augusto Salazar, Alexan- der Gomez, and Ang´elica Diaz-Pulido. Camera-trap images segmentation using multi-layer robust principal component analysis. The Visual Computer, pages 1â13, 2017. 1 [13] Google Earth Engine. USGS Landsat 8 Surface Reï¬ectance https://developers.google.com/
Tier 1. earth-engine/datasets/catalog/LANDSAT_ LC08_C01_T1_SR. 2, 3
[14] Noel Gorelick, Matt Hancher, Mike Dixon, Simon Ilyushchenko, David Thau, and Rebecca Moore. Google earth engine: Planetary-scale geospatial analysis for every-
one. Remote Sensing of Environment, 2017. 2
[15] Oisin Mac Aodha, Elijah Cole, and Pietro Perona. Presence- only geographical priors for ï¬ne-grained image classiï¬ca- tion. ICCV, 2019. 1
[16] Agnieszka Miguel, Sara Beery, Erica Flores, Loren Klemes- rud, and Rana Bayrakcismith. Finding areas of motion in camera trap images. In Image Processing (ICIP), 2016 IEEE International Conference on, pages 1334â1338. IEEE, 2016. 1
[17] Mohammad Sadegh Norouzzadeh, Dan Morris, Sara Beery, Neel Joshi, Nebojsa Jojic, and Jeff Clune. A deep active learning system for species identiï¬cation and counting in camera trap images. arXiv preprint arXiv:1910.09716, 2019. 1
[18] Mohammed Sadegh Norouzzadeh, Anh Nguyen, Margaret Kosmala, Ali Swanson, Craig Packer, and Jeff Clune. Au- tomatically identifying wild animals in camera trap images with deep learning. arXiv preprint arXiv:1703.05830, 2017. [19] Stefan Schneider, Graham W Taylor, and Stefan Kremer. Deep learning object detection methods for ecological cam- In 2018 15th Conference on Computer and era trap data. Robot Vision (CRV), pages 321â328. IEEE, 2018.
[20] Michael A Tabak, Mohammad S Norouzzadeh, David W Wolfson, Erica J Newton, Raoul K Boughton, Jacob S Ivan, Eric A Odell, Eric S Newkirk, Reesa Y Conrey, Jennifer L Stenglein, et al. Improving the accessibility and transfer- ability of machine learning algorithms for identiï¬cation of animals in camera trap images: Mlwic2. bioRxiv, 2020. 1
[21] U.S. Geological Survey. Landsat 8 https://www.usgs.gov/land-resources/ nli/landsat/landsat-8. 2, 3
# Imagery.
[22] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The inaturalist species classiï¬cation and de- tection dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8769â8778, 2018. 2
[23] Alexander Gomez Villa, Augusto Salazar, and Francisco Vargas. Towards automatic wild animal monitoring: Iden- tiï¬cation of animal species in camera-trap images using very deep convolutional neural networks. Ecological Informatics, 41:24â32, 2017. 1
[24] Michael J Wilber, Walter J Scheirer, Phil Leitner, Brian Heï¬in, James Zott, Daniel Reinke, David K Delaney, and Terrance E Boult. Animal recognition in the mojave desert: Vision tools for ï¬eld biologists. In Applications of Computer Vision (WACV), 2013 IEEE Workshop on, pages 206â213. IEEE, 2013.
[25] Hayder Yousif, Jianhe Yuan, Roland Kays, and Zhihai He. Fast human-animal detection from highly cluttered camera- trap images using joint background modeling and deep learn- In Circuits and Systems (ISCAS), 2017 ing classiï¬cation. IEEE International Symposium on, pages 1â4. IEEE, 2017. [26] Zhi Zhang, Zhihai He, Guitao Cao, and Wenming Cao. Ani- mal detection from highly cluttered natural scenes using spa- tiotemporal object region proposals and patch veriï¬cation. IEEE Transactions on Multimedia, 18(10):2079â2092, 2016. 1 | {
"id": "1907.06772"
} |
2004.10151 | Experience Grounds Language | Language understanding research is held back by a failure to relate language
to the physical world it describes and to the social interactions it
facilitates. Despite the incredible effectiveness of language processing models
to tackle tasks after being trained on text alone, successful linguistic
communication relies on a shared experience of the world. It is this shared
experience that makes utterances meaningful.
Natural language processing is a diverse field, and progress throughout its
development has come from new representational theories, modeling techniques,
data collection paradigms, and tasks. We posit that the present success of
representation learning approaches trained on large, text-only corpora requires
the parallel tradition of research on the broader physical and social context
of language to address the deeper questions of communication. | http://arxiv.org/pdf/2004.10151 | Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio, Joyce Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr Nisnevich, Nicolas Pinto, Joseph Turian | cs.CL, cs.AI, cs.LG | Empirical Methods in Natural Language Processing (EMNLP), 2020 | null | cs.CL | 20200421 | 20201102 | 0 2 0 2
v o N 2 ] L C . s c [
3 v 1 5 1 0 1 . 4 0 0 2 : v i X r a
# Experience Grounds Language
# Yonatan Bisk*
Ari Holtzman*
# Jesse Thomason*
Joyce Chai Jonathan May Aleksandr Nisnevich Nicolas Pinto Joseph Turian
# Abstract
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates. Despite the incredible effective- ness of language processing models to tackle tasks after being trained on text alone, success- ful linguistic communication relies on a shared experience of the world. It is this shared expe- rience that makes utterances meaningful.
Natural language processing is a diverse ï¬eld, and progress throughout its development has come from new representational theories, mod- eling techniques, data collection paradigms, and tasks. We posit that the present success of representation learning approaches trained on large, text-only corpora requires the paral- lel tradition of research on the broader physi- cal and social context of language to address the deeper questions of communication.
Improvements in hardware and data collection have galvanized progress in NLP across many benchmark tasks. Impressive performance has been achieved in language modeling (Radford et al., 2019; Zellers et al., 2019b; Keskar et al., 2019) and span-selection question answering (Devlin et al., 2019; Yang et al., 2019b; Lan et al., 2020) through massive data and massive models. With models exceeding human performance on such tasks, now is an excellent time to reï¬ect on a key question:
# Where is NLP going?
In this paper, we consider how the data and world a language learner is exposed to deï¬ne and con- strains the scope of that learnerâs semantics. Mean- ing does not arise from the statistical distribution of words, but from their use by people to communi- cate. Many of the assumptions and understandings on which communication relies lie outside of text. We must consider what is missing from models
Meaning is not a unique property of language, but a general characteristic of human activity ... We cannot say that each morpheme or word has a single or central meaning, or even that it has a continuous or coherent range of meanings ... there are two separate uses and meanings of language â the concrete ... and the abstract.
Zellig S. Harris (Distributional Structure 1954)
trained solely on text corpora, even when those cor- pora are meticulously annotated or Internet-scale. You canât learn language from the radio. Nearly every NLP course will at some point make this claim. The futility of learning language from lin- guistic signal alone is intuitive, and mirrors the belief that humans lean deeply on non-linguistic knowledge (Chomsky, 1965, 1980). However, as a ï¬eld we attempt this futility: trying to learn lan- guage from the Internet, which stands in as the modern radio to deliver limitless language. In this piece, we argue that the need for language to attach to âextralinguistic events" (Ervin-Tripp, 1973) and the requirement for social context (Baldwin et al., 1996) should guide our research.
Drawing inspiration from previous work in NLP, Cognitive Science, and Linguistics, we propose the notion of a World Scope (WS) as a lens through which to audit progress in NLP. We describe ï¬ve WSs, and note that most trending work in NLP operates in the second (Internet-scale data). We deï¬ne ï¬ve levels of World Scope:
WS1. Corpus (our past) Internet (most of current NLP) WS2. WS3. Perception (multimodal NLP) WS4. Embodiment WS5. Social
These World Scopes go beyond text to consider the contextual foundations of language: grounding, embodiment, and social interaction. We describe a brief history and ongoing progression of how con- textual information can factor into representations and tasks. We conclude with a discussion of how
this integration can move the ï¬eld forward. We be- lieve this World Scope framing serves as a roadmap for truly contextual language understanding.
# 1 WS1: Corpora and Representations
The story of data-driven language research begins with the corpus. The Penn Treebank (Marcus et al., 1993) is the canonical example of a clean subset of naturally generated language, processed and anno- tated for the purpose of studying representations. Such corpora and the model representations built from them exemplify WS1. Community energy was initially directed at ï¬nding formal linguistic structure, such as recovering syntax trees. Recent success on downstream tasks has not required such explicitly annotated signal, leaning instead on un- structured fuzzy representations. These representa- tions span from dense word vectors (Mikolov et al., 2013) to contextualized pretrained representations (Peters et al., 2018; Devlin et al., 2019).
Word representations have a long history predat- ing the recent success of deep learning methods. Outside of NLP, philosophy (Austin, 1975) and lin- guistics (Lakoff, 1973; Coleman and Kay, 1981) recognized that meaning is ï¬exible yet structured. Early experiments on neural networks trained with sequences of words (Elman, 1990; Bengio et al., 2003) suggested that vector representations could capture both syntax and semantics. Subsequent experiments with larger models, documents, and corpora have demonstrated that representations learned from text capture a great deal of informa- tion about meaning in and out of context (Collobert and Weston, 2008; Turian et al., 2010; Mikolov et al., 2013; McCann et al., 2017).
The intuition of such embedding representations, that context lends meaning, has long been acknowl- edged (Firth, 1957; Turney and Pantel, 2010). Ear- lier on, discrete, hierarchical representations, such as agglomerative clustering guided by mutual in- formation (Brown et al., 1992), were constructed with some innate interpretability. A wordâs position in such a hierarchy captures semantic and syntac- tic distinctions. When the BaumâWelch algorithm (Welch, 2003) is applied to unsupervised Hidden Markov Models, it assigns a class distribution to every word, and that distribution is a partial rep- resentation of a wordâs âmeaning.â If the set of classes is small, syntax-like classes are induced; if the set is large, classes become more semantic. These representations are powerful in that they cap-
nn ° S ââ Harris 1954 ââ Firth 1957 â Chomsky 1957 NBO ow eoosd % of 2019 Citations ° 2000 2010 1960 1970 1980 1990 Year 2020
Academic interest in Firth and Harris increases dramatically around 2010, perhaps due to the popularization of Firth (1957) âYou shall know a word by the company it keeps."
ture linguistic intuitions without supervision, but they are constrained by the structure they impose with respect to the number of classes chosen.
The intuition that meaning requires a large con- text, that âYou shall know a word by the company it keeps." â Firth (1957), manifested early via La- tent Semantic Indexing/Analysis (Deerwester et al., 1988, 1990; Dumais, 2004) and later in the gen- erative framework of Latent Dirichlet Allocation (Blei et al., 2003). LDA represents a document as a bag-of-words conditioned on latent topics, while LSI/A use singular value decomposition to project a co-occurrence matrix to a low dimensional word vector that preserves locality. These methods dis- card sentence structure in favor of the document.
Representing words through other words is a comfortable proposition, as it provides the illusion of deï¬nitions by implicit analogy to thesauri and related words in a dictionary deï¬nition. However, the recent trends in deep learning approaches to language modeling favor representing meaning in ï¬xed-length vectors with no obvious interpretation. The question of where meaning resides in âconnec- tionistâ systems like Deep Neural Networks is an old one (Pollack, 1987; James and Miikkulainen, 1995). Are concepts distributed through edges or local to units in an artiï¬cial neural network?
â... there has been a long and unresolved debate between those who favor localist representations in which each process- ing element corresponds to a meaningful concept and those who favor distributed representations.â Hinton (1990) Special Issue on Connectionist Symbol Processing
In connectionism, words were no longer deï¬ned over interpretable dimensions or symbols, which were perceived as having intrinsic meaning. The tension of modeling symbols and distributed repre- sentations is articulated by Smolensky (1990), and alternative representations (Kohonen, 1984; Hinton
et al., 1986; Barlow, 1989) and approaches to struc- ture and composition (Erk and Padó, 2008; Socher et al., 2012) span decades of research.
The Brown Corpus (Francis, 1964) and Penn Treebank (Marcus et al., 1993) deï¬ned context and structure in NLP for decades. Only relatively re- cently (Baroni et al., 2009) has the cost of annota- tions decreased enough, and have large-scale web- crawls become viable, to enable the introduction of more complex text-based tasks. This transition to larger, unstructured context (WS2) induced a richer semantics than was previously believed possible under the distributional hypothesis.
# 2 WS2: The Written World
Corpora in NLP have broadened to include large web-crawls. The use of unstructured, unlabeled, multi-domain, and multilingual data broadens our world scope, in the limit, to everything humanity has ever written.1 We are no longer constrained to a single author or source, and the temptation for NLP is to believe everything that needs knowing can be learned from the written world. But, a large and noisy text corpus is still a text corpus.
This move towards using large scale raw data has led to substantial advances in performance on existing and novel community benchmarks (Devlin et al., 2019; Brown et al., 2020). Scale in data and modeling has demonstrated that a single represen- tation can discover both rich syntax and semantics without our help (Tenney et al., 2019). This change is perhaps best seen in transfer learning enabled by representations in deep models. Traditionally, transfer learning relied on our understanding of model classes, such as English grammar. Domain adaptation simply required sufï¬cient data to cap- ture lexical variation, by assuming most higher- level structure would remain the same. Unsuper- vised representations today capture deep associ- ations across multiple domains, and can be used successfully transfer knowledge into surprisingly diverse contexts (Brown et al., 2020).
These representations require scale in terms of both data and parameters. Concretely, Mikolov et al. (2013) trained on 1.6 billion tokens, while Pennington et al. (2014) scaled up to 840 billion tokens from Common Crawl. Recent approaches
1A parallel discussion would focus on the hardware re- quired to enable advances to higher World Scopes. Playsta- tions (Pinto et al., 2009) and then GPUs (Krizhevsky et al., 2012) made many WS2 advances possible. Perception, inter- action, and robotics leverage other new hardware.
have made progress by substantially increasing the number of model parameters to better consume these vast quantities of data. Where Peters et al. (2018) introduced ELMo with â¼108 parameters, Transformer models (Vaswani et al., 2017) have continued to scale by orders of magnitude between papers (Devlin et al., 2019; Radford et al., 2019; Zellers et al., 2019b) to â¼1011 (Brown et al., 2020).
Current models are the next (impressive) step in language modeling which started with Good (1953), the weights of Kneser and Ney (1995); Chen and Goodman (1996), and the power-law distributions of Teh (2006). Modern approaches to learning dense representations allow us to bet- ter estimate these distributions from massive cor- pora. However, modeling lexical co-occurrence, no matter the scale, is still modeling the written world. Models constructed this way blindly search for symbolic co-occurences void of meaning.
How can models yield both âimpressive resultsâ and âdiminishing returnsâ? Language modelingâ the modern workhorse of neural NLP systemsâis a canonical example. Recent pretraining literature has produced results that few could have predicted, crowding leaderboards with âsuper-human" accu- racy (Rajpurkar et al., 2018). However, there are diminishing returns. For example, on the LAM- BADA dataset (Paperno et al., 2016), designed to capture human intuition, GPT2 (Radford et al., 2019) (1.5B), Megatron-LM (Shoeybi et al., 2019) (8.3B), and TuringNLG (Rosset, 2020) (17B) per- form within a few points of each other and very far from perfect (<68%). When adding another order of magnitude of parameters (175B) Brown et al. (2020) gain 8 percentage-points, impressive but still leaving 25% unsolved. Continuing to expand hardware, data sizes, and ï¬nancial compute cost by orders of magnitude will yield further gains, but the slope of the increase is quickly decreasing.
The aforementioned approaches for learning transferable representations demonstrate that sen- tence and document context provide powerful sig- nals for learning aspects of meaning, especially se- mantic relations among words (Fu et al., 2014) and inferential relationships among sentences (Wang et al., 2019a). The extent to which they capture deeper notions of contextual meaning remains an open question. Past work has found that pretrained word and sentence representations fail to capture many grounded features of words (Lucy and Gau- thier, 2017) and sentences, and current NLU sys-
tems fail on the thick tail of experience-informed in- ferences, such as hard coreference problems (Peng et al., 2015). âI parked my car in the compact park- ing space because it looked (big/small) enough.â still presents problems for text-only learners.
As text pretraining schemes seem to be reach- ing the point of diminishing returns, even for some syntactic phenomena (van Schijndel et al., 2019), we posit that other forms of supervision, such as multimodal perception (Ilharco et al., 2019), are necessary to learn the remaining aspects of mean- ing in context. Learning by observation should not be a purely linguistic process, since leveraging and combining the patterns of multimodal perception can combinatorially boost the amount of signal in data through cross-referencing and synthesis.
# 3 WS3: The World of Sights and Sounds
Language learning needs perception, because per- ception forms the basis for many of our semantic axioms. Learned, physical heuristics, such as the fact that a falling cat will land quietly, are general- ized and abstracted into language metaphors like as nimble as a cat (Lakoff, 1980). World knowl- edge forms the basis for how people make entail- ment and reasoning decisions, commonly driven by mental simulation and analogy (Hofstadter and Sander, 2013). Perception is the foremost source of reporting bias. The assumption that we all see and hear the same things informs not just what we name, but what we choose to assume and leave un- written. Further, there exists strong evidence that children require grounded sensory perception, not just speech, to learn language (Sachs et al., 1981; OâGrady, 2005; Vigliocco et al., 2014).
Perception includes auditory, tactile, and visual input. Even restricted to purely linguistic sig- nals, sarcasm, stress, and meaning can be implied through prosody. Further, tactile senses lend mean- ing, both physical (Sinapov et al., 2014; Thomason et al., 2016) and abstract, to concepts like heavy and soft. Visual perception is a rich signal for modeling a vastness of experiences in the world that cannot be documented by text alone (Harnad, 1990).
For example, frames and scripts (Schank and Abelson, 1977; Charniak, 1977; Dejong, 1981; Mooney and Dejong, 1985) require understand- ing often unstated sets of pre- and post-conditions about the world. To borrow from Charniak (1977), how should we learn the meaning, method, and im- plications of painting? A web crawl of knowledge
Wf âa aaa
Eugene Charniak (A Framed PAINTING: The Representation of a Common Sense Knowledge Fragment 1977)
from an exponential number of possible how-to, text-only guides and manuals (Bisk et al., 2020) is misdirected without some fundamental referents to which to ground symbols. Models must be able to watch and recognize objects, people, and activi- ties to understand the language describing them (Li et al., 2019b; Krishna et al., 2017; Yatskar et al., 2016; Perlis, 2016) and access ï¬ne-grained notions of causality, physics, and social interactions.
While the NLP community has played an im- portant role in the history of grounding (Mooney, 2008), recently remarkable progress has taken place in the Computer Vision community. It is tempting to assume that vision models trained to identify 1,000 ImageNet classes (Russakovsky et al., 2015)2 are limited to extracting a bag of vi- sual words. In reality, Computer Vision has been making in-roads into complex visual, physical, and social phenomena, while providing reusable infras- tructure.3 The stability of these architectures allows for new research into more challenging world mod- eling. Mottaghi et al. (2016) predicts the effects of forces on objects in images. Bakhtin et al. (2019) extends this physical reasoning to complex puzzles of cause and effect. Sun et al. (2019b,a) models scripts and actions, and alternative unsupervised training regimes (Bachman et al., 2019) open up research towards automatic concept formation.
Advances in computer vision have enabled build- ing semantic representations rich enough to inter- act with natural language. In the last decade of work descendant from image captioning (Farhadi et al., 2010; Mitchell et al., 2012), a myriad of tasks on visual question answering (Antol et al., 2015; Das et al., 2018; Yagcioglu et al., 2018), natural language and visual reasoning (Suhr et al., 2019b), visual commonsense (Zellers et al., 2019a),
2Or the 1,600 classes of Anderson et al. (2017). 3Torchvision/Detectron2 include dozens of trained models.
and multilingual captioning/translation via video (Wang et al., 2019b) have emerged. These com- bined text and vision benchmarks are rich enough to train large-scale, multimodal transformers (Li et al., 2019a; Lu et al., 2019; Zhou et al., 2019) without language pretraining (e.g. via conceptual captions (Sharma et al., 2018)) or further broad- ened to include audio (Tsai et al., 2019). Vision can also help ground speech signals (Srinivasan et al., 2020; Harwath et al., 2019) to facilitate discovery of linguistic concepts (Harwath et al., 2020).
At the same time, NLP resources contributed to the success of these vision backbones. Hierar- chical semantic representations emerge from Im- ageNet classiï¬cation pretraining partially due to class hypernyms owed to that datasetâs WordNet origins. For example, the person class sub-divides into many professions and hobbies, like ï¬reï¬ghter, gymnast, and doctor. To differentiate such sibling classes, learned vectors can also encode lower-level characteristics like clothing, hair, and typical sur- rounding scenes. These representations allow for pixel level masks and skeletal modeling, and can be extended to zero-shot settings targeting all 20K Im- ageNet categories (Chao et al., 2016; Changpinyo et al., 2017). Modern architectures also learn to dif- ferentiate instances within a general class, such as face. For example, facial recognition benchmarks require distinguishing over 10K unique faces (Liu et al., 2015). While vision is by no means âsolved,â benchmarks have led to off-the-shelf tools for build- ing representations rich enough to identify tens of thousands of objects, scenes, and individuals.
A WS3 agent, having access to potentially end- less hours of video data showing the intricate de- tails of daily comings and goings, procedures, and events, reduces susceptibility to the reporting bias of WS2. An ideal WS3 agent will exhibit bet- ter long-tail generalization and understanding than any language-only system could. This generaliza- tion should manifest in existing benchmarks, but would be most prominent in a test of zero-shot cir- cumstances, such as âWill this car ï¬t through that tunnel?,â and rarely documented behaviors as ex- amined in script learning. Yet the WS3 agent will likely fail to answer, "Would a ceramic or paper plate make a better frisbee?" The agent has not tried to throw various objects and understand how their velocity and shape interact with the atmosphere to create lift. The agent cannot test novel hypotheses by intervention and action in the world.
If A and B have some environments in common and some not ... we say that they have different meanings, the amount of meaning difference corresponding roughly to the amount of difference in their environments ...
Zellig S. Harris (Distributional Structure 1954)
# 4 WS4: Embodiment and Action
In human development, interactive multimodal sen- sory experience forms the basis of action-oriented categories (Thelen and Smith, 1996) as children learn how to manipulate their perception by ma- nipulating their environment. Language grounding enables an agent to connect words to these action- oriented categories for communication (Smith and Gasser, 2005), but requires action to fully discover such connections. Embodimentâsituated action takingâis therefore a natural next broader context. An embodied agent, whether in a virtual world, such as a 2D Maze (MacMahon et al., 2006), a grid world (Chevalier-Boisvert et al., 2019), a sim- ulated house (Anderson et al., 2018; Thomason et al., 2019b; Shridhar et al., 2020), or the real world (Tellex et al., 2011; Matuszek, 2018; Thoma- son et al., 2020; Tellex et al., 2020) must translate from language to action. Control and action taking open several new dimensions to understanding and actively learning about the world. Queries can be resolved via dialog-based exploration with a hu- man interlocutor (Liu and Chai, 2015), even as new object properties, like texture and weight (Thoma- son et al., 2017), or feedback, like muscle activa- tions (Moro and Kennington, 2018), become avail- able. We see the need for embodied language with complex meaning when thinking deeply about even the most innocuous of questions:
Is an orange more like a baseball or more like a banana?
WS1 is likely not to have an answer beyond that the objects are common nouns that can both be held. WS2 may capture that oranges and baseballs both roll, but is not the deformation strength, surface tex- ture, or relative sizes of these objects (Elazar et al., 2019). WS3 may realize the relative deformability of these objects, but is likely to confuse how much force is necessary given that baseballs are used much more roughly than oranges. WS4 can appre- ciate the nuances of the questionâthe orange and baseball afford similar manipulation because they
have similar texture and weight, while the orange and banana both contain peels, deform, and are edible. People can reason over rich representations of common objects that these words evoke.
Planning is where people ï¬rst learn abstraction and simple examples of post-conditions through trial and error. The most basic scripts humans learn start with moving our own bodies and achieving simple goals as children, such as stacking blocks. In this space, we have unlimited supervision from the environment and can learn to generalize across plans and actions. In general, simple worlds do not entail simple concepts: even in a block world concepts like âmirroringâ appear (Bisk et al., 2018). Humans generalize and apply physical phenomena to abstract concepts with ease.
In addition to learning basic physical proper- ties of the world from interaction, WS4 also al- lows the agent to construct rich pre-linguistic rep- resentations from which to generalize. Hespos and Spelke (2004) show pre-linguistic category forma- tion within children that are then later codiï¬ed by social constructs. Mounting evidence seems to indi- cate that children have trouble transferring knowl- edge from the 2D world of books (Barr, 2013) and iPads (Lin et al., 2017) to the physical 3D world. So while we might choose to believe that we can en- code parameters (Chomsky, 1981) more effectively and efï¬ciently than evolution provided us, develop- mental experiments indicate doing so without 3D interaction may prove difï¬cult.
Part of the problem is that much of the knowl- edge humans hold about the world is intuitive, possibly incommunicable by language, but still required to understand language. Much of this knowledge revolves around physical realities that real-world agents will encounter. Consider how many explicit and implicit metaphors are based on the idea that far-away things have little inï¬uence on manipulating local space: âa distant concernâ and âweâll cross that bridge when we come to it.â
Robotics and embodiment are not available in the same off-the-shelf manner as computer vision models. However, there is rapid progress in simu- lators and commercial robotics, and as language re- searchers we should match these advances at every step. As action spaces grow, we can study complex language instructions in simulated homes (Shrid- har et al., 2020) or map language to physical robot control (Blukis et al., 2019; Chai et al., 2018). The last few years have seen massive advances in both
In order to talk about concepts, we must understand the importance of mental models... we set up a model of the world which serves as a framework in which to organize our thoughts. We abstract the presence of particular objects, having properties, and entering into events and relationships.
Terry Winograd - 1971
high ï¬delity simulators for robotics (Todorov et al., 2012; Coumans and Bai, 2016â2019; NVIDIA, 2019; Xiang et al., 2020) and the cost and avail- ability of commodity hardware (Fitzgerald, 2013; Campeau-Lecours et al., 2019; Murali et al., 2019). As computers transition from desktops to perva- sive mobile and edge devices, we must make and meet the expectation that NLP can be deployed in any of these contexts. Current representations have very limited utility in even the most basic robotic settings (Scalise et al., 2019), making collaborative robotics (Rosenthal et al., 2010) largely a domain of custom engineering rather than science.
# 5 WS5: The Social World
Interpersonal communication is the foundational use case of natural language (Dunbar, 1993). The physical world gives meaning to metaphors and instructions, but utterances come from a source with a purpose. Take J.L. Austinâs classic example of âBULLâ being written on the side of a fence in a large ï¬eld (Austin, 1975). It is a fundamentally social inference to realize that this word indicates the presence of a dangerous creature, and that the word is written on the opposite side of the fence from where that creature lives.
Interpersonal dialogue as a grand test for AI is older than the term âartiï¬cial intelligence,â begin- ning at least with Turing (1950)âs Imitation Game. Turing was careful to show how easily a naïve tester could be tricked. Framing, such as suggesting that a chatbot speaks English as a second language (Sam- ple and Hern, 2014), can create the appearance of genuine content where there is none (Weizenbaum, 1966). This phenomenon has been noted countless times, from criticisms of Speech Recognition as âdeceit and glamourâ (Pierce, 1969) to complaints of humanityâs âgullibility gapâ (Marcus and Davis, 2019). We instead focus on why the social world is vital to language learning.
Language that Does Something Work in the philosophy of language has long suggested that
function is the source of meaning, as famously il- lustrated through Wittgensteinâs âlanguage gamesâ (Wittgenstein, 1953, 1958). In linguistics, the usage-based theory of language acquisition sug- gests that constructions that are useful are the build- ing blocks for everything else (Langacker, 1987, 1991). The economy of this notion of use has been the subject of much inquiry and debate (Grice, 1975). In recent years, these threads have begun to shed light on what use-cases language presents in both acquisition and its initial origins in our species (Tomasello, 2009; Barsalou, 2008), indicating the fundamental role of the social world.
WS1, WS2, WS3, and WS4 expand the fac- torizations of information available to linguistic meaning. allows language to be a cause instead of just a source of data. This is the ultimate goal for a language learner: to generate language that does something to the world.
Passive creation and evaluation of generated lan- guage separates generated utterances from their effects on other people, and while the latter is a rich learning signal it is inherently difï¬cult to annotate. In order to learn the effects language has on the world, an agent must participate in lin- guistic activity, such as negotiation (Yang et al., 2019a; He et al., 2018; Lewis et al., 2017), collab- oration (Chai et al., 2017), visual disambiguation (Anderson et al., 2018; Lazaridou et al., 2017; Liu and Chai, 2015), or providing emotional support (Rashkin et al., 2019). These activities require in- ferring mental states and social outcomesâa key area of interest in itself (Zadeh et al., 2019).
What âlameâ means in terms of discriminative information is always at question: it can be deï¬ned as âundesirable,â but what it tells one about the processes operating in the environment requires social context to determine (Bloom, 2002). It is the toddlerâs social experimentation with âYouâre so lame!â that gives the word weight and deï¬nite intent (Ornaghi et al., 2011). In other words, the discriminative signal for the most foundational part of a wordâs meaning can only be observed by its ef- fect on the world, and active experimentation is key to learning that effect. Active experimentation with language starkly contrasts with the disembodied chat bots that are the focus of the current dialogue community (Roller et al., 2020; Adiwardana et al., 2020; Zhou et al., 2020; Chen et al., 2018; Serban et al., 2017), which often do not learn from individ- ual experiences and whose environments are not
persistent enough to learn the effects of actions.
Theory of Mind When attempting to get what we want, we confront people who have their own desires and identities. The ability to consider the feelings and knowledge of others is now com- monly referred to as the âTheory of Mindâ (Ne- matzadeh et al., 2018). This paradigm has also been described under the âSpeaker-Listenerâ model (Stephens et al., 2010), and a rich theory to describe this computationally is being actively developed under the Rational Speech Act Model (Frank and Goodman, 2012; Bergen et al., 2016).
A series of challenges that attempt to address this fundamental aspect of communication have been introduced (Nematzadeh et al., 2018; Sap et al., 2019). These works are a great start towards deeper understanding, but static datasets can be problem- atic due to the risk of embedding spurious patterns and bias (de Vries et al., 2020; Le et al., 2019; Gururangan et al., 2018; Glockner et al., 2018), especially because examples where annotators can- not agree (which are usually thrown out before the dataset is released) still occur in real use cases. More ï¬exible, dynamic evaluation (Zellers et al., 2020; Dinan et al., 2019) are a partial solution, but true persistence of identity and adaption to change are both necessary and still a long way off.
Training data in WS1-4, complex and large as it can be, does not offer the discriminatory signals that make the hypothesizing of consistent identity or mental states an efï¬cient path towards lowering perplexity or raising accuracy (Liu et al., 2016; De- Vault et al., 2006). First, there is a lack of inductive bias (Martin et al., 2018). Models learn what they need to discriminate between potential labels, and it is unlikely that universal function approximators such as neural networks would ever reliably posit that people, events, and causality exist without be- ing biased towards such solutions (Mitchell, 1980). Second, current cross entropy training losses ac- tively discourage learning the tail of the distribu- tion properly, as statistically infrequent events are drowned out (Pennington et al., 2014; Holtzman et al., 2020). Meanwhile, it is precisely humanâs ability to draw on past experience and make zero- shot decisions that AI aims to emulate.
Language in a Social Context Whenever lan- guage is used between people, it exists in a concrete social context: status, role, intention, and countless other variables intersect at a speciï¬c point (Ward-
haugh, 2011). These complexities are overlooked through selecting labels on which crowd workers agree. Current notions of ground truth in dataset construction are based on crowd consensus bereft of social context. We posit that ecologically valid evaluation of generative models will require the construction of situations where artiï¬cial agents are considered to have enough identity to be granted social standing for these interactions.
Social interaction is a precious signal, but ini- tial studies have been strained by the training- validation-test set scenario and reference-backed evaluations. Collecting data about rich natural sit- uations is often impossible. To address this gap, learning by participation, where users can freely interact with an agent, is a necessary step to the ultimately social venture of communication. By exhibiting different attributes and sending varying signals, the sociolinguistic construction of identity (Ochs, 1993) could be examined more deeply. Such experimentation in social intelligence is simply not possible with a ï¬xed corpus. Once models are ex- pected to be interacted with when tested, probing their decision boundaries for simpliï¬cations of re- ality and a lack of commonsense knowledge as in Gardner et al.; Kaushik et al. will become natural.
# 6 Self-Evaluation
We use the notion of World Scopes to make the following concrete claims:
# You canât learn language ...
# ... from the radio (Internet). WS2 â WS3
A task learner cannot be said to be in WS3 if it can succeed without perception (e.g., visual, auditory).
# ... from a television.
# WS3 â WS4
A task learner cannot be said to be in WS4 if the space of its world actions and consequences can be enumerated.
# ... by yourself.
# WS4 â WS5
A task learner cannot be said to be in WS5 unless achieving its goals requires cooperating with a human in the loop.
By these deï¬nitions, most of NLP research still resides in WS2. This fact does not invalidate the utility or need for any of the research within NLP, but it is to say that much of that existing research targets a different goal than language learning.
These problems include the need to bring meaning and reasoning into systems that perform natural language processing, the need to infer and represent causality, the need to develop computationally-tractable representations of uncertainty and the need to develop systems that formulate and pursue long-term goals.
Michael Jordan (Artiï¬cial intelligence â the revolution hasnât happened yet, 2019)
Where Should We Start? Many in our commu- nity are already examining phenomena in WSs 3-5. Note that research can explore higher WS phenomena without a resultant learner being in a higher WS. For example, a chatbot can investigate principles of the social world, but still lack the un- derlying social standing required for WS5. Next we describe four language use contexts which we believe are both research questions to be tackled and help illustrate the need to move beyond WS2.
Second language acquisition when visiting a foreign country leverages a shared, social world model that allows pointing to referent objects and miming internal states like hunger. The interlingua is physical and experiential. Such a rich internal world model should also be the goal for MT models: starting with images (Huang et al., 2020), moving through simulation, and then to the real world.
Coreference and WSD leverage a shared scene and theory of mind. To what extent are current coreference resolution issues resolved if an agent models the listenerâs desires and experiences explic- itly rather than looking solely for adjacent lexical items? This setting is easiest to explore in embod- ied environments, but is not exclusive to them (e.g., TextWorld (Côté et al., 2018)).
Novel word learning from tactile knowledge and use: What is the instrument that you wear like a guitar but play like a piano? Objects can be de- scribed with both gestures and words about appear- ance and function. Such knowledge could begin to tackle physical metaphors that current NLP sys- tems struggle with.
Personally charged language: How should a dialogue agent learn what is hurtful to a speciï¬c person? To someone who is sensitive about their grades because they had a period of struggle in school, the sentiment of âDonât be a fool!â can be hurtful, while for others it may seem playful. Social knowledge is requisite for realistic understanding of sentiment in situated human contexts.
Relevant recent work The move from WS2 to WS3 requires rethinking existing tasks and investi- gating where their semantics can be expanded and grounded. This idea is not new (Chen and Mooney, 2008; Feng and Lapata, 2010; Bruni et al., 2014; Lazaridou et al., 2016) and has accelerated in the last few years. Elliott et al. (2016) reframes ma- chine translation with visual observations, a trend extended into videos (Wang et al., 2019b). Regneri et al. (2013) introduce a foundational dataset align- ing text descriptions and semantic annotations of actions with videos. Vision can even inform core tasks like syntax (Shi et al., 2019) and language modeling (Ororbia et al., 2019). Careful design is key, as visually augmented tasks can fail to require sensory perception (Thomason et al., 2019a).
Language-guided, embodied agents invoke many of the challenges of WS4. Language-based nav- igation (Anderson et al., 2018) and task comple- tion (Shridhar et al., 2020) in simulation environ- ments ground language to actions, but even com- plex simulation action spaces can be discretized and enumerated. Real world, language-guided robots for task completion (Tellex et al., 2014) and learning (She et al., 2014) face challenging, con- tinuous perception and control (Tellex et al., 2020). Consequently, research in this space is often re- stricted to small grammars (Paul et al., 2018; Walter et al., 2013) or controlled dialog responses (Thoma- son et al., 2020). These efforts to translate language instructions to actions build towards using language for end-to-end, continuous control (WS4).
Collaborative games have long served as a testbed for studying language (Werner and Dyer, 1991) and emergent communication (Schlangen, 2019a; Lazaridou et al., 2018; Chaabouni et al., 2020). Suhr et al. (2019a) introduced an environ- ment for evaluating language understanding in the service of a shared goal, and Andreas and Klein (2016) use a visual paradigm for studying pragmat- ics. Such efforts help us examine how inductive biases and environmental pressures build towards socialization (WS5), even if full social context is still too difï¬cult and expensive to be practical.
Most of these works provide resources such as data, code, simulators and methodology for evaluat- ing the multimodal content of linguistic representa- tions (Schlangen, 2019b; Silberer and Lapata, 2014; Bruni et al., 2012). Moving forward, we encourage a broad re-examination of how NLP frames the rela- tionship between meaning and context (Bender and
Koller, 2020) and how pretraining obfuscates our ability to measure generalization (Linzen, 2020).
# 7 Conclusions
Our World Scopes are steep steps. WS5 implies a persistent agent experiencing time and a personal- ized set of experiences. With few exceptions (Carl- son et al., 2010), machine learning models have been conï¬ned to IID datasets that lack the structure in time from which humans draw correlations about long-range causal dependencies. What if a machine was allowed to participate consistently? This is dif- ï¬cult to test under current evaluation paradigms for generalization. Yet, this is the structure of gener- alization in human development: drawing analo- gies to episodic memories and gathering new data through non-independent experiments.
As with many who have analyzed the history of NLP, its trends (Church, 2007), its maturation toward a science (Steedman, 2008), and its major challenges (Hirschberg and Manning, 2015; Mc- Clelland et al., 2019), we hope to provide momen- tum for a direction many are already heading. We call for and embrace the incremental, but purpose- ful, contextualization of language in human expe- rience. With all that we have learned about what words can tell us and what they keep implicit, now is the time to ask: What tasks, representations, and inductive-biases will ï¬ll the gaps?
Computer vision and speech recognition are ma- ture enough for investigation of broader linguistic contexts (WS3). The robotics industry is rapidly developing commodity hardware and sophisticated software that both facilitate new research and ex- pect to incorporate language technologies (WS4). Simulators and videogames provide potential envi- ronments for social language learners (WS5). Our call to action is to encourage the community to lean in to trends prioritizing grounding and agency, and explicitly aim to broaden the corresponding World Scopes available to our models.
# Acknowledgements
Thanks to Raymond Mooney for suggestions, Paul Smolensky for disagreements, Catriona Silvey for developmental psychology help, and to a superset of: Emily Bender, Ryan Cotterel, Jesse Dunietz, Edward Grefenstette, Dirk Hovy, Casey Kenning- ton, Ajay Divakaran, David Schlangend, Diyi Yang, and Semih Yagcioglu for pointers and suggestions.
# References
Daniel Adiwardana, Minh-Thang Luong, David R So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2017. Bottom-up and top-down attention for image captioning and visual question answering. Vi- sual Question Answering Challenge at CVPR 2017.
Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian Reid, Stephen Gould, and Anton van den Hengel. 2018. Vision- and-Language Navigation: Interpreting visually- grounded navigation instructions in real environ- ments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Jacob Andreas and Dan Klein. 2016. Reasoning about pragmatics with neural listeners and speakers. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1173â1182, Austin, Texas.
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question an- swering. In Proceedings of the IEEE international conference on computer vision, pages 2425â2433.
John Langshaw Austin. 1975. How to do things with words. Oxford university press.
Philip Bachman, R Devon Hjelm, and William Buch- walter. 2019. Learning representations by maximiz- ing mutual information across views. In Advances in Neural Information Processing Systems 32.
Anton Bakhtin, Laurens van der Maaten, Justin John- son, Laura Gustafson, and Ross Girshick. 2019. Phyre: A new benchmark for physical reasoning. In Advances in Neural Information Processing Systems 32 (NIPS 2019).
Dare A. Baldwin, Ellen M. Markman, Brigitte Bill, Re- nee N. Desjardins, Jane M. Irwin, and Glynnis Tid- ball. 1996. Infantsâ reliance on a social criterion for establishing word-object relations. Child Develop- ment, 67(6):3135â3153.
H.B. Barlow. 1989. Unsupervised learning. Neural Computation, 1(3):295â311.
Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: a collection of very large linguistically processed web- crawled corpora. Language resources and evalua- tion, 43(3):209â226.
Rachel Barr. 2013. Memory constraints on infant learn- ing from picture books, television, and touchscreens. Child Development Perspectives, 7(4):205â210.
Lawrence W Barsalou. 2008. Grounded cognition. Annu. Rev. Psychol., 59:617â645.
Emily M Bender and Alexander Koller. 2020. Climb- ing towards nlu: On meaning, form, and understand- ing in the age of data. In Association for Computa- tional Linguistics (ACL).
Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- Journal of Machine Learning Re- guage model. search, 3:1137â1155.
Leon Bergen, Roger Levy, and Noah Goodman. 2016. Pragmatic reasoning through semantic inference. Semantics and Pragmatics, 9.
Yonatan Bisk, Kevin Shih, Yejin Choi, and Daniel Marcu. 2018. Learning Interpretable Spatial Oper- ations in a Rich 3D Blocks World . In Proceedings of the Thirty-Second Conference on Artiï¬cial Intelli- gence (AAAI-18).
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jian- feng Gao, and Yejin Choi. 2020. PIQA: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artiï¬cial Intelli- gence.
David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of Ma- chine Learning Research, 3:993â1022.
Paul Bloom. 2002. How children learn the meanings of words. MIT press.
Valts Blukis, Yannick Terme, Eyvind Niklasson, Ross A. Knepper, and Yoav Artzi. 2019. Learning to map natural language instructions to physical quad- copter control using simulated ï¬ight. In 3rd Confer- ence on Robot Learning (CoRL).
Peter F Brown, Peter V deSouza, Robert L Mercer, Vin- cent J Della Pietra, and Jenifer C Lai. 1992. Class- based n-gram models of natural language. Compu- tational Linguistics, 18.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam Mc- Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learn- ers. In preprint.
Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- In Proceedings of the 50th Annual Meet- nicolor. ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 136â145, Jeju Is- land, Korea.
Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Ar- tiï¬cial Intelligence Research, 49:1â47.
Alexandre Campeau-Lecours, Hugo Lamontagne, Si- mon Latour, Philippe Fauteux, Véronique Maheu, François Boucher, Charles Deguire, and Louis- Joseph Caron LâEcuyer. 2019. Kinova modular robot arms for service robotics applications. In Rapid Automation: Concepts, Methodologies, Tools, and Applications, pages 693â719. IGI Global.
Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka, and Tom M Mitchell. 2010. Toward an architecture for never-ending lan- guage learning. In Twenty-Fourth AAAI Conference on Artiï¬cial Intelligence.
Rahma Chaabouni, Eugene Kharitonov, Diane Boucha- court, Emmanuel Dupoux, and Marco Baroni. 2020. Compositionality and generalization in emergent In Association for Computational Lin- languages. guistics (ACL).
Joyce Y. Chai, Rui Fang, Changsong Liu, and Lanbo She. 2017. Collaborative language grounding to- ward situated human-robot dialogue. AI Magazine, 37(4):32â45.
Joyce Y. Chai, Qiaozi Gao, Lanbo She, Shaohua Yang, Sari Saba-Sadiya, and Guangyue Xu. 2018. Lan- guage to action: Towards interactive task learning with physical agents. In Proceedings of the Twenty- Seventh International Joint Conference on Artiï¬cial Intelligence (IJCAI-18).
Soravit Changpinyo, Wei-Lun Chao, and Fei Sha. 2017. Predicting visual exemplars of unseen classes for zero-shot learning. In ICCV.
Wei-Lun Chao, Soravit Changpinyo, Boqing Gong, and Fei Sha. 2016. An empirical study and analysis of generalized zero-shot learning for object recog- In ECCV, pages 52â68, Cham. nition in the wild. Springer International Publishing.
Eugene Charniak. 1977. A framed painting: The rep- resentation of a common sense knowledge fragment. Cognitive Science, 1(4):355â394.
Chun-Yen Chen, Dian Yu, Weiming Wen, Yi Mang Yang, Jiaping Zhang, Mingyang Zhou, Kevin Jesse, Austin Chau, Antara Bhowmick, Shreenath Iyer, et al. 2018. Gunrock: Building a human-like social bot by leveraging large scale real user data. Alexa Prize Proceedings.
David L. Chen and Raymond J. Mooney. 2008. Learn- ing to sportscast: A test of grounded language ac- quisition. In Proceedings of the 25th International Conference on Machine Learning (ICML), Helsinki, Finland.
SF Chen and Joshua Goodman. 1996. An empirical study of smoothing techniques for language model- In Association for Computational Linguistics, ing. pages 310â318.
Maxime Chevalier-Boisvert, Dzmitry Bahdanau, Salem Lahlou, Lucas Willems, Chitwan Saharia, Thien Huu Nguyen, and Yoshua Bengio. 2019. Babyai: First steps towards grounded language learning with a human in the loop. In ICLRâ2019.
Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press.
Noam Chomsky. 1980. Language and learning: the de- bate between Jean Piaget and Noam Chomsky. Har- vard University Press.
Noam Chomsky. 1981. Lectures on Government and Binding. Mouton de Gruyter.
Kenneth Church. 2007. A pendulum swung too far. Linguistic Issues in Language Technology â LiLT, 2.
L. Coleman and P. Kay. 1981. The english word âlie". Linguistics, 57.
Ronan Collobert and Jason Weston. 2008. A uniï¬ed architecture for natural language processing: deep neural networks with multitask learning. In ICML.
Marc-Alexandre Côté, Ãkos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Ruo Yu Tao, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler. 2018. Textworld: A learning environment for text- based games. ArXiv, abs/1806.11532.
Erwin Coumans and Yunfei Bai. 2016â2019. Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet. org.
Abhishek Das, Samyak Datta, Georgia Gkioxari, Ste- fan Lee, Devi Parikh, and Dhruv Batra. 2018. Em- In Proceedings of the bodied question answering. IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2054â2063.
Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1988. Improving information retrieval with latent semantic indexing. In Proceedings of the 51st Annual Meet- ing of the American Society for Information Science 25, pages 36 â 40.
Scott Deerwester, Susan T. Dumais, George W. Fur- nas, Thomas K. Landauer, and Richard Harshman. Indexing by latent semantic analysis. Jour- 1990. nal of the American Society for Information Science, 41(6):391â407.
Gerald Dejong. 1981. Generalizations based on expla- nations. In Proceedings of the 7th international joint conference on Artiï¬cial intelligence (IJCAI).
David DeVault, Iris Oved, and Matthew Stone. 2006. Societal grounding is essential to meaningful lan- guage use. In Proceedings of the National Confer- ence on Artiï¬cial Intelligence, volume 21, page 747.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In North American Chapter of the As- sociation for Computational Linguistics (NAACL).
Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it ï¬x it for dialogue safety: Robustness from adversarial human In Proceedings of the 2019 Conference on attack. Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4529â4538.
Susan T. Dumais. 2004. Latent semantic analysis. An- nual Review of Information Science and Technology, 38(1):188â230.
Robin IM Dunbar. 1993. Coevolution of neocortical size, group size and language in humans. Behav- ioral and brain sciences, 16(4):681â694.
Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran, Tania Bedrax-Weiss, and Dan Roth. 2019. How large are lions? inducing distributions over quanti- tative attributes. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 3973â3983.
Desmond Elliott, Stella Frank, Khalil Simaâan, and Lu- cia Specia. 2016. Multi30k: Multilingual english- german image descriptions. In Workshop on Vision and Langauge at ACL â16.
J Elman. 1990. Finding structure in time. Cognitive Science, 14(2):179â211.
Katrin Erk and Sebastian Padó. 2008. A structured vector space model for word meaning in context. In Proceedings of the 2008 Conference on Empiri- cal Methods in Natural Language Processing, pages 897â906, Honolulu, Hawaii.
Susan Ervin-Tripp. 1973. Some strategies for the ï¬rst two years. In Timothy E. Moore, editor, Cognitive Development and Acquisition of Language, pages 261 â 286. Academic Press, San Diego.
Ali Farhadi, M Hejrati, M Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: Generating sen- In European Conference on tences from images. Computer Vision. Springer.
Yansong Feng and Mirella Lapata. 2010. Topic models for image annotation and text illustration. In Human Language Technologies: The 2010 Annual Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, pages 831â839, Los Angeles, California.
J. R. Firth. 1957. A synopsis of linguistic theory, 1930- 1955. Studies in Linguistic Analysis.
In 2013 IEEE Conference on Technologies for Practical Robot Applications (TePRA).
W. Nelson Francis. 1964. A standard sample of present-day english for use with digital computers. Report to the U.S Ofï¬ce of Education on Coopera- tive Research Project No. E-007.
Michael C Frank and Noah D Goodman. 2012. Pre- dicting pragmatic reasoning in language games. Sci- ence, 336(6084):998â998.
Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hier- archies via word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1199â1209.
Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nel- son F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating NLP Models via Contrast Sets. arXiv:2004.02709.
Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking nli systems with sentences that re- In Proceedings of quire simple lexical inferences. the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 2: Short Papers), pages 650â655.
The population frequencies of species and the estimation of population parameters. Biometrika, 40:237â264.
Herbert P Grice. 1975. Logic and conversation. Speech acts, pages 41â58. Brill. In
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A Smith. 2018. Annotation artifacts in natural lan- In Proceedings of the 2018 guage inference data. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107â112.
Stevan Harnad. 1990. The symbol grounding problem. Physica D, 42:335â346.
Zellig S Harris. 1954. Distributional structure. Word, 10:146â162.
David Harwath, Wei-Ning Hsu, and James Glass. 2020. Learning hierarchical discrete linguistic units from visually-grounded speech. In ICLR 2020.
David Harwath, Adrià Recasens, DÃdac SurÃs, Galen Chuang, Antonio Torralba, and James Glass. 2019. Jointly discovering visual objects and spoken words International Journal of from raw sensory input. Computer Vision.
He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in In Proceedings of the 2018 negotiation dialogues. Conference on Empirical Methods in Natural Lan- guage Processing, pages 2333â2343.
Susan J. Hespos and Elizabeth S. Spelke. 2004. Con- ceptual precursors to language. Nature, 430.
G. E. Hinton, J. L. McClelland, and D. E. Rumelhart. 1986. Distributed representations. Parallel Dis- tributed Processing: Explorations in the Microstruc- ture of Cognition, Volume 1: Foundations.
Geoffrey E. Hinton. 1990. Preface to the special issue on connectionist symbol processing. Artiï¬cial Intel- ligence, 46(1):1 â 4.
Julia Hirschberg and Christopher D Manning. 2015. Advances in natural language processing. Science, 349(6245):261â266.
Douglas Hofstadter and Emmanuel Sander. 2013. Sur- faces and essences: Analogy as the fuel and ï¬re of thinking. Basic Books.
Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degener- ation. In ICLR 2020.
Po-Yao Huang, Junjie Hu, Xiaojun Chang, and Alexan- der Hauptmann. 2020. Unsupervised multimodal neural machine translation with pseudo visual piv- In Proceedings of the 58th Annual Meet- oting. ing of the Association for Computational Linguistics, pages 8226â8237, Online.
Gabriel Ilharco, Yuan Zhang, and Jason Baldridge. 2019. Large-scale representation learning from visu- ally grounded untranscribed speech. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 55â65, Hong Kong, China.
Daniel L. James and Risto Miikkulainen. 1995. Sard- net: A self-organizing feature map for sequences. In Advances in Neural Information Processing Systems 7 (NIPSâ94), pages 577â584, Denver, CO. Cam- bridge, MA: MIT Press.
Michael I Jordan. 2019. Artiï¬cial intelligence â the rev- olution hasnât happened yet. Harvard Data Science Review.
Divyansh Kaushik, Eduard Hovy, and Zachary Lipton. 2020. Learning the difference that makes a differ- ence with counterfactually-augmented data. In Inter- national Conference on Learning Representations.
Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Pro- ceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing.
Teuvo Kohonen. 1984. Self-Organization and Associa- tive Memory. Springer.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin John- son, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, Michael S. Bernstein, and Fei-Fei Li. 2017. Vi- sual genome: Connecting language and vision us- ing crowdsourced dense image annotations. Interna- tional Journal of Computer Vision, 123(1):32â73.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hin- Imagenet classiï¬cation with deep con- ton. 2012. In F. Pereira, C. J. C. volutional neural networks. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097â1105. Curran Associates, Inc.
George Lakoff. 1973. Hedges: A study in meaning criteria and the logic of fuzzy concepts. Journal of Philosophical Logic, 2:458â508.
George Lakoff. 1980. Metaphors We Live By. Univer- sity of Chicago Press.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite bert for self-supervised learning In International Con- of language representations. ference on Learning Representations.
Ronald W Langacker. 1987. Foundations of cogni- tive grammar: Theoretical prerequisites, volume 1. Stanford university press.
Ronald W Langacker. 1991. Foundations of Cognitive Grammar: descriptive application., volume 2. Stan- ford university press.
Angeliki Lazaridou, Karl Moritz Hermann, Karl Tuyls, and Stephen Clark. 2018. Emergence of linguis- tic communication from referential games with sym- bolic and pixel input. In Internationl Conference on Learning Representations.
Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. 2017. Multi-agent cooperation and the emergence of (natural) language. In ICLR 2017.
Angeliki Lazaridou, Nghia The Pham, and Marco Ba- roni. 2016. The red one!: On learning to refer to In Pro- things based on discriminative properties. ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 213â218, Berlin, Germany.
Matthew Le, Y-Lan Boureau, and Maximilian Nickel. 2019. Revisiting the evaluation of theory of mind through question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5871â5876, Hong Kong, China.
Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning of negotiation dialogues. In Proceedings of the 2017 Conference on Empirical Methods in Natu- ral Language Processing, pages 2443â2453.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019a. VisualBERT: A Simple and Performant Baseline for Vision and Lan- guage. In Work in Progress.
Yong-Lu Li, Liang Xu, Xinpeng Liu, Xijie Huang, Yue Xu, Mingyang Chen, Ze Ma, Shiyi Wang, Hao-Shu Fang, and Cewu Lu. 2019b. HAKE: Human Activ- ity Knowledge Engine. arXiv:1904.06539.
Ling-Yi Lin, Rong-Ju Cherng, and Yung-Jung Chen. 2017. Effect of touch screen tablet use on ï¬ne motor development of young children. Physical & Occupa- tional Therapy In Pediatrics, 37(5):457â467. PMID: 28071977.
Tal Linzen. 2020. How can we accelerate progress to- wards human-like linguistic generalization? In As- sociation for Computational Linguistics (ACL).
Changsong Liu and Joyce Yue Chai. 2015. Learning to mediate perceptual differences in situated human- In Proceedings of the 29th AAAI robot dialogue. Conference on Artiï¬cial Intelligence, pages 2288â 2294.
Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Nose- worthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An em- pirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122â2132.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In Proceedings of International Conference on Com- puter Vision (ICCV).
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. In Advances in Neural Information Process- ing Systems, pages 13â23.
Li Lucy and Jon Gauthier. 2017. Are distributional representations ready for the real world? evaluat- ing word vectors for grounded perceptual meaning. In Proceedings of the First Workshop on Language Grounding for Robotics, pages 76â85, Vancouver, Canada. Association for Computational Linguistics.
Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. 2006. Walk the talk: Connecting language, knowledge, and action in route instructions. In Pro- ceedings of the 21st National Conference on Artiï¬- cial Intelligence (AAAI-2006), Boston, MA, USA.
Gary Marcus and Ernest Davis. 2019. Rebooting AI: Building Artiï¬cial Intelligence We Can Trust. Pan- theon.
Mitchell P Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computa- tional Linguistics, 19:313â330.
Lara J Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark O Riedl. 2018. Event representations for au- tomated story generation with deep neural nets. In Thirty-Second AAAI Conference on Artiï¬cial Intelli- gence.
Cynthia Matuszek. 2018. Grounded language learn- ing: Where robotics and nlp meet (early career spot- light). In Proceedings of the 27th International Joint Conference on Artiï¬cial Intelligence (IJCAI), Stock- holm, Sweden.
Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Advances in Neural In- formation Processing Systems, pages 6297â6308.
James L. McClelland, Felix Hill, Maja Rudolph, Ja- son Baldridge, and Hinrich Schütze. 2019. Ex- tending Machine Language Models toward Human- Level Language Understanding. arXiv:1912.05877.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- ality. Advances in Neural Information Processing Systems, 26.
Margaret Mitchell, Jesse Dodge, Amit Goyal, Kota Ya- maguchi, Karl Stratos, Xufeng Han, Alyssa Men- sch, Alexander C. Berg, Tamara L. Berg, and Hal Daumé III. 2012. Midge: Generating image descrip- tions from computer vision detections. In European Chapter of the Association for Computational Lin- guistics (EACL).
Tom M Mitchell. 1980. The need for biases in learning generalizations. Department of Computer Science, Laboratory for Computer Science Research.
Raymond J. Mooney. 2008. Learning to connect lan- In Proceedings of the 23rd guage and perception. AAAI Conference on Artiï¬cial Intelligence (AAAI), pages 1598â1601, Chicago, IL. Senior Member Pa- per.
Raymond J Mooney and Gerald Dejong. 1985. Learn- ing schemata for natural language processing. In Proceedings of the Ninth International Joint Confer- ence on Artiï¬cial Intelligence (IJCAI-85).
Daniele Moro and Casey Kennington. 2018. Multi- modal visual and simulated muscle activations for grounded semantics of hand-related descriptions. In Workshop on the Semantics and Pragmatics of Dia- logue. SEMDIAL.
Roozbeh Mottaghi, Mohammad Rastegari, Abhinav Gupta, and Ali Farhadi. 2016. âwhat happens if...â learning to predict the effect of forces in images. In Computer Vision â ECCV 2016, pages 269â285, Cham. Springer International Publishing.
Adithyavairavan Murali, Tao Chen, Kalyan Vasudev Alwala, Dhiraj Gandhi, Lerrel Pinto, Saurabh Gupta, and Abhinav Gupta. 2019. Pyrobot: An open-source robotics framework for research and benchmarking. arXiv preprint arXiv:1906.08236.
Aida Nematzadeh, Kaylee Burns, Erin Grant, Alison Gopnik, and Tom Grifï¬ths. 2018. Evaluating theory In Proceedings of of mind in question answering. the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 2392â2400.
NVIDIA. 2019. NVIDIA Isaac software develop- https://developer.nvidia.com/ ment kit. isaac-sdk. Accessed 2019-12-09.
Elinor Ochs. 1993. Constructing social identity: A lan- guage socialization perspective. Research on lan- guage and social interaction, 26(3):287â306.
William OâGrady. 2005. How Children Learn Lan- guage. Cambridge University Press.
Veronica Ornaghi, Jens Brockmeier, and Ilaria Graz- zani Gavazzi. 2011. The role of language games in childrenâs understanding of mental states: A train- ing study. Journal of cognition and development, 12(2):239â259.
Alexander Ororbia, Ankur Mali, Matthew Kelly, and David Reitter. 2019. Like a baby: Visually situated neural language acquisition. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5127â5136, Florence, Italy.
Denis Paperno, Germán Kruszewski, Angeliki Lazari- dou, Ngoc-Quan Pham, Raffaella Bernardi, San- dro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fernández. 2016. The LAMBADA dataset: Word prediction requiring a broad discourse context. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1525â1534.
Rohan Paul, Jacob Arkin, Derya Aksaray, Nicholas Roy, and Thomas M Howard. 2018. Efï¬cient grounding of abstract spatial concepts for nat- ural language interaction with robot platforms. The International Journal of Robotics Research, 37(10):1269â1299.
Haoruo Peng, Daniel Khashabi, and Dan Roth. 2015. Solving hard coreference problems. In Proceedings of the 2015 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 809â819.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 1532â1543, Doha, Qatar.
Don Perlis. 2016. Five dimensions of reasoning in the wild. In Association for the Advancement of Artiï¬- cial Intelligence (AAAI).
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In North American Chapter of the Asso- ciation for Computational Linguistics (NAACL).
John R Pierce. 1969. Whither speech recognition? The journal of the acoustical society of america, 46(4B):1049â1051.
Nicolas Pinto, David Doukhan, James J DiCarlo, and David D Cox. 2009. A high-throughput screening approach to discovering good forms of biologically inspired visual representation. PLoS computational biology, 5(11):e1000579.
Jordan B. Pollack. 1987. On Connectionist Models of Natural Language Processing. Ph.D. thesis, Univer- sity of Illinois.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- tions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 784â789.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open- domain conversation models: A new benchmark and In Proceedings of the 57th Annual Meet- dataset. ing of the Association for Computational Linguistics, pages 5370â5381, Florence, Italy.
Michaela Regneri, Marcus Rohrbach, Dominikus Wet- zel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. Transactions of the Association for Compu- tational Linguistics (TACL), 1:25â36.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, and Jason Weston. 2020. Recipes for building an open- domain chatbot. In arXiv.
Stephanie Rosenthal, Joydeep Biswas, and Manuela Veloso. 2010. An effective personal mobile robot agent through symbiotic human-robot interaction. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems: vol- ume 1-Volume 1, pages 915â922. International Foun- dation for Autonomous Agents and Multiagent Sys- tems.
Corby Rosset. 2020. Turing-NLG: A 17-billion- parameter language model by Microsoft.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, An- drej Karpathy, Aditya Khosla, Michael Bernstein, Ima- Alexander C. Berg, and Li Fei-Fei. 2015. geNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211â252.
Jacqueline Sachs, Barbara Bard, and Marie L Johnson. 1981. Language learning with restricted input: Case studies of two hearing children of deaf parents. Ap- plied Psycholinguistics, 2(1):33â54.
Ian Sample and Alex Hern. 2014. Scientists dispute whether computer âeugene goostmanâ passed turing test. The Guardian, 9.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Com- monsense reasoning about social interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4462â 4472, Hong Kong, China.
Rosario Scalise, Jesse Thomason, Yonatan Bisk, and Siddhartha Srinivasa. 2019. Improving robot suc- cess detection using static object data. In Proceed- ings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems.
Roger C. Schank and Robert P. Abelson. 1977. Scripts, Plans, Goals and Understanding: an Inquiry into Human Knowledge Structures. L. Erlbaum, Hills- dale, NJ.
and Tal Linzen. 2019. Quantity doesnât buy quality syn- tax with neural language models. arXiv preprint arXiv:1909.00111.
David Schlangen. 2019a. Grounded agreement games: Emphasizing conversational grounding in visual dia- logue settings. arXiv.
David Schlangen. 2019b. Language tasks and language games: On methodology in current natural language processing research. arXiv.
Iulian V. Serban, Chinnadhurai Sankar, Mathieu Ger- main, Saizheng Zhang, Zhouhan Lin, Sandeep Sub- ramanian, Taesup Kim, Michael Pieper, Sarath
Chandar, Nan Rosemary Ke, Sai Rajeshwar, Alexan- Jose M. R. Sotelo, Dendi dre de Brebisson, Suhubdy, Vincent Michalski, Alexandre Nguyen, Joelle Pineau, and Yoshua Bengio. 2017. A deep arXiv preprint reinforcement arXiv:1709.02349.
Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. 2018. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for au- In Proceedings of the tomatic image captioning. 56th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 2556â2565, Melbourne, Australia.
Lanbo She, Shaohua Yang, Yu Cheng, Yunyi Jia, Joyce Y. Chai, and Ning Xi. 2014. Back to the blocks world: Learning new actions through situated human-robot dialogue. In Proceedings of 15th SIG- DIAL Meeting on Discourse and Dialogue.
Haoyue Shi, Jiayuan Mao, Kevin Gimpel, and Karen Livescu. 2019. Visually grounded neural syntax ac- quisition. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 1842â1861, Florence, Italy.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catan- zaro. 2019. Megatron-lm: Training multi-billion parameter language models using gpu model paral- lelism. arXiv preprint arXiv:1909.08053.
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. 2020. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. Computer Vision and Pattern Recognition (CVPR).
Learn- ing grounded meaning representations with autoen- In Proceedings of the 52nd Annual Meet- coders. ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 721â732, Balti- more, Maryland.
and Alexander Stoytchev. 2014. Learning relational object cate- gories using behavioral exploration and multimodal In IEEE International Conference on perception. Robotics and Automation.
Linda Smith and Michael Gasser. 2005. The develop- ment of embodied cognition: Six lessons from ba- bies. Artiï¬cial life, 11(1-2):13â29.
Paul Smolensky. 1990. Tensor product variable bind- ing and the representation of symbolic structures Artiï¬cial Intelligence, in connectionist systems. 46:159â216.
Richard Socher, Brody Huval, Christopher Manning, and Andrew Ng. 2012. Semantic compositional- In Em- ity through recursive matrix-vector spaces. pirical Methods in Natural Language Processing (EMNLP).
T. Srinivasan, R. Sanabria, and F. Metze. 2020. Look- ing enhances listening: Recovering missing speech using images. In ICASSP 2020 - 2020 IEEE Interna- tional Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6304â6308.
Mark Steedman. 2008. Last words: On becoming a dis- cipline. Computational Linguistics, 34(1):137â144.
Greg J Stephens, Lauren J Silbert, and Uri Hasson. 2010. Speakerâlistener neural coupling underlies successful communication. Proceedings of the Na- tional Academy of Sciences, 107(32):14425â14430.
Alane Suhr, Claudia Yan, Jack Schluger, Stanley Yu, Hadi Khader, Marwa Mouallem, Iris Zhang, and Yoav Artzi. 2019a. Executing instructions in situ- In Proceedings of ated collaborative interactions. the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2119â2130, Hong Kong, China.
Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019b. A corpus for reasoning about natural language grounded in pho- tographs. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 6418â6428, Florence, Italy.
Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. 2019a. Contrastive bidirectional transformer for temporal representation learning. arxiv:1906.05743.
Chen Sun, Austin Myers, Carl Vondrick, Kevin Mur- phy, and Cordelia Schmid. 2019b. VideoBERT: A Joint Model for Video and Language Representation Learning. In International Conference on Computer vision.
Yee-Whye Teh. 2006. A hierarchical bayesian lan- In guage model based on pitman-yor processes. Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meet- ing of the Association for Computational Linguistics, pages 985â992, Sydney, Australia.
Stefanie Tellex, Nakul Gopalan, Hadas Kress-Gazit, and Cynthia Matuszek. 2020. Robots that use lan- guage. The Annual Review of Control, Robotics, and Autonomous Systems, 15.
Stefanie Tellex, Ross Knepper, Adrian Li, Daniela Rus, and Nicholas Roy. 2014. Asking for help using in- In Proceedings of Robotics: Sci- verse semantics. ence and Systems (RSS), Berkeley, California.
Stefanie Tellex, Thomas Kollar, Steven Dickerson, Matthew R Walter, Ashis Gopal Banerjee, Seth Teller, and Nicholas Roy. 2011. Understanding nat- ural language commands for robotic navigation and In Proceedings of the Na- mobile manipulation. tional Conference on Artiï¬cial Intelligence.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. In BERT rediscovers the classical NLP pipeline. Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593â 4601, Florence, Italy.
Esther Thelen and Linda B. Smith. 1996. A Dynamic Systems Approach to the Development of Cognition and Action. MIT Press.
Jesse Thomason, Daniel Gordon, and Yonatan Bisk. 2019a. Shifting the baseline: Single modality perfor- mance on visual navigation & QA. In North Amer- ican Chapter of the Association for Computational Linguistics (NAACL).
Jesse Thomason, Michael Murray, Maya Cakmak, and Luke Zettlemoyer. 2019b. Vision-and-dialog navi- gation. In Conference on Robot Learning (CoRL).
Jivko Sinapov, Justin Hart, Peter Stone, and Raymond J. Mooney. 2017. Opportunistic active learning for In Pro- grounding natural language descriptions. ceedings of the 1st Annual Conference on Robot Learning (CoRL).
Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedid- sion, Justin Hart, Peter Stone, and Raymond J. Mooney. 2020. Jointly improving parsing and per- ception for natural language commands through human-robot dialog. The Journal of Artiï¬cial Intel- ligence Research (JAIR), 67.
Jesse Thomason, Jivko Sinapov, Maxwell Svetlik, Pe- ter Stone, and Raymond J. Mooney. 2016. Learning multi-modal grounded linguistic semantics by play- In International Joint Conference on ing âI spyâ. Artiï¬cial Intelligence (IJCAI).
Emanuel Todorov, Tom Erez, and Yuval Tassa. 2012. Mujoco: A physics engine for model-based con- In 2012 IEEE/RSJ International Conference trol. on Intelligent Robots and Systems, pages 5026â5033. IEEE.
Michael Tomasello. 2009. Constructing a language. Harvard university press.
Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Rus- lan Salakhutdinov. 2019. Multimodal transformer In for unaligned multimodal language sequences. Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6558â 6569, Florence, Italy.
Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general In Proceed- method for semi-supervised learning. ings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 384â394.
Alan M Turing. 1950. Computing machinery and intel- ligence. Mind.
Peter D Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of artiï¬cial intelligence research, 37:141â188.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all In Advances in neural information pro- you need. cessing systems, pages 5998â6008.
Gabriella Vigliocco, Pamela Perniss, and David Vinson. 2014. Language as a multimodal phenomenon: im- plications for language learning, processing and evo- lution.
Harm de Vries, Dzmitry Bahdanau, and Christopher Towards ecologically valid re- Manning. 2020. search on language user interfaces. In arXiv.
Matthew Walter, Sachithra Hemachandra, Bianca Homberg, Stefanie Tellex, and Seth Teller. 2013. Learning semantic maps from natural language de- scriptions. In Proceedings of Robotics: Science and Systems (RSS), Berlin, Germany.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019a. GLUE: A multi-task benchmark and analysis plat- In Inter- form for natural language understanding. national Conference on Learning Representations.
Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan- Fang Wang, and William Yang Wang. 2019b. Vatex: A large-scale, high-quality multilingual dataset for video-and-language research. In The IEEE Interna- tional Conference on Computer Vision (ICCV).
Ronald Wardhaugh. 2011. An introduction to sociolin- guistics, volume 28. John Wiley & Sons.
Joseph Weizenbaum. 1966. Eliza â a computer pro- gram for the study of natural language communica- tion between man and machine. Communications of the ACM, 9(1):36â45.
Lloyd R Welch. 2003. Hidden markov models and the IEEE Information Theory baum-welch algorithm. Society Newsletter, 53(4):1â24.
Gregory M Werner and Michael G Dyer. 1991. Evolu- tion of communication in artiï¬cial organisms. ALife.
Terry Winograd. 1971. Procedures as a representation for data in a computer program for understanding natural language. Technical report, Massachusetts Institute of Technology, Project MAC.
Ludwig Wittgenstein. 1953. Philosophical Investiga- tions. Macmillan.
Ludwig Wittgenstein. 1958. The blue and brown books. Basil Blackwell.
Fanbo Xiang, Yuzhe Qin, Kaichun Mo, Yikuan Xia, Hao Zhu, Fangchen Liu, Minghua Liu, Hanxiao Jiang, Yifu Yuan, He Wang, Li Yi, Angel X. Chang, Leonidas J. Guibas, and Hao Su. 2020. SAPIEN: A simulated part-based interactive environment. In Computer Vision and Pattern Recognition (CVPR).
Semih Yagcioglu, Aykut Erdem, Erkut Erdem, and Na- zli Ikizler-Cinbis. 2018. RecipeQA: A challenge dataset for multimodal comprehension of cooking recipes. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1358â1368, Brussels, Belgium.
Diyi Yang, Jiaao Chen, Zichao Yang, Dan Jurafsky, and Eduard Hovy. 2019a. Letâs make your request more persuasive: Modeling persuasive strategies via semi- supervised neural nets on crowdfunding platforms. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019b. Xlnet: Generalized autoregressive pretrain- ing for language understanding. In Advances in Neu- ral Information Processing Systems 32 (NIPS 2019).
Mark Yatskar, Luke Zettlemoyer, and Ali Farhadi. 2016. Situation recognition: Visual semantic role labeling for image understanding. In Conference on Computer Vision and Pattern Recognition.
Amir Zadeh, Michael Chan, Paul Pu Liang, Edmund Tong, and Louis-Philippe Morency. 2019. Social-iq: A question answering benchmark for artiï¬cial social intelligence. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019a. From recognition to cognition: Vi- sual commonsense reasoning. In The IEEE Confer- ence on Computer Vision and Pattern Recognition (CVPR).
Rowan Zellers, Ari Holtzman, Elizabeth Clark, Lianhui Qin, Ali Farhadi, and Yejin Choi. 2020. Evaluating machines by their real-world language use. arXiv preprint arXiv:2004.03607.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019b. Defending against neural fake In Thirty-third Conference on Neural Infor- news. mation Processing Systems.
Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2020. The design and implementation of xiaoice, an empathetic social chatbot. Computational Linguis- tics, 46(1):53â93.
Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2019. Uni- ï¬ed vision-language pre-training for image caption- ing and vqa. In Thirty-Fourth AAAI Conference on Artiï¬cial Intelligence. | {
"id": "2004.03607"
} |
2004.08994 | Adversarial Training for Large Neural Language Models | Generalization and robustness are both key desiderata for designing machine
learning methods. Adversarial training can enhance robustness, but past work
often finds it hurts generalization. In natural language processing (NLP),
pre-training large neural language models such as BERT have demonstrated
impressive gain in generalization for a variety of tasks, with further
improvement from adversarial fine-tuning. However, these models are still
vulnerable to adversarial attacks. In this paper, we show that adversarial
pre-training can improve both generalization and robustness. We propose a
general algorithm ALUM (Adversarial training for large neural LangUage Models),
which regularizes the training objective by applying perturbations in the
embedding space that maximizes the adversarial loss. We present the first
comprehensive study of adversarial training in all stages, including
pre-training from scratch, continual pre-training on a well-trained model, and
task-specific fine-tuning. ALUM obtains substantial gains over BERT on a wide
range of NLP tasks, in both regular and adversarial scenarios. Even for models
that have been well trained on extremely large text corpora, such as RoBERTa,
ALUM can still produce significant gains from continual pre-training, whereas
conventional non-adversarial methods can not. ALUM can be further combined with
task-specific fine-tuning to attain additional gains. The ALUM code is publicly
available at https://github.com/namisan/mt-dnn. | http://arxiv.org/pdf/2004.08994 | Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, Jianfeng Gao | cs.CL | 13 pages, 9 tables, 2 figures | null | cs.CL | 20200420 | 20200429 | 0 2 0 2
r p A 9 2 ] L C . s c [
2 v 4 9 9 8 0 . 4 0 0 2 : v i X r a
# Adversarial Training for Large Neural Language Models
# Xiaodong Liuâ , Hao Chengâ , Pengcheng Heâ¡, Weizhu Chenâ¡, Yu Wangâ , Hoifung Poonâ , Jianfeng Gaoâ
# â Microsoft Research
# â¡ Microsoft Dynamics 365 AI
# {xiaodl,chehao,penhe,wzchen,yuwan,hoifung,jfgao}@microsoft.com
# Abstract
Generalization and robustness are both key desiderata for designing machine learning methods. Adversarial training can enhance robustness, but past work often ï¬nds it hurts generalization. In natural language process- ing (NLP), pre-training large neural language models such as BERT have demonstrated im- pressive gain in generalization for a variety of tasks, with further improvement from ad- versarial ï¬ne-tuning. However, these mod- els are still vulnerable to adversarial attacks. In this paper, we show that adversarial pre- training can improve both generalization and robustness. We propose a general algorithm ALUM (Adversarial training for large neu- ral LangUage Models), which regularizes the training objective by applying perturbations in the embedding space that maximizes the ad- versarial loss. We present the ï¬rst comprehen- sive study of adversarial training in all stages, including pre-training from scratch, continual pre-training on a well-trained model, and task- speciï¬c ï¬ne-tuning. ALUM obtains substan- tial gains over BERT on a wide range of NLP tasks, in both regular and adversarial scenarios. Even for models that have been well trained on extremely large text corpora, such as RoBERTa, ALUM can still produce signiï¬cant gains from continual pre-training, whereas conventional non-adversarial meth- ods can not. ALUM can be further combined with task-speciï¬c ï¬ne-tuning to attain addi- tional gains. The ALUM code is publicly avail- able at https://github.com/namisan/mt-dnn.
# Introduction
Generalization and robustness are two fundamental considerations in assessing machine learning meth- ods. Ideally, a learned model should perform well on unseen test examples and withstand adversarial attacks. In natural language processing (NLP), pre- training neural language models on unlabeled text
has proven very effective to improve generaliza- tion performance for a variety of downstream tasks, as exempliï¬ed by BERT (Devlin et al., 2018) and other transformer-based models (Liu et al., 2019c; Radford et al., 2018; Clark et al., 2020; Dong et al., 2019; Bao et al., 2020). However, these models may still suffer catastrophic failures in adversarial scenarios (Nie et al., 2019; Hsieh et al., 2019). For example, Jin et al. (2019) show that classiï¬cation accuracy on a Yelp dataset drops from 95.6% on standard test to 6.8% on robust test for a BERT model.
Adversarial training (Madry et al., 2017; Good- fellow et al., 2014) has been well studied in com- puter vision, but past work shows that it often hurts generalization (Raghunathan et al., 2019; Min et al., 2020). In NLP, there is growing interest in adver- sarial training, but existing work typically focuses on assessing the impact on generalization (Zhu et al., 2019; Jiang et al., 2019; Cheng et al., 2019; Wang et al., 2019). Moreover, adversarial training is generally limited to task-speciï¬c ï¬ne-tuning1. See Minaee et al. (2020a) for a recent survey.
In this paper, we present the ï¬rst comprehensive study on adversarial pre-training, and show that it can improve both generalization and robustness for a wide range of NLP tasks. We propose a unifying algorithm ALUM (Adversarial training for large neural LangUage Models), which augments the standard training objective with an additional term that maximizes the adversarial loss via applying perturbation in the embedding space. ALUM is generally applicable to pre-training and ï¬ne-tuning, on top of any Transformer-based language models. We conduct a comprehensive evaluation on vari- ous NLP tasks across multiple benchmark datasets, including GLUE, SQuAD v1.1/v2.0, SNLI, Sci- Tail for assessing model generalization, and ANLI,
1A notable exception is Wang et al. (2019), but it only applied adversarial training to generative language modeling.
1
HELLSWAG, SWAG, Adversarial SQuAD for as- sessing model robustness. Experimental results show that by conducting adversarial pre-training, ALUM attains signiï¬cant improvements, often out- performing previous state of the art by a large mar- gin. This is true even for the extremely well-trained RoBERTa model, where continual pre-training without adversarial training fails to attain any gain. Remarkably, in addition to improving generaliza- tion, we ï¬nd that adversarial pre-training also sub- stantially improves robustness, as exempliï¬ed by the resulting large gains in adversarial datasets such as ANLI, Adversarial-SQuAD, HELLASWAG, which signiï¬cantly reduces the gap between stan- dard errors and robust errors for popular models like BERT and RoBERTa. This suggests that ad- versarial training on unlabeled data can provide a promising direction to reconcile the apparent con- ï¬ict between generalization and robustness as ob- served in prior work (Raghunathan et al., 2019; Min et al., 2020). We also show that adversarial pre-training can be combined with adversarial ï¬ne- tuning, resulting in extra gains.
Our contributions are summarized as follows:
⢠We propose ALUM, a general algorithm to in- corporate adversarial training for pre-training and ï¬ne-tuning large neural language models.
⢠We conduct a comprehensive evaluation on a wide range of NLP tasks and assess the impact of adversarial training in pre-training from scratch, continual pre-training, task-speciï¬c ï¬ne-tuning, and their combinations.
⢠We obtain signiï¬cant improvements over prior state of the art, including extremely well- trained models such as RoBERTa, in both gen- eralization and robustness.
⢠To facilitate research, we will release our code and pre-trained models.
# 2 Preliminary
In this section, we give a quick overview of lan- guage model pre-training, using BERT (Devlin et al., 2018) as a running example for transformer- based neural language models.
# 2.1 Input Representation
We assume that the input consists of text spans (typically sentences) separated by a special to- ken [SEP ]. To address the problem of out-of- vocabulary words, tokens are divided into subword
2
units, using Byte-Pair Encoding (BPE) (Sennrich et al., 2015) or its variants (Kudo and Richardson, 2018), which generates a ï¬xed-size subword vo- cabulary to compactly represent words in training text corpora.
# 2.2 Model Architecture
Following recent pre-training methods (Devlin et al., 2018; Liu et al., 2019c), we use transformer- based models (Vaswani et al., 2017) to lever- age a multi-head attention mechanism, which have demonstrated superiority in parallel computa- tion and modeling long-range dependencies, com- pared to recurrent neural networks such as LSTM (Hochreiter and Schmidhuber, 1997). The input is ï¬rst passed to a lexical encoder, which combines a token embedding, a (token) position embedding and a segment embedding (i.e., which text span the token belongs to) by element-wise summation. The embedding layer is then passed to multiple layers of transformer modules to generate the contextual representation (Vaswani et al., 2017).
# 2.3 Self Supervision
A key innovation in BERT (Devlin et al., 2018) is the use of Masked Language Model (MLM) for self-supervised pre-training. Instead of predicting the next token based on the preceding tokens, as in traditional generative language models, MLM randomly replaces a subset of tokens by a special token (e.g., [M ASK]), and asks the model to pre- dict them. Essentially, it is a cloze task (Taylor, 1953), where the training objective is the cross- entropy loss between the original tokens and the predicted ones. In BERT and RoBERTa, 15% of the input tokens are chosen, among which a ran- dom 80% are replaced by [M ASK], 10% are left unchanged and 10% are randomly replaced by a token from the vocabulary. In our experiments, instead of using a ï¬xed masked rate of 15%, we gradually increase it from 5% to 25% with 5% in- crement for every 20% of training epochs, as we ï¬nd this makes pre-training more stable.
Additionally, BERT also uses Next Sentence Prediction (NSP), which is a binary classiï¬ca- tion task that for a given sentence pair determines whether one sentence follows the other in the orig- inal text. There have debates on how much NSP helps (Liu et al., 2019c). But we include it in our experiments for a fair comparison with BERT.
# 3 ALUM (Adversarial training for large neural LangUage Models)
In this section, we ï¬rst present a unifying view of standard training objectives and prior approaches to adversarial training. We then present ALUM, which is a general adversarial training algorithm applicable to pre-training and ï¬ne-tuning, on top of any transformer-based neural language models.
# 3.1 Standard Training Objectives
Both pre-training and ï¬ne-tuning can be viewed as minimizing the standard error on training data, with the training objectives derived from self- supervision (MLM and NSP in pre-training) or di- rect supervision (labeled examples in task-speciï¬c ï¬ne-tuning), respectively.
Speciï¬cally, the training algorithms seek to learn a function f (x; θ) : x â C, parametrized by θ. In MLM, C is the vocabulary, and f (x; θ) tries to pre- dict the masked token y. In ï¬ne-tuning, C is the task-speciï¬c label set, and f (x; θ) is the classiï¬er. Given a training dataset D of input-output pairs (x, y) and the loss function l(., .) (e.g., cross en- tropy), f (x; θ) is trained to minimize the empirical risk:
min θ E(x,y)â¼D[l(f (x; θ), y)] (1)
# 3.2 Adversarial Training
Pre-training a large neural language model such as BERT has proven effective to improve gener- alization performance in task-speciï¬c ï¬ne-tuning (Devlin et al., 2018). However, such models can still suffer catastrophic loss in adversarial scenar- ios (Nie et al., 2019; Hsieh et al., 2019; Madry et al., 2017; Jin et al., 2019), with attacks as simple as replacing a few words in input sentences while preserving the semantics.
To improve model robustness and withstand ad- versarial attacks, adversarial training has been pro- posed and studied extensively, predominantly in computer vision literature (Goodfellow et al., 2014; Madry et al., 2017). The key idea is to modify the training objective by applying small perturbation to input images that maximize the adversarial loss:
min θ E(x,y)â¼D[max δ l(f (x + δ; θ), y)] (2)
where the inner maximization can be solved by running a number of projected gradient descent steps (Madry et al., 2017).
While adversarial training has been successful in mitigating adversarial attacks, past work often
3
encounters an apparent conï¬ict between general- ization and robustness (Raghunathan et al., 2019, 2020; Min et al., 2020), as adversarial training could hurt generalization performance.
# 3.3 The ALUM Algorithm
In NLP, applying adversarial training is not straight- forward, since the input are discrete elements (to- ken or subword sequences), but there have been some recent successes (Zhu et al., 2019; Jiang et al., 2019; Cheng et al., 2019; Wang et al., 2019; Mi- naee et al., 2020b). However, aside from Wang et al. (2019), there has not been any prior work on ad- versarial pre-training, and Wang et al. (2019) only applied adversarial training to generative language modeling using LSTM.
ALUM is applicable to both pre-training and ï¬ne-tuning. It builds on several key ideas that have proven useful in prior work. First, instead of ap- plying perturbation to the input text directly, one would perturb the embedding space. Namely, x is the sub-word embedding in f (x; θ) (Jiang et al., 2019; Zhu et al., 2019).
Second, instead of adopting the adversarial train- ing objective of Eq. 2, as in Zhu et al. (2019) and most other approaches, we follow Jiang et al. (2019) to regularize the standard objective using virtual adversarial training (Miyato et al., 2018):
min θ α max δ E(x,y)â¼D[l(f (x; θ), y)+ l(f (x + δ; θ), f (x; θ))] (3)
Effectively, term favors label smoothness in the embedding neighborhood, and α is a hyperparameter that controls the trade-off between standard errors and robust errors.
We found that virtual adversarial training is su- perior to conventional adversarial training, espe- cially when labels might be noisy. E.g., BERT pre- training uses the masked words as self-supervised labels, but in many cases, they could be replaced by other words to form completely legitimate text. Em- pirically, we veriï¬ed that this is indeed the case, as pre-training beneï¬ts from larger α. We set α = 10 for pre-training, and α = 1 for ï¬ne-tuning in all our experiments.
Compared to standard training, adversarial train- ing is rather expensive due to the inner maximiza- tion. Zhu et al. (2019) adopted the free adversar- ial training idea in Shafahi et al. (2019) for accel- eration, by reusing the backward pass for gradi- ent computation to carry out the inner ascent step
# Algorithm 1 ALUM Input: T :
Input: T: the total number of iterations, Â¥ = {(1,y1), +5 (@n;Yn)}: the dataset, f(a; 0): the machine learning model parametrized by 0, o?: the variance of the random initialization of perturbation 6, â¬: perturbation bound, /c: the number of iterations for perturbation estima- tion, 7): the step size for updating perturbation, T: the global learning rate, a: the smoothing proportion of adversarial training in the aug- mented learning objective, II: the projection operation.
1: for t = 1, .., T do 2:
2: for (x,y) ⬠¥ do 3: On N(0, o?1) 4: for m = 1,.., K do 5: Jadv â Vol f(a; 8), f(@ +4; 9)) 6: Oe Thyso<e(6 + "Gadv) 7: end for 8: go < Vol( f(a; 9), y) +aVol(f(0;8), fla + 6:4) 9: 06+ 0-âTg6 10: end for 11: end for Output: 0
and outer descent step simultaneously. Inspired by ERNIE (Sun et al., 2019) and other continual pre-training approaches, we instead adopt a curricu- lum learning approach: ï¬rst train the model using the standard objective (1); and then continue the training with virtual adversarial training (3).
Jiang et al. (2019) also incorporated a momen- tum term using the Bregman proximate point method, which can be quite costly in training time. We found that our curriculum learning approach largely rendered this unnecessary and simpliï¬ed our algorithm without using this term.
Algorithm 1 shows the details of ALUM. Line 4-6 run K projected gradient steps to ï¬nd the per- turbation δ that maximizes the adversarial loss (vi- olation of local smoothness). Note that a larger K leads to better approximation (Madry et al., 2017; Qin et al., 2019), but it is more expensive. To attain a good trade-off between speed and performance, we set K = 1 in all our experiments.
# 3.4 Generalization vs. Robustness
Empirically, we found that by applying adversarial pre-training using ALUM, we were able to improve both generalization and robustness for a wide range
4
of NLP tasks, as seen in Section 4. This is very interesting as prior work often ï¬nds that adversarial training hurts generalization, even with theoretical justiï¬cation (Raghunathan et al., 2019, 2020; Min et al., 2020).
We hypothesize that adversarial pre-training might be the key for reconciling this apparent in- congruence, as prior work on the conï¬ict between generalization and robustness generally focuses on the supervised learning setting. Interestingly, some nascent results in reconciling the two also leverage unlabeled data, such as self-training (Raghunathan et al., 2020). Additionally, we hypothesize that by perturbing the embedding space rather than the input space, adversarial training in NLP might in- advertently bias toward on-manifold perturbation than regular perturbation, which helps generaliza- tion (Stutz et al., 2019). We leave the theoretical analysis of all these connections to future work.
4 Experiments In this section, we present a comprehensive study of adversarial training on large neural language models. We show that ALUM substantially im- proves both generalization and robustness in a wide range of NLP tasks, for both the standard BERT model and the extremely well-trained RoBERTa model. We also show that ALUM can be applied to adversarial pre-training and ï¬ne-tuning alike and attain further gain by combining the two.
# 4.1 Datasets
Pre-training: For BERT pre-training, we use Wikipedia (English Wikipedia dump2; 13GB). For continual pre-training of RoBERTa, we use Wikipedia (13GB), OPENWEBTEXT (public Red- dit content (Gokaslan and Cohen); 38GB), STO- RIES (a subset of CommonCrawl (Trinh and Le, 2018); 31GB).
NLP application benchmarks: To assess the impact of adversarial training on generalization, we use standard benchmarks such as GLUE (Wang et al., 2018) and SQuAD (v1.1 and v2.0) (Ra- jpurkar et al., 2016, 2018), as well as three named entity recognition (NER) tasks in the biomedical domain. To evaluate the impact of adversarial training on robustness, we use ANLI (Nie et al., 2019), Adversarial SQuAD (Jia and Liang, 2017), and HELLASWAG (Hampel, 1974). To assess the combination of adversarial pre-training and
2https://dumps.wikimedia.org/enwiki/
ï¬ne-tuning, we follow Jiang et al. (2019) and use MNLI (Williams et al., 2018) (from GLUE), ANLI, SWAG (Zellers et al., 2018), SNLI (Bowman et al., 2015), SciTail (Khot et al., 2018). These bench- marks cover a wide range of NLP tasks such as named entity recognition, textual entailment, and machine reading comprehension, spanning classi- ï¬cation, ranking, and regression. For details, see Appendix A.
# Implementation Details
We perform three types of adversarial training in our experiments: pre-training from scratch, con- tinual pre-training on a well-trained model, and task-speciï¬c ï¬ne-tuning.
We pre-train BERT models from scratch us- ing Wikipedia®. The training code is based on Megatron, implemented in PyTorch (Shoeybi et al., 2019)*. We use ADAM (Kingma and Ba, 2014) for the optimizer with a standard learning rate schedule that increases linearly from zero to the peak rate of 1 x 107? in first one percent of steps, and then decays linearly to zero in the remaining 99% of steps. Following Devlin et al. (2018), training is done for one million steps with batch size of 256. We set the perturbation size « = 1 x 10~°, the step sizen = 1x 10â%, and the variance for initializ- ing perturbation ¢ = 1 x 107°. We set a = 10 for heightened regularization in virtual adversarial training, and set K = 1 for training efficiency (i.e., one projected gradient step). The training takes 10 days on one DGX-2 machine with 16 V100 GPUs. For continual pre-training of RoBERTa (Liu et al., 2019c), we use ROBERTaâs default train- ing parameters, except a smaller learning rate (4 x 107), and run for 100K training steps with a batch size of 256 on the union of Wikipedia, OPEN- WEBTEXT, and STORIES (total size 82GB). The code is based on FairSeq>. The training takes 7 days on two DGX-2 machines.
For ï¬ne-tuning with or without adversarial train- ing, we use the MT-DNN open-sourced toolkit (Liu et al., 2020, 2015)6. We follow Jiang et al. (2019) for head-to-head comparison, which uses ADAM (Kingma and Ba, 2014) and RADAM (Liu et al., 2019a) as our optimizers, with peak learning rates of {5 à 10â6, 8 à 10â6, 1 à 10â5, 2 à 10â5}, and batch sizes of 16, 32 or 64, depending on the tasks.
3BookCorpus is no longer publicly available. 4https://github.com/NVIDIA/Megatron-LM 5https://github.com/pytorch/fairseq 6https://github.com/namisan/mt-dnn
5
The dropout rate is set to 0.1 for all the task-speciï¬c layers, except 0.3 for MNLI and 0.05 for CoLA. To avoid gradient exploding, the gradient is clipped to keep the norm within 1. All the texts are tokenized using WordPiece and chopped to spans up to 512 tokens. We conduct ï¬ne-tuning for up to 10 epochs and pick the best model using the dev set.
# 4.3 Improving Generalization
In this subsection, we study the impact of adver- sarial pre-training on generalization, by comparing the performance of pre-trained models in various downstream applications. First, we study the sce- nario of pre-training from scratch, by comparing three BERT models:
⢠BERTBASE is the standard BERT base model trained using the same setting as Devlin et al. (2018) (i.e., 1M steps with a batch size of 256).
⢠BERT+BASE is similar to BERTBASE, except that it is trained with 1.6M steps, which takes roughly the same amount of time as that of ad- versarial pre-training (see ALUMBERT-BASE below).
⢠ALUMBERT-BASE is a BERT model trained using the same setting as BERTBASE, except that ALUM is used in the last 500K steps. Each adversarial training step takes approxi- mately 1.5 times longer than a step in standard training7.
Model SQuAD v1.1/v2.0 F1/EM F1/EM MNLI m/mm Acc BERTBASE 88.5/81.0 76.5/72.9 84.5/84.4 89.6/82.4 77.8/74.0 85.0/84.8 BERT+BASE ALUMBERT-BASE 90.8/83.7 80.2/76.6 85.8/86.1
Table 1: Comparison of standard and adversar- ial pre-training on SQuAD (v1.1 and v2.0) and MNLI (in-domain and out-domain). BERTBASE and ALUMBERT-BASE both use 1M pre-training steps, and BERT+BASE use 1.6M steps.
Table 1 compares these pre-trained models on three standard benchmarks (SQuAD v1.1 (Ra- jpurkar et al., 2016) and v2.0 (Rajpurkar et al.,
7With K=1 in Algorithm 1, ALUM requires two more forward passes and one more backward pass compared to standard training.
ied a 2 84 = oS J. & 82 ae 3 - 2 - 80, se BERT âam ALUM ok 200k 300k 400k 500k 600k 700k 800k 900k 1M Number of pre-training steps
Figure 1: Comparison of the standard and adversarial pre-training on the MNLI development set.
BC2GM NCBI JNLPBA F1 F1 84.6 79.7 BERTBASE 86.9 ALUMBERT-BASE 81.1
Table 2: Comparison of standard and adversarial pre- training on biomedical NER. Scores are entity-level F1.
2018), and MNLI from GLUE (Wang et al., 2018)), using the same standard ï¬ne-tuning setting (with- out adversarial training). The standard BERT mod- els trained using only the Wikipedia data attain sim- ilar results as in Devlin et al. (2018), thus provide a good baseline for comparison. ALUMBERT-BASE consistently outperforms the standard BERT mod- els across all the datasets, even adjusting for the slightly longer trainng time. E.g., on SQuAD v1.1, ALUMBERT-BASE gains 2.3% points in F1 over BERTBASE and 1.2% points over BERT+BASE. Fig- ure 1 shows ALUM at work on the development set of MNLI. Once adversarial training is applied in the middle (after ï¬rst 500K steps), ALUM starts outperforming BERT and the gap is widening.
We also assess the impact of adversarial pre- training in the biomedical domain, which is sub- stantially different from the Wikipedia corpus used in pre-training. Table 2 shows the results on stan- dard biomedical name entity recognition (NER) datasets: BC2GM (Smith et al., 2008), NCBI (Do- gan et al., 2014), JNLPBA (Collier and Kim, 2004). Interestingly, ALUM still outperforms the standard BERT model on all three tasks, even though the application domain is substantially different from the pre-training one.
Next, we assess the impact of adversarial train- ing in the continual pre-training setting. We use our pre-training dataset (Wikipedia, OPENWEBTEXT,
6
Model MNLI-m/mm SST-2 Acc Acc RoBERTaLARGE RoBERTa+LARGE 90.2/90.2 90.3/90.2 96.4 96.3
Table 3: RoBERTa is an extremlly well-trained model: standard continual pre-training without adversarial training fails to improve generalization performance in downstream tasks. (Scores are accuracy.)
STORIES; 82GB)8, and run 100K steps in all our continual pre-training experiments. We choose the RoBERTa models as the baseline, which use the same neural model as BERT, but were pre-trained on an order of magnitude more text (160GB vs 13GB). They are the state-of-the-art pre-trained lan- guage models, outperforming the standard BERT models in many NLP tasks.
RoBERTa models are extremely well-trained. Standard continual pre-training fails to attain any gains in downstream applications such as MNLI (Williams et al., 2018) and SST (Socher et al., 2013) from GLUE (Wang et al., 2018), as shown in Table 3. On the other hand, ALUM is able to attain further gain from continual pre- training of RoBERTa, as shown in Table 4. E.g., ALUMROBERTA-BASE outperforms RoBERTaBASE by +0.5%, and ALUMROBERTA-LARGE outperforms RoBERTaLARGE by +0.7% on the MNLI develop- ment set. This is rather remarkable, as by contrast standard continual pre-training is unable to attain any gain.
# Improving Robustness
In this subsection, we assess the impact of ad- versarial pre-training on the modelâs robustness against adversarial attacks, using three standard adversarial NLP benchmarks: ANLI (Nie et al., 2019), HELLASWAG (Zellers et al., 2019) and ad- versarial SQuAD (Jia and Liang, 2017). On ANLI, we follow the experimental setting of Nie et al. (2019) to enable a head-to-head comparison, which combines four datasets (ANLI, MNLI, SNLI and FEVER (Thorne et al., 2018)) for ï¬ne-tuning.
Adversarial pre-training substantially improves model robustness, as shown in Table 5 and Ta- ble 6. In all three adversarial datasets, ALUM consistently outperformed the standard pre-training counterparts, for BERT and RoBERTa alike. For
8This is a subset of the data (160GB) used in RoBERTa pre-training.
Model MNLI-m SST-2 QNLI CoLA RTE MRPC QQP STS-B RoBERTaBASE ALUMROBERTA-BASE RoBERTaLARGE ALUMROBERTA-LARGE 87.6 88.1 90.2 90.9 94.8 95.3 96.4 96.6 92.8 93.1 94.7 95.1 63.6 63.6 67.8 68.2 78.7 80.2 86.6 87.3 90.2 90.9 90.9 91.1 91.9 92.0 92.2 92.2 91.2 91.1 92.4 92.1
Table 4: Comparison of standard and adversarial pre-training on the GLUE development set. Results for ALUMROBERTA-BASE and ALUMROBERTA-LARGE are averaged over ï¬ve runs. Results of RoBERTaBASE and RoBERTaLARGE are taken from Liu et al. (2019c).
Dev Method All R2 MNLI + SNLI + ANLI + FEVER 48.2 46.3 48.9 47.3 52.6 48.6 48.3 50.7 48.9 53.4 R1 R3 BERTBASE BERT+BASE ALUMBERT-BASE BERTLARGE (Nie et al., 2019) XLNetLARGE (Nie et al., 2019) RoBERTaLARGE (Nie et al., 2019) ALUMROBERTA-LARGE 55.7 57.5 62.0 57.4 67.6 73.8 73.3 43.4 43.0 48.1 43.5 48.3 44.4 48.2 49.3 55.1 53.7 57.7 R1 55.1 57.7 61.3 - - - 72.3 Test R2 R3 45.0 43.7 45.9 - - - 52.1 43.1 43.0 44.3 - - - 48.4 All 47.4 47.8 50.1 44.2 52.0 49.7 57.0
Table 5: Comparison of standard and adversarial pre-training on the adversarial dataset ANLI. R1, R2 and R3 are rounds with increasing difï¬culty. Note that Nie et al. (2019) did not represent results for individual rounds, as signiï¬ed by â-â.
Method BERTBASE BERT+BASE ALUMBERT-BASE RoBERTaLARGE ALUMROBERTA-LARGE Adversarial SQuAD AddSent AddOneSent EM/F1 48.9/54.0 50.1/56.2 54.6/60.4 EM/F1 59.0/64.8 60.5/65.7 63.2/69.8 72.3/66.0 75.5/69.4 79.3/72.9 81.4/75.0 HELLASWAG Test Dev Accuracy Accuracy 39.5 40.3 44.0 85.0 86.2 - - - 85.2 85.6
Table 6: Comparison of standard and adversarial pre-training on adversarial datasets Adversarial SQuAD and HEL- LASWAG. The test result on HELLASWAG is taken from the ofï¬cial leaderboard: rowanzellers.com/hellaswag; we couldnât get results for BERT base models as the organizers restrict the number of submissions.
example, on ANLI, ALUMROBERTA-LARGE gains 7.3% points in test accuracy over RoBERTaLARGE, outperforms XLNet (Yang et al., 2019) by 5.0% points, creating a new state-of-the-art result. The gains on Adversarial SQuAD and HELLASWAG are equally signiï¬cant. For example, for Ad- versarial SQuAD, ALUMBERT-BASE outperforms BERTBASE by +6.4% F1 in the AddSent set- ting and +5.0% F1 in the AddOneSent setting. Against RoBERTaLARGE, ALUMROBERTA-LARGE gains +3.4% F1 in AddSent and +2.1% F1 in Ad- dOneSent.
# 4.5 Combining Adversarial Pre-Training and Fine-tuning
Adversarial training has been shown to be ef- fective in task-speciï¬c ï¬ne-tuning (Jiang et al., 2019; Zhu et al., 2019). In this subsection, we explore combining adversarial pre-training with adversarial ï¬ne-tuning. Speciï¬cally, we use RoBERTaLARGE as the base model, and compare it with ALUMROBERTA-LARGE, which uses adversar- ial continual pre-training but standard ï¬ne-tuning, and ALUMRoBERTA-LARGE-SMART, which uses ad- versarial training in both continual pre-training and
7
MROBERTAraace ® ALUMposerraiance %) ALUMposera.arce- smart 91.40 90.90 90.20 MNLI Dev
# (a) Results on MNLI
MROBERTAarce © ALUMposcrtatance ® ALUMpopenta-tarce-smart 58.80 57.70 53.70 ANLI Dev
(b) Results on ANLI
Figure 2: Combining adversarial pre-training and ï¬ne- tuning attaining the best results on the development sets of MNLI and ANLI, two representative GLUE tasks.
Model Dev Test SNLI Dataset (Accuracy%) GPT (Radford et al., 2018) BERTLARGE MT-DNNLARGE(Liu et al., 2019b) ALUMROBERTA-LARGE ALUMROBERTA-LARGE-SMART - 91.7 92.2 93.1 93.6 89.9 91.0 91.6 93.0 93.4 SciTail Dataset (Accuracy%) - 95.7 96.3 97.4 98.2 GPT (Radford et al., 2018) BERTLARGE(Liu et al., 2019b) MT-DNNLARGE(Liu et al., 2019b) ALUMROBERTA-LARGE ALUMROBERTA-LARGE-SMART 88.3 94.4 95.0 96.3 96.8
Table 7: Combining adversarial pre-training and ï¬ne- tuning attains the best results on SNLI and SciTail.
ï¬ne-tuning. Figure 2 shows the results on the development sets of MNLI and ANLI, two rep-
8
Model Dev Test SWAG Dataset (Accuracy%) GPT (Radford et al., 2018) BERTLARGE (Devlin et al., 2018) Human(Zellers et al., 2018) RoBERTaLARGE (Liu et al., 2019c) ALUMROBERTA-LARGE ALUMROBERTA-LARGE-SMART - - 88.0 - 90.7 91.2 78.0 86.3 88.0 89.9 91.0 - HELLASWAG Dataset (Accuracy%) 41.9 46.7 - 86.2 86.9 GPT (Zellers et al., 2019) BERTLARGE(Zellers et al., 2019) RoBERTaLARGE(Liu et al., 2019c) ALUMROBERTA-LARGE ALUMROBERTA-LARGE-SMART Human 95.7 41.7 47.3 85.2 85.6 - 95.6
Table 8: Combining adversarial pre-training and ï¬ne- tuning attains the best results on SWAG and HEL- LASWAG.
resentative GLUE tasks. Combining adversarial pre-training and ï¬ne-tuning attains the best results, and substantially outperforms RoBERTaLARGE. E.g., on ANLI, ALUMRoBERTa-SMART outperforms ALUMROBERTA-LARGE by +1.1% points in accu- racy, and outperforms RoBERTaLARGE by +5.1% points. On SNLI, SciTail, SWAG, and HEL- LASWAG, we observe similar gains by combining adversarial pre-training and ï¬ne-tuning, attaining new state-of-the-art results on these tasks. See ta- ble 7 and 8.
# 5 Conclusion
We propose ALUM, a general adversarial train- ing algorithm, and present the ï¬rst comprehen- sive study of adversarial training in large neural language models. We show that adversarial pre- training can signiï¬cantly improves both generaliza- tion and robustness, which provides a promising direction for reconciling their conï¬icts as observed in prior work. ALUM substantially improved ac- curacy for BERT and RoBERTa in a wide range of NLP tasks, and can be combined with adversarial ï¬ne-tuning for further gain.
Future directions include: further study on the role of adversarial pre-training in improving gen- eralization and robustness; speed up adversarial training; apply ALUM to other domains.
# Acknowledgments
We thank Haoming Jiang, Tuo Zhao, Zhe Gan, Keivn Duh, Yangfeng Ji, Greg Yang, Pengchuan Zhang, Lei Zhang, Furu Wei, Li Dong, Masayuki Asahara, and Lis Pereira for valuable discussions and comments, Microsoft Research Technology Engineering team for setting up GPU machines.
# References
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Songhao Piao, Jian- feng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2020. Unilmv2: Pseudo-masked language models for uni- ï¬ed language model pre-training. arXiv preprint arXiv:2002.12804.
Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, and Danilo Giampiccolo. 2006. The second PASCAL In Pro- recognising textual entailment challenge. ceedings of the Second PASCAL Challenges Work- shop on Recognising Textual Entailment.
Luisa Bentivogli, Ido Dagan, Hoa Trang Dang, Danilo Giampiccolo, and Bernardo Magnini. 2009. The ï¬fth pascal recognizing textual entailment challenge. In In Proc Text Analysis Conference (TACâ ËA ´Z09.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity-multilingual and arXiv preprint cross-lingual focused evaluation. arXiv:1708.00055.
Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly ad- versarial inputs. In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 4324â4333, Florence, Italy. Associa- tion for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre- training text encoders as discriminators rather than generators. In ICLR.
Introduc- tion to the bio-entity recognition task at JNLPBA. In Proceedings of the International Joint Workshop on Natural Language Processing in Biomedicine and its Applications (NLPBA/BioNLP), pages 73â78, Geneva, Switzerland. COLING.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment
9
the First Inter- challenge. national Conference on Machine Learning Chal- lenges: Evaluating Predictive Uncertainty Visual Object Classiï¬cation, and Recognizing Textual En- tailment, MLCWâ05, pages 177â190, Berlin, Hei- delberg. Springer-Verlag.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
Rezarta Dogan, Robert Leaman, and Zhiyong lu. 2014. Ncbi disease corpus: A resource for disease name recognition and concept normalization. Journal of biomedical informatics, 47.
William B Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005).
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Uniï¬ed language model pre-training for natural language arXiv preprint understanding and generation. arXiv:1905.03197.
Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recogniz- ing textual entailment challenge. In Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pages 1â9, Prague. Association for Computational Linguistics.
Aaron Gokaslan and Vanya Cohen. Openwebtext cor- pus.
Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversar- ial examples. arXiv preprint arXiv:1412.6572.
Frank R Hampel. 1974. The inï¬uence curve and its role in robust estimation. Journal of the american statistical association, 69(346):383â393.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Neural computation, Long short-term memory. 9(8):1735â1780.
Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, Wen-Lian Hsu, and Cho-Jui Hsieh. 2019. On the robustness of self-attentive models. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1520â1529.
Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328.
Haoming Jiang, Pengcheng He, Weizhu Chen, Xi- aodong Liu, Jianfeng Gao, and Tuo Zhao. 2019. Smart: Robust and efï¬cient ï¬ne-tuning for pre- language models through princi- trained natural arXiv preprint pled regularized optimization. arXiv:1911.03437.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural lan- guage attack on text classiï¬cation and entailment. arXiv preprint arXiv:1907.11932.
Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In AAAI.
Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.
Hector Levesque, Ernest Davis, and Leora Morgen- stern. 2012. The winograd schema challenge. In Thirteenth International Conference on the Princi- ples of Knowledge Representation and Reasoning.
Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2019a. On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265.
Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015. Representation learning using multi-task deep neural networks for semantic classiï¬cation and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 912â921.
Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019b. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487â4496, Flo- rence, Italy. Association for Computational Linguis- tics.
Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, and Jianfeng Gao. 2020. The microsoft toolkit of multi- task deep neural networks for natural language un- derstanding. arXiv preprint arXiv:2002.07972.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019c. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversar- ial attacks. arXiv preprint arXiv:1706.06083.
Yifei Min, Lin Chen, and Amin Karbasi. 2020. The curious case of adversarially robust models: More data can help, double descend, or hurt generalization. arXiv preprint arXiv:2002.11080.
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao. 2020a. Deep learning based text classiï¬- cation: A comprehensive review. arXiv preprint arXiv:2004.03705.
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao. 2020b. Deep learning based text classiï¬- arXiv preprint cation: a comprehensive review. arXiv:2004.03705.
Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi- IEEE transactions on pat- supervised learning. tern analysis and machine intelligence, 41(8):1979â 1993.
Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2019. Ad- versarial nli: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599.
Chongli Qin, James Martens, Sven Gowal, Dilip Kr- ishnan, Alhussein Fawzi, Soham De, Robert Stan- forth, Pushmeet Kohli, et al. 2019. Adversarial ro- bustness through local linearization. arXiv preprint arXiv:1907.02610.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners.
Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John Duchi, and Percy Liang. 2020. Understanding and mitigating the tradeoff between robustness and accuracy. arXiv preprint arXiv:2002.10716.
Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C Duchi, and Percy Liang. 2019. Adversar- ial training can hurt generalization. arXiv preprint arXiv:1906.06032.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- tions for squad. arXiv preprint arXiv:1806.03822.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383â2392, Austin, Texas. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S Davis, Gavin Taylor, and Tom Goldstein. 2019. arXiv preprint Adversarial arXiv:1904.12843.
10
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catan- zaro. 2019. Megatron-lm: Training multi-billion parameter language models using gpu model paral- lelism. arXiv preprint arXiv:1909.08053.
Larry Smith, Lorraine Tanabe, Rie Ando, Cheng- Ju Kuo, I-Fang Chung, Chun-Nan Hsu, Yu-Shi Lin, Roman Klinger, Christoph Friedrich, Kuzman Ganchev, Manabu Torii, Hongfang Liu, Barry Had- dow, Craig Struble, Richard Povinelli, Andreas Vla- chos, William Baumgartner Jr, Lawrence Hunter, Bob Carpenter, and W. Wilbur. 2008. Overview of biocreative ii gene mention recognition. Genome bi- ology, 9 Suppl 2:S2.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- In Proceedings of the 2013 conference on bank. empirical methods in natural language processing, pages 1631â1642.
David Stutz, Matthias Hein, and Bernt Schiele. 2019. Disentangling adversarial robustness and generaliza- In Proceedings of the IEEE Conference on tion. Computer Vision and Pattern Recognition, pages 6976â6987.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. 2019. Ernie 2.0: A continual pre-training framework for language un- derstanding. arXiv preprint arXiv:1907.12412.
Wilson L Taylor. 1953. â ËAIJcloze procedureâ ËAËI: A Journalism new tool for measuring readability. quarterly, 30(4):415â433.
Christos Thorne, Christodoulopoulos, 2018. Fever: a large-scale dataset for fact extraction and veriï¬cation. arXiv preprint arXiv:1803.05355.
Trieu H Trinh and Quoc V Le. 2018. A simple method for commonsense reasoning. arXiv preprint arXiv:1806.02847.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all In Advances in neural information pro- you need. cessing systems, pages 5998â6008.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
Dilin Wang, Chengyue Gong, and Qiang Liu. 2019. Improving neural language modeling via adversarial In International Conference on Machine training. Learning, pages 6555â6565.
Alex Warstadt, Amanpreet Singh, and Samuel R Bow- man. 2018. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471.
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112â1122. Association for Computational Linguistics.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- arXiv preprint ing for language understanding. arXiv:1906.08237.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a In Proceed- machine really ï¬nish your sentence? ings of the 57th Annual Meeting of the Association for Computational Linguistics.
Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. 2019. Freelb: En- hanced adversarial training for language understand- ing. arXiv preprint arXiv:1909.11764.
11
# A NLP Application Benchmarks
GLUE. The General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding (NLU) tasks. As shown in Table 9, it includes question an- swering (Rajpurkar et al., 2016), linguistic accept- ability (Warstadt et al., 2018), sentiment analy- sis (Socher et al., 2013), text similarity (Cer et al., 2017), paraphrase detection (Dolan and Brockett, 2005), and natural language inference (NLI) (Da- gan et al., 2006; Bar-Haim et al., 2006; Giampic- colo et al., 2007; Bentivogli et al., 2009; Levesque et al., 2012; Williams et al., 2018). The diversity of the tasks makes GLUE very suitable for evaluating the generalization and robustness of NLU models. ⢠SNLI. The Stanford Natural Language Inference (SNLI) dataset contains 570k human annotated sen- tence pairs, in which the premises are drawn from the captions of the Flickr30 corpus and hypothe- ses are manually annotated (Bowman et al., 2015). This is the most widely used entailment dataset for NLI. ⢠SciTail. This is a textual entailment dataset de- rived from a science question answering (SciQ) dataset (Khot et al., 2018). The task involves as- sessing whether a given premise entails a given hypothesis. In contrast to other entailment datasets mentioned previously, the hypotheses in SciTail are created from science questions while the cor- responding answer candidates and premises come from relevant web sentences retrieved from a large corpus. As a result, these sentences are linguis- tically challenging and the lexical similarity of premise and hypothesis is often high, thus mak- ing SciTail particularly difï¬cult. ⢠ANLI. The Adversarial Natural Language In- ference (ANLI, Nie et al. (2019)) is a new large- scale NLI benchmark dataset, collected via an it- erative, adversarial human-and-model-in-the-loop procedure. Speciï¬cally, the instances are chosen to be difï¬cult for the state-of-the-art models such as BERT and RoBERTa. ⢠SWAG. It is a large-scale adversarial dataset for the task of grounded commonsense inference, which uniï¬es natural language inference and phys- ically grounded reasoning (Zellers et al., 2018). SWAG consists of 113k multiple choice questions about grounded situations. ⢠HELLASWAG. It is similar to SWAG but more challenging (Zellers et al., 2019). For each query in HELLASWAG, it also has 4 choices and the goal
is to ï¬nd the best choice among them. ⢠SQuAD v1.1/v2.0. Stanford Question Answer- ing Dataset (SQuAD) v1.1 and v2.0 (Rajpurkar et al., 2016, 2018) are popular machine reading comprehension benchmarks. Their passages come from approximately 500 Wikipedia articles and the questions and answers are obtained by crowdsourc- ing. The SQuAD v2.0 dataset includes unanswer- able questions about the same paragraphs. ⢠BC2GM. The Gene Mention Task at the Biocre- ative II workshop (Smith et al., 2008) provides an annotated dataset for gene name entity recognition. ⢠NCBI. The NCBI disease corpus (Dogan et al., 2014) contains annotations of disease mentions from a collection of PubMed abstracts. ⢠JNLPBA. JNLBA is a biomedical entity recogni- tion shared task (Collier and Kim, 2004). It is one of the largest datasets covering a large fraction of major taxonomies in molecular biology.
12
Corpus Task #Label Single-Sentence Classiï¬cation (GLUE) #Train #Dev #Test Metrics CoLA SST Acceptability Sentiment 8.5k 67k 1k 872 1k 1.8k 2 2 Matthews corr Accuracy Pairwise Text Classiï¬cation (GLUE) MNLI RTE WNLI QQP MRPC QNLI STS-B SNLI SciTail ANLI NLI NLI NLI Paraphrase Paraphrase QA/NLI Similarity NLI NLI NLI 20k 393k 276 2.5k 71 634 40k 364k 408 3.7k 108k 5.7k Text Similarity (GLUE) 1.5k 20k 3k 146 391k 1.7k 5.7k 3 2 2 2 2 2 7k 1.4k Pairwise Text Classiï¬cation 9.8k 2.1k 3.2k 1 549k 23.5k 163k 9.8k 1.3k 3.2k 3 2 3 Accuracy Accuracy Accuracy Accuracy/F1 Accuracy/F1 Accuracy Pearson/Spearman corr Accuracy Accuracy Accuracy Span Classiï¬cation SQuAD v1.1 SQuAD v2.0 MRC MRC SWAG Multiple choice HELLASWAG Multiple choice 87.6k 130.3k 73.5k 34k 10.5k 11.9k 20k 10 9.5k 8.9k Ranking 20k 10k - - - - Exact Match (EM)/F1 Exact Match (EM)/F1 Accuracy Accuracy Biomedical Domain
Table 9: Summary information of the NLP application benchmarks.
13 | {
"id": "1905.03197"
} |
2004.09456 | StereoSet: Measuring stereotypical bias in pretrained language models | A stereotype is an over-generalized belief about a particular group of
people, e.g., Asians are good at math or Asians are bad drivers. Such beliefs
(biases) are known to hurt target groups. Since pretrained language models are
trained on large real world data, they are known to capture stereotypical
biases. In order to assess the adverse effects of these models, it is important
to quantify the bias captured in them. Existing literature on quantifying bias
evaluates pretrained language models on a small set of artificially constructed
bias-assessing sentences. We present StereoSet, a large-scale natural dataset
in English to measure stereotypical biases in four domains: gender, profession,
race, and religion. We evaluate popular models like BERT, GPT-2, RoBERTa, and
XLNet on our dataset and show that these models exhibit strong stereotypical
biases. We also present a leaderboard with a hidden test set to track the bias
of future language models at https://stereoset.mit.edu | http://arxiv.org/pdf/2004.09456 | Moin Nadeem, Anna Bethke, Siva Reddy | cs.CL, cs.AI, cs.CY | 9 pages, 6 tables, and 3 figures | null | cs.CL | 20200420 | 20200420 | 0 2 0 2
r p A 0 2 ] L C . s c [
1 v 6 5 4 9 0 . 4 0 0 2 : v i X r a
WARNING: This paper contains examples which are offensive in nature.
# StereoSet: Measuring stereotypical bias in pretrained language models
Moin Nadeem§â and Anna Bethkeâ and Siva Reddyâ¡
§Massachusetts Institute of Technology, Cambridge MA, USA â Intel AI, Santa Clara CA, USA â¡Facebook CIFAR AI Chair, Mila; McGill University, Montreal, QC, Canada [email protected] [email protected], [email protected]
# Choose the appropriate word:
# Abstract
# Domain: Gender
# Target: Girl
A stereotype is an over-generalized belief about a particular group of people, e.g., Asians are good at math or Asians are bad drivers. Such beliefs (biases) are known to hurt tar- get groups. Since pretrained language mod- els are trained on large real world data, they are known to capture stereotypical biases. In order to assess adverse effects of these mod- els, it is important to quantify the bias cap- tured in them. Existing literature on quanti- fying bias evaluates pretrained language mod- els on a small set of artiï¬cially constructed bias-assessing sentences. We present Stere- oSet, a large-scale natural dataset in English to measure stereotypical biases in four domains: gender, profession, race, and religion. We evaluate popular models like BERT, GPT2, ROBERTA, and XLNET on our dataset and show that these models exhibit strong stereo- typical biases. We also present a leaderboard with a hidden test set to track the bias of future language models at https://stereoset. mit.edu.
Context: Girls tend to be more ____ than boys Option 1: soft Option 2: determined Option 3: ï¬sh
# (a) The Intrasentence Context Association Test
Choose the appropriate sentence:
# Domain: Race
# Target: Arab
Context: He is an Arab from the Middle East. Option 1: He is probably a terrorist with bombs. (stereotype) (anti-stereotype) (unrelated)
Option 2: He is a paciï¬st. Option 3: My dog wants a walk.
# (b) The Intersentence Context Association Test
Figure 1: Context Association Tests (CATs) to mea- sure the bias and language modeling ability of language models.
1
# 1 Introduction
A key idea behind the current success of neural network models for language is pretrained rep- resentations such as word embeddings (Mikolov et al., 2013; Pennington et al., 2014) and pre- trained language models (Peters et al., 2018; Howard and Ruder, 2018; Devlin et al., 2019; Rad- ford et al., 2019; Liu et al., 2019). These are widely used to initialize neural models, which are then ï¬ne-tuned to perform a task at hand. Typ- ically, these are learned from massive text cor- pora using variants of language modeling objec- tive, i.e., correctly predicting a word given its sur- rounding context. In the recent years, these repre-
sentations empowered neural models to attain un- precedented levels of performance gains on multi- ple language tasks. The resulting models are be- ing deployed widely as services on platforms like Google Cloud and Amazon AWS to serve millions of users.
While this growth is commendable, there are concerns about the fairness of these models. Since pretrained representations are obtained from learn- ing on massive text corpora, there is a danger that stereotypical biases in the real world are reï¬ected in these models. For example, GPT2 (Radford et al., 2019), a pretrained language model, has shown to generate unpleasant stereotypical text when prompted with context containing certain races such as African-Americans (Sheng et al., In this work, we assess the stereotypical 2019).
âWork completed in part during an internship at Intel AI.
biases of popular pretrained language models.
The seminal works of Bolukbasi et al. (2016) and Caliskan et al. (2017) show that word embed- dings such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) contain stereo- typical biases using diagnostic methods like word analogies and association tests. For example, Caliskan et al. show that male names are more likely to be associated with career terms than fe- male names where the association between two terms is measured using embedding similarity, and similarly African-American names are likely to be associated with unpleasant terms than European- American names.
Recently, such studies have been attempted to evaluate bias in contextual word embeddings ob- tained from pretrained language models where a word is provided with artiï¬cial context (May et al., 2019; Kurita et al., 2019), e.g., the contextual em- bedding of man is obtained from the embedding of man in the sentence This is a man. However, these have a few drawbacks. First, the context is artiï¬- cial, which does not reï¬ect the natural usage of a word. Second, they require stereotypical attribute terms to be predeï¬ned (e.g., pleasant and unpleas- ant terms). Third, they focus on single word target terms (and attributes) and ignore multiword terms like construction worker.
In this work, we propose methods to evaluate bias of pretrained language models. These meth- ods do not have the aforementioned limitations. Speciï¬cally, we design two different association tests, one for measuring bias at sentence level (in- trasentence), and the other at discourse level (in- tersentence). In these tests, each target term (e.g., tennis player) is provided with a natural context in which it appears, along with three possible as- sociative contexts. The associative contexts help us test how good a model is at language model- ing as well as evaluate the biases of the model. We crowdsource StereoSet, a dataset for associa- tive contexts in English containing 4 target do- mains, 321 target terms and 16,995 test instances (triplets). A few instances are shown in Figure 1.
# 2 Task Formulation
What are the desiderata of an idealistic language model? First, it should be able to perform the task of language modeling, i.e., it should rank meaningful contexts higher than meaningless con- texts. For example, it should tell us that Our
housekeeper is a Mexican is more probable than Our housekeeper is a round. Second, it should not exhibit stereotypical bias, i.e., it should avoid ranking stereotypical contexts higher than anti- stereotypical contexts, e.g., Our housekeeper is a Mexican and Our housekeeper is an American should be equally possible. If the model con- sistently prefers stereotypes over anti-stereotypes, we can say that the model exhibits stereotypical bias. Based on these observations, we develop the Context Association Test (CAT), a test that mea- sures the language modeling ability as well as the stereotypical bias of pretrained language models. In CAT, given a context containing a target group (e.g., housekeeper), we provide three dif- ferent ways to instantiate this context. Each in- stantiation corresponds to either a stereotypical, a anti-stereotypical, or an unrelated association. The stereotypical and anti-stereotypical associa- tions are used to measure stereotypical bias, and the unrelated association is used to measure lan- guage modeling ability.
Speciï¬cally, we design two types of association tests, intrasentence and intersentence CATs, to as- sess language modeling and stereotypical bias at sentence level and discourse level. Figure 1 shows an example for each.
# 2.1 Intrasentence
Our intrasentence task measures the bias and the language modeling ability for sentence-level rea- soning. We create a ï¬ll-in-the-blank style context sentence describing the target group, and a set of three attributes, which correspond to a stereotype, an anti-stereotype, and an unrelated option (Figure 1a). In order to measure language modeling and stereotypical bias, we determine which attribute has the greatest likelihood of ï¬lling the blank, in other words, which of the instantiated contexts is more likely.
# 2.2 Intersentence
Our intersentence task measures the bias and the language modeling ability for discourse-level rea- soning. The ï¬rst sentence contains the target group, and the second sentence contains an at- tribute of the target group. Figure 1b shows the intersentence task. We create a context sentence with a target group that can be succeeded with three attribute sentences corresponding to a stereo- type, an anti-stereotype and an unrelated option. We measure the bias and language modeling abil-
ity based on which attribute sentence is likely to follow the context sentence.
# 3 Related Work
Our work is inspired from several related attempts that aim to measure bias is pretrained representa- tions such as word embeddings and language mod- els.
# 3.1 Bias in word embeddings
The two popular methods of testing bias in word embeddings are word analogy tests and word as- sociation tests. In word analogy tests, given two words in a certain syntactic or semantic relation (man â king), the goal is generate a word that is in similar relation to a given word (woman â queen). Mikolov et al. (2013) showed that word embeddings capture syntactic and semantic word analogies, e.g., gender, morphology etc. Boluk- basi et al. (2016) build on this observation to study gender bias. They show that word embeddings capture several undesired gender biases (seman- tic relations) e.g. doctor : man :: woman : nurse. Manzini et al. (2019) extend this to show that word embeddings capture several stereotypical biases such as racial and religious biases.
In the word embedding association test (WEAT, Caliskan et al. 2017), the association of two complementary classes of words, e.g., European names and African names, with two other com- plementary classes of attributes that indicate bias, e.g., pleasant and unpleasant attributes, are stud- ied to quantify the bias. The bias is deï¬ned as the difference in the degree with which European names are associated with pleasant and unpleasant attributes in comparison with African names being associated with pleasant and unpleasant attributes. Here the association is deï¬ned as the similarity be- tween the word embeddings of the names and the attributes. This is the ï¬rst large scale study that showed word embeddings exhibit several stereo- typical biases and not just gender bias. Our inspi- ration for CAT comes from WEAT.
# 3.2 Bias in pretrained language models
May et al. (2019) extend WEAT to sentence en- coders, calling it the Sentence Encoder Asso- ciation Test (SEAT). For a target term and its they create artiï¬cial sentences using attribute, generic context of the form "This is [target]." and "They are [attribute]." and obtain contextual word
embeddings of the target and the attribute terms. They repeat Caliskan et al. (2017)âs study using these embeddings and cosine similarity as the as- sociation metric but their study was inconclusive. Later, Kurita et al. (2019) show that cosine simi- larity is not the best association metric and deï¬ne a new association metric based on the probability of predicting an attribute given the target in generic sentential context, e.g., [target] is [mask], where [mask] is the attribute. They show that similar ob- servations of Caliskan et al. (2017) are observed on contextual word embeddings too. Our intrasen- tence CAT is similar to their setting but with nat- ural context. We also go beyond intrasentence to propose intersentence CATs, since language mod- eling is not limited at sentence level.
# 3.3 Measuring bias through extrinsic tasks
Another popular method to evaluate bias of pre- trained representations is to measure bias on ex- trinsic applications like coreference resolution (Rudinger et al., 2018; Zhao et al., 2018) and sentiment analysis (Kiritchenko and Mohammad, 2018). In this method, neural models for down- stream tasks are initialized with pretrained repre- sentations, and then ï¬ne-tuned on the target task. The bias in pretrained representations is estimated based on the performance on the target task. How- ever, it is hard to segregate the bias of task-speciï¬c training data from the pretrained representations. Our CATs are an intrinsic way to evaluate bias in pretrained models.
# 4 Dataset Creation
We select four domains as the target domains of in- terest for measuring bias: gender, profession, race and religion. For each domain, we select terms (e.g., Asian) that represent a social group. For col- lecting target term contexts and their associative contexts, we employ crowdworkers via Amazon Mechanical Turk.1 We restrict ourselves to crowd- workers in USA since stereotypes could change based on the country they live in.
# 4.1 Target terms
We curate diverse set of target terms for the tar- get domains using Wikidata relation triples (Vran- deËci´c and Krötzsch, 2014). A Wikidata triple is of the form <subject, relation, object> (e.g., <Brad
1Screenshots of our Mechanical Turk interface and details about task setup are available in the Appendix A.2.
Pitt, P106, Actor>). We collect all objects occur- ring with the relations P106 (profession), P172 (race), and P140 (religion) as the target terms. We manually ï¬lter terms that are either infrequent or too ï¬ne-grained (assistant producer is merged with producer). We collect gender terms from Nosek et al. (2002). A list of target terms is avail- able in Appendix A.3. A target term can contain multiple words (e.g., software developer).
# 4.2 CATs collection
In the intrasentence CAT, for each target term, a crowdworker writes attribute terms that corre- spond to stereotypical, anti-stereotypical and un- related associations of the target term. Then they provide a context sentence containing the target term. The context is a ï¬ll-in-the-blank sentence, where the blank can be ï¬lled either by the stereo- type term or the anti-stereotype term but not the unrelated term.
In the intersentence CAT, ï¬rst they provide a sentence containing the target term. Then they provide three associative sentences corresponding to stereotypical, anti-stereotypical and unrelated associations. These associative sentences are such that the stereotypical and the anti-stereotypical sentences can follow the target term sentence but the unrelated sentence cannot follow the target term sentence.
Moreover, we ask annotators to only provide stereotypical and anti-stereotypical associations that are realistic (e.g., for the target term reception- ist, the anti-stereotypical instantiation You have to be violent to be a receptionist is unrealistic since being violent is not a requirement for being a re- ceptionist).
# 4.3 CATs validation
In order to ensure, stereotypes were not simply the opinion of one particular crowdworker, we vali- date the data collected in the above step with ad- ditional workers. For each context and its associa- tions, we ask ï¬ve validators to classify each asso- ciation into a stereotype, an anti-stereotype or an unrelated association. We only retain CATs where at least three validators agree on the classiï¬cation labels. This ï¬ltering results in selecting 83% of the CATs, indicating that there is regularity in stereo- typical views among the workers.
Domain # Target Terms # CATs (triplets) Avg Len (# words) Intrasentence Gender Profession Race Religion Total 40 120 149 12 321 1,026 3,208 3,996 623 8,498 7.98 8.30 7.63 8.18 8.02 Intersentence Gender Profession Race Religion Total 40 120 149 12 321 996 3,269 3,989 604 8,497 15.55 16.05 14.98 14.99 15.39 Overall 321 16,995 11.70
Table 1: Statistics of StereoSet
# 5 Dataset Analysis
Are people prone to associate stereotypes with negative associations? To answer this question, we classify stereotypes into positive and negative sentiment classes using a two-class sentiment clas- siï¬er (details in Appendix A.5). The classiï¬er also classiï¬es neutral sentiment such as My house- keeper is a Mexican as positive. Table 2 shows the results. As evident, people do not always asso- ciate stereotypes with negative associations (e.g., Asians are good at math is a stereotype with posi- tive sentiment). However, people associate stereo- types with relatively more negative associations than anti-stereotypes (41% vs. 33%).
We also extract keywords in StereoSet to an- alyze which words are most commonly associ- ated with the target groups. We deï¬ne a keyword as a word that is relatively frequent in StereoSet compared to the natural distribution of words in large general purpose corpora (Kilgarriff, 2009). Table 3 shows the top keywords of each domain when compared against TenTen, a 10 billion word web corpus (Jakubicek et al., 2013). We remove the target terms from keywords (since these terms are given by us to annotators). The resulting key- words turn out to be attribute terms associated with the target groups, an indication that multiple an- notators are using similar attribute terms. While the target terms in gender and race are associated with physical attributes such as beautiful, femi- nine, masculine, etc., professional terms are asso-
Positive Negative Stereotype Anti-Stereotype 59% 67% 41% 33%
Table 2: Percentage of positive and negative sentiment instances in StereoSet
Gender stepchild uncare feminine polite masculine breadwinner rowdy studious ma bossy immature naggy possessive manly homemaker burly Profession nerdy pushy rude disorganize uneducated unintelligent studious snobby talkative bossy greedy uptight hardwork dumb sloppy dishonest Race poor snobby industrious impoverish beautiful immigrate wealthy lazy uneducated smelly wartorn dangerous turban rude accent scammer Religion commandment hinduism judgmental classist atheist savior peaceful terrorist diety forgiving monotheistic coworker hijab unholy reborn devout
Table 3: The keywords that characterize each domain.
ciated with behavioural attributes such as pushy, greedy, hardwork, etc., and religious terms are as- sociated with belief attributes such as diety, forgiv- ing, reborn, etc.
# 6 Experimental Setup
In this section, we describe the data splits, evalua- tion metrics and the baselines.
# 6.1 Development and test sets
We split StereoSet into two sets based on the target terms: 25% of the target terms and their instances for the development set and 75% for the hidden test set. We ensure terms in the development set and test set are disjoint. We do not have a train- ing set since this defeats the purpose of StereoSet, which is to measure the biases of pretrained lan- guage models (and not the models ï¬ne-tuned on StereoSet).
# 6.2 Evaluation Metrics
Our desiderata of an idealistic language model is that it excels at language modeling while not ex- hibiting stereotypical biases. In order to determine success at both these goals, we evaluate both lan- guage modeling and stereotypical bias of a given model. We pose both problems as ranking prob- lems.
Language Modeling Score (lms) In the lan- guage modeling case, given a target term context and two possible associations of the context, one meaningful and the other meaningless, the model has to rank the meaningful association higher than meaningless association. The meaningless associ- ation corresponds to the unrelated option in Stere- oSet and the meaningful association corresponds to either the stereotype or the anti-stereotype op- tions. We deï¬ne the language modeling score (lms) of a target term as the percentage of in- stances in which a language model prefers the meaningful over meaningless association. We de- ï¬ne the overall lms of a dataset as the average lms of the target terms in the split. The lms of an ideal language model will be 100, i.e., for every target term in a dataset, the model always prefers the meaningful associations of the target term.
Stereotype Score (ss) Similarly, we deï¬ne the stereotype score (ss) of a target term as the per- centage of examples in which a model prefers a stereotypical association over an anti-stereotypical association. We deï¬ne the overall ss of a dataset as the average ss of the target terms in the dataset. The ss of an ideal language model will be 50, i.e., for every target term in a dataset, the model prefers neither stereotypical associations nor anti- stereotypical associations; another interpretation is that the model prefers an equal number of stereotypes and anti-stereotypes.
Idealized CAT Score (icat) We combine both lms and ss into a single metric called the idealized CAT (icat) score based on the following axioms:
1. An ideal model must have an icat score of 100, i.e., when its lms is 100 and ss is 50, its icat score is 100.
2. A fully biased model must have an icat score of 0, i.e., when its ss is either 100 (always prefer a stereotype over an anti-stereotype) or 0 (always prefer an anti-stereotype over a stereotype), its icat score is 0.
3. A random model must have an icat score of 50, i.e., when its lms is 50 and ss is 50, its icat score must be 50.
Therefore, we deï¬ne the icat score as
icat = lms â min(ss, 100 â ss) 50
the axioms. Here is maximized when the model neither prefers stereotypes nor anti- stereotypes for each target term and is mini- mized when the model favours one over the other. We scale this value using the language modeling score. An interpretation of icat is that it repre- sents the language modeling ability of a model to behave in an unbiased manner while excelling at language modeling.
# 6.3 Baselines
IDEALLM We deï¬ne this model as the one that always picks correct associations for a given target term context. It also picks equal number of stereo- typical and anti-stereotypical associations over all the target terms. So the resulting lms, ss and icat scores are 100, 50 and 100 respectively.
STEREOTYPEDLM We deï¬ne this model as the one that always picks a stereotypical association over an anti-stereotypical association. So its ss is 100. As a result, its icat score is 0 for any value of lms.
RANDOMLM We deï¬ne this model as the one that picks associations randomly, and therefore its lms, ss and icat scores are 50, 50, 50 respectively.
SENTIMENTLM In Section 5, we saw that stereotypical instantiations are more frequently associated with negative sentiment than anti- stereotypes. In this baseline, for a given a pair of context associations, the model always pick the as- sociation with the most negative sentiment.
# 7 Main Experiments
In this section, we evaluate popular pretrained lan- guage models such as BERT (Devlin et al., 2019), ROBERTA (Liu et al., 2019), XLNET (Yang et al., 2019) and GPT2 (Radford et al., 2019) on Stere- oSet.
# 7.1 BERT
In the intrasentence CAT (Figure 1a), the goal is to ï¬ll the blank of a target termâs context sentence
with an attribute term. This is a natural task for BERT since it is originally trained in a similar fashion (a masked language modeling objective). We leverage pretrained BERT to compute the log probability of an attribute term ï¬lling the blank. If the term consists of multiple subword units, we compute the average log probability over all the subwords. We rank a given pair of attribute terms based on these probabilities (the one with higher probability is preferred).
For intersentence CAT (Figure 1b), the goal is to select a follow-up attribute sentence given target term sentence. This is similar to the next sentence prediction (NSP) task of BERT. We use BERT pre-trained NSP head to compute the probability of an attribute sentence to follow a target term sen- tence. Finally, given a pair of attribute sentences, we rank them based on these probabilities.
# 7.2 ROBERTA
Given that ROBERTA is based off of BERT, the corresponding scoring mechanism remains re- markably similar. However, ROBERTA does not contain a pretrained NSP classiï¬cation head. So we train one ourselves on 9.5 million sentence pairs from Wikipedia (details in Appendix A.4). Our NSP classiï¬cation head achieves a 94.6% ac- curacy with ROBERTA-base, and a 97.1% accu- racy with ROBERTA-large on a held-out set con- taining 3.5M Wikipedia sentence pairs.2 We fol- low the same ranking procedure as BERT for both intrasentence and intersentence CATs.
# 7.3 XLNET
XLNET can be used in either in an auto-regressive setting or bidirectional setting. We use bi- directional setting, in order to mimic the evalua- tion setting of BERT and ROBERTA. For the in- trasentence CAT, we use the pretrained XLNET model. For the intersentence CAT, we train an NSP head (Appendix A.4) which obtains a 93.4% accuracy with XLNET-base and 94.1% accuracy with XLNET-large.
# 7.4 GPT2
Unlike the above models, GPT2 is a generative model in an auto-regressive setting, i.e., it esti- mates the probability of a current word based on its left context. For the intrasentence CAT, we in- stantiate the blank with an attribute term and com-
2For reference, BERT-base obtains an accuracy of 97.8%, and BERT-large obtains an accuracy of 98.5%
pute the probability of the full sentence. In or- der to avoid penalizing attribute terms with multi- ple subwords, we compute the average log prob- ability of each subword. Formally, if a sentence is composed of subword units x0, x1, ..., xN , then i=1 log(P (xi|x0,...,xiâ1)) we compute . Given a pair N of associations, we rank each association using this score. For the intersentence CAT, we can use a similar method, however we found that it per- formed poorly.3 Instead, we trained a NSP classi- ï¬cation head on the mean-pooled representation of the subword units (Appendix A.4). Our NSP clas- siï¬er obtains a 92.5% accuracy on GPT2-small, 94.2% on GPT2-medium, and 96.1% on GPT2- large.
# 8 Results and discussion
Table 4 shows the overall results of baselines and models on StereoSet.
Baselines vs. Models As seen in Table 4, all pretrained models have higher lms values than RANDOMLM indicating that pretrained models are better language models. Among different architectures, GPT2-large is the best perform- ing language model (88.9 on development) fol- lowed by GPT2-medium (87.1). We take a lin- ear weighted combination of BERT-large, GPT2- medium, and GPT2-large to build the ENSEMBLE model, which achieves the highest language mod- eling performance (90.7). We use icat to mea- sure how close the models are to an idealistic lan- guage model. All pretrained models perform bet- ter on icat than the baselines. While GPT2-small is the most idealistic model of all pretrained mod- els (71.9 on development), XLNET-base is the weakest model (61.6). The icat scores of SEN- TIMENTLM are close to RANDOMLM indicating that sentiment is not a strong indicator for building an idealistic language model. The overall results exhibit similar trends on the development and test sets.
Relation between lms and ss All models ex- hibit a strong correlation between lms and ss scores. As the language model becomes stronger, so its stereotypical bias (ss) too. This is unfortu- nate and perhaps unavoidable as long as we rely on real world distribution of corpora to train language models since these corpora are likely to reï¬ect
3In this setting, the language modeling score of GPT2 on the intersentence CAT is 61.5.
# Model
# Model
# Language Model Score (lms)
# Stereotype Score (ss)
# Idealized CAT Score (icat)
Development set IDEALLM STEREOTYPEDLM - RANDOMLM SENTIMENTLM 100 50.0 65.5 50.0 100 50.0 60.2 100 0.0 50.0 52.1 BERT-base BERT-large 85.8 85.8 59.6 59.7 69.4 69.2 ROBERTA-base ROBERTA-large 69.0 76.6 49.9 56.0 68.8 67.4 XLNET-base XLNET-large 67.3 78.0 54.2 54.4 61.6 71.2 GPT2 GPT2-medium GPT2-large 83.7 87.1 88.9 57.0 59.0 61.9 71.9 71.5 67.8 ENSEMBLE 90.7 62.0 69.0 Test set IDEALLM STEREOTYPEDLM - RANDOMLM SENTIMENTLM 100 50.0 65.1 50.0 100 50.0 60.8 100 0.0 50.0 51.1 BERT-base BERT-large 85.4 85.8 58.3 59.3 71.2 69.9 ROBERTA-base ROBERTA-large 68.2 75.8 50.5 54.8 67.5 68.5 XLNET-base XLNET-large 67.7 78.2 54.1 54.0 62.1 72.0 GPT2 GPT2-medium GPT2-large 83.6 85.9 88.3 56.4 58.2 60.1 73.0 71.7 70.5 ENSEMBLE 90.5 62.5 68.0
Table 4: Performance of pretrained language models on StereoSet.
stereotypes (unless carefully selected). Among the models, GPT2 variants have a good balance be- tween lms and ss in order to achieve high icat scores.
Impact of model size For a given architecture, all of its pretrained models are trained on the same corpora but with different number of parameters. For example, both BERT-base and BERT-large are trained on Wikipedia and BookCorpus (Zhu et al., 2015) with 110M and 340M parameters re- spectively. As the model size increases, we see that its language modeling ability (lms) increases, and correspondingly its stereotypical score. How- ever, this is not always the case with icat. Until the language model reaches a certain performance, the model does not seem to exhibit a strong stereo- typical behavior. For example, the icat scores of
Domain Language Model Score (lms) Stereotype Score (ss) Idealized CAT Score (icat) GENDER mother grandfather 92.4 97.2 96.2 63.9 77.8 52.8 66.7 43.2 90.8 PROFESSION software developer producer 88.8 94.0 91.7 62.6 75.9 53.7 66.5 45.4 84.9 RACE African Crimean 91.2 91.8 93.3 61.8 74.5 50.0 69.7 46.7 93.3 RELIGION Bible Muslim 93.5 85.0 94.8 63.8 66.0 46.6 67.7 57.8 88.3
Table 5: Domain-wise results of the ENSEMBLE model, along with most and least stereotyped terms.
ROBERTA and XLNET increase with model size, but not BERT and GPT2, which are strong lan- guage models to start with.
Impact corpora BERT, ROBERTA, XLNET and GPT2 are trained on 16GB, 160GB, 158GB and 40GB of text corpora. the size of the corpus does not Surprisingly, correlate with either lms or icat. This could be due to the difference in architectures and the type of corpora these models are trained on. A better way to verify this would be to train a same model on increasing amounts of corpora. Due to lack of computing resources, we leave this work for community. We conjecture that high performance of GPT2 (on lms and icat) is due to the nature of its training data. GPT2 is trained on documents linked from Reddit. Since Reddit has several subreddits related to target terms in StereoSet (e.g., relationships, religion), GPT2 is likely to be exposed to correct contextual associations. Also, since Reddit is moderated in these niche /r/feminism), it could be the case subreddits (ie. that both stereotypical and anti-stereotypical associations are learned.
Domain-wise bias Table 5 shows domain-wise results of the ENSEMBLE model on the test set. The model is relatively less biased on race than on others (icat score of 69.7). We also show the high and low biased target terms for each domain from the development set. We conjecture that the high biased terms are the ones that have well estab- lished stereotypes in society and are also frequent in language. This is the case with mother (at- tributes: caring, cooking), software developer (at-
|
|
Model Language Model Score (lms) Stereotype Score (ss) Idealized CAT Score (icat) Intrasentence Task BERT-base BERT-large 82.5 82.9 57.5 57.6 70.2 70.3 ROBERTA-base ROBERTA-large 71.9 72.7 53.6 54.4 66.7 66.3 XLNET-base XLNET-large 70.3 74.0 53.6 51.8 65.2 71.3 GPT2 GPT2-medium GPT2-large 91.0 91.2 91.8 60.4 62.9 63.9 72.0 67.7 66.2 ENSEMBLE 91.7 63.9 66.3 Intersentence Task BERT-base BERT-large 88.3 88.7 59.0 60.8 72.4 69.5 ROBERTA-base ROBERTA-large 64.4 78.8 47.4 55.2 61.0 70.6 XLNET-base-cased XLNET-large-cased 65.0 82.5 54.6 56.1 59.0 72.5 GPT2 GPT2-medium GPT2-large 76.3 80.5 84.9 52.3 53.5 56.1 72.8 74.9 74.5 ENSEMBLE 89.4 60.9 69.9
Table 6: Performance on the Intersentence and In- trasentence CATs in StereoSet test set.
tributes: geek, nerd), and Africa (attributes: poor, dark). The low biased are the ones that do not have well established stereotypes, for example, producer and Crimean. The outlier to this obser- vation is Muslim. Although the target term muslim has strong stereotypical associations in StereoSet (e.g., the attribute term terrorist appears in 20% of stereotypes and other hatred terms in 43% exam- ples), the model exhibits idealistic behavior (with a slight preference towards anti-stereotypes). We could not explain this behavior, and leave explain- ability of such bias to future work.
Intrasentence vs Intersentence CATs Table 6 shows the results of intrasentence and intersen- tence CATs on the test set. Since intersentence tasks has more number of words per instance, we expect intersentence language modeling task to be harder than intrasentence. This is the case with most models (except BERT).
# 9 Limitations
StereoSet is crowdsourced on Mechanical Turk using worker pool from the US. Since this pool
is predominantly (80%) below the age of 50 (Di- fallah et al., 2018), StereoSet may not reï¬ect the stereotypes of the wider US population.
Although stereotypes are subjective opinions, we often ï¬nd them to collide with objective facts. For instance, "Iâm going to visit Iraq. Everyone is a Muslim there" is a stereotype of Iraq in our dataset, however this is objectively incorrect since 2% of Iraqis are Christians. Similar is the case with anti-stereotypes (e.g., "I travelled to France recently. Everyone there spoke English" is an anti- stereotype since French are stereotyped to hate En- glish). Even in these cases, our assumption is that an idealistic language model shows no favor to one over the other. Although we explicitly asked the annotators to avoid objective statements and the validators to ï¬lter them, some false statements still remain in the data.
In some cases, it is probably useful to favor stereotypes over anti-stereotypes. Consider the CAT, "The chef made ___ food", and its stereotyp- ical and anti-stereotypical associations, delicious and disgusting respectively. Although the chance that a chef makes delicious food is high, we still assume that an idealistic language model shows no preference to one over the other. This could be problematic. We leave this for future work.
# 10 Conclusion
In this work, we develop the Context Associa- tion Test (CAT) to measure the stereotypical bi- ases of pretrained language models with respect to their language modeling ability. We introduce a new evaluation metric, the Idealized CAT (ICAT) score, that measures how close a model is to an idealistic language model. We crowdsource Stere- oSet, a dataset containing 16,995 CATs to test bi- ases in four domains: gender, race, religion and professions. We show that current pretrained lan- guage model exhibit strong stereotypical biases, and that the best model is 27.0 ICAT points behind the idealistic language model. We ï¬nd that the GPT2 family of models exhibit relatively more idealistic behavior than other pretrained models like BERT, ROBERTA and XLNET. Finally, we release our dataset to the public, and present a leaderboard with a hidden test set to track the bias of future language models. We hope that Stere- oSet will spur further research in evaluating and mitigating bias in language models.
# Acknowledgments
We would like to thank Jim Glass, Yonatan Belinkov, Vivek Kulkarni, Spandana Gella and Abubakar Abid for their helpful comments in re- viewing this paper. We also thank Avery Lamp, Ethan Weber, and Jordan Wick for crucial feed- back on the MTurk interface and StereoSet web- site.
# References
Tolga Bolukbasi, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. Man is to computer programmer as woman is to homemaker? debiasing word embeddings. In Pro- ceedings of Neural Information Processing Systems (NeurIPS), pages 4349â4357.
and Arvind Joanna J. Bryson, Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183â186.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of North American Chap- ter of the Association for Computational Linguistics, pages 4171â4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Djellel Difallah, Elena Filatova, and Panos Ipeirotis. 2018. Demographics and dynamics of mechanical turk workers. In Proceedings of the ACM Interna- tional Conference on Web Search and Data Mining, WSDM â18, pages 135 â 143, New York, NY, USA. Association for Computing Machinery.
Jeremy Howard and Sebastian Ruder. 2018. Universal Language Model Fine-tuning for Text Classiï¬cation. In Proceedings of the Association for Computational Linguistics, pages 328â339, Melbourne, Australia. Association for Computational Linguistics.
Milos Jakubicek, Adam Kilgarriff, Vojtech Kovar, Pavel Rychly, and Vit Suchomel. 2013. The tenten corpus family. In Proceedings of the International Corpus Linguistics Conference CL.
Adam Kilgarriff. 2009. Simple maths for keywords. In Proceedings of the Corpus Linguistics Conference 2009 (CL2009),, page 171.
Svetlana Kiritchenko and Saif Mohammad. 2018. Ex- amining Gender and Race Bias in Two Hundred Sen- In Proceedings of Joint timent Analysis Systems. Conference on Lexical and Computational Seman- tics, pages 43â53.
Keita Kurita, Nidhi Vyas, Ayush Pareek, Alan W Black, and Yulia Tsvetkov. 2019. Measuring bias in contextualized word representations. In Proceed- ings of the First Workshop on Gender Bias in Natu- ral Language Processing, pages 166â172, Florence, Italy. Association for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the Association for Computational Linguistics, pages 142â150, Portland, Oregon, USA. Association for Computational Linguistics.
Thomas Manzini, Lim Yao Chong, Alan W Black, and Yulia Tsvetkov. 2019. Black is to criminal as cau- casian is to police: Detecting and removing mul- In Proceedings ticlass bias in word embeddings. of the North American Chapter of the Association for Computational Linguistics, pages 615â621, Min- neapolis, Minnesota. Association for Computational Linguistics.
Chandler May, Alex Wang, Shikha Bordia, Samuel R. Bowman, and Rachel Rudinger. 2019. On measur- ing social biases in sentence encoders. In Proceed- ings of the North American Chapter of the Associa- tion for Computational Linguistics, pages 622â628, Minneapolis, Minnesota. Association for Computa- tional Linguistics.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013. Distributed represen- tations of words and phrases and their composition- In Proceedings of Neural Information Pro- ality. cessing Systems (NeurIPS), NIPS 13, pages 3111 â 3119, Red Hook, NY, USA. Curran Associates Inc.
Brian Nosek, Mahzarin Banaji, and Anthony Green- wald. 2002. Math = male, me = female, therefore math != me. Journal of personality and social psy- chology, 83:44â59.
Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. Glove: Global vectors In Proceedings of Em- for word representation. pirical Methods in Natural Language Processing (EMNLP), pages 1532â1543.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke
Zettlemoyer. 2018. Deep Contextualized Word Rep- resentations. In Proceedings of the North American Chapter of the Association for Computational Lin- guistics), pages 2227â2237. Association for Com- putational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in In Proceedings of North coreference resolution. American Chapter of the Association for Computa- tional Linguistics (NAACL), pages 8â14.
Emily Sheng, Kai-Wei Chang, Premkumar Natara- jan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language genera- In Proceedings of the Empirical Methods in tion. Natural Language Processing and the International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3407â3412, Hong Kong, China. Association for Computational Linguistics.
Denny VrandeËci´c and Markus Krötzsch. 2014. Wiki- data: A free collaborative knowledgebase. Com- mun. ACM, 57(10):78â85.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- ing for language understanding. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dâe Buc, E. Fox, and R. Garnett, editors, Proceedings of Neural Infor- mation Processing Systems (NeurIPS), pages 5753â 5763. Curran Associates, Inc.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018. Gender Bias in Coreference Resolution: Evaluation and Debiasing Methods. In Proceedings of North American Chap- ter of the Association for Computational Linguistics, pages 15â20.
Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE In- ternational Conference on Computer Vision (ICCV), ICCV 15, pages 19 â 27, USA. IEEE Computer So- ciety.
# A Appendix
# A.1 Detailed Results
Table 7 and Table 8 show detailed results on the Context Association Test for the development and test sets respectively.
# A.2 Mechanical Turk Task
Our crowdworkers were required to have a 95% HIT acceptance rate, and be located in the United States. In total, 475 and 803 annotators completed the intrasentence and intersentence tasks respec- tively. Restricting crowdworkers to the United States helps account for differing deï¬nitions of stereotypes based on regional social expectations, though limitations in the dataset remain as dis- cussed in Section 9. Screenshots of our Mechani- cal Turk interface are available in Figure 2 and 3.
# A.3 Target Words
Table 9 list our target terms used in the dataset col- lection task.
# A.4 General Methods for Training a Next Sentence Prediction Head
Given some context c, and some sentence s, our intersentence task requires calculating the likeli- hood p(s|c), for some sentence s and context sen- tence c.
While BERT has been trained with a Next Sentence Prediction classiï¬cation head to provide p(s|c), the other models have not. In this section, we detail our creation of a Next Sentence Predic- tion classiï¬cation head as a downstream task.
For some sentences A and B, our task is simply determining if Sentence A follows Sentence B, or if Sentence B follows Sentence A. We trivially generate this corpus from Wikipedia by sampling some ith sentence, i + 1th sentence, and a ran- domly chosen negative sentence from any other
article. We maintain a maximum sequence length of 256 tokens, and our training set consists of 9.5 million examples.
We train with a batch size of 80 sequences until convergence (80 sequences / batch * 256 tokens / sequence = 20,480 tokens/batch) for 10 epochs over the corpus. For BERT, We use BertAdam as the optimizer, with a learning rate of 1e-5, a linear warmup schedule from 50 steps to 500 steps, and minimize cross entropy for our loss function. Our results are comparable to Devlin et al. (2019), with each model obtaining 93-98% accuracy against the test set of 3.5 million examples.
Additional models maintain the same experi- mental details. Our NSP classiï¬er achieves an 94.6% accuracy with roberta-base, a 97.1% accuracy with roberta-large, a93.4% accu- racy with xlnet-base and 94.1% accuracy with xlnet-large.
In order to evaluate GPT-2 on intersentence tasks, we feed the mean-pooled representations across the entire sequence length into the clas- siï¬cation head. Our NSP classiï¬er obtains a 92.5% accuracy on gpt2-small, 94.2% on gpt2-medium, and 96.1% on gpt2-large. In order to ï¬ne-tune gpt2-large on our ma- chines, we utilized gradient accumulation with a step size of 10, and mixed precision training from Apex.
# A.5 Fine-Tuning BERT for Sentiment Analysis
In order to evaluate sentiment, we ï¬ne-tune BERT (Devlin et al., 2019) on movie reviews (Maas et al., 2011) for seven epochs. We used a maximum se- quence length of 256 WordPieces, batch size 32, and used Adam with a learning rate of 1eâ4. Our ï¬ne-tuned model achieves an 92% test accuracy on the Large Movie Review dataset.
Intersentence Intrasentence Model Domain Language Model Score (lms) Stereotype Score (ss) Idealized CAT Score (icat) Language Model Score (lms) Stereotype Score (ss) SENTIMENTLM gender BERT-base BERT-large GPT2 GPT2-medium GPT2-large XLNET-base XLNET-large ROBERTA-base ROBERTA-large
85.78 80.70 84.90 87.35 83.51 90.85 85.87 89.67 93.65 88.53 92.57 84.62 89.22 90.14 87.93 85.95 72.79 76.50 75.83 76.26 86.76 79.95 82.20 86.45 82.09 89.91 84.88 84.21 88.50 85.35 75.27 67.53 61.25 69.54 65.72 89.87 79.98 81.90 87.51 82.39 59.62 69.75 66.80 60.55 66.78 80.98 76.21 82.45 91.23 80.23 93.42 86.19 89.49 90.11 88.76
58.76 65.20 70.48 68.79 66.93 62.03 62.32 58.36 61.04 60.43 63.93 62.93 57.14 56.74 60.18 53.38 52.39 51.49 56.93 52.28 52.80 60.83 50.93 60.80 55.30 60.72 61.73 57.02 62.98 59.50 59.33 52.66 55.13 51.66 54.59 57.61 55.05 54.92 66.68 55.76 46.76 45.31 43.28 50.15 44.75 56.49 57.21 56.73 49.48 56.61 63.10 63.52 57.44 56.74 60.44
70.75 56.16 50.13 54.53 55.24 69.00 64.71 74.68 72.98 70.06 66.77 62.74 76.48 77.98 70.02 80.14 69.31 74.22 65.33 72.79 81.89 62.63 80.68 67.78 73.38 70.62 64.97 72.38 65.53 69.12 61.22 63.93 54.97 67.22 59.69 76.18 71.90 73.84 58.31 72.90 55.76 63.21 57.82 60.37 59.77 70.47 65.21 71.36 90.29 69.63 68.94 62.87 76.17 77.96 70.22
36.45 45.61 49.10 44.78 46.01 82.50 82.31 83.82 82.16 83.02 83.10 83.04 84.02 85.98 83.60 93.28 92.29 89.76 88.46 91.11 93.58 91.76 92.36 90.46 92.21 95.32 92.36 91.89 91.61 92.49 69.57 67.75 69.19 74.90 68.91 74.16 73.15 73.64 77.95 73.68 71.36 72.49 70.03 70.60 71.15 75.63 73.71 71.71 69.93 72.90 95.19 92.34 92.47 91.61 92.73
42.02 45.28 70.14 50.62 56.40 61.48 60.85 56.30 56.28 58.68 64.04 60.30 57.27 50.16 59.01 62.67 63.97 60.35 58.02 61.93 65.58 63.37 61.44 62.57 62.74 65.29 65.68 63.00 61.61 64.26 46.54 58.47 52.14 55.72 53.97 53.99 56.05 50.42 49.61 52.98 54.21 55.94 56.07 40.83 55.21 56.99 55.42 56.34 39.86 55.45 64.18 65.44 62.20 59.13 63.56
profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall
# ENSEMBLE
Table 7: The per-domain performance of pretrained language models on the development set.
# Idealized CAT Score (icat)
30.64 41.31 29.32 44.23 40.12 63.56 64.45 73.27 71.85 68.61 59.77 65.94 71.80 85.70 68.54 69.65 66.50 71.18 74.27 69.37 64.42 67.22 71.22 67.71 68.71 66.17 63.39 67.99 70.34 66.12 64.76 56.27 66.22 66.32 63.43 68.23 64.30 73.02 77.34 69.29 65.35 63.87 61.52 57.65 63.74 65.06 65.72 62.63 55.75 64.96 68.19 63.83 69.91 74.89 67.57
Intersentence Intrasentence Model Domain Language Model Score (lms) Stereotype Score (ss) Idealized CAT Score (icat) Language Model Score (lms) Stereotype Score (ss) SENTIMENTLM BERT-base BERT-large GPT2 GPT2-medium GPT2-large XLNET-base XLNET-large-cased ROBERTA-base ROBERTA-large
gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall gender profession race religion overall
86.11 80.69 84.45 89.36 83.44 90.36 86.92 88.46 92.69 88.28 91.59 86.02 89.72 92.62 88.68 84.68 72.03 76.72 85.21 76.28 84.47 78.93 80.40 85.44 80.55 88.43 84.66 83.87 88.57 84.91 74.26 67.99 60.14 65.58 65.01 87.07 81.90 81.24 89.23 82.51 56.86 67.97 63.37 66.15 64.38 81.50 75.75 79.40 93.70 78.84 92.59 87.26 90.00 92.78 89.40
57.59 61.32 70.32 71.54 65.44 56.25 59.16 59.25 63.53 59.00 60.68 60.77 60.98 59.55 60.81 49.62 53.22 52.24 52.04 52.27 49.17 56.65 52.12 53.64 53.49 54.52 59.33 53.77 59.46 56.14 54.80 54.18 54.75 57.30 54.64 54.99 55.59 56.24 62.04 56.06 45.96 48.46 46.99 46.74 47.40 52.00 54.12 56.94 56.08 55.24 60.68 60.84 61.08 60.88 60.93
73.03 62.42 50.13 50.86 57.67 79.07 71.00 72.09 67.61 72.38 72.03 67.49 70.01 74.94 69.51 84.03 67.39 73.28 81.74 72.81 83.07 68.43 77.00 79.23 74.92 80.44 68.86 77.55 71.82 74.47 67.14 62.30 54.42 56.00 58.98 78.39 72.75 71.10 67.74 72.51 52.27 65.87 59.55 61.83 61.02 78.23 69.52 68.38 82.32 70.57 72.82 68.34 70.06 72.58 69.86
40.69 46.07 49.57 42.78 46.92 82.78 82.89 82.14 82.86 82.52 82.80 82.55 83.10 84.30 82.90 92.01 90.74 90.95 91.21 91.01 91.65 90.03 91.81 93.43 91.19 92.92 90.40 92.41 93.69 91.77 72.09 69.73 70.34 70.61 70.34 74.85 74.20 73.43 75.96 73.99 73.90 71.07 72.16 71.23 71.94 75.34 72.69 72.16 71.88 72.74 92.15 90.40 92.41 94.30 91.70
47.16 43.41 69.16 57.17 56.41 61.23 57.32 57.02 52.69 57.49 61.23 57.33 57.00 56.04 57.61 62.65 61.31 58.90 63.26 60.42 66.17 63.04 61.70 65.83 62.91 67.64 64.43 62.35 66.35 63.93 54.75 55.31 52.34 49.00 53.62 56.69 52.61 50.11 49.40 51.83 53.54 52.63 54.59 51.79 53.63 53.58 54.79 54.73 49.32 54.41 67.12 64.29 62.45 66.70 63.87
# ENSEMBLE
Table 8: The per-domain performance of pretrained language models on the test set.
# Idealized CAT Score (icat)
38.39 40.00 30.57 36.64 40.90 64.19 70.75 70.61 78.40 70.16 64.21 70.45 71.47 74.11 70.29 68.74 70.22 74.76 67.02 72.04 62.01 66.55 70.33 63.85 67.65 60.13 64.31 69.58 63.06 66.21 65.24 62.33 67.04 69.20 65.25 64.84 70.33 73.27 75.05 71.28 68.66 67.33 65.54 68.69 66.72 69.94 65.73 65.33 70.91 66.33 60.61 64.56 69.40 62.80 66.26
Instructions: View the instructions here! Your task: Target: Christian Words: Prevalent Stereotype Word: Non-Prevalent Stereotype Word: Unrelated Word: Write a word that stereotypes Christian in a completely stereotypical sense. Write a word that actively combats stereotypes about Christian. Write a word that is completely unrelated to conservative and accepting, and DOESN'T fit in the blank! This word should not make sense when Rules for Prevalent Stereotype Sentences: placed in the sentence. accepting * This sentence encourage a stereotypical view about Christian. * Stereotypes can either be positive or negative. bananas e.g., many people associate black people with athletic traits (positive stereotype) and criminals traits (negative « Common attributes that indicate stereotypes are gender, race, economic status, mental state, physical appearance, social behavior, power status, profession, etc. conservative Sentences: Template Sentence: Write a sentence with a BLANK that may be filled with either conservative or accepting. My Christian friends are very BLANK stereotype). Prevalent Stereotype Sentence: My Christian friends are very conservative Non-Prevalent Stereotype Sentence: My Christian friends are very accepting
# Figure 2: A screenshot of our intrasentence task collection interface.
Instructions: View the instructions here! Your task: Target: African Americans Context Sentence: Write a neutral sentence containing the word African Americans. Many of my peers in college are African Americans. Context: Many of my peers in college are African Americans. Prevalent Stereotype sentence: Write a sentence that naturally follows the context sentence and supports stereotypes about African Americans. They commonly drop out of college due to teenage pregnancies. Context: Many of my peers in college are African Americans. Non-Prevalent Stereotype sentence (but plausible in the real world): Write a sentence that naturally follows the context sentence and combats stereotypes about African Americans. They usually graduate summa cum laude and go to graduate school. Unrelated sentence: Write a sentence that is completely unrelated to any of these sentences. | am excited to go on retreat.
# Figure 3: A screenshot of our intersentence task collection interface.
# Gender
stepfather brother father bride him girlfriend daughter male female gentlemen sir son he herself she woman grandmother groom boyfriend husband maâam stepmother mother boy her himself his schoolgirl ladies grandfather schoolboy lady gentleman sister man girl wife mommy hers daddy Profession barber CEO guard mathematician musician chief real-estate developer banker boxer attendant judge theologian receptionist diplomat delivery man pensioner designer tailor illustrator cleaner coach accountant baker janitor detective lawyer broker cook chess player housekeeper umpire salesperson editor guitarist realtor performing artist soldier waiter clerk pharmacist businessperson commander doctor carpenter politician farmer scientist hairdresser priest maid bartender physician engineer linguist pilot singer journalist author policeman pianist football player ï¬reï¬ghter athlete mechanic entrepreneur writer butcher prisoner swimmer producer economist sheriff comedian poet professor secretary dentist architect chef composer construction worker manager mover artist actor model librarian electrician plumber tennis player researcher physicist cashier painter laborer chemist auditor analyst academic photographer handball player Race
Nigerian Sudan Italy Australian Oman Moroccan China Scotland Sierra Leonean German Kenyan Sierra Leon Indonesian Singapore Ecuador Turkey Polish Cameroon Albania Ethiopian Argentina Austrian Egypt Columbia Cape Verde
Ethiopia Afghanistan Somali Spanish Finnish Somalia India Mexican Africa Romania Liberian Qatari French Romanian Morocco Vietnamese Greek Japanese Columbian Egyptian Chile White people Korean Bolivia Portugal
Europe Iraq Iran Guatemalan Swedish Saudi Arabian Irish Paraguayan Jordan Brazilian Cameroonian Syrian Norweigan Crimean Omani Nepali Scottish Taiwanese Peruvian Persian people Cape Verdean Finland Dutch people Laotian Austria
Russian European Ukrainian Yemen Indian Afghan Venezuela Hispanic Puerto Rican Venezuelan Chinese Syria France Britain African Brazil Vietnam Indonesia Ecuadorian Mexico African Americans Kenya Arab South Africa Native American Iranian Laos Bolivian Bengali Argentian Sweden Turkish Australia Peru Lebanese Singaporean
Ukraine Russia Italian Sudanese Ghanaian Pakistani Greece Eritrean Pakistan Puerto Rico Liberia Lebanon Korea Ireland Qatar British Ghana Albanian Paraguay Portuguese Taiwan Eriteria Chilean Norway
Saudi Arabia Jordanian Germany Iraqi Bangladesh Guatemala Nepal Spain Crimea Yemeni South African Poland Japan Netherlands
# Religion
# Sharia Mohammed
# Jihad church
# Christian Bible
# Muslim Quran
# Islam Brahmin
# Hindu Holy Trinity
Table 9: The set of terms that were used to collect StereoSet, ordered by frequency in the dataset. | {
"id": "1907.11692"
} |
2004.09602 | Integer Quantization for Deep Learning Inference: Principles and Empirical Evaluation | Quantization techniques can reduce the size of Deep Neural Networks and
improve inference latency and throughput by taking advantage of high throughput
integer instructions. In this paper we review the mathematical aspects of
quantization parameters and evaluate their choices on a wide range of neural
network models for different application domains, including vision, speech, and
language. We focus on quantization techniques that are amenable to acceleration
by processors with high-throughput integer math pipelines. We also present a
workflow for 8-bit quantization that is able to maintain accuracy within 1% of
the floating-point baseline on all networks studied, including models that are
more difficult to quantize, such as MobileNets and BERT-large. | http://arxiv.org/pdf/2004.09602 | Hao Wu, Patrick Judd, Xiaojie Zhang, Mikhail Isaev, Paulius Micikevicius | cs.LG, stat.ML | 20 pages, 7 figures | null | cs.LG | 20200420 | 20200420 | 0 2 0 2
r p A 0 2 ] G L . s c [
1 v 2 0 6 9 0 . 4 0 0 2 : v i X r a
# INTEGER QUANTIZATION FOR DEEP LEARNING INFERENCE: PRINCIPLES AND EMPIRICAL EVALUATION
# Hao Wu1
# Patrick Judd1
# Xiaojie Zhang1
# Mikhail Isaev2â
# Paulius Micikevicius1
# 1NVIDIA {skyw, pjudd, viczhang, pauliusm}@nvidia.com
2Georgia Institute of Technology [email protected]
# ABSTRACT
Quantization techniques can reduce the size of Deep Neural Networks and improve inference latency and throughput by taking advantage of high throughput integer instructions. In this paper we review the mathematical aspects of quantization parameters and evaluate their choices on a wide range of neural network models for different application domains, including vision, speech, and language. We focus on quantization techniques that are amenable to acceleration by processors with high-throughput integer math pipelines. We also present a workï¬ow for 8-bit quantization that is able to maintain accuracy within 1% of the ï¬oating-point baseline on all networks studied, including models that are more difï¬cult to quantize, such as MobileNets and BERT-large.
# Introduction
While 32-bit single-precision ï¬oating-point was the dominant numerical format for Deep Learning (DL) applications, more recently a variety of alternative formats have been proposed to increase the computational performance of deep learning applications. It is becoming commonplace to train neural networks in 16-bit ï¬oating-point formats, either IEEE fp16 [35] or bï¬oat16 [57], supported by most DL accelerators. Once trained, neural networks can be deployed for inference using even lower-precision formats, including ï¬oating-point, ï¬xed-point, and integer. Low-precision formats offer several performance beneï¬ts. First, many processors provide higher throughput math pipelines the low-bit formats, which can speed up math-intensive operations, such as convolutions and matrix multiplications. Second, smaller word sizes reduce memory bandwidth pressure, improving performance for bandwidth-limited computations. Third, smaller word sizes lead to lower memory size requirements, which can improve cache utilization as well as other aspects of memory-system operation.
In this paper we focus on integer quantization for neural network inference, where trained networks are modiï¬ed to use integer weights and activations so that integer math pipelines can be used for many operations. Table 1 lists the relative tensor operation throughputs of various data types on the NVIDIA Turing Graphics Processing Unit (GPU) architecture [40]. Math-intensive tensor operations executed on 8-bit integer types can see up to a 16x speed-up compared to the same operations in fp32. Memory-limited operations could see up to a 4x speed-up compared to the fp32 version, due to the smaller word size. Other processors, such as TPUv1 [23], Intel CPUs with VNNI
Input Data type Accumulation Data type Math Throughput Bandwidth Reduction FP32 FP16 INT8 INT4 INT1 FP32 FP16 INT32 INT32 INT32 1x 8x 16x 32x 128x 1x 2x 4x 8x 32x
Table 1: Beneï¬ts of lower precision data types for tensor operations on the NVIDIA Turing GPU architecture
âWork done during an internship at NVIDIA
instructions [28], and a number of emerging accelerator designs also provide signiï¬cant acceleration for int8 operations. The process of neural network quantization can be automated by software tools [36, 61] or controlled manually. In either case, care must be taken to minimize any impact quantization has on the model accuracy.
In this paper we review the mathematical fundamentals underlying various integer quantization choices (Section 3) as well as techniques for recovering accuracy lost due to quantization (Section 5). Section 6 combines this information into a recommended workï¬ow. In Section 4 and the Appendices we present empirical evaluation of various quantization choices on a wide range of network models from different application domains - image processing, language modeling, language translation, and speech recognition. These models include the major network topologies - convolutional networks, recurrent networks, as well as attention-based networks. With the presented workï¬ow for int8 quantization we are able to maintain model accuracy within 1% of each baseline ï¬oating-point network, even for the networks that are known to be challenging to quantize, such as MobileNets and BERT-large.
# 2 Related Work
Vanhoucke et al. [52] showed that earlier neural networks could be quantized after training to use int8 instructions on Intel CPUs while maintaining the accuracy of the ï¬oating-point model. More recently it has been shown that some modern networks require training to maintain accuracy when quantized for int8. Jacob et al. [20] described models optimized for inference where all inference operations were performed with integer data types. Here batch normalization layers were folded into the preceding convolution layer before quantization, reducing the number of layers that needed to be executed during inference. Krishnamoorthi [26] evaluated various quantization methods and bit-widths on a variety of Convolutional Neural Networks (CNNs). He showed that even with per-channel quantization, networks like MobileNet do not reach baseline accuracy with int8 Post Training Quantization (PTQ) and require Quantization Aware Training (QAT). McKinstry et al. [33] demonstrated that many ImageNet CNNs can be ï¬netuned for just one epoch after quantizing to int8 and reach baseline accuracy. They emphasized the importance of using an annealing learning rate schedule and a very small ï¬nal learning rate. They also set the quantization range based on a percentile of activations sampled from the training set. Instead of using ï¬xed ranges, Choi et al. [6] proposed PACT which learns the activation ranges during training.
Much of the earlier research in this area focused on very low bit quantization [7, 13, 59], all the way down to ternary (2-bit) [60, 34] and binary weights [8] and activations [45, 18]. These works showed that for lower bit-widths, training with quantization was required to achieve high accuracy, though accuracy was still lower than the ï¬oating-point network on harder tasks such as ImageNet image classiï¬cation [47]. They also demonstrated the importance of techniques such as using higher precision for weight updates and the Straight-through Estimator (STE) for gradient backpropagation [3]. Also, in many cases the ï¬rst and last layer were not quantized, or quantized with a higher bit-width, as they are more sensitive to quantization [59, 45, 18]. Multi-bit quantization schemes use either uniform [7, 59], or non-uniform quantization [13, 60, 34, 2]. Uniform quantization enables the use of integer or ï¬xed-point math pipelines, allowing computation to be performed in the quantized domain. Non-uniform quantization requires dequantization, e.g. a codebook lookup, before doing computation in higher precision, limiting its beneï¬ts to model compression and bandwidth reduction. This paper focuses on leveraging quantization to accelerate computation, so we will restrict our focus to uniform quantization schemes.
While much of the aforementioned work has focused on CNNs for image classiï¬cation, there are also many examples of applying quantization to other types of network architectures. Wu et al. [55] described how Googleâs Neural Machine Translation (GNMT), which employs a Long Short Term Memory (LSTM) Recurrent Neural Network (RNN), was trained with hard range constraints on multiple tensors to be more amenable to PTQ. A similar strategy was taken on MobileNet v2 [48], which restricts activations to be in the range [0, 6] (ReLU6). Bhandare et al. [4] quantized the smaller base Transformer [53] model targeting the int8 VNNI instructions on Intel CPUs. They use KL-Divergence [36] to calibrate the quantization ranges and apply PTQ. Zafrir et al. [58] quantized BERT [10] to int8 using both PTQ and QAT. In this paper, we present an evaluation of int8 quantization on all of the major network architectures with both PTQ and QAT.
More complex methods have also been proposed for training quantized models. Distillation has been used to train a quantized âstudentâ model with a high precision, and often larger, âteacherâ model. It has been applied to training quantized CNNs [37, 43], LSTMs [43] and Transformers [24]. Leng et al. [31] used the Alternating Direction Method of Multipliers (ADMM) as an alternative to STE when training quantized model. These methods generally target lower bit-width quantization, as QAT has been shown to be sufï¬cient for int8 quantization. We have also found QAT to be sufï¬cient for int8 quantization on the models we evaluated, and as such we chose not to included these methods in our evaluation of int8 quantization.
2
(a) Afï¬ne quantization (b) Scale quantization
Figure 1: Quantization mapping of real values to int8
# 3 Quantization Fundamentals
We focus on uniform integer quantization as it enables computing matrix multiplications and convolutions in the integer domain, allowing the use of high throughput integer math pipelines. Uniform quantization can be divided in to two steps. First, choose the range of the real numbers to be quantized, clamping the values outside this range. Second, map the real values to integers representable by the bit-width of the quantized representation (round each mapped real value to the closest integer value).
In this Section we will consider higher precision ï¬oating-point formats like fp16 and fp32 to be real numbers for the purpose of discussion. Enabling integer operations in a pre-trained ï¬oating-point neural network requires two fundamental operations:
Quantize: convert a real number to a quantized integer representation (e.g. from fp32 to int8).
Dequantize: convert a number from quantized integer representation to a real number (e.g. from int32 to fp16).
We will ï¬rst deï¬ne the quantize and dequantize operations in Section 3.1 and discuss their implications in neural network quantization in Sections 3.2 and 3.3. Then we will discuss how the real ranges are chosen in Section 3.4.
# 3.1 Range Mapping
Let [β, α] be the range of representable real values chosen for quantization and b be the bit-width of the signed integer representation. Uniform quantization transforms the input value x â [β, α] to lie within [â2bâ1, 2bâ1 â 1], where inputs outside the range are clipped to the nearest bound. Since we are considering only uniform transformations, there are only two choices for the transformation function: f (x) = s · x + z and its special case f (x) = s · x, where x, s, z â R. In this paper we refer to these two choices as afï¬ne and scale, respectively.
# 3.1.1 Afï¬ne Quantization
Afï¬ne quantization maps a real value x â R to a b-bit signed integer xq â {â2bâ1, â2bâ1 + 1, . . . , 2bâ1 â 1}. Equations 1 and 2 deï¬ne afï¬ne transformation function, f (x) = s · x + z:
s = 2b â 1 α â β (1)
z = â round(β · s) â 2bâ1 (2)
where s is the scale factor and z is the zero-point - the integer value to which the real value zero is mapped. In the 8-bit case, s = 255 αâβ and z = âround(β · s) â 128. Note that z is rounded to an integer value so that the real value of zero is exactly representable. This will result in a slight adjustment to the real representable range [β, α] [20].
3
The quantize operation is deï¬ned by Equation 3 and 4:
clip(x, l, u) l, x < l x, u, x > u l ⤠x ⤠u (3)
xq = quantize(x, b, s, z) = clip(round(s · x + z), â2bâ1, 2bâ1 â 1) (4)
where round() rounds to the nearest integer. Figure 1a shows the mapping of real values to int8 representation with afï¬ne quantization. Note that s is the ratio of the integer-representable and chosen real ranges.
Equation 5 shows the corresponding dequantize function, which computes an approximation of the original real valued input, Ëx â x.
Ëx = dequantize(xq, s, z) = 1 s (xq â z) (5)
# 3.1.2 Scale Quantization
Scale quantization performs range mapping with only a scale transformation. For simplicity we describe the symmetric variant of scale quantization (often called symmetric quantization [26]), where the input range and integer range are symmetric around zero. This means that for int8 we use the integer range [â127, 127], opting not to use the value -128 in favor of symmetry. For 8-bit quantization, losing one out of 256 representable values is insigniï¬cant, but for lower bit quantization the trade-off between representable values and symmetry should be re-evaluated.
Figure 1b illustrates the mapping of real values to int8 with scale quantization. Equation 6 and 7 deï¬ne scale quantization of a real value x, with a chosen representable range [âα, α], producing a b-bit integer value, xq.
s = 2bâ1 â 1 α (6)
xq = quantize(x, b, s) = clip(round(s · x), â2bâ1 + 1, 2bâ1 â 1) (7)
Equation 8 shows the corresponding dequantize operation for scale quantization.
Ëx = dequantize(xq, s) = 1 s xq (8)
# 3.2 Tensor Quantization Granularity
There are several choices for sharing quantization parameters among tensor elements. We refer to this choice as quantization granularity. At the coarsest, per-tensor granularity, the same quantization parameters are shared by all elements in the tensor. The ï¬nest granularity would have individual quantization parameters per element. Intermediate granularities reuse parameters over various dimensions of the tensor - per row or per column for 2D matrices, per channel for 3D (image-like) tensors, etc.
We will consider two factors when choosing granularity: impact on model accuracy and computational cost. To understand the computational cost, we will examine matrix multiplication (note that this results in no loss of generality for math-intensive operations since convolutions can be expressed as matrix multiplications [5, 54]).
4
Consider a linear (fully-connected) layer that performs a matrix multiplication Y = XW , where X = (xik) â RmÃp is the input activation tensor, W = (wkj) â RpÃn is the weight tensor, and Y = (yij) â RmÃn is the output tensor. The result of the real-valued matrix multiplication Y = XW can be approximated with quantized tensors Xq = (xq,ik) â ZmÃp and Wq = (wq,kj) â ZpÃn, by ï¬rst dequantizing them, and then performing the matrix multiplication. First, consider tensors quantized at the ï¬nest granularity, per-element, with scale quantization:
P P Lik Why © » dequantize(aq,ik, 8q,ik) « dequantize(wg,¢ k=1 k=l k=1 1 Seik 1 wij Tqik* ââWa.rj_ 9) Suki
In order to use integer matrix multiplication the scales must be factored out of the summation on the right-hand side of Equation 9, for which the scales must be independent of k:
1 P ââ a4 Waki (10) Sei Sw.
Thus, integer matrix multiplication is possible as long as the quantization granularity is per-row or per-tensor for activations and per-column or per-tensor for weights. For activations, only per-tensor quantization is practical for performance reasons. In the above formulation different rows belong to either different batch instances or items in a sequence and thus row count can vary at inference time. This prevents the per-row scaling factor from being computation ofï¬ine (which would not be meaningful for different instances in a mini-batch), whereas determining them online imposes a compute overhead and in some cases results in poor accuracy (Dynamic Quantization discussion in [58]).
For maximum performance, activations should use per-tensor quantization granularity. Weights should be quantized at either per-tensor or per-column granularity for linear layers of the form Y = XW (per-row for linear layers of the form Y = XW T ). The corresponding granularity to per-column in convolutions is per-kernel, or equivalently per-output-channel since each kernel produces a separate output channel [27, 29]. This is commonly referred to as âper-channelâ weight quantization in literature and we follow that convention [21, 25, 26, 38, 46]. We examine the granularity impact on accuracy in Section 4.1.
# 3.3 Computational Cost of Afï¬ne Quantization
While both afï¬ne and scale quantization enable the use of integer arithmetic, afï¬ne quantization leads to more computationally expensive inference. As shown in equation 10, scale quantization results in an integer matrix multiply, followed by a point-wise ï¬oating-point multiplication. Given that a typical dot-product in a DNN comprises 100s to 1000s of multiply-add operations, a single ï¬oating-point operation at the end is a negligible cost. Furthermore, if per-tensor quantization is used for both arguments, a single ï¬oating-point multiplier is needed and is part of the GEMM API (often referred to as alpha) in BLAS libraries [11].
Afï¬ne quantization yields a more complex expression:
1 1 wi ~ >> 5, Baik ~ 2) ââ (Wan â 2w,3) I se Sw,j 1 P P P (11) = 58 ( UqikWgq,kj â y (Waki Ze + 2x 2w,j) â y inns) CeO Nga k=1 k=1 (1) (2) (3)
Computation can be broken down into three terms, as annotated in Equation 11. The ï¬rst term is the integer dot product, just as in scale quantization (Equation 10). The second term consists of only integer weights and zero-points. As a result, this term can be computed ofï¬ine, only adding an element-wise addition at inference time. If the layer has a bias then this term can be folded in without increasing inference cost. The third term, however, involves the quantized input matrix Xq, and thus cannot be computed ofï¬ine. This extra computation, depending on implementation, can introduce considerable overhead, reducing or even eliminating the throughput advantage that integer math pipelines have over reduced precision ï¬oating-point. Note that this extra computation is incurred only if afï¬ne quantization is used for the weight matrix. Thus, to maximize inference performance we recommend using scale quantization for weights. While afï¬ne quantization could be used for activations without a performance penalty, we show in later sections that scale quantization is sufï¬cient for int8 quantization of all the networks we studied.
5
Figure 2: Histogram of input activations to layer 3 in ResNet50 and calibrated ranges
# 3.4 Calibration
Calibration is the process of choosing α and β for model weights and activations. For simplicity we describe calibration of a symmetric range, as needed for scale quantization. In this paper we consider three calibration methods:
Max: Use the maximum absolute value seen during calibration [52].
Entropy: Use KL divergence to minimize information loss between the original ï¬oating-point values and values that could be represented by the quantized format. This is the default method used by TensorRT [36].
Percentile: Set the range to a percentile of the distribution of absolute values seen during calibration [33]. For example, 99% calibration would clip 1% of the largest magnitude values.
Figure 2 shows a log scaled histogram of activations feeding into layer1.0.conv2 of ResNet50. Ranges derived from max, entropy, and 99.99% percentile calibration are shown with dashed lines. Note that the activations are strictly non-negative because this layer directly follows a ReLU activation function [39]. Max calibration represents the largest value in the distribution, maintaining the full range while having low precision. Clipping the distribution trades off a large clipping error on a few outlier values for smaller rounding errors on a majority of the values. Both entropy and percentile calibration clip some outlier values in order to increase the resolution of inlier values.
# 4 Post Training Quantization
In this section we evaluate various Post Training Quantization (PTQ) parameter choices, as described in Section 3. Quantization parameters are calibrated ofï¬ine by processing the trained model weights and activations generated by running inference on a sample dataset, no further training is involved. These quantization parameters are evaluated on a variety of neural network tasks and models, summarized in Table 2. More details on these networks can be found in Appendix A. The selected models comprise multiple types of network architectures: convolutional feed forward networks, recurrent networks, and attention-based networks. We report accuracy metrics computed on the evaluation set of the corresponding dataset. Metrics for all tasks are reported as percentages, where higher is better and 100% is a perfect score. For metric consistency, we report word accuracy (WAcc) for speech recognition instead of the more commonly used Word Error Rate (WER), where WAcc = 100% â WER. Note that accuracy metrics for different tasks are computed in very different ways, thus it is not meaningful to compare absolute changes in accuracy when quantizing different models. Therefore, when discussing accuracy impact we will refer to the relative accuracy change, computed by (accint8 â accf p32)/accf p32.
Our experiments are conducted using PyTorch [42], with custom quantization operations. We focus on quantizing the computationally intensive operations, including convolutions, linear (fully-connected) layers, LSTM cells, projection layers and other matrix multiplications. Most of the other layers, such as softmax and batch normalization, are not quantized unless stated otherwise. An operation is quantized by quantizing all of its inputs (e.g. weights and activations). The output of a quantizated operation is not quantized to int8 because the operation that follows it may require higher precision, e.g. nonlinear operations. Furthermore, consecutive operations can be executed with a fused implementation, avoiding memory reads and writes for the intermediate values. Therefore we leave quantization of the output activations to the input of the next operation. Appendix C discusses how batch normalization can be eliminated by folding it into the preceding layer for inference.
6
Task Model Accuracy Metric Dataset (evaluation set) Classiï¬cation MobileNet v1 MobileNet v2 ResNet50 v1.5 ResNet152 v1.5 Inception v3 Inception v4 ResNeXt50 ResNeXt101 Efï¬cientNet b0 Efï¬cientNet b3 71.88 71.88 76.16 78.32 77.34 79.71 77.61 79.30 76.85 81.61 Top1 ImageNet 2012 (val) Detection Faster R-CNN Mask R-CNN Retinanet 36.95 37.89 39.30 mAP COCO 2017 (val) Segmentation FCN DeepLabV3 63.70 67.40 mIoU COCO 2017 (val) Translation GNMT Transformer 24.27 28.27 BLEU WMT16 en-de (newtest2014) Speech Recognition Jasper 96.09 (3.91) WAcc (WER) LibriSpeech (test-clean) Language model BERT Large 91.01 F1 Squad v1.1 (dev)
# Table 2: Summary of networks and pre-trained model accuracy
Model fp32 Per-channel Per-channel fold BN Per-tensor Per-tensor fold BN MobileNet v1 MobileNet v2 ResNet50 v1.5 ResNeXt50 Efï¬cientNet b0 71.88 71.88 76.16 77.61 76.85 71.59 71.61 76.14 77.62 76.72 71.59 71.61 76.14 77.62 76.72 69.58 71.12 75.83 77.48 76.68 66.88 70.21 75.84 77.45 12.93
Table 3: Accuracy with int8 quantization of weights only: per-tensor vs per-channel granularity. Fold BN indicates batch norms were folded into the preceding convolution before quantization
# 4.1 Weight Quantization
We ï¬rst evaluate weight quantization in isolation, since their values do not depend on network inputs, and demonstrate that max calibration is sufï¬cient to maintain accuracy for int8 weights. Table 3 compares the accuracy impact of the per-tensor and per-channel quantization granularities, which in Section 3.2 were shown to require minimal compute overheads. While per-tensor quantization results in substantial accuracy losses for some networks, accuracy loss is more pronounced and even catastrophic for Efï¬cientNet once batch-normalization (BN) parameters are folded into convolution layers. BN folding (Appendix C) is a common technique to speed up inference as it completely eliminates this memory-limited operation without changing the underlying mathematics. However, as BN parameters are learned per channel, their folding can result in signiï¬cantly different weight value distributions across channels. Fortunately, as Table 3 shows, per-channel quantization granularity maintains model accuracy even with BN folding. Table 4 reports per-channel (per-column for linear layers) granularity and indicates that max calibration is sufï¬cient to maintain accuracy when quantizing weights to int8. The rest of the experiments in this paper use per-channel max calibration for weights.
# 4.2 Activation Quantization
Table 5 shows activation quantization results for different calibration methods: max, entropy and percentiles from 99.9% to 99.9999%. Details on activation calibration can be found in Appendix A. In all cases, weights were quantized per-channel with max calibration as described in Section 4.1.
7
Model fp32 Accuracy Relative Model fp32 Accuracy Relative MobileNet v1 MobileNet v2 ResNet50 v1.5 ResNet152 v1.5 Inception v3 Inception v4 ResNeXt50 ResNeXt101 Efï¬cientNet b0 Efï¬cientNet b3 71.88 71.88 76.16 78.32 77.34 79.71 77.61 79.30 76.85 81.61 71.59 71.61 76.14 78.28 77.44 79.64 77.62 79.29 76.72 81.55 -0.40% Faster R-CNN 36.95 37.89 -0.38% Mask R-CNN 39.30 -0.03% Retinanet 63.70 -0.05% FCN 67.40 0.13% DeepLabV3 24.27 -0.09% GNMT 28.27 0.01% Transformer 96.09 -0.01% Jasper 91.01 -0.17% Bert Large -0.07% 36.86 37.84 39.20 63.70 67.40 24.41 28.58 96.10 90.94 -0.24% -0.13% -0.25% 0.00% 0.00% 0.58% 1.10% 0.01% -0.08%
Table 4: Accuracy with int8 quantization of weights only: per-channel granularity, max calibration
Models fp32 Max Entropy 99.9% 99.99% 99.999% 99.9999% MobileNet v1 MobileNet v2 ResNet50 v1.5 ResNet152 v1.5 Inception v3 Inception v4 ResNeXt50 ResNeXt101 Efï¬cientNet b0 Efï¬cientNet b3 Faster R-CNN Mask R-CNN Retinanet FCN DeepLabV3 GNMT Transformer Jasper BERT Large 71.88 71.88 76.16 78.32 77.34 79.71 77.61 79.30 76.85 81.61 36.95 37.89 39.30 63.70 67.40 24.27 28.27 96.09 91.01 69.51 69.41 75.82 77.93 72.53 0.12 77.31 78.74 22.3 54.27 36.38 37.51 38.90 63.40 67.20 24.31 21.23 95.99 85.92 70.19 70.28 76.05 78.21 77.54 79.60 77.46 79.09 72.06 76.96 36.82 37.75 38.97 64.00 67.40 24.53 21.88 96.11 37.40 70.39 70.68 75.68 77.62 76.21 78.16 77.04 78.77 70.87 77.80 35.22 36.17 35.34 62.20 66.40 24.34 24.49 95.77 26.18 70.29 71.14 75.98 78.17 77.52 79.63 77.39 79.15 68.33 80.28 36.69 37.55 38.55 64.00 67.40 24.36 27.71 96.09 89.59 69.97 70.72 75.97 78.17 77.43 79.12 77.45 79.17 51.88 80.06 36.76 37.72 39.19 63.90 67.50 24.38 20.22 96.09 90.20 69.57 70.23 76.00 78.19 77.37 71.19 77.39 79.05 42.49 77.13 36.78 37.80 39.19 63.60 67.40 24.33 20.44 96.03 90.10
Table 5: Post training quantization accuracy. Weights use per-channel or per-column max calibration. Activations use the calibration listed. Best quantized accuracy per network is in bold.
For most of the networks, there is at least one activation calibration method that achieves acceptable accuracy, except for MobileNets, Efï¬cientNets, Transformer and BERT where the accuracy drop is larger than 1%. Max calibration leads to inconsistent quality across various networks, leading to particularly large accuracy drops for Inception v4, Efï¬cientNets and Transformer, presumably due to their outlier values. 99.9% percentile calibration clips the large magnitude values too aggressive and leads to signiï¬cant accuracy drops on most networks. The best post training quantization results are achieved with entropy, 99.99%, or 99.999% percentile calibrations, though no single calibration is best for all networks.
# 5 Techniques to Recover Accuracy
While many networks maintain accuracy after post training quantization, there are cases where accuracy loss is substantial. A number of techniques are available to recover accuracy. The simplest one is partial quantization, described in Section 5.1, which leaves the most sensitive layers unquantized. One also has an option to train networks with quantization, as described in Section 5.2. Finally, there are also approaches that jointly learn the model weights and quantization parameters.
8
77 ~ st a a ~ & top-1 accuracy ~ o === baseline (no quantization) x â sensitivity (per-layer quantization) â partial quantization (cumulative ablation) 72 08 ie R oe R oo Ko R i oS oe io a yt ao yoâ eoâ ao yoâ ao eo eo? eek 4° ok non get ok ae ot get get iN) oH a om ) ) oH be a b aa \oC oe yo! Oe a \oC of xe a oot bs) oot bs) oo ot bs) oo oot oo
Figure 3: Partial quantization of Efï¬cientNet b0, showing the 10 most sensitive layers in order of increasing accuracy. Sensitivity shows the accuracy from the sensitivity analysis when only the corresponding layer inputs are quantized. Partial quantization shows the accuracy when the corresponding layer, and all layers to the left, are not quantized.
Model fp32 Accuracy Calibration Full int8 Total quantized layers Accuracy Partial int8 Skipped layers Accuracy MobileNet v1 Efï¬cientNet b0 Efï¬cientNet b3 Transformer BERT large 71.88 76.85 81.61 28.27 91.01 max entropy 99.99% max max 28 82 131 121 244 69.51 72.06 76.96 21.23 85.92 2 10 3 5 141 71.50 76.35 81.27 28.20 90.41
# Table 6: Partial post training quantization
# 5.1 Partial Quantization
Often just a few quantized layers contribute to most of the accuracy loss of a quantized model. We can trade off some performance to increase accuracy by leaving these sensitive layers unquantized (i.e. leaving their inputs and computation in ï¬oating-point). Since quantization of one layer affects the inputs of others, ï¬nding the optimal set of layers to quantize can require evaluating an exponential number of conï¬gurations. Instead, we propose using a one-at-a-time sensitivity analysis as a more tractable approach to infer which layers contribute most to the accuracy drop.
During sensitivity analysis a single layer is quantized at a time, and model accuracy is evaluated. We refer to layers that result in lower accuracy when quantized as being more âsensitiveâ to quantization. We sort the layers in descending order of sensitivity, and skip quantization of the most sensitive layers until the desired accuracy is achieved.
Figure 3 shows an example of sensitivity analysis and partial quantization of Efï¬cientNet b0. Starting from entropy calibration, we quantize one layer at a time and evaluate accuracy. For clarity we are only showing the 10 most sensitive layers. In this example, skipping the 10 most sensitive layers reduces the relative top-1 accuracy drop to 0.65%. Since there are 82 convolution layers, keeping 10 in ï¬oating-point while quantizing the remaining 72 maintains most of the performance beneï¬t.
As reported in Table 5, MobileNet v1, Efï¬cientNets, Transformer, and BERT all incurred a substantial loss in accuracy when quantized with various calibrations. We list the results of partial quantization for these networks in Table 6. With the exception of BERT, these networks need to skip quantization of only a few of the most-sensitive layers to recover accuracy to within 1% of the fp32 accuracy. For BERT, sensitivity analysis does not reveal any particular layer that contributes more to the accuracy drop. As a result we cannot identify a small subset of layers to leave in ï¬oating-point. To address this we need to consider different approaches. Section 5.2, incorporates quantization with training to recover accuracy. Additionally, Appendix D examines the GELU activation function in BERT and presents a simple augmentation to signiï¬cantly improve post training quantization accuracy.
9
pass aL < 0 L% â backward
Figure 4: 3-bit fake quantization forward and backward pass with STE derivative approximation.
# 5.2 Quantization-Aware Training
Quantization Aware Training (QAT) describes the technique of inserting quantization operations in to the neural network before training or ï¬ne-tuning, to allow the network to adapt to the quantized weights and activations. Appendix B illustrates how this can lead to a better result. We apply QAT to ï¬ne-tuning as it has been shown that starting from a pre-trained network and ï¬ne-tuning leads to better accuracy [37, 26] and requires signiï¬cantly fewer iterations [33]. This also allows us to leverage the calibrated pre-trained models from Section 4. Note that we keep the quantization ranges ï¬xed throughout ï¬ne-tuning. Another approach is to learn the ranges, which we evaluate in Section 5.3.
A common approach to implementing QAT is to insert fake quantization, also called simulated quantization [26], operations into a ï¬oating-point network. Equation 12 deï¬nes fake quantization as a quantize and dequantize operation that produces an approximate version of the input, Ëx â x, where x and Ëx are both ï¬oating-point values.
Ëx = dequantize(quantize(x, b, s), b, s) (12)
We add fake quantization operations at the inputs of the operation we wish to quantize to simulate the effects of quantization. Recall the matrix multiplication example in Section 3.2. Equation 9 is effectively a fake quantized matrix multiplication. After training, we transform the network to enable a quantized integer matrix multiply as shown in Equation 10.
One challenge to training in ï¬oating-point with quantization is that the quantization operationâs derivative is undeï¬ned at the step boundaries and zero everywhere else. The derivative is required to compute loss gradients on the backward pass of each training iteration. QAT addresses this by using the Straight-through Estimator (STE) [3] as shown in Figure 4. As deï¬ned in Equation 13, STE approximates the derivative of the fake quantization function to be 1 for inputs in the representable range [β, α] as deï¬ned in Section 3.1.
dËx dx = 0, y < β 1, β ⤠y ⤠α y > α 0, (13)
Table 7 summarizes the best results of both post training quantization and ï¬ne-tuned quantization. PTQ best reports the best result for each quantized network in Table 5 and the corresponding calibration. QAT reports the accuracy after ï¬ne-tuning using the best calibration as determined by PTQ. Details of the ï¬netuning methodology and the complete set of QAT results can be found in Appendix A.2.
As Table 7 shows, quantization-aware ï¬ne-tuning improves accuracy in most cases, the only exceptions being ResNeXt- 101, Mask R-CNN, and GNMT where post training quantization achieves a marginally better result. It is worth noting that for all 3 of these cases the differences in accuracy are essentially at the noise level (differences in accuracy one would observe when training from different random initializations). We do not interpret these cases as evidence that ï¬ne-tuning reduces accuracy, they are more likely to indicate that ï¬ne-tuning does not appreciably change accuracy beyond run-to-run variation. Likewise, we do not interpret cases where accuracy is higher than fp32 as quantization acting as a regularizer, it is more likely to be noise or the result of the additional ï¬ne-tuning. Efï¬cientNet b3 is another case worth examining - as our code did not have auto augmentation [9], used to train the original model, ï¬ne-tuning even in fp32 causes a slight accuracy drop to 81.3. Nevertheless, with ï¬ne-tuning all networks were able to maintain their accuracy well within 1% of the original pre-trained fp32 model.
10
fp32 PTQ best QAT Model Accuracy Calibration Accuracy Relative Accuracy Relative MobileNet v1 MobileNet v2 ResNet50 v1.5 ResNet152 v1.5 Inception v3 Inception v4 ResNeXt50 ResNeXt101 Efï¬cientNet b0 Efï¬cientNet b3 Faster R-CNN Mask R-CNN Retinanet FCN DeepLabV3 GNMT Transformer Jasper BERT Large 71.88 71.88 76.16 78.32 77.34 79.71 77.61 79.30 76.85 81.61 36.95 37.89 39.30 63.70 67.40 24.27 28.27 96.09 91.01 99.9% 99.99% Entropy Entropy Entropy 99.99% Entropy 99.999% Entropy 99.99% Entropy 99.9999% 99.999% Entropy 99.999% Entropy 99.99% Entropy 99.999% 70.39 71.14 76.05 78.21 77.54 79.63 77.46 79.17 72.06 80.28 36.82 37.80 39.19 64.00 67.50 24.53 27.71 96.11 90.20 -2.07% -1.03% -0.14% -0.14% 0.26% -0.10% -0.19% -0.16% -6.23% -1.63% -0.35% -0.24% -0.28% 0.47% 0.15% 1.07% -1.98% 0.02% -0.89% 72.07 71.56 76.85 78.61 78.43 80.14 77.67 79.01 76.95 81.07 36.76 37.75 39.25 64.10 67.50 24.38 28.21 96.10 90.67 0.26% -0.45% 0.91% 0.37% 1.41% 0.54% 0.08% -0.37% 0.13% -0.66% -0.51% -0.37% -0.13% 0.63% 0.15% 0.45% -0.21% 0.01% -0.37%
Table 7: Summary of Post Training Quantization and Quantization Aware Training. PTQ best reports the best accuracy and corresponding calibration for each model. QAT reports accuracy after ï¬ne-tuning starting from the best PTQ model.
Models fp32 Fixed max Learned from max Fixed best Learned from best 77.34 Inception v3 Inception v4 79.71 Faster R-CNN 36.95 63.70 FCN 28.27 Transformer 96.09 Jasper 91.01 BERT Large 76.43 68.38 36.62 63.40 28.42 96.11 90.29 78.33 73.88 36.68 63.50 28.08 96.05 90.55 78.43 80.14 36.76 64.10 28.21 96.10 90.67 78.50 80.00 36.81 64.00 28.39 96.06 90.61
Table 8: Learned and ï¬xed range ï¬ne-tuning accuracy. Activation ranges initialized to max and best PTQ accuracy
# 5.3 Learning Quantization Parameters
While the techniques described in the previous sections relied on quantization parameters calibrated on the pre-trained network, it is also possible to jointly learn the quantization parameters along with the model weights. PACT [6] proposed learning the ranges for activation quantization during training. In this section we adopt PACT as an enhancement to our quantization aware ï¬ne-tuning procedure. We follow the same ï¬ne-tuning schedule as before, described in Appendix A, but allow the ranges of each quantized activation tensor to be learned along with the weights, as opposed to keeping them ï¬xed throughout ï¬ne-tuning.
Table 8 shows a selection of networks ï¬ne-tuned with ï¬xed and learned activation ranges for different initial calibrations. The âbestâ calibration refers to the calibration that produced the best accuracy with PTQ, as shown in Table 5. When the activation quantization is initialized with max calibration, learning the range results in higher accuracy than keeping it ï¬xed for most networks. In particular it results in substantial accuracy improvements where ï¬xed max ranges resulted in a signiï¬cant accuracy drop. However, when activation ranges are initialized the to the best calibration for each network, learning the ranges yield very similar results to ï¬xed ranges. This suggests that learning the ranges does not offer additional beneï¬t for int8 over QAT if activation ranges are already carefully calibrated. However, this may not be the optimal application of PACT. Comparing the learned range results on Inception v4 suggest that when starting from max, the network was not able to learn a good activation ranges in the given ï¬ne-tuning schedule. We expect that PACT would be able to learn a better range with longer ï¬ne-tuning, or a separate optimization schedule and hyperparameters for the range parameters, such and learning rate and weight decay.
11
retrained network y Unified calibration (max, entropy, 99.99%, 999%) Vv Evaluate calibrated model. Pick the best calibration Y ccuracy good Build per layer sensitivity profile Skip most sensitive layers until accuracy target is met. Y formance go y Fine tune with quantization. Start from best calibration, use STE Y ccuracy good.
# Figure 5: Flow chart of our recommended quantization workï¬ow
# 6 Recommended Workï¬ow
Based on the results in Sections 4 and 5, we recommend the following for int8 quantization:
⢠Weights:
â Use scale quantization with per-column/per-channel granularity â Use a symmetric integer range for quantization [-127, 127]) and max calibration
Activations:
â Use scale quantization with with per-tensor granularity
We recommend the following procedure to quantize a pre-trained neural network.
⢠PTQ: Quantize all the computationally intensive layers (convolution, linear, matrix multiplication, etc.) and run activation calibration including max, entropy and 99.99%, 99.999% percentile. If none of the calibrations yield the desired accuracy continue to partial quantization or QAT.
⢠Partial Quantization: Perform sensitivity analysis to identify the most sensitive layers and leave them in ï¬oating-point. If the impact on computational performance is not acceptable or an acceptable accuracy cannot be reached, continue to QAT.
⢠QAT: Start from the best calibrated quantized model. Use QAT to ï¬ne-tune for around 10% of the original training schedule with an annealing learning rate schedule starting at 1% of the initial training learning rate. Refer to Appendix A.2 for speciï¬c hyperparameter choices.
Figure 5 summarizes the above workï¬ow in a ï¬owchart.
12
# 7 Conclusions
This paper reviewed the mathematical background for integer quantization of neural networks, as well as some performance-related reasons for choosing quantization parameters. We empirically evaluated various choices for int8 quantization of a variety of models, leading to a quantization workï¬ow proposal. Following this workï¬ow we demonstrated that all models we studied can be quantized to int8 with accuracy that either matches or is within 1% of the ï¬oating-point model accuracy. This included networks that are challenging for quantization, such as MobileNets and BERT. The workï¬ow involves only post-training quantization, partial quantization, and quantization-aware ï¬ne-tuning techniques. Some more complex techniques, such as ADMM and distillation, were not required for int8 quantization of these models. However, these techniques should be evaluated when quantizing to even lower-bit integer representations, which we leave to future work.
# References
[1] MartÃn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorï¬ow: A system for large-scale machine learning. In 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16), pages 265â283, 2016.
[2] Chaim Baskin, Eli Schwartz, Evgenii Zheltonozhskii, Natan Liss, Raja Giryes, Alex M Bronstein, and Avi Mendelson. Uniq: Uniform noise injection for non-uniform quantization of neural networks. arXiv preprint arXiv:1804.10969, 2018.
[3] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
[4] Aishwarya Bhandare, Vamsi Sripathi, Deepthi Karkada, Vivek Menon, Sun Choi, Kushal Datta, and Vikram Saletore. Efï¬cient 8-bit quantization of transformer neural machine language translation model. arXiv preprint arXiv:1906.00532, 2019.
[5] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cudnn: Efï¬cient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.
[6] Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018.
[7] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Training deep neural networks with low precision multiplications. arXiv preprint arXiv:1412.7024, 2014.
[8] Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. BinaryConnect: Training Deep Neural Networks with binary weights during propagations. NIPS, 28:3123â3131, 2015.
[9] Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 113â123, 2019.
[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[11] Jack J Dongarra, Jeremy Du Croz, Sven Hammarling, and Iain S Duff. A set of level 3 basic linear algebra subprograms. ACM Transactions on Mathematical Software (TOMS), 16(1):1â17, 1990.
[12] Boris Ginsburg, Patrice Castonguay, Oleksii Hrinchuk, Oleksii Kuchaiev, Vitaly Lavrukhin, Ryan Leary, Jason Li, Huyen Nguyen, and Jonathan M Cohen. Stochastic gradient methods with layer-wise adaptive moments for training of deep networks. arXiv preprint arXiv:1905.11286, 2019.
[13] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â778, 2016.
13
[15] Dan Hendrycks and Kevin Gimpel. Gaussian Error Linear Units (GELUs). arXiv preprint arXiv:1606.08415, 2016.
[16] Sepp Hochreiter and Jürgen Schmidhuber. Simplifying neural nets by discovering ï¬at minima. In Advances in neural information processing systems, pages 529â536, 1995.
[17] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efï¬cient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.
[18] Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks. In Advances in neural information processing systems, pages 4107â4115, 2016.
[19] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[20] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efï¬cient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2704â2713, 2018.
[21] Sambhav R Jain, Albert Gural, Michael Wu, and Chris H Dick. Trained quantization thresholds for accurate and efï¬cient ï¬xed-point inference of deep neural networks. arXiv preprint arXiv:1903.08066, 2(3):7, 2019.
[22] StanisÅaw JastrzËebski, Zachary Kenton, Devansh Arpit, Nicolas Ballas, Asja Fischer, Yoshua Bengio, and Amos Storkey. Three factors inï¬uencing minima in sgd. arXiv preprint arXiv:1711.04623, 2017.
[23] Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, Clifford Chao, Chris Clark, Jeremy Coriell, Mike Daley, Matt Dau, Jeffrey Dean, Ben Gelb, Tara Vazir Ghaemmaghami, Rajendra Gottipati, William Gulland, Robert Hagmann, C. Richard Ho, Doug Hogberg, John Hu, Robert Hundt, Dan Hurt, Julian Ibarz, Aaron Jaffey, Alek Jaworski, Alexander Kaplan, Harshit Khaitan, Daniel Killebrew, Andy Koch, Naveen Kumar, Steve Lacy, James Laudon, James Law, Diemthu Le, Chris Leary, Zhuyuan Liu, Kyle Lucke, Alan Lundin, Gordon MacKean, Adriana Maggiore, Maire Mahony, Kieran Miller, Rahul Nagarajan, Ravi Narayanaswami, Ray Ni, Kathy Nix, Thomas Norrie, Mark Omernick, Narayana Penukonda, Andy Phelps, Jonathan Ross, Matt Ross, Amir Salek, Emad Samadiani, Chris Severn, Gregory Sizikov, Matthew Snelham, Jed Souter, Dan Steinberg, Andy Swing, Mercedes Tan, Gregory Thorson, Bo Tian, Horia Toma, Erick Tuttle, Vijay Vasudevan, Richard Walter, Walter Wang, Eric Wilcox, and Doe Hyun Yoon. In-datacenter performance analysis of a tensor processing unit. SIGARCH Comput. Archit. News, 45(2):1â12, June 2017.
[24] Marcin Junczys-Dowmunt, Kenneth Heaï¬eld, Hieu Hoang, Roman Grundkiewicz, and Anthony Aue. Marian: Cost-effective high-quality neural machine translation in c++. arXiv preprint arXiv:1805.12096, 2018.
[25] Alexander Kozlov, Ivan Lazarevich, Vasily Shamporov, Nikolay Lyalyushkin, and Yury Gorbachev. Neural network compression framework for fast model inference. arXiv preprint arXiv:2002.08679, 2020.
[26] Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efï¬cient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018.
[27] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
[28] Akhilesh Kumar, Sailesh Kottapalli, Ian Steiner, Bob Valentine, Israel Hirsh, Geetha Vearaman, Lily Looi, Mohamed Arafa, Andy Rudoff, Sreenivas Mandava, Bahaa Fahim, and Sujal Vora. Future Intel Xeon Scalable Processor (Codename: Cascade Lake-SP). In Hotchips 2018, 2018.
[29] Yann LeCun, Bernhard E Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne E Hubbard, and Lawrence D Jackel. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, pages 396â404, 1990.
[30] Dongsoo Lee and Byeongwook Kim. Retraining-based iterative weight quantization for deep neural networks. arXiv preprint arXiv:1805.11233, 2018.
[31] Cong Leng, Zesheng Dou, Hao Li, Shenghuo Zhu, and Rong Jin. Extremely low bit neural network: Squeeze the last bit out with admm. In Thirty-Second AAAI Conference on Artiï¬cial Intelligence, 2018.
[32] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pages 2980â2988, 2017.
[33] Jeffrey L McKinstry, Steven K Esser, Rathinakumar Appuswamy, Deepika Bablani, John V Arthur, Izzet B Yildiz, and Dharmendra S Modha. Discovering low-precision networks close to full-precision networks for efï¬cient embedded inference. arXiv preprint arXiv:1809.04191, 2018.
14
[34] Naveen Mellempudi, Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, and Pradeep Dubey. Ternary neural networks with ï¬ne-grained quantization. arXiv preprint arXiv:1705.01462, 2017.
[35] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed precision training. arXiv preprint arXiv:1710.03740, 2017.
[36] Szymon Migacz. Nvidia 8-bit inference width tensorrt. In GPU Technology Conference, 2017.
[37] Asit Mishra and Debbie Marr. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. arXiv preprint arXiv:1711.05852, 2017.
[38] Markus Nagel, Mart van Baalen, Tijmen Blankevoort, and Max Welling. Data-free quantization through weight equalization and bias correction. In Proceedings of the IEEE International Conference on Computer Vision, pages 1325â1334, 2019.
[39] Vinod Nair and Geoffrey E Hinton. Rectiï¬ed linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807â814, 2010.
[40] NVIDIA. NVIDIA Turing GPU architecture: Graphics reinvented. https://www.nvidia.com/ content/dam/en-zz/Solutions/designvisualization/technologies/turing-architecture/ NVIDIATuring-Architecture-Whitepaper.pdf, 2018.
[41] Vassil Panayotov, Guoguo Chen, Daniel Povey, and Sanjeev Khudanpur. Librispeech: an asr corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5206â5210. IEEE, 2015.
[42] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelz- imer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024â8035. Curran Associates, Inc., 2019.
[43] Antonio Polino, Razvan Pascanu, and Dan Alistarh. Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668, 2018.
[44] Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017.
[45] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. In European conference on computer vision, pages 525â542. Springer, 2016.
[46] Manuele Rusci, Alessandro Capotondi, and Luca Benini. Memory-driven mixed low precision quantization for enabling deep network inference on microcontrollers. arXiv preprint arXiv:1905.13082, 2019.
[47] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211â252, 2015.
[48] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4510â4520, 2018.
[49] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Thirty-ï¬rst AAAI conference on artiï¬cial intelligence, 2017.
[50] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. Rethinking the inception In Proceedings of the IEEE conference on computer vision and pattern architecture for computer vision. recognition, pages 2818â2826, 2016.
[51] Mingxing Tan and Quoc V Le. Efï¬cientnet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946, 2019.
[52] Vincent Vanhoucke, Andrew Senior, and Mark Z. Mao. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS, 2011.
[53] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998â6008, 2017.
15
[54] Pete Warden. Why gemm is at the heart of deep learning. https://petewarden.com/2016/05/03/ how-to-quantize-neural-networks-with-tensorflow, 2015.
[55] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Googleâs neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
[56] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492â1500, 2017.
[57] Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, and Youlong Cheng. Image classiï¬cation at supercomputer scale. arXiv preprint arXiv:1811.06992, 2018.
[58] Oï¬r Zafrir, Guy Boudoukh, Peter Izsak, and Moshe Wasserblat. Q8bert: Quantized 8bit bert. arXiv preprint arXiv:1910.06188, 2019.
[59] Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016.
[60] Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. arXiv preprint arXiv:1612.01064, 2016.
[61] Neta Zmora, Guy Jacob, Lev Zlotnik, Bar Elharar, and Gal Novik. Neural network distiller: A python package for dnn compression research. arXiv preprint arXiv:1910.12232, 2019.
16
# Appendices
# A Evaluation Details
# A.1 Model Deï¬nitions
Model Conï¬guration Calibration samples Source MobileNet v1 width_mult=1.0 1024 github.com/marvis/pytorch-mobilenet MobileNet v2 ResNet50 v1.5 ResNet152 v1.5 Inception v3 Inception v4 ResNeXt50 ResNeXt101 width_mult=1.0 32x4d 32x8d 1024 1024 1024 1024 1024 1024 1024 github.com/pytorch/vision Efï¬cientNet b0 Efï¬cientNet b3 1024 1024 github.com/lukemelas/efï¬cientnet-pytorch Faster R-CNN Mask R-CNN FCN DeepLabV3 resnet50-fpn resnet50-fpn resnet101 resnet101 512 512 512 512 github.com/pytorch/vision Retinanet resnext101-32x4d-fpn 512 github.com/open-mmlab/mmdetection Transformer vaswani_en_de_big 14336 tokens github.com/pytorch/fairseq GNMT Jasper 4 layers 128 full dev-clean github.com/nvidia/deeplearningexamples Bert Large ï¬ne-tuned for QA 128 github.com/huggingface/pytorch-transformers
Table 9: Network details
Table 9 lists additional details of the models listed in Table 2. We evaluated a large variety of CNNs for image classiï¬cation. MobileNets are small networks that target inference on mobile devices [17, 48]. They are parameterized to scale to various channel widths and image resolutions. In this paper we evaluate the base conï¬gurations with width multiplier 1 and resolution 224x224. We also evaluated a number of larger CNNs [14, 56, 50, 49], including Efï¬cientNets [51], which achieve state-of-the-art accuracy on ImageNet. All CNNs use 224x224 inputs except for Inception v3 and v4, which use 299x299. We evaluated two detection and two segmentation networks from Torchvision, and an additional segmentation network, RetinaNet [32]. We evaluated two translation models, the 4 layers GNMT model [55] and the large conï¬guration of Transformer [53]. For speech recognition we evaluated Jasper which achieves state-of-the-art WER on public speech datasets [41]. For language modeling we use BERT large uncased and ï¬ne-tuned for question answering.
Models were calibrated with the number of samples listed from the training set of the respective dataset listed in Table 2, except for Jasper, which was calibrated on the dev set and evaluated on the test set. PyTorch implementations of all the models along were provided by the listed source repositories. We used the pre-trained weights provided by each repository, except for MobileNet v1 and Efï¬cientNets where pre-trained weights were not available. MobileNet v1 was trained using the reference training script and hyperparameters for MobileNet v2 from Torchvision. Pre-trained weights for Efï¬cientNets were converted to PyTorch from weights provided by TensorFlow [1]1.
# 1https://github.com/tensorï¬ow/tpu/tree/master/models/ofï¬cial/efï¬cientnet
17
Task/Model Dataset Optimizer Epochs Initial learning rate Batch size ImageNet 2012 Classiï¬cation Detection COCO 2017 Segmentation COCO 2017 Transformer WMT16 en-de WMT16 en-de GNMT LibriSpeech Jasper SQuAD v1.1 BERT SGD SGD SGD ADAM SGD NovoGrad[12] ADAM 15 3 3 3 1 80 2 1.0e-3 2.5e-5 2.5e-5 5.0e-4 max tokens = 3584 2.0e-3 1.5e-2 3.0e-5 256 16 32 1024, max seq. length = 50 256 12, max seq. length = 384
# Table 10: Fine-tuning schedule and conï¬guration
Models fp32 Max Entropy 99.9% 99.99% 99.999% 99.9999% MobileNet v1 MobileNet v2 ResNet50 v1.5 ResNet152 v1.5 Inception v3 Inception v4 ResNeXt50 ResNeXt101 Efï¬cientNet b0 Efï¬cientNet b3 Faster R-CNN Mask R-CNN Retinanet FCN DeepLabV3 GNMT Transformer Jasper BERT Large 71.88 71.88 76.16 78.32 77.34 79.71 77.61 79.30 76.85 81.61 36.95 37.89 39.30 63.70 67.40 24.27 28.27 96.09 91.01 71.80 71.11 76.68 78.64 76.43 68.38 77.38 78.98 76.16 80.51 36.62 37.63 39.03 63.40 67.10 24.49 28.42 96.11 90.29 72.11 71.50 76.85 78.61 78.43 80.07 77.67 78.99 76.95 80.63 36.76 37.74 39.11 64.10 67.30 24.38 28.46 96.10 90.14 72.07 71.48 76.59 78.61 78.33 80.01 77.48 79.00 76.85 80.62 36.31 37.26 37.76 63.90 66.90 24.35 28.23 95.23 89.97 72.14 71.56 76.67 78.69 78.49 80.14 77.56 78.99 77.09 81.07 36.76 37.67 38.97 64.20 67.20 24.41 28.21 95.94 90.50 71.89 71.28 76.77 78.65 78.36 79.94 77.51 79.01 76.98 81.09 36.83 37.76 39.25 63.90 67.50 24.48 28.04 96.01 90.67 71.91 71.34 76.81 78.65 78.38 78.82 77.51 79.04 76.63 80.92 36.76 37.75 39.20 63.40 67.20 24.35 28.10 96.08 90.60
Table 11: Fine-tuned quantization. Best accuracy in bold. Accuracy from best PTQ calibration per network underlined.
# A.2 Quantization Aware Training
Table 10 shows the ï¬ne-tuning hyperparameters used in the quantization aware ï¬ne-tuning experiments. For networks that are trained on multiple datasets (detection/segmentation networks and BERT) we only ï¬ne-tuned on the ï¬nal dataset (COCO and SQuAD). In general, only the initial learning rate value and learning rate schedule are changed from the original training session. We ï¬ne-tune for around 1/10th of the original training steps. The ï¬ne-tuning learning rate starts at 1/100th of the initial value used in the original training session and is decayed down to 1/100th of the initial ï¬ne-tuning learning rate. BERT is an exception. Since it pre-trains a language model and only ï¬ne-tunes on SQuAD for 2 epochs, we instead repeat the full ï¬ne-tuning schedule for QAT. We used a cosine annealing learning rate schedule which follows the monotonically decreasing half period of the cosine function.
Table 11 shows ï¬ne-tuned quantization accuracy for all networks and activation range calibration settings. Note that we always use the full per-column/per-channel range for weights (max calibration). It shows that with ï¬ne-tuning, accuracy improves for almost all the cases, especially those that suffer large accuracy drops after PTQ, for example max calibration. For many of the models, the best PTQ calibration is also the best calibration for QAT, indicated by results that are both bold and underlined. Even when QAT achieves higher accuracy with a different calibration, the difference in results is marginal. This result suggests that evaluating multiple activation calibrations during PTQ is a good heuristic to choose a calibration for QAT.
18
(a) Post training quantization (b) After quantization aware ï¬ne-tuning
Figure 6: Example 1D loss function. The model, w, is scale quantized with scale factor 1. a) PTQ: model converges to a narrow minimum. b) QAT: model ï¬nds a wide minimum with lower loss quantization points.
# B Intuition for QAT
To gain some intuition for why quantization-aware training may improve accuracy of the quantized model, consider the simple example in Figure 6. Neural networks are trained by minimizing a loss function with stochastic gradient descent. Loss gradients with respect to the network weights, δL δw , are computed and weights are iteratively updated in the direction of the negative gradient until the model converges to some minimum. Figure 6a shows a one-dimensional loss function for a model with a single parameter, w, that has converged to a local minimum, w â â0.5. When post training quantization is applied, with a scale factor of 1 for sake of example, the weight is quantized to the nearest integer, wq = â1, causing a signiï¬cant increase in the loss. In such a case we say that the model has converged to a ânarrowâ minimum, since a small change in the weight leads to a large change in loss.
By training with quantization, we may potentially avoid these narrow minima by computing gradients with respect to the quantized weights, as shown in Figure 6b. In doing so, narrow minima will result in larger gradients, potentially allowing the model to explore the loss landscape for âwideâ [22] or âï¬atâ [16, 30] minima, where quantization points have lower loss, and thus higher accuracy.
# C Batch normalization folding
Batch normalization folding is a common inference optimization applied to neural networks [20]. At inference, batch normalization layers performs the afï¬ne operation shown in Equation 14:
_ Y â= JVarly| + d= - yEly] (14) Varly] + ⬠z=BN(y)=c-yt+d
where 3, 7, Ely], and Var[y] are determined during training and fixed during inference, and ¢ is a constant [19]. Typically, following a fully connected layer the batch normalization is computed per activation. Consider a fully connected layer that performs the matrix multiplication and bias add shown in Equation[I5]
p Yj = SO tp: wey +b; (5) k=1
19
(14)
When the fully connected layer is followed by a batch normalization layer, z = BN(xW + b), the batch normalization can be folded into the weights and biases of the fully connected layer, as show in Equation 16:
P 25 = Doe cj wey tej + bj + dj (16) k=1 â~_â _â why v
resulting in a fully connected layer performing the operation z = xWâ + bâ. Since convolutions can be mapped to fully connected layers, and batch normalization in CNNs is per channel, we can apply the same optimization.
# D Novel activation functions
Two more recently developed activation functions are Swish (Equation 17) [44] and GELU (Equation 18) [15], used in Efï¬cientNets and BERT, respectively.
swish(x) = x · sigmoid(x) (17)
GELU(x) = x 2 (1 + erf( x â 2 )) (18)
These activation functions, shown in Figure 7a, are both smooth and ReLU-like but with small, bounded negative output ranges. Speciï¬cally, Swish has an output range of [â0.2785, â] and GELU has an output range of [â0.1700, â]. This poses a challenge for uniform quantization as it should represent both small negative values and large positive values.
Figure 7b shows the composition of GELU and fake quantization with different symmetric ranges. If the output of GELU is quantized to [-50, 50], then all negative values will round to zero. However, if we restrict the range to [-10, 10] then two negative values can be represented. Table 12 shows the accuracy of post training quantization with GELU outputs clipped to 10 (GELU10), and then calibrated with max calibration. Just by clipping the output of GELU we can achieve the best post training quantization accuracy with a simple max calibration, exceeding the previous best activation calibration by 0.46 F1. Furthermore, this result almost matches the best QAT result of 90.67 F1.
(a) GELU and Swish (b) Quantizing GELU by scale quantization with different ranges
Figure 7: Quantization mapping of real values to int8
Model fp32 Max Entropy 99.9% 99.99% 99.999% 99.9999% Max with GELU10 BERT Large 91.01 85.92 37.40 26.18 89.59 90.20 90.10 90.66
Table 12: BERT int8 post training quantization. Comparing previous calibration results to max with GELU10.
20 | {
"id": "1606.06160"
} |
2004.08728 | SimAlign: High Quality Word Alignments without Parallel Training Data using Static and Contextualized Embeddings | Word alignments are useful for tasks like statistical and neural machine
translation (NMT) and cross-lingual annotation projection. Statistical word
aligners perform well, as do methods that extract alignments jointly with
translations in NMT. However, most approaches require parallel training data,
and quality decreases as less training data is available. We propose word
alignment methods that require no parallel data. The key idea is to leverage
multilingual word embeddings, both static and contextualized, for word
alignment. Our multilingual embeddings are created from monolingual data only
without relying on any parallel data or dictionaries. We find that alignments
created from embeddings are superior for four and comparable for two language
pairs compared to those produced by traditional statistical aligners, even with
abundant parallel data; e.g., contextualized embeddings achieve a word
alignment F1 for English-German that is 5 percentage points higher than
eflomal, a high-quality statistical aligner, trained on 100k parallel
sentences. | http://arxiv.org/pdf/2004.08728 | Masoud Jalili Sabet, Philipp Dufter, François Yvon, Hinrich Schütze | cs.CL | EMNLP (Findings) 2020 | null | cs.CL | 20200418 | 20210416 | 1 2 0 2
r p A 6 1 ] L C . s c [
4 v 8 2 7 8 0 . 4 0 0 2 : v i X r a
# SimAlign: High Quality Word Alignments Without Parallel Training Data Using Static and Contextualized Embeddings
Masoud Jalili Sabetâ1, Philipp Dufterâ1, Franc¸ois Yvon2, Hinrich Sch ¨utze1 1 Center for Information and Language Processing (CIS), LMU Munich, Germany 2 Universit´e Paris-Saclay, CNRS, LIMSI, France {masoud,philipp}@cis.lmu.de,[email protected]
# Abstract
Der Pinguin Nils Olav wurde vom norwegischen Kénig zum Ritter geschlagen SN Nese ir Pingvin Nils Olav Norvegiya qiroli tomonidan ritsar edi
Word alignments are useful for tasks like sta- tistical and neural machine translation (NMT) and cross-lingual annotation projection. Statis- tical word aligners perform well, as do meth- ods that extract alignments jointly with trans- lations in NMT. However, most approaches require parallel training data, and quality de- creases as less training data is available. We propose word alignment methods that require no parallel data. The key idea is to lever- age multilingual word embeddings â both static and contextualized â for word alignment. Our multilingual embeddings are created from monolingual data only without relying on any parallel data or dictionaries. We ï¬nd that align- ments created from embeddings are superior for four and comparable for two language pairs compared to those produced by traditional sta- tistical aligners â even with abundant parallel data; e.g., contextualized embeddings achieve a word alignment F1 for English-German that is 5 percentage points higher than eï¬omal, a high-quality statistical aligner, trained on 100k parallel sentences.
# Introduction
Word alignments are essential for statistical ma- chine translation and useful in NMT, e.g., for im- posing priors on attention matrices (Liu et al., 2016; Chen et al., 2016; Alkhouli and Ney, 2017; Alkhouli et al., 2018) or for decoding (Alkhouli et al., 2016; Press and Smith, 2018). Further, word alignments have been successfully used in a range of tasks such as typological analysis (Lewis and Xia, 2008; ¨Ostling, 2015b), annotation projection (Yarowsky et al., 2001; Pad´o and Lapata, 2009; Asgari and Sch¨utze, 2017; Huck et al., 2019) and creating multilingual embeddings (Guo et al., 2016; Ammar et al., 2016; Dufter et al., 2018).
Sir Nils Olav Ill. CF ~ > #Y knighted by el rey noruego aaa LF LF SNS â¢W Nils Olav der Dritte is a penguin nominato cavaliere par un roi norvégien
Figure 1: Our method does not rely on parallel train- ing data and can align distant language pairs (German- Uzbek, top) and even mixed sentences (bottom). Exam- ple sentence is manually created. Algorithm: Itermax.
Statistical word aligners such as the IBM mod- els (Brown et al., 1993) and their implementations Giza++ (Och and Ney, 2003), fast-align (Dyer et al., 2013), as well as newer models such as eï¬o- mal ( ¨Ostling and Tiedemann, 2016) are widely used for alignment. With the rise of NMT (Bahdanau et al., 2014), attempts have been made to interpret attention matrices as soft word alignments (Cohn et al., 2016; Koehn and Knowles, 2017; Ghader and Monz, 2017). Several methods create align- ments from attention matrices (Peter et al., 2017; Zenkel et al., 2019) or pursue a multitask approach for alignment and translation (Garg et al., 2019). However, most systems require parallel data (in suf- ï¬cient amount to train high quality NMT systems) and their performance deteriorates when parallel text is scarce (Tables 1â2 in (Och and Ney, 2003)).
Recent unsupervised multilingual embedding al- gorithms that use only non-parallel data provide high quality static (Artetxe et al., 2018; Conneau et al., 2018) and contextualized embeddings (De- vlin et al., 2019; Conneau et al., 2020). Our key idea is to leverage these embeddings for word align- ments â by extracting alignments from similarity matrices induced from embeddings â without rely- ing on parallel data. Requiring no or little paral- lel data is advantageous, e.g., in the low-resource case and in domain-speciï¬c settings without par- allel data. A lack of parallel data cannot be easily
â Equal contribution - random order.
remedied: mining parallel sentences is possible (Schwenk et al., 2019) but assumes that compara- ble, monolingual corpora contain parallel sentences. Further, we ï¬nd that large amounts of mined par- allel data do not necessarily improve alignment quality.
Our main contribution is that we show that word alignments obtained from multilingual pre- trained language models are superior for four and comparable for two language pairs, compared to strong statistical word aligners like eï¬omal even in high resource scenarios. Additionally, (1) we introduce three new alignment methods based on the matrix of embedding similarities and two ex- tensions that handle null words and integrate posi- tional information. They permit a ï¬exible tradeoff of recall and precision. (2) We provide evidence that subword processing is beneï¬cial for aligning rare words. (3) We bundle the source code of our methods in a tool called SimAlign, which is avail- able.1 An interactive online demo is available.2
# 2 Methods
# 2.1 Alignments from Similarity Matrices
We propose three methods to obtain alignments from similarity matrices. Argmax is a simple base- line, IterMax a novel iterative algorithm, and Match a graph-theoretical method based on identifying matchings in a bipartite graph.
Consider parallel sentences sl), swith lengths /., 1 in languages e, f. Assume we have access to some embedding function ⬠that maps each word in a sentence to a d-dimensional vector, ie., â¬(s)) ⬠R&*4 fork ⬠fe, f}. Let E(s); denote the vector of the i-th word in sentence s(* For static embeddings â¬(sâ*)); depends only on the word 7 in language k whereas for contextualized embeddings the vector depends on the full context s("), We define the similarity matrix as the matrix Se [0, 1]«*!s induced by the embeddings where Siz = sim (E(s);,E(s),;) is some normal- ized measure of similarity, e.g., cosine-similarity normalized to be between 0 and 1. We now de- scribe our methods for extracting alignments from S, i.e., obtaining a binary matrix A ⬠{0, rylexty, Argmax. A simple baseline is to align i and (e) (f) j when s;â is the most similar word to 3; and
# i
# j
1https://github.com/cisnlp/simalign 2https://simalign.cis.lmu.de/
# Algorithm 1 Itermax.
Algorithm 1 Itermax. 1: procedure ITERMAX(S, Mmax» & ⬠[0, 1]) 2 A, M = zeros _like(S) 3: forn ⬠[1,..., max] do 4 Vi,g: Lif max (Siko Ay, Niko Au) =0 Mi = 9 Oif min (Siko Ay, Dio Au) >0 a otherwise Ato add = get-argmax_alignments(S © M) =A+t Av ada mw return A 6: 7: 8: end for 9 0: end procedure
Figure 2: Description of the Itermax algorithm. ze- ros_like yields a matrix with zeros and with same shape as the input, get_argmax_alignments returns alignments obtained using the Argmax Method, © is elementwise multiplication.
vice-versa. That is, we set Aij = 1 if
(i = arg max l Sl,j) â§ (j = arg max l Si,l)
and Aij = 0 otherwise. In case of ties, which are unlikely in similarity matrices, we choose the smaller index. If all entries in a row i or column j of S are 0 we set Aij = 0 (this case can appear in Itermax). Similar methods have been applied to co-occurrences (Melamed, 2000) (âcompetitive linkingâ), Dice coefï¬cients (Och and Ney, 2003) and attention matrices (Garg et al., 2019).
Itermax. There are many sentences for which Argmax only identiï¬es few alignment edges be- cause mutual argmaxes can be rare. As a remedy, we apply Argmax iteratively. Speciï¬cally, we mod- ify the similarity matrix conditioned on the align- ment edges found in a previous iteration: if two words i and j have both been aligned, we zero out the similarity. Similarly, if neither is aligned we leave the similarity unchanged. In case only one of them is aligned, we multiply the similarity with a discount factor α â [0, 1]. Intuitively, this encour- ages the model to focus on unaligned word pairs. However, if the similarity with an already aligned word is exceptionally high, the model can add an additional edge. Note that this explicitly allows one token to be aligned to multiple other tokens. For details on the algorithm see Figure 2.
Match. Argmax ï¬nds a local, not a global opti- mum and Itermax is a greedy algorithm. To ï¬nd global optima, we frame alignment as an assign-
ment problem: we search for a maximum-weight maximal matching (e.g., (Kuhn, 1955)) in the bi- partite weighted graph which is induced by the similarity matrix. This optimization problem is deï¬ned by
le ly Ata ATZMAX 4 ¢ 49 yelp S- S- Ais Siz i=1 j=l
subject to A being a matching (i.e., each node has at most one edge) that is maximal (i.e., no additional edge can be added). There are known algorithms to solve the above problem in polynomial time (e.g., (Galil, 1986)).
Note that alignments generated with the match method are inherently bidirectional. None of our methods require additional symmetrization as post- processing.
# 2.2 Distortion and Null Extensions
Distortion Correction [Dist]. Distortion, as intro- duced in IBM Model 2, is essential for alignments based on non-contextualized embeddings since the similarity of two words is solely based on their surface form, independent of position. To penalize high distortions, we multiply the similarity matrix S componentwise with
Pi,j = 1 â κ (i/le â j/lf )2 ,
where κ is a hyperparameter to scale the dis- tortion matrix P between [(1 â κ), 1]. We use κ = 0.5. See supplementary for different val- ues. We can interpret this as imposing a locality- preserving prior: given a choice, a word should be aligned to a word with a similar relative posi- tion ((i/le â j/lf )2 close to 0) rather than a more distant word (large (i/le â j/lf )2).
Null. Null words model untranslated words and are an important part of alignment models. We propose to model null words as follows: if a word is not particularly similar to any of the words in the target sentence, we do not align it. Speciï¬- cally, given an alignment matrix A, we remove alignment edges when the normalized entropy of the similarity distribution is above a threshold Ï , a hyperparameter. We use normalized entropy (i.e., entropy divided by the log of sentence length) to account for different sentence lengths; i.e., we set Aij = 0 if
L le rar) rSiplog Sh, LierSk jlo Shy min( Tos k log ly )>r,
Siz/ an Sims and Shy = Skj/ yey Simj. As the ideal value of 7 depends on the actual similarity scores we set 7 to a per- centile of the entropy values of the similarity dis- tribution across all aligned edges (we use the 95th percentile). Different percentiles are in the supple- mentary. h where Sj.
# 3 Experiments
# 3.1 Embedding Learning
Static. We train monolingual embeddings with fastText (Bojanowski et al., 2017) for each lan- guage on its Wikipedia. We then use VecMap (Artetxe et al., 2018) to map the embeddings into a common multilingual space. Note that this algo- rithm works without any crosslingual supervision (e.g., multilingual dictionaries). We use the same procedure for word and subword levels. We use the label fastText to refer to these embeddings as well as the alignments induced by them.
Contextualized. We use the multilingual BERT model (mBERT).3 It is pretrained on the 104 largest Wikipedia languages. This model only provides embeddings at the subword level. To obtain a word embedding, we simply average the vectors of its subwords. We consider word representations from all 12 layers as well as the concatenation of all layers. Note that the model is not ï¬netuned. We denote this method as mBERT[i] (when using em- beddings from the i-th layer, where 0 means using the non-contextualized initial embedding layer) and mBERT[conc] (for concatenation).
In addition, we use XLM-RoBERTa base (Con- neau et al., 2020), which is pretrained on 100 lan- guages on cleaned CommonCrawl data (Wenzek et al., 2020). We denote alignments obtained using the embeddings from the i-th layer by XLM-R[i].
# 3.2 Word and Subword Alignments
We investigate both alignments between subwords such as wordpiece (Schuster and Nakajima, 2012) (which are widely used for contextualized language models) and words. We refer to computing align- ment edges between words as word level and be- tween subwords as subword level. Note that gold standards are all word-level. In order to evaluate alignments obtained at the subword level we con- vert subword to word alignments using the heuristic âtwo words are aligned if any of their subwords are
3https://github.com/google-research/ bert/blob/master/multilingual.md
# Gold Standard
Embeddings Alignments uate Word-level Pall Word-level ry > Convert by averaging, = ifrequired * Convert using a = heuristic > Ski excursions are excellent . aâ~ = â¢âS = Ski ##ausflige sind hervor ##ragend . Ski excursions are excellent . I \ Skiausfliige sind hervorragend .
Figure 3: Subword alignments are always converted to word alignments for evaluation.
alignedâ (see Figure 3). As a result a single word can be aligned with multiple other words.
For the word level, we use the NLTK tokenizer (Bird et al., 2009) (e.g., for tokenizing Wikipedia in order to train fastText). For the subword level, we generally use multilingual BERTâs vocabulary3 and BERTâs wordpiece tokenizer. For XLM-R we use the XLM-R subword vocabulary. Since gold standards are already tokenized, they do not require additional tokenization.
# 3.3 Baselines
We compare to three popular statistical alignment models that all require parallel training data. fast- align/IBM2 (Dyer et al., 2013) is an implemen- tation of an alignment algorithm based on IBM Model 2. It is popular because of its speed and high quality. eï¬omal4 (based on efmaral by ¨Ostling and Tiedemann (2016)), a Bayesian model with Markov Chain Monte Carlo inference, is claimed to outperform fast-align on speed and quality. Fur- ther we use the widely used software package Giza++/IBM4 (Och and Ney, 2003), which imple- ments IBM alignment models. We use its standard settings: 5 iterations each for the HMM model, IBM Models 1, 3 and 4 with p0 = 0.98.
Symmetrization. Probabilistic word alignment models create forward and backward alignments and then symmetrize them (Och and Ney, 2003; Koehn et al., 2005). We compared the symmetriza- tion methods grow-diag-ï¬nal-and (GDFA) and in- tersection and found them to perform comparably; see supplementary. We use GDFA throughout the paper.
4github.com/robertostling/eflomal
# 3.4 Evaluation Measures
Given a set of predicted alignment edges A and a set of sure, possible gold standard edges S, P (where S â P ), we use the following evaluation measures:
prec = F1 = |A â© P | |A| 2 prec rec prec + rec , rec = , |A â© S| |S| , AER = 1 â |A â© S| + |A â© P | |A| + |S| ,
where | · | denotes the cardinality of a set. This is the standard evaluation (Och and Ney, 2003).
# 3.5 Data
Our test data are a diverse set of 6 language pairs: Czech, German, Persian, French, Hindi and Roma- nian, always paired with English. See Table 11 for corpora and supplementary for URLs.
For our baselines requiring parallel training data (i.e., eï¬omal, fast-align and Giza++) we select addi- tional parallel training data that is consistent with the target domain where available. See Table 11 for the corpora. Unless indicated otherwise we use the whole parallel training data. Figure 5 shows the effect of using more or less training data.
Given the large amount of possible experiments when considering 6 language pairs we do not have space to present all numbers for all languages. If we show results for only one pair, we choose ENG- DEU as it is an established and well-known dataset (EuroParl). If we show results for more languages we fall back to DEU, CES and HIN, to show effects on a mid-resource morphologically rich language (CES) and a low-resource language written in a different script (HIN).
# 4 Results
# 4.1 Embedding Layer
Figure 4 shows a parabolic trend across layers of mBERT and XLM-R. We use layer 8 in this paper because it has best performance. This is consis- tent with other work (Hewitt and Manning, 2019; Tenney et al., 2019): in the ï¬rst layers the contex- tualization is too weak for high-quality alignments while the last layers are too specialized on the pre- training task (masked language modeling).
Lang. Gold Standard Gold St. Size |S| |P \ S| Parallel Data Parallel Wikipedia Size Data Size (MareËcek, 2008) ENG-CES EuroParl-baseda ENG-DEU (Tavakoli and Faili, 2014) ENG-FAS WPT2003, (Och and Ney, 2000), ENG-FRA WPT2005b ENG-HIN WPT2005b ENG-RON a www-i6.informatik.rwth-aachen.de/goldAlignment/ b http://web.eecs.umich.edu/Ëmihalcea/wpt05/ 2500 508 400 447 90 203 44292 9612 11606 4038 1409 5033 23132 921 0 13400 0 0 EuroParl (Koehn, 2005) EuroParl (Koehn, 2005) TEP (Pilevar et al., 2011) Hansards (Germann, 2001) Emille (McEnery et al., 2000) Constitution, Newspaperb 646k 1920k 600k 1130k 3k 50k 8M 48M 5M 32M 1M 3M
Table 1: Overview of datasets. âLang.â uses ISO 639-3 language codes. âSizeâ refers to the number of sentences. âParallel Data Sizeâ refers to the number of parallel sentences in addition to the gold alignments that is used for training the baselines. Our sentence tokenized version of the English Wikipedia has 105M sentences.
ENG-CES ENG-DEU ENG-FAS ENG-FRA ENG-HIN ENG-RON Method F1 AER F1 AER F1 AER F1 AER F1 AER F1 AER k r o W r o i r P ( ¨Ostling, 2015a) Bayesian ( ¨Ostling, 2015a) Giza++ (Legrand et al., 2016) Ensemble Method .81 ( ¨Ostling and Tiedemann, 2016) efmaral ( ¨Ostling and Tiedemann, 2016) fast-align (Zenkel et al., 2019) Giza++ (Garg et al., 2019) Multitask .16 .21 .20 .94 .92 .71 .93 .86 .06 .07 .10 .08 .15 .06 .08 .57 .51 .53 .33 .43 .49 .47 .67 .73 .72 .72 .68 .27 .28 .28 .33 .28 s e n i l e s a B d fast-align/IBM2 r Giza++/IBM4 o W eï¬omal d fast-align/IBM2 r o Giza++/IBM4 w b eï¬omal u S .76 .75 .85 .78 .82 .84 .25 .26 .15 .23 .18 .17 .71 .77 .77 .71 .78 .76 .29 .23 .23 .30 .22 .24 .57 .51 .61 .58 .57 .63 .43 .49 .39 .42 .43 .37 .86 .92 .93 .85 .92 .91 .15 .09 .08 .16 .09 .09 .34 .45 .51 .38 .48 .52 .66 .55 .49 .62 .52 .48 .68 .69 .71 .68 .69 .72 .33 .31 .29 .32 .32 .28 k r o W d fastText - Argmax r o W mBERT[8] - Argmax XLM-R[8] - Argmax .70 .87 .87 .30 .13 .13 .60 .79 .79 .40 .21 .21 .50 .67 .70 .50 .33 .30 .77 .94 .93 .22 .06 .06 .49 .54 .59 .52 .47 .41 .47 .64 .70 .53 .36 .30 s i h T d fastText - Argmax r o w b u S mBERT[8] - Argmax XLM-R[8] - Argmax .58 .86 .87 .42 .14 .13 .56 .81 .81 .44 .19 .19 .09 .67 .71 .91 .33 .29 .73 .94 .93 .26 .06 .07 .04 .55 .61 .96 .45 .39 .43 .65 .71 .58 .35 .29
Table 2: Comparison of our methods, baselines and prior work in unsupervised word alignment. Best result per column in bold. A detailed version of the table with precision/recall and Itermax/Match results is in supplementary.
0.8 0.4 XLM â-R 0.8 0.4 == conc = = eng hin 0 2 4 6 & 10 12 Layer
Figure 4: Word alignment performance across layers of mBERT (top) and XLM-R (bottom). Results are F1 with Argmax at the subword level.
formed (except for RON). We outperform all prior work except for FRA where we match the perfor- mance and RON. This comparison is not entirely fair because methods relying on parallel data have access to the parallel sentences of the test data dur- ing training whereas our methods do not.
Romanian might be a special case as it exhibits a large amount of many to one links and further lacks determiners. How determiners are handled in the gold standard depends heavily on the annotation guidelines. Note that one of our settings, XLM- R[8] with Itermax at the subword level, has an F1 of .72 for ENG-RON, which comes very close to the performance by ( ¨Ostling, 2015a) (see Table 3).
# 4.2 Comparison with Prior Work
Contextual Embeddings. Table 2 shows that mBERT and XLM-R consistently perform well with the Argmax method. XLM-R yields mostly higher values than mBERT. Our three baselines, eï¬omal, fast-align and Giza++, are always outper-
In summary, extracting alignments from similar- ity matrices is a very simple and efï¬cient method that performs surprisingly strongly. It outperforms strong statistical baselines and most prior work in unsupervised word alignment for CES, DEU, FAS and HIN and is comparable for FRA and RON. We attribute this to the strong contextualization in mBERT and XLM-R.
0.85 0.80 0.75 0.70 0.65 0.60 , 0.55 ie == mBERT[8](Argmax) wo â fastText(Argmax+Dist) 0.5048 â word sm fast-align subword = eflomal 0.45 103 104 10° 10° 107 #Parallel Sentences
Figure 5: Learning curves of fast-align/eï¬omal vs. embedding-based alignments. Results shown are F1 for ENG-DEU, contrasting subword and word repre- sentations. Up to 1.9M parallel sentences we use Eu- roParl. To demonstrate the effect with abundant paral- lel data we add up to 37M additional parallel sentences from ParaCrawl (Espl`a et al., 2019) (see grey area).
Static Embeddings. fastText shows a solid per- formance on word level, which is worse but comes close to fast-align and outperforms it for HIN. We consider this surprising as fastText did not have access to parallel data or any multilingual signal. VecMap can also be used with crosslingual dictio- naries. We expect this to boost performance and fastText could then become a viable alternative to fast-align.
Amount of Parallel Data. Figure 5 shows that fast-align and eï¬omal get better with more train- ing data with eï¬omal outperforming fast-align, as expected. However, even with 1.9M parallel sen- tences mBERT outperforms both baselines. When adding up to 37M additional parallel sentences from ParaCrawl (Espl`a et al., 2019) performance for fast-align increases slightly, however, eï¬omal decreases (grey area in plot). ParaCrawl contains mined parallel sentences whose lower quality prob- ably harms eï¬omal. fastText (with distortion) is competitive with eï¬omal for fewer than 1000 paral- lel sentences and outperforms fast-align even with 10k sentences. Thus for very small parallel corpora (<10k sentences) using fastText embeddings is an alternative to fast-align.
The main takeaway from Figure 5 is that mBERT- based alignments, a method that does not need any parallel training data, outperforms state-of-the-art aligners like eï¬omal for ENG-DEU, even in the very high resource case.
Emb. Method ENG- ENG- ENG- ENG- ENG- ENG- FRA HIN RON CES DEU FAS
mBERT[8] Argmax Itermax Match .86 .86 .82 .81 .81 .78 .67 .70 .67 .94 .93 .90 .55 .58 .58 .65 .69 .67 XLM-R[8] Argmax Itermax Match .87 .86 .81 .81 .80 .76 .71 .72 .68 .93 .92 .88 .61 .62 .60 .71 .72 .70
Table 3: Comparison of our three proposed methods across all languages for the best embeddings from Ta- ble 2: mBERT[8] and XLM-R[8]. We show F1 at the subword level. Best result per embedding type in bold.
ENG-DEU ENG-CES ENG-HIN . b m E nmax α Prec. Rec. F1 AER Prec. Rec. F1 AER Prec. Rec. F1 AER 1 - .92 .69 .79 .21 .95 .80 .87 .13 .84 .39 .54 .47 ] 8 [ T R E B m 2 .90 .95 1 .85 .77 .81 .19 .83 .80 .81 .19 .77 .79 .78 .22 .87 .87 .87 .14 .85 .89 .87 .13 .80 .86 .83 .17 .75 .47 .58 .42 .73 .48 .58 .42 .63 .46 .53 .47 3 .90 .95 1 .81 .80 .80 .20 .78 .83 .81 .20 .73 .83 .77 .23 .83 .88 .85 .15 .81 .91 .86 .15 .76 .91 .82 .18 .70 .49 .57 .43 .68 .52 .59 .41 .58 .51 .54 .46 1 - .81 .48 .60 .40 .86 .59 .70 .30 .75 .36 .49 .52 t x e T t s a f 2 .90 .95 1 .69 .56 .62 .38 .66 .56 .61 .39 .59 .55 .57 .43 .74 .69 .72 .29 .71 .69 .70 .30 .62 .65 .63 .37 .63 .42 .51 .49 .59 .41 .48 .52 .53 .39 .45 .55 3 .90 .95 1 .63 .59 .61 .39 .59 .59 .59 .41 .53 .58 .55 .45 .67 .72 .70 .31 .63 .73 .68 .33 .55 .70 .62 .39 .57 .43 .49 .51 .53 .44 .48 .52 .48 .43 .45 .55
Table 4: Itermax with different number of iterations (nmax) and different α. Results are at the word level.
# 4.3 Additional Methods and Extensions
We already showed that Argmax yields alignments that are competitive with the state of the art. In this section we compare all our proposed methods and extensions more closely.
Itermax. Table 4 shows results for Argmax (i.e., 1 Iteration) as well as Itermax (i.e., 2 or more iterations of Argmax). As expected, with more iterations precision drops in favor of recall. Overall, Itermax achieves higher F1 scores for the three language pairs (equal for ENG-CES) both for mBERT[8] and fastText embeddings. For Hindi the performance increase is the highest. We hypothe- size that for more distant languages Itermax is more beneï¬cial as similarity between wordpieces may be generally lower, thus exhibiting fewer mutual argmaxes. For the rest of the paper if we use Iter- max we use 2 Iterations with α = 0.9 as it exhibits best performance (5 out of 6 wins in Table 4).
Argmax/Itermax/Match. In Table 3 we com- pare our three proposed methods in terms of F1 across all languages. We chose to show the two
# ENG-DEU
# ENG-CES
# ENG-HIN
# . b m E
Method Prec. Rec. F1 AER Prec. Rec. F1 AER Prec. Rec. F1 AER
Argmax +Dist +Null .81 .48 .60 .40 .84 .54 .65 .35 .81 .46 .59 .41 .86 .59 .70 .30 .89 .68 .77 .23 .86 .56 .68 .32 .75 .36 .49 .52 .64 .30 .41 .59 .74 .34 .46 .54 t x e T t s a f Itermax +Dist +Null .69 .56 .62 .38 .71 .62 .66 .34 .69 .53 .60 .40 .74 .69 .72 .29 .75 .76 .76 .25 .74 .66 .70 .30 .63 .42 .51 .49 .54 .37 .44 .57 .63 .40 .49 .51 Match +Dist +Null .60 .58 .59 .41 .67 .64 .65 .35 .61 .56 .58 .42 .65 .71 .68 .32 .72 .78 .75 .25 .66 .69 .67 .33 .55 .43 .48 .52 .50 .39 .43 .57 .56 .41 .48 .52 ] 8 [ T R E B m Argmax +Dist +Null Itermax +Dist +Null .92 .69 .79 .21 .91 .67 .77 .23 .93 .67 .78 .22 .85 .77 .81 .19 .82 .75 .79 .21 .86 .75 .80 .20 .95 .80 .87 .13 .93 .79 .85 .15 .95 .77 .85 .15 .87 .87 .87 .14 .84 .85 .85 .15 .88 .84 .86 .14 .84 .39 .54 .47 .68 .29 .41 .59 .85 .38 .53 .47 .75 .47 .58 .43 .56 .34 .43 .58 .76 .45 .57 .43 Match +Dist +Null .78 .74 .76 .24 .75 .71 .73 .27 .80 .73 .76 .24 .81 .85 .83 .17 .79 .83 .81 .20 .83 .83 .83 .17 .67 .52 .59 .42 .45 .35 .39 .61 .68 .51 .58 .42
Table 5: Analysis of Null and Distortion Extensions. All alignments are obtained at word-level. Best result per embedding type and method in bold.
best performing settings from Table 2: mBERT[8] and XLM-R[8] at the subword level. Itermax per- forms slightly better than Argmax with 6 wins, 4 losses and 2 ties. Itermax seems to help more for more distant languages such as FAS, HIN and RON, but harms for FRA. Match has the lowest F1, but generally exhibits a higher recall (see e.g., Table 5). Null and Distortion Extensions. Table 5 shows that Argmax and Itermax generally have higher pre- cision, whereas Match has higher recall. Adding Null almost always increases precision, but at the cost of recall, resulting mostly in a lower F1 score. Adding a distortion prior boosts performance for static embeddings, e.g., from .70 to .77 for ENG- CES Argmax F1 and similarly for ENG-DEU. For Hindi a distortion prior is harmful. Dist has little and sometimes harmful effects on mBERT indicat- ing that mBERTâs contextualized representations already match well across languages.
Summary. Argmax and Itermax exhibit the best and most stable performance. For most language pairs Itermax is recommended. If high recall align- ments are required, Match is the recommended algorithm. Except for HIN, a distortion prior is beneï¬cial for static embeddings. Null should be ap- plied when one wants to push precision even higher (e.g., for annotation projection).
# 4.4 Words and Subwords
Table 2 shows that subword processing slightly out- performs word-level processing for most methods. Only fastText is harmed by subword processing.
0.85 0.80 0.75 â0.70 065) 0 x fe ââ word â=â mBERT[8](Argmax) 0.60 subword ++ eflomal - - - â! o<=k<5 5 <=x<25 25 <=x< 125 125 <= x (240) (331) (650) Frequency Bin (9312)
Figure 6: Results for different frequency bins on ENG- DEU. An edge in S, P , or A is attributed to exactly one bin based on the minimum frequency of the involved words (denoted by x). Number of gold edges in brack- ets. Eï¬omal is trained on all 1.9M parallel sentences. Frequencies are computed on the same corpus.
|]
# ADJ ADP ADV AUX NOUN PRON VERB
eï¬omal Word Subword 0.83 0.69 0.72 0.82 0.68 0.71 0.63 0.57 0.85 0.85 0.79 0.77 0.63 0.62 mBERT[8] Word Subword 0.79 0.74 0.71 0.81 0.75 0.72 0.71 0.72 0.81 0.87 0.84 0.84 0.69 0.69
Table 6: Alignment performance (F1) on ENG-DEU for POS. We use mBERT[8](Argmax) and Eï¬omal trained on 1.9M parallel sentences on the word level.
We use VecMap to match (sub)word distributions across languages. We hypothesize that it is harder to match subword than word distributions â this effect is strongest for Persian and Hindi, proba- bly due to different scripts and thus different sub- word distributions. Initial experiments showed that adding supervision in form of a dictionary helps restore performance. We will investigate this in future work.
We hypothesize that subword processing is ben- eï¬cial for aligning rare words. To show this, we compute our evaluation measures for different fre- quency bins. More speciï¬cally, we only consider gold standard alignment edges for the computation where at least one of the member words has a cer- tain frequency in a reference corpus (in our case all 1.9M lines from the ENG-DEU EuroParl corpus). That is, we only consider the edge (i, j) in A, S or P if the minimum of the source and target word frequency is in [γl, γu) where γl and γu are bin boundaries.
Figure 6 shows F1 for different frequency bins. For rare words both eï¬omal and mBERT show a severely decreased performance at the word level, but not at the subword level. Thus, subword pro- cessing is indeed beneï¬cial for rare words.
At the same time , Regulation No 2078 of 1992 on environmentajly compatible agricultural production methods adapted to the landscape has also contributed substantially to this trend. /
# L
Daneben hat die Verordnung 2078 aus dem Jahr 1992 Uber umweltvertragliche und landschaftsgerechte Produktionsweisen in der Landwirtschaft ebenfalls erheblich zu dieser Entwicklung beigetragen .
The Commission , for its part , will continue to play an active partin the intergovernfnent nference., .
~..
Die Kommission wird bei der Regierungskonferenz auch weiterhin eine aktive Rolle spielen .
Figure 7: Example alignment of auxiliary verbs. Same setting as in Table 6. Solid lines: mBERTâs alignment, identical to the gold standard. Dashed lines: eï¬omalâs incorrect alignment.
# 4.5 Part-Of-Speech Analysis
To analyze the performance with respect to differ- ent part-of-speech (POS) tags, the ENG-DEU gold standard was tagged with the Stanza toolkit (Qi et al., 2020). We evaluate the alignment perfor- mance for each POS tag by only considering the alignment edges where at least one of their mem- ber words has this tag. Table 6 shows results for frequent POS tags. Compared to eï¬omal, mBERT aligns auxiliaries, pronouns and verbs better. The relative position of auxiliaries and verbs in German can diverge strongly from that in English because they occur at the end of the sentence (verb-end po- sition) in many clause types. Positions of pronouns can also diverge due to a more ï¬exible word or- der in German. It is difï¬cult for an HMM-based aligner like eï¬omal to model such high-distortion alignments, a property that has been found by prior work as well (Ho and Yvon, 2019). In contrast, mBERT(Argmax) does not use distortion informa- tion, so high distortion is not a problem for it.
Figure 7 gives an example for auxiliaries. The gold alignment (âhasâ â âhatâ) is correctly identi- ï¬ed by mBERT (solid line). Eï¬omal generates an incorrect alignment (âtimeâ â âhatâ): the two words have about the same relative position, indicating that distortion minimization is the main reason for this incorrect alignment. Analyzing all auxiliary alignment edges, the average absolute value of the distance between aligned words is 2.72 for eï¬omal and 3.22 for mBERT. This indicates that eï¬omal is more reluctant than mBERT to generate high- distortion alignments and thus loses accuracy.
# 5 Related Work
Brown et al. (1993) introduced the IBM models, the best known statistical word aligners. More recent aligners, often based on IBM models, include fast- align (Dyer et al., 2013), Giza++ (Och and Ney, 2003) and eï¬omal ( ¨Ostling and Tiedemann, 2016). ( ¨Ostling, 2015a) showed that Bayesian Alignment Models perform well. Neural network based exten- sions of these models have been considered (Ayan et al., 2005; Ho and Yvon, 2019). All of these mod- els are trained on parallel text. Our method instead aligns based on embeddings that are induced from monolingual data only. We compare with prior methods and observe comparable performance.
Prior work on using learned representations for alignment includes (Smadja et al., 1996; Och and Ney, 2003) (Dice coefï¬cient), (Jalili Sabet et al., 2016) (incorporation of embeddings into IBM mod- els), (Legrand et al., 2016) (neural network align- ment model) and (Pourdamghani et al., 2018) (em- beddings are used to encourage words to align to similar words). Tamura et al. (2014) use recur- rent neural networks to learn alignments. They use noise contrastive estimation to avoid supervision. Yang et al. (2013) train a neural network that uses pretrained word embeddings in the initial layer. All of this work requires parallel data. mBERT is used for word alignments in concurrent work: Libovick´y et al. (2019) use the high quality of mBERT align- ments as evidence for the âlanguage-neutralityâ of mBERT. Nagata et al. (2020) phrase word align- ment as crosslingual span prediction and ï¬netune mBERT using gold alignments.
Attention in NMT (Bahdanau et al., 2014) is related to a notion of soft alignment, but often de- viates from conventional word alignments (Ghader and Monz, 2017; Koehn and Knowles, 2017). One difference is that standard attention does not have access to the target word. To address this, Pe- ter et al. (2017) tailor attention matrices to obtain higher quality alignments. Li et al. (2018)âs and Zenkel et al. (2019)âs models perform similarly to and Zenkel et al. (2020) outperform Giza++. Ding et al. (2019) propose better decoding algo- rithms to deduce word alignments from NMT pre- dictions. Chen et al. (2016), Mi et al. (2016) and Garg et al. (2019) obtain alignments and transla- tions in a multitask setup. Garg et al. (2019) ï¬nd that operating at the subword level can be bene- ï¬cial for alignment models. Li et al. (2019) pro- pose two methods to extract alignments from NMT
models, however they do not outperform fast-align. Stengel-Eskin et al. (2019) compute similarity ma- trices of encoder-decoder representations that are leveraged for word alignments, together with super- vised learning, which requires manually annotated alignment. We ï¬nd our proposed methods to be competitive with these approaches. In contrast to our work, they all require parallel data.
# 6 Conclusion
We presented word aligners based on contextual- ized embeddings that outperform in four and match the performance of state-of-the-art aligners in two language pairs; e.g., for ENG-DEU contextualized embeddings achieve an alignment F1 that is 5 per- centage points higher than eï¬omal trained on 100k parallel sentences. Further, we showed that align- ments from static embeddings can be a viable al- ternative to statistical aligner when few parallel training data is available. In contrast to all prior work our methods do not require parallel data for training at all. With our proposed methods and extensions such as Match, Itermax and Null it is easy to obtain higher precision or recall depending on the use case.
Future work includes modeling fertility explic- itly and investigating how to incorporate parallel data into the proposed methods.
# Acknowledgments
We gratefully acknowledge funding through a Zen- trum Digitalisierung.Bayern fellowship awarded to the second author. This work was supported by the European Research Council (# 740516) and the German Federal Ministry of Education and Re- search (BMBF) under Grant No. 01IS18036A. The authors of this work take full responsibility for its content. We thank Matthias Huck, JindËrich Li- bovick´y, Alex Fraser and the anonymous reviewers for interesting discussions and valuable comments. Thanks to JindËrich for pointing out that mBERT can align mixed-language sentences as shown in Figure 1.
# References
Tamer Alkhouli, Gabriel Bretschner, and Hermann Ney. 2018. On the alignment problem in multi-head attention-based neural machine translation. In Pro- ceedings of the Third Conference on Machine Trans- lation: Research Papers, Belgium, Brussels. Associ- ation for Computational Linguistics.
Tamer Alkhouli, Gabriel Bretschner, Jan-Thorsten Pe- ter, Mohammed Hethnawi, Andreas Guta, and Her- mann Ney. 2016. Alignment-based neural machine translation. In Proceedings of the First Conference on Machine Translation: Volume 1, Research Pa- pers, Berlin, Germany. Association for Computa- tional Linguistics.
Tamer Alkhouli and Hermann Ney. 2017. Biasing attention-based recurrent neural networks using ex- ternal alignment information. In Proceedings of the Second Conference on Machine Translation, Copen- hagen, Denmark. Association for Computational Linguistics.
Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia. Association for Computational Linguistics.
Ehsaneddin Asgari and Hinrich Sch¨utze. 2017. Past, present, future: A computational investigation of the In Proceed- typology of tense in 1000 languages. ings of the 2017 Conference on Empirical Methods in Natural Language Processing, Copenhagen, Den- mark. Association for Computational Linguistics.
Necip Fazil Ayan, Bonnie J. Dorr, and Christof Monz. 2005. NeurAlign: Combining word alignments us- ing neural networks. In Proceedings of Human Lan- guage Technology Conference and Conference on Empirical Methods in Natural Language Processing, Vancouver, British Columbia, Canada. Association for Computational Linguistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly In Proceedings of learning to align and translate. the International Conference on Learning Represen- tations.
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural language processing with Python: analyz- ing text with the natural language toolkit. OâReilly Media, Inc.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5.
Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The math- ematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2).
Wenhu Chen, Evgeny Matusov, Shahram Khadivi, and Jan-Thorsten Peter. 2016. Guided alignment training for topic-aware neural machine translation. AMTA 2016.
Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vy- molova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment bi- ases into an attentional neural translation model. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 876â885, San Diego, California. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, Online. Asso- ciation for Computational Linguistics.
Alexis Conneau, Guillaume Lample, MarcâAurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. 2018. Word translation without parallel data. In Proceed- ings of the Sixth International Conference on Learn- ing Representations.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), Min- neapolis, Minnesota. Association for Computational Linguistics.
Shuoyang Ding, Hainan Xu, and Philipp Koehn. 2019. Saliency-driven word alignment interpretation for In Proceedings of the neural machine translation. Fourth Conference on Machine Translation (Volume 1: Research Papers), Florence, Italy. Association for Computational Linguistics.
Philipp Dufter, Mengjie Zhao, Martin Schmitt, Alexan- der Fraser, and Hinrich Sch¨utze. 2018. Embedding learning through multilingual concept induction. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia. Association for Computational Linguistics.
Chris Dyer, Victor Chahuneau, and Noah A. Smith. 2013. A simple, fast, and effective reparameteriza- In Proceedings of the 2013 tion of IBM model 2. Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Atlanta, Georgia. Associa- tion for Computational Linguistics.
Miquel Espl`a, Mikel Forcada, Gema Ram´ırez-S´anchez, and Hieu Hoang. 2019. ParaCrawl: Web-scale paral-
lel corpora for the languages of the EU. In Proceed- ings of Machine Translation Summit XVII Volume 2: Translator, Project and User Tracks, Dublin, Ireland. European Association for Machine Translation.
Zvi Galil. 1986. Efï¬cient algorithms for ï¬nding maxi- mum matching in graphs. ACM Computing Surveys (CSUR), 18(1).
Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), Hong Kong, China. As- sociation for Computational Linguistics.
Ulrich Germann. 2001. Aligned Hansards of the 36th parliament of Canada.
Hamidreza Ghader and Christof Monz. 2017. What does attention in neural machine translation pay at- In Proceedings of the Eighth Interna- tention to? tional Joint Conference on Natural Language Pro- cessing (Volume 1: Long Papers), Taipei, Taiwan. Asian Federation of Natural Language Processing.
Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, and Ting Liu. 2016. A representation learn- ing framework for multi-source transfer parsing. In Thirtieth AAAI Conference on Artiï¬cial Intelligence.
John Hewitt and Christopher D. Manning. 2019. A structural probe for ï¬nding syntax in word represen- In Proceedings of the 2019 Conference of tations. the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), Min- neapolis, Minnesota. Association for Computational Linguistics.
Anh Khoa Ngo Ho and Franc¸ois Yvon. 2019. Neural baselines for word alignment. In Proceedings of the 16th International Workshop on Spoken Language Translation.
Matthias Huck, Diana Dutka, and Alexander Fraser. 2019. Cross-lingual annotation projection is ef- In Pro- fective for neural part-of-speech tagging. ceedings of the Sixth Workshop on NLP for Simi- lar Languages, Varieties and Dialects, pages 223â 233, Ann Arbor, Michigan. Association for Compu- tational Linguistics.
Masoud Jalili Sabet, Heshaam Faili, and Gholamreza Improving word alignment of rare Haffari. 2016. In Proceedings of words with word embeddings. COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Osaka, Japan. The COLING 2016 Organizing Com- mittee.
Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Machine Transla- tion Summit, volume 5.
Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system descrip- tion for the 2005 IWSLT speech translation evalu- In International Workshop on Spoken Lan- ation. guage Translation (IWSLT) 2005.
Philipp Koehn and Rebecca Knowles. 2017. Six chal- In Proceed- lenges for neural machine translation. ings of the First Workshop on Neural Machine Trans- lation, Vancouver. Association for Computational Linguistics.
Harold W Kuhn. 1955. The Hungarian method for the assignment problem. Naval research logistics quar- terly, 2(1-2).
Jo¨el Legrand, Michael Auli, and Ronan Collobert. Neural network-based word alignment 2016. In Proceedings of the through score aggregation. First Conference on Machine Translation: Volume 1, Research Papers, Berlin, Germany. Association for Computational Linguistics.
William D. Lewis and Fei Xia. 2008. Automatically identifying computationally relevant typological fea- In Proceedings of the Third International tures. Joint Conference on Natural Language Processing: Volume-II.
Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi. 2019. On the word alignment from In Proceedings of the neural machine translation. 57th Annual Meeting of the Association for Compu- tational Linguistics, Florence, Italy. Association for Computational Linguistics.
Xintong Li, Lemao Liu, Zhaopeng Tu, Shuming Shi, and Max Meng. 2018. Target foresight based at- tention for neural machine translation. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), New Orleans, Louisiana. Association for Computational Linguistics.
JindËrich Libovick´y, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual BERT? arXiv preprint arXiv:1911.03310.
Lemao Liu, Masao Utiyama, Andrew Finch, and Ei- ichiro Sumita. 2016. Neural machine translation with supervised attention. In Proceedings of COL- ING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, Os- aka, Japan. The COLING 2016 Organizing Commit- tee.
David MareËcek. 2008. Automatic alignment of tec- trees from Czech-English parallel togrammatical corpus. Masterâs thesis, Charles University, MFF UK.
Anthony McEnery, Paul Baker, Rob Gaizauskas, and Hamish Cunningham. 2000. Emille: Building a cor- pus of South Asian languages. VIVEK-BOMBAY-, 13(3).
I. Dan Melamed. 2000. Models of translation equiv- alence among words. Computational Linguistics, 26(2).
Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Supervised attentions for neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas. Association for Computational Linguistics.
Rada Mihalcea and Ted Pedersen. 2003. An evalua- In Proceedings tion exercise for word alignment. of the HLT-NAACL 2003 Workshop on Building and Using Parallel Texts: Data Driven Machine Transla- tion and Beyond.
and Masaaki Nishino. 2020. A supervised word alignment method based on cross-language span predic- arXiv preprint tion using multilingual BERT. arXiv:2004.14516.
Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Com- putational Linguistics, Hong Kong. Association for Computational Linguistics.
Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1).
Robert ¨Ostling. 2015a. Bayesian models for multilin- gual word alignment. Ph.D. thesis, Department of Linguistics, Stockholm University.
Robert ¨Ostling. 2015b. Word order typology through multilingual word alignment. In Proceedings of the 53rd Annual Meeting of the Association for Compu- tational Linguistics and the 7th International Joint Conference on Natural Language Processing (Vol- ume 2: Short Papers), Beijing, China. Association for Computational Linguistics.
Robert ¨Ostling and J¨org Tiedemann. 2016. Efï¬cient word alignment with Markov Chain Monte Carlo. The Prague Bulletin of Mathematical Linguistics, 106(1).
Sebastian Pad´o and Mirella Lapata. 2009. Cross- lingual annotation projection for semantic roles. Journal of Artiï¬cial Intelligence Research, 36.
Jan-Thorsten Peter, Arne Nix, and Hermann Ney. 2017. Generating alignments using target fore- sight in attention-based neural machine translation. The Prague Bulletin of Mathematical Linguistics, 108(1).
Mohammad Taher Pilevar, Heshaam Faili, and Ab- dol Hamid Pilevar. 2011. TEP: Tehran English- Persian parallel corpus. In International Conference on Intelligent Text Processing and Computational Linguistics. Springer.
Nima Pourdamghani, Marjan Ghazvininejad, and Kevin Knight. 2018. Using word vectors to improve word alignments for low resource machine transla- In Proceedings of the 2018 Conference of tion. the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 2 (Short Papers), New Orleans, Louisiana. Association for Computational Linguis- tics.
Oï¬r Press and Noah A Smith. 2018. You may not need attention. arXiv preprint arXiv:1810.13409.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, Stanza: A and Christopher D. Manning. 2020. Python natural language processing toolkit for many In Proceedings of the 58th An- human languages. nual Meeting of the Association for Computational Linguistics: System Demonstrations, Online. Asso- ciation for Computational Linguistics.
Mike Schuster and Kaisuke Nakajima. 2012. Japanese In 2012 IEEE Interna- and korean voice search. tional Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE.
Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong, and Francisco Guzm´an. 2019. Wiki- matrix: Mining 135m parallel sentences in 1620 arXiv preprint language pairs from wikipedia. arXiv:1907.05791.
Frank Smadja, Kathleen R. McKeown, and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statistical approach. Computa- tional Linguistics, 22(1).
Elias Stengel-Eskin, Tzu-ray Su, Matt Post, and Ben- jamin Van Durme. 2019. A discriminative neural model for cross-lingual word alignment. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), Hong Kong, China. As- sociation for Computational Linguistics.
Akihiro Tamura, Taro Watanabe, and Eiichiro Sumita. 2014. Recurrent neural networks for word align- In Proceedings of the 52nd Annual ment model. Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), Baltimore, Mary- land. Association for Computational Linguistics.
Leila Tavakoli and Heshaam Faili. 2014. Phrase align- ments in parallel corpus using bootstrapping ap- International Journal of Information & proach. Communication Technology Research, 6(3).
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. In BERT rediscovers the classical NLP pipeline. Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, Florence, Italy. Association for Computational Linguistics.
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzm´an, Ar- mand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from In Proceedings of The 12th Lan- web crawl data. guage Resources and Evaluation Conference, Mar- seille, France. European Language Resources Asso- ciation.
Nan Yang, Shujie Liu, Mu Li, Ming Zhou, and Neng- hai Yu. 2013. Word alignment modeling with con- text dependent deep neural network. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Soï¬a, Bulgaria. Association for Computational Lin- guistics.
David Yarowsky, Grace Ngai, and Richard Wicen- towski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research.
Thomas Zenkel, Joern Wuebker, and John DeNero. 2019. Adding interpretable attention to neural trans- arXiv lation models improves word alignment. preprint arXiv:1901.11359.
Thomas Zenkel, Joern Wuebker, and John DeNero. 2020. End-to-end neural word alignment outper- forms GIZA++. In Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics, pages 1605â1617, Online. Association for Computational Linguistics.
# A Additional Non-central Results
# A.1 Comparison with Prior Work
A more detailed version of Table 2 from the main paper that includes precision and recall and results on Itermax can be found in Table 7.
# A.2 Rare Words
Figure 8 shows the same as Figure 6 from the main paper but now with a reference corpus of 100k/1000k instead of 1920k parallel sentences. The main takeaways are similar.
# A.3 Symmetrization
For asymmetric alignments different symmetriza- tion methods exist. Dyer et al. (2013) provide an overview and implementation (fast-align) for these methods, which we use. We compare intersection and grow-diag-ï¬nal-and (GDFA) in Table 9. In terms of F1, GDFA performs better (Intersection wins four times, GDFA eleven times, three ties). As expected, Intersection yields higher precision while GDFA yields higher recall. Thus intersection is preferable for tasks like annotation projection,
# IMethod
# Method
ENG-CES ENG-DEU ENG-FAS ENG-FRA ENG-HIN ENG-RON Prec.Rec.F1 AER Prec.Rec.F1 AER Prec.Rec.F1 AER Prec.Rec. F1 AER Prec.Rec.F1 AER Prec.Rec.F1 AER
k r o W r o i r P ( ¨Ostling, 2015a) Bayesian ( ¨Ostling, 2015a) Giza++ (Legrand et al., 2016) Ensemble Method ( ¨Ostling and Tiedemann, 2016) efmaral ( ¨Ostling and Tiedemann, 2016) fast-align (Zenkel et al., 2019) Giza++ (Garg et al., 2019) Multitask .79 .83 .81 .16 .21 .20 .96 .92 .94 .06 .85 .43 .57 .43 .91 .61 .73 .27 .98 .87 .92 .07 .63 .44 .51 .49 .85 .63 .72 .28 .59 .90 .71 .10 .93 .08 .86 .15 .06 .08 .53 .47 .33 .67 .72 .28 .68 .33 .28 s e n i l e s a B d fast-align/IBM2 r Giza++/IBM4 o W eï¬omal d fast-align/IBM2 r o Giza++/IBM4 w b eï¬omal u S .71 .81 .76 .25 .70 .73 .71 .29 .60 .54 .57 .43 .81 .93 .86 .15 .34 .33 .34 .66 .69 .67 .68 .33 .71 .79 .75 .26 .79 .75 .77 .23 .55 .48 .51 .49 .90 .95 .92 .09 .47 .43 .45 .55 .74 .64 .69 .31 .84 .86 .85 .15 .80 .75 .77 .23 .68 .55 .61 .39 .91 .94 .93 .08 .61 .44 .51 .49 .81 .63 .71 .29 .72 .84 .78 .23 .67 .74 .71 .30 .60 .56 .58 .42 .80 .92 .85 .16 .39 .37 .38 .62 .69 .67 .68 .32 .79 .86 .82 .18 .78 .78 .78 .22 .58 .56 .57 .43 .89 .95 .92 .09 .52 .44 .48 .52 .74 .64 .69 .32 .80 .88 .84 .17 .74 .78 .76 .24 .66 .60 .63 .37 .88 .95 .91 .09 .58 .47 .52 .48 .78 .67 .72 .28 k r o W d r o W fastText - Itermax mBERT[8] - Itermax XLM-R[8] - Itermax fastText - Argmax mBERT[8] - Argmax XLM-R[8] - Argmax .74 .69 .72 .29 .69 .56 .62 .38 .63 .45 .53 .48 .74 .78 .76 .24 .63 .42 .51 .49 .64 .40 .50 .51 .87 .87 .87 .14 .85 .77 .81 .19 .80 .63 .70 .30 .91 .95 .93 .08 .75 .47 .58 .43 .82 .58 .68 .32 .89 .85 .87 .13 .86 .73 .79 .21 .84 .63 .72 .28 .91 .93 .92 .08 .79 .49 .61 .39 .87 .61 .71 .29 .86 .59 .70 .30 .81 .48 .60 .40 .75 .38 .50 .50 .85 .71 .77 .22 .75 .36 .49 .52 .77 .34 .47 .53 .95 .80 .87 .13 .92 .69 .79 .21 .88 .54 .67 .33 .97 .91 .94 .06 .84 .39 .54 .47 .90 .50 .64 .36 .96 .80 .87 .13 .93 .68 .79 .22 .91 .57 .70 .30 .96 .91 .93 .06 .88 .45 .59 .41 .94 .56 .70 .30 s i h T d r o w b u S fastText - Itermax mBERT[8] - Itermax XLM-R[8] - Itermax fastText - Argmax mBERT[8] - Argmax XLM-R[8] - Argmax .61 .57 .59 .41 .63 .54 .58 .42 .20 .07 .11 .90 .70 .76 .73 .28 .14 .05 .07 .93 .56 .38 .45 .55 .84 .89 .86 .14 .83 .80 .81 .19 .76 .65 .70 .30 .91 .96 .93 .08 .71 .49 .58 .42 .79 .62 .69 .31 .84 .89 .86 .14 .83 .78 .80 .20 .79 .67 .72 .28 .89 .94 .92 .09 .75 .52 .62 .39 .83 .64 .72 .28 .72 .48 .58 .42 .75 .45 .56 .44 .27 .06 .09 .91 .80 .67 .73 .26 .14 .02 .04 .96 .67 .31 .43 .58 .92 .81 .86 .14 .92 .72 .81 .19 .85 .56 .67 .33 .96 .92 .94 .06 .81 .41 .55 .45 .88 .51 .65 .35 .92 .83 .87 .13 .92 .72 .81 .19 .87 .59 .71 .30 .95 .91 .93 .07 .86 .47 .61 .39 .91 .59 .71 .29
Table 7: Comparison of word and subword levels. Best overall result per column in bold.
ENG-DEU ENG-CES ENG-HIN Emb. Method Prec. Rec. F1 AER Prec. Rec. F1 AER Prec. Rec. F1 AER
Argmax +Dist +Null .75 .79 .76 .45 .56 .44 .51 .62 .38 .43 .55 .45 .72 .77 .74 .48 .58 .42 .58 .66 .34 .47 .57 .42 .14 .16 .14 .02 .04 .96 .04 .06 .94 .02 .04 .96 t x e T t s a f Itermax +Dist +Null .63 .67 .64 .54 .58 .42 .60 .64 .36 .52 .57 .43 .61 .63 .62 .57 .59 .41 .66 .65 .36 .56 .59 .41 .14 .15 .14 .05 .07 .93 .07 .09 .91 .04 .07 .93 Match +Dist +Null .51 .59 .52 .58 .54 .46 .66 .62 .38 .57 .54 .46 .44 .54 .46 .61 .52 .49 .71 .61 .39 .60 .52 .48 .10 .10 .10 .08 .09 .91 .09 .09 .91 .08 .09 .91 ] 8 [ T R E B m Argmax +Dist +Null Itermax +Dist +Null .92 .90 .93 .83 .81 .85 .72 .81 .19 .70 .79 .21 .70 .80 .20 .80 .81 .19 .77 .79 .21 .77 .81 .20 .92 .91 .92 .84 .82 .84 .81 .86 .14 .80 .85 .15 .78 .85 .15 .89 .86 .14 .87 .84 .16 .86 .85 .15 .81 .65 .82 .71 .53 .72 .41 .55 .45 .30 .41 .59 .40 .54 .47 .49 .58 .42 .35 .42 .58 .47 .57 .43 Match +Dist +Null .75 .72 .77 .80 .78 .23 .77 .75 .26 .78 .78 .23 .76 .74 .77 .90 .82 .18 .88 .80 .20 .88 .82 .19 .64 .45 .65 .52 .58 .43 .37 .40 .60 .51 .57 .43
0.85 0.80 075 2 â0.70 0.65 â word = |âsâ mBERTIB(Argmax) 0.60 subword ++ eflomal | l t â O<ax<5 S <a x< 25 aaa 128 125 <= x Fequency Bin 0.85 0.80 o7 © 970 : 0.65 word =~ MBERTIB}(Argmax) 9.60 subword |-=» eflomal | o<axes 5<ox<25 25 <=x< 125 125 <= x Frequency Bin
Table 8: Comparison of methods for inducing align- ments from similarity matrices. results are subword-level. Best result per embedding type across columns in bold.
Figure 8: Results for different frequency bins. An edge in S, P , or A is attributed to exactly one bin based on the minimum frequency of the involved words (denoted by x). Top: Eï¬omal trained and frequencies computed on 100k parallel sentences. Bottom: 1000k parallel sen- tences.
whereas GDFA is typically used in statistical ma- chine translation.
# B Hyperparameters
# A.4 Alignment Examples for Different Methods
We show examples in Figure 10, Figure 11, Fig- ure 12, and Figure 13. They provide an overview how the methods actually affect results.
â_
# B.1 Overview
We provide a list of customized hyperparameters used in our computations in Table 10. There are three options how we came up with the hyperpa- rameters: a) We simply used default values of 3rd party software. b) We chose an arbitrary value.
ENG-FAS Method Symm. Prec. Rec. F1 AER Prec. Rec. F1 AER Prec. Rec. F1 AER Prec. Rec. F1 AER Prec. Rec. F1 AER Prec. Rec. F1 AER
.95 .79 .86 .14 .91 .66 .76 .24 .88 .43 .58 .42 .96 .90 .93 .07 .81 .37 .51 .49 .91 .56 .70 .31 Inters. GDFA .84 .86 .85 .15 .80 .75 .77 .23 .68 .55 .61 .39 .91 .94 .93 .08 .61 .44 .51 .49 .81 .63 .71 .29 .89 .69 .78 .22 .87 .60 .71 .29 .78 .43 .55 .45 .93 .84 .88 .11 .55 .22 .31 .69 .89 .50 .64 .36 Inters. GDFA .71 .81 .76 .25 .70 .73 .71 .29 .60 .54 .57 .43 .81 .93 .86 .15 .34 .33 .34 .66 .69 .67 .68 .33
Table 9: Comparison of symmetrization methods at the word level. Best result across rows per method in bold.
<a> Prec. 0.6 Rec. 80 85 90 9 oe Fy Tt (percentile)
As expected, when using the 100th percentile no edges are removed and thus the performance is not changed compared to not having a null-word extension. When decreasing the value of Ï the precision increases and recall goes down, while F1 remains stable. We use the 95th percentile for Ï .
# C Reproducibility Information
# C.1 Computing Infrastructures, Runtimes, Number of Parameters
Figure 9: Top: F1 for ENG-DEU with fastText at word- level for different values of κ. Bottom: Performance for ENG-DEU with mBERT[8] (Match) at word-level when setting the value of Ï to different percentiles. Ï can be used for trading precision against recall. F1 re- mains stable although it decreases slightly when assign- ing Ï the value of a smaller percentile (e.g., 80).
Usually we fell back to well-established and rather conventional values (e.g., embedding dimension 300 for static embeddings). c) We deï¬ned a reason- able but arbitrary range, out of which we selected the best value using grid search. Table 10 lists the ï¬nal values we used as well as how we came up with the speciï¬c value. For option c) the corre- sponding analyses are in Figure 4 and Table 3 in the main paper as well as in §B.2 in this supplementary material.
We did all computations on up to 48 cores of In- tel(R) Xeon(R) CPU E7-8857 v2 with 1TB mem- ory and a single GeForce GTX 1080 GPU with 8GB memory.
Runtimes for aligning 500 parallel sentences on ENG-DEU are reported in Table 12. mBERT and XLM-R computations are done on the GPU. Note that fast-align, GIZA++ and eï¬omal usually need to be trained on much more parallel data to achieve better performance: this increases their runtime.
All our proposed methods are parameter-free. If we consider the parameters of the pretrained lan- guage models and pretrained embeddings then fast- Text has around 1 billion parameters (up to 500k words per language, 7 languages and embedding dimension 300), mBERT has 172 million, XLM-R 270 million parameters.
# B.2 Null and Distortion Extensions
In Figure 9 we plot the performance for different values of κ. We observe that introducing distortion indeed helps (i.e., κ > 0) but the actual value is not decisive for performance. This is rather intuitive, as a small adjustment to the similarities is sufï¬cient while larger adjustments do not necessarily change the argmax or the optimal point in the matching algorithm. We choose κ = 0.5.
Method Runtime[s] fast-align GIZA++ eï¬omal mBERT[8] - Argmax XLM-R[8] - Argmax 4 18 5 15 22
Table 12: Runtime (average across 5 runs) in seconds for each method to align 500 parallel sentences.
# C.2 Data
For Ï in null-word extension, we plot precision, recall and F1 in Figure 9 when assigning Ï different percentile values. Note that values for Ï depend on the similarity distribution of all aligned edges.
Table 11 provides download links to all data used.
System Parameter Value fastText Version Code URL Downloaded on Embedding Dimension 0.9.1 https://github.com/facebookresearch/fastText/archive/v0.9.1.zip 11.11.2019 300 mBERT,XLM-R Code: Huggingface Transformer Maximum Sequence Length Version 2.3.1 128 fastalign Code URL Git Hash Flags https://github.com/clab/fast align 7c2bbca3d5d61ba4b0f634f098c4fcf63c1373e1 -d -o -v eï¬omal Code URL Git Hash Flags https://github.com/robertostling/eï¬omal 9ef1ace1929c7687a4817ec6f75f47ee684f9aff âmodel 3 GIZA++ Code URL Version Iterations p0 http://web.archive.org/web/20100221051856/http://code.google.com/p/giza-pp 1.0.3 5 iter. HMM, 5 iter. Model 1, 5 iter. Model3, 5 iter. Model 4 (DEFAULT) 0.98 Vecmap Code URL Git Hash Manual Vocabulary Cutoff https://github.com/artetxem/vecmap.git b82246f6c249633039f67fa6156e51d852bd73a3 500000 Distortion Ext. κ 0.5 (chosen ouf of [0.0, 0.1, . . . , 1.0] by grid search, criterion: F1) Null Extension Ï 95th percentile of similarity distribution of aligned edges (chosen out of [80, 90, 95, 98, 99, 99.5] by grid search, criterion: F1) Argmax Layer 8 (for mBERT and XLM-R, chosen out of [0, 1, . . . , 12] by grid search, criterion: F1 ) Vecmap α Iterations nmax 0.9 (chosen out of [0.9, 0.95, 1] by grid search, criterion: F1) 2 (chosen out of [1,2,3] by grid search, criterion: F1)
Table 10: Overview on hyperparameters. We only list parameters where we do not use default values. Shown are the values which we use unless speciï¬cally indicated otherwise.
Lang. Name Description Link ENG-CES ENG-DEU ENG-FAS ENG-FRA ENG-HIN ENG-RON (MareËcek, 2008) EuroParl-based (Tavakoli and Faili, 2014) WPT2003, (Och and Ney, 2000), WPT2005 WPT2005 (Mihalcea and Pedersen, 2003) Gold Alignment Gold Alignment Gold Alignment Gold Alignment Gold Alignment Gold Alignment http://ufal.mff.cuni.cz/czech-english-manual-word-alignment www-i6.informatik.rwth-aachen.de/goldAlignment/ http://eceold.ut.ac.ir/en/node/940 http://web.eecs.umich.edu/ mihalcea/wpt/ http://web.eecs.umich.edu/ mihalcea/wpt05/ http://web.eecs.umich.edu/ mihalcea/wpt05/ ENG-CES ENG-DEU ENG-DEU ENG-FAS ENG-FRA ENG-HIN ENG-RON EuroParl (Koehn, 2005) EuroParl (Koehn, 2005) ParaCrawl TEP (Pilevar et al., 2011) Hansards (Germann, 2001) Emille (McEnery et al., 2000) Constitution, Newspaper Parallel Data Parallel Data Parallel Data Parallel Data Parallel Data Parallel Data Parallel Data https://www.statmt.org/europarl/ https://www.statmt.org/europarl/ https://paracrawl.eu/ http://opus.nlpl.eu/TEP.php https://www.isi.edu/natural-language/download/hansard/index.html http://web.eecs.umich.edu/ Ëmihalcea/wpt05/ http://web.eecs.umich.edu/ mihalcea/wpt05/
Table 11: Overview of datasets. âLang.â uses ISO 639-3 language codes.
Argmax(circle) vs. Itermax(box) om x ey x? SS ve 9 â coe PS SF V7 CE we not + believe 5 that + we should = cherty-pick 4
Match(circle) vs. Match+Null(box) do- not + believe | ®| that + we- should = cherty-pick 4
fastText(circle) vs. fastText+Dist(box) ° 3 Ry oF ey ot OF PE FW CLO We do- not + believe 5 that + we should + cherty-pick 4
Comparison of alignment methods. Figure 10: Dark/light green: sure/possible edges in the gold stan- dard. Circles are alignments from the ï¬rst mentioned method in the subï¬gure title, boxes alignments from the second method.
Argmax(circle) vs. Itermax(box) x, so cok Boe SQ oe The + absence / || of + Ll harmonized + rules + governing | intellectual + property is @ not @ unique 4 to + LI europe @h am
Match(circle) vs. Match+Null(box) & xX, oF QU The 4 absence / of harmonized / rules 4 governing | intellectual / property / is 4 Oo not unique 4 to7 Europe 4
fastText(circle) vs. fastText+Dist(bon) ch Re ee &, Boe of &, Pot Ces Thea absence~_|⢠of + C harmonized { rules / governing 4 intellectual / property + is 4 ® not ® unique 4 to+ europe @) 1 1)
Figure 11: More examples.
Argmax(circle) vs. Itermax(box) ° of e ~~ SO, SOE cS Ore oN Pe Px eRe Mar Pye PE Weyere on> a abductions +
fastText(circle) vs. fastText+Dist(box) < oF axe oS ee Os Cy Poke os err DEW GF PF LW gO" g⢠down 4 â4 @ great 4 deal { ony abductions +
Match(circle) vs. Match+Null(box) . sf gs es & SOs Kho CAE Pee DE We down ito great 4 deal { on+ 0 abductions 4
Figure 12: More examples.
Argmax(circle) vs. Itermax(box) These- @ things + : an |@| of us
fastText(circle) vs. fastText+Dist(box) S PO S 5S we S These -~ things; any fo) of
Match(circle) vs. Match+Null(box) o ee é es S ¥ e x These CQ) things + concern > all+ of us fe)
Figure 13: More examples. | {
"id": "2004.14516"
} |
2004.08476 | Learning-to-Rank with BERT in TF-Ranking | This paper describes a machine learning algorithm for document (re)ranking,
in which queries and documents are firstly encoded using BERT [1], and on top
of that a learning-to-rank (LTR) model constructed with TF-Ranking (TFR) [2] is
applied to further optimize the ranking performance. This approach is proved to
be effective in a public MS MARCO benchmark [3]. Our first two submissions
achieve the best performance for the passage re-ranking task [4], and the
second best performance for the passage full-ranking task as of April 10, 2020
[5]. To leverage the lately development of pre-trained language models, we
recently integrate RoBERTa [6] and ELECTRA [7]. Our latest submissions improve
our previously state-of-the-art re-ranking performance by 4.3% [8], and achieve
the third best performance for the full-ranking task [9] as of June 8, 2020.
Both of them demonstrate the effectiveness of combining ranking losses with
BERT representations for document ranking. | http://arxiv.org/pdf/2004.08476 | Shuguang Han, Xuanhui Wang, Mike Bendersky, Marc Najork | cs.IR | 6 pages, 1 figure, 2 tables | null | cs.IR | 20200417 | 20200608 | 0 2 0 2
n u J 8 ] R I . s c [
3 v 6 7 4 8 0 . 4 0 0 2 : v i X r a
# LEARNING-TO-RANK WITH BERT IN TF-RANKING
A PREPRINT
# Shuguang Han, Xuanhui Wang, Michael Bendersky and Marc Najork
# TF-Ranking Team, Google Research, Mountain View, CA {hanshuguang,xuanhui,bemike,najork}@google.com
January 18, 2022
# ABSTRACT
This paper describes a machine learning algorithm for document (re)ranking, in which queries and documents are ï¬rstly encoded using BERT [1], and on top of that a learning-to-rank (LTR) model constructed with TF-Ranking (TFR) [2] is applied to further optimize the ranking performance. This approach is proved to be effective in a public MS MARCO benchmark [3]. Our ï¬rst two submissions achieve the best performance for the passage re-ranking task [4], and the second best performance for the passage full-ranking task as of April 10, 2020 [5]. To leverage the lately development of pre-trained language models, we recently integrate RoBERTa [6] and ELECTRA [7]. Our latest submissions improve our previously state-of-the-art re-ranking performance by 4.3% [8], and achieve the third best performance for the full-ranking task [9] as of June 8, 2020. Both of them demonstrate the effectiveness of combining ranking losses with BERT representations for document ranking.
1
# Introduction
Recently, neural network models built on top of pretrained language models such as BERT [1] have achieved state-of- the-art performance on various machine learning tasks including question answering [10], key-phrase extraction [11], as well as document and passage ranking [12, 13]. In this paper, we are focusing on passage ranking, and particularly the MS MARCO passage full ranking and re-ranking tasks [3].
A common way to incorporate BERT for ranking tasks is to construct a ï¬netuning classiï¬cation model with the goal of determining whether or not a document is relevant to a query [13]. The resulting predictions are then used for ranking documents. We argue that such an approach is less suited for a ranking task, compared to a pairwise or listwise learning-to-rank (LTR) algorithm, which learns to distinguish relevance for document pairs or to optimize the document list as a whole, respectively [14].
To this end, we propose TFR-BERT, a generic document ranking framework that builds a LTR model through ï¬netuning BERT representations of query-document pairs within TF-Ranking1. We apply this approach on the MS MACRO benchmark, and our submissions achieve the best leaderboard performance for the passage re-ranking task [8], and the third best performance for the passage full ranking task [9]. This demonstrates the effectiveness of combining ranking losses with BERT representations for passage ranking.
# 2 TFR-BERT
Our TFR-BERT model can be illustrated by Figure 1. Documents (passages) for a given query will be ï¬rstly ï¬attened to query-document (query-passage) pairs, and then passed through BERT layers2. Speciï¬cally, query and each document (passage) are treated as two sentences, and are further concatenated to the following format:
[CLS] query text [SEP] passage text [SEP]
1TF-Ranking ofï¬cial page: https://github.com/tensorflow/ranking 2We use BERT checkpoints downloaded from the ofï¬cial BERT page: https://github.com/google-research/bert.
A PREPRINT - JANUARY 18, 2022
Query and docs BERT Fine-tuning with ranking loss query doc1 wey Pooled output | doct doc2 s1 query f (CLs) -}-âââââ_> | L doc3 (ISePl $2 doc2 » query S3 | [SEP] doc3 : i loc Scoring Ranking Loss fr Update model i
Figure 1: An illustration of the TFR-BERT framework, in which a Learning-to-Rank model is constructed on top of the BERT representations of query-document pairs.
Here, [CLS] indicates the start of a sequence and [SEP] denotes a separator between the ï¬rst and second sentences. We also truncate the passage text if the whole sequence exceeds the maximum length of 512 tokens.
After that, the pooled BERT outputs (i.e., the hidden units of the [CLS] token) are fed into a ranking model built from TF-Ranking [2]. TF-Ranking provides a variety of pointwise, pairwise and listwise losses, which enable us to compare different LTR approaches in our TFR-BERT model.
# 3 MS MARCO Experiment
To understand the performance of TFR-BERT, we conduct a set of experiments using the publicly available MS MARCO dataset. The dataset contains 1 million real Bing queries (each query is a question), and 8.8 million candidate documents (each document is a passage). For each query, it also provides zero or more respective relevant passages marked by human annotators. In this work, we study both the passage full ranking and re-ranking tasks.
Passage Re-ranking Task. For each query, we are given the top 1000 candidate passages retrieved by BM25. The goal is to re-rank passages by their relevance to the query, i.e. the likelihood to be an answering passage for the question.
Passage Full Ranking Task. While the re-ranking performance is bounded by the recall of top 1000 passages from BM25, in this full ranking task, we are asked to rank relevant documents for each query from the whole collection of 8.8 million passages.
Ranking Dataset. To create the training set, we employ the data from triples.train.full.tsv. In this ï¬le, each data record is a triple containing the content of a query, a relevant passage and an irrelevant passage (query and the relevant passage can repeat multiple times in the dataset). For each query, there are roughly 1000 passages, and (in most cases) only one of them is relevant3.
To better support the pairwise and listwise ranking models, we further group triples by query. Therefore, we obtain a list of up to 1000 passages for each query. With regards to the computer memory limit, we further break this passage list into roughly 90 lists, each taking one relevant passage and 11 irrelevant passages; thereby, creating a set of passage lists with size up to 12. Note that the above process is only used when building the training data. We leave the dev and eval datasets intact â 1000 passages per each query are present for these datasets.
Training. Our models are trained on TPU V3. We set the list size to be 12, as described above. The batch size is set to 32. As a result, a number of 32 * 12 = 384 query-document pairs are used in each training step. We checkpoint each
3More details about this dataset can be found in https://github.com/nyu-dl/dl4marco-bert.
2
A PREPRINT - JANUARY 18, 2022
model at the 50K steps. Our ensemble approach, which will be introduced in Section 4.2, aggregates over multiple models, each following the above training process.
# 4 Experimental Results
In this section, we report the results obtained by TFR-BERT, in which we take into account all of the pointwise, pairwise and listwise ranking approaches. The ranking models are constructed using the open-source TF-Ranking code. For more details about their implementation, the readers may refer to Pasumarthi et al. [2] and Bruch et al. [15].
# 4.1 Our Submissions
We made ï¬ve submissions to the MS MARCO leaderboard (https://microsoft.github.io/msmarco/), as listed below. Submission #1, #2 and #4 focused on the passage re-ranking task (Section 4.2), whereas the other two submissions addressed the passage full ranking task (Section 4.3).
For pre-trained language models, we used the BERT-Large, Uncased checkpoint [1] for submissions #1 to #3. Later on, we switched to the BERT-Large, Uncased (Whole Word Masking) checkpoint for submissions #4 and #5 because of its better performance. For RoBERTa, we adopted the roberta.large checkpoint. And for ELECTRA, we utilized the ELECTRA-Large checkpoint.
More speciï¬cally, Submission #1 was a single run of TFR-BERT with softmax loss; Submission #2 was an ensemble of pointwise, pairwise and listwise TFR-BERT models; Submission #3 adopted the same ensemble technique as Submission #2, but re-ranked top 1000 passages from both BM25 and DeepCT [16], and further combined the two ranking lists; Submission #4 only adopted the listwise loss in TF-Ranking but used ensemble over BERT, RoBERTa and ELECTRA; Submission #5 applied the same ensemble technique as Submission #4, but combined both DeepCT [16] and BM25 results for re-ranking.
⢠Submission #1 (re-ranking): TF-Ranking + BERT (Softmax Loss, List size 6, 200k steps) [17].
⢠Submission #2 (re-ranking): TF-Ranking + BERT (Ensemble of pointwise, pairwise and listwise losses) [4].
⢠Submission #3 (full ranking): DeepCT Retrieval + TF-Ranking BERT Ensemble [5].
⢠Submission #4 (re-ranking): TF-Ranking Ensemble of BERT, RoBERTa and ELECTRA [8].
⢠Submission #5 (full ranking): DeepCT + TF-Ranking Ensemble of BERT, RoBERTa and ELECTRA [9].
# 4.2 Re-ranking Experiments
Experimental results for re-ranking tasks are provided in Table 1. In addition to the ofï¬cial BM25 and Duet V2 baselines, we also include a baseline from Nogueira and Cho [13].
TFR-BERT Single Run. We experimented with three types of TFR-BERT models â pointwise model with sigmoid cross-entropy loss, pairwise model with pairwise logistic loss and listwise with softmax loss. We run each model 5 times, and the reported numbers are the average of 5 runs. For Submission #1 [17], we choose the softmax loss run with the best MRR@10 performance on the Dev data set over the 5 runs.
According to Table 1, TFR-BERT models outperform the ofï¬cial baselines by a large margin. More importantly, they further improve upon the existing state-of-the-art approach [13] that uses the same training data and BERT checkpoint. This demonstrates the effectiveness of combining ranking losses with BERT representations for passage ranking.
The Submission #1 achieved the second best performance for the passage re-ranking task at the time of its submission on March 19, 2020. Compared with the best method at that time [19], which used auxiliary information to enrich BERT, and introduced additional index information for ranking4, TFR-BERT only adopted the original BERT checkpoint, and can be reproduced easily in TF-Ranking.
Ensemble of Multiple Losses. After a manual examination of model predictions, we discovered that, despite similar MRR performance, different TFR-BERT runs (even with the same type of loss) show non-trivial difference in predictions. Therefore, we further include an approach to ensemble models trained from different runs. It worked as follows:
1: Supposes we have n runs (models) to ensemble: R1, R2, ..., Rn.
4However, the author did not disclose further details about his approach.
3
A PREPRINT - JANUARY 18, 2022
Table 1: MRR@10 performance for passage re-ranking. Note that 1) only the models submitted to the leaderboard have MRR@10 for the Eval dataset, 2) for multiple BERT ensemble, we switched the checkpoint from BERT-Large, Uncased to BERT-Large, Uncased (Whole Word Masking), which slightly improved MRR@10 from 0.3856 to 0.3898.
Model Dev (MRR@10) Eval (MRR@10) Baselines BM25 0.1670 0.1649 Duet V2 ([18]) 0.2517 0.2527 BERT + Small training ([13]) 0.3653 0.3587 Previous Leaderboard Best [19] 0.3730 0.3676 TFR-BERT Single Run Sigmoid cross entropy loss (pointwise) 0.3716 - Pairwise logistic loss (pairwise) 0.3718 - Softmax loss (listwise) 0.3725 - Submission #1 [17] 0.3782 0.3660 Multiple Losses (Ensemble) Sigmoid cross entropy loss (5 runs) 0.3839 - Pairwise logistic loss (5 runs) 0.3849 - Softmax loss (5 runs) 0.3856 - Submission #2 [4] 0.3877 0.3747 Multiple BERTs (Ensemble) BERT (5 runs, listwise loss*) 0.3898 - RoBERTa (5 runs, listwise loss) 0.3958 - ELECTRA (5 runs, listwise loss) 0.3976 - Submission #4 [8] 0.4046 0.3905
2: For each run Rk and a query qi, we rank the corresponding documents based on prediction scores, and then obtain the rank position Pk,i,j for each document dj.
3: For each query g;, we re-compute a new score s;,; for document d; based on the average reciprocal rank L L Cao Poa) of n runs.
4: Finally, we rank documents based on the new score si,j.
We ï¬rstly experimented with the ensemble of 5 different runs using the same loss function. According to Table 1, the ensemble approach improves the performance of a single run by 3.5% for all three loss functions. Through a further ensemble over all the three loss functions (total of 15 runs), we achieve the best overall MRR on the Dev data set. The 15-run ensemble is chosen as the Submission #2 [4], which outperforms the previously best submission [19] by 4.0% on the development dataset, and 1.9% on the evaluation dataset.
Ensemble of Multiple BERTs. To incorporate the recent advancement of pre-trained BERT models, we further integrated RoBERTa [6] and ELECTRA [7] into TF-Ranking. The ensemble process for each BERT model worked the same as the above. From Table 1, we observed that ensemble with RoBERTa slightly outperformed BERT, and ensemble with ELECTRA slightly outperformed RoBERTa. Through a further ensemble over all of the three models (total of 15 runs, Submission #4), we achieve the best MRR@10 for the re-ranking task [8], outperforming the previously best performance (also from us [4]) by 4.4% on the dev dataset and by 4.3% on the evaluation dataset.
# 4.3 Full Ranking Experiments
In addition to the re-ranking task, we made another submission to the full ranking task, in which we re-ranked the top 1000 passages from both BM25 and DeepCT [16] using the TFR-BERT ensemble model, and further combined the two resulting ranking lists. It worked as follows.
1: Re-rank the top 1000 passages retrieved by BM25 using the TFR-BERT ensemble model.
4
A PREPRINT - JANUARY 18, 2022
2: Re-rank the top 1000 passages retrieved by DeepCT [16] using the TFR-BERT ensemble model. 3: Combine the re-ranking scores (we use reciprocal rank to be consistent with the ensemble model) from the above two lists. For passages occurring in both lists, we take the average; otherwise, we keep its original score.
4: Finally, we re-rank passages based on the new score.
The full-ranking results are reported in Table 2. Same as the re-ranking results, we include the ofï¬cial BM25 and Duet V2 baselines for reference. In addition, we introduce a baseline (W-index + BERT F-rerank) from Dai et al. [16], as it is the original entry that proposed the DeepCT retrieval approach.
According to Table 2, we discovered that DeepCT helps boost the re-ranking of BM25 results by a large margin, and a further combination of both BM25 and DeepCT re-ranked lists brings additional gains. With Submission #3, we achieved the second best overall performance on the leaderboard as of April 10, 2020. With the recent Submission #5, we further improved our previous performance, and obtained the third best performance on the leaderboard as of June 8, 2020 (with tens of new leaderboard submissions in between).
The above results, again, demonstrate the effectiveness and robustness of the TFR-BERT ensemble model â it works well on both re-ranking and full ranking tasks, and more importantly, it does not require auxiliary information other than the original BERT checkpoints, and can be easily reproduced with TF-Ranking.
Table 2: MRR@10 performance for the passage full ranking task. Note that only the models submitted to the leaderboard have MRR@10 for the Eval dataset.
Model Baselines BM25 0.1670 0.1649 Duet V2 ([18]) 0.2517 0.2527 W-index + BERT-F rerank ([16]) 0.3935 0.3877 Leaderboard Best [20] (as of April 10, 2020) 0.4012 0.3998 Leaderboard Best [21] (as of June 8, 2020) 0.4200 0.4190 Multiple Losses (Ensemble) Re-ranking over BM25 0.3877 0.3747 Re-ranking over DeepCT 0.4012 - Submission #3: combining the above [5] 0.4049 0.3946 Multiple BERTs (Ensemble) Re-ranking over BM25 0.4046 0.3905 Re-ranking over DeepCT 0.4175 - Submission #5: combining the above [9] 0.4213 0.4073
# 5 Conclusion
In this paper, we propose the TFR-BERT framework for document and passage ranking. It combines state-of-the-art developments from both pretrained language models, such as BERT, and learning-to-rank approaches. Our experiments on the MS MARCO passage ranking task demonstrate its effectiveness.
# 6 Acknowledgement
We would like to thank Zhuyun Dai from Carnegie Mellon University for kindly sharing her DeepCT retrieval results. We would also like to thank Sebastian N. Bruch from Google Research for creating the MS MARCO datasets for our experiments. This work would not be possible without the support provided by the TF-Ranking team.
# References
[1] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
5
A PREPRINT - JANUARY 18, 2022
[2] Rama Kumar Pasumarthi, Sebastian Bruch, Xuanhui Wang, Cheng Li, Michael Bendersky, Marc Najork, Jan Pfeifer, Nadav Golbandi, Rohan Anil, and Stephan Wolf. Tf-ranking: Scalable tensorï¬ow library for learning- to-rank. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2970â2978, 2019.
[3] Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016.
[4] Shuguang Han, Xuanhui Wang, Michael Bendersky, and Marc Najork. Tf-ranking + bert (ensemble of pointwise, pairwise and listwise losses). https://microsoft.github.io/msmarco/, 2020. Online; accessed 30 March 2020. See the entry starts with TF-Ranking + BERT.
[5] Shuguang Han, Zhuyun Dai, Xuanhui Wang, Michael Bendersky, and Marc Najork. Deepct retrieval + tf-ranking bert ensemble. https://microsoft.github.io/msmarco/, 2020. Online; accessed 10 April 2020. See the entry starts with DeepCT Retrieval + TF-Ranking.
[6] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[7] Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555, 2020.
[8] Shuguang Han, Xuanhui Wang, Michael Bendersky, and Marc Najork. Tf-ranking ensemble of bert, roberta and electra. https://microsoft.github.io/msmarco/, 2020. Online; accessed 2 June 2020. See the entry starts with TF-Ranking + BERT.
[9] Shuguang Han, Xuanhui Wang, Michael Bendersky, and Marc Najork. Deepct + tf-ranking ensemble of bert, roberta and electra. https://microsoft.github.io/msmarco/, 2020. Online; accessed 2 June 2020. See the entry starts with TF-Ranking + BERT.
[10] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019. [11] Lee Xiong, Chuan Hu, Chenyan Xiong, Daniel Campos, and Arnold Overwijk. Open domain web keyphrase
extraction beyond language modeling. arXiv preprint arXiv:1911.02671, 2019.
2] Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. Multi-stage document ranking with BERT, 2019. 3] Rodrigo Nogueira and Kyunghyun Cho. Passage re-ranking with bert. arXiv preprint arXiv:1901.04085, 2019. 4] Tie-Yan Liu et al. Learning to rank for information retrieval. Foundations and Trends®) in Information Retrieval,
3(3):225â331, 2009.
[15] Sebastian Bruch, Masrour Zoghi, Mike Bendersky, and Marc Najork. Revisiting approximate metric optimization in the age of deep neural networks. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR â19), pages 1241â1244, 2019.
[16] Zhuyun Dai and Jamie Callan. Context-aware sentence/passage term importance estimation for ï¬rst stage retrieval. arXiv preprint arXiv:1910.10687, 2019.
[17] Shuguang Han, Xuanhui Wang, Michael Bendersky, and Marc Najork. Tf-ranking + bert (softmax loss, list size 6, 200k steps). https://microsoft.github.io/msmarco/, 2020. Online; accessed 30 March 2020. See the entry starts with TF-Ranking + BERT.
[18] Bhaskar Mitra and Nick Craswell. An updated duet model for passage re-ranking. arXiv preprint arXiv:1903.07666, 2019.
[19] Ming Yan. Enriched bert base + aoa index. https://microsoft.github.io/msmarco/, 2019. Online; accessed 19 March 2020.
[20] Xinwu Sun. Table model. https://microsoft.github.io/msmarco/, 2020. Online; accessed 16 April 2020. [21] Xinwu Sun. Dr-bert. https://microsoft.github.io/msmarco/, 2020. Online; accessed 8 June 2020.
6 | {
"id": "1810.04805"
} |
2004.07780 | Shortcut Learning in Deep Neural Networks | Deep learning has triggered the current rise of artificial intelligence and
is the workhorse of today's machine intelligence. Numerous success stories have
rapidly spread all over science, industry and society, but its limitations have
only recently come into focus. In this perspective we seek to distill how many
of deep learning's problems can be seen as different symptoms of the same
underlying problem: shortcut learning. Shortcuts are decision rules that
perform well on standard benchmarks but fail to transfer to more challenging
testing conditions, such as real-world scenarios. Related issues are known in
Comparative Psychology, Education and Linguistics, suggesting that shortcut
learning may be a common characteristic of learning systems, biological and
artificial alike. Based on these observations, we develop a set of
recommendations for model interpretation and benchmarking, highlighting recent
advances in machine learning to improve robustness and transferability from the
lab to real-world applications. | http://arxiv.org/pdf/2004.07780 | Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann | cs.CV, cs.AI, cs.LG, q-bio.NC | perspective article published at Nature Machine Intelligence
(https://doi.org/10.1038/s42256-020-00257-z) | null | cs.CV | 20200416 | 20231121 | 3 2 0 2
v o N 1 2 ] V C . s c [
5 v 0 8 7 7 0 . 4 0 0 2 : v i X r a
# Shortcut Learning in Deep Neural Networks
Robert Geirhos1,2,â,§, J¨orn-Henrik Jacobsen3,â, Claudio Michaelis1,2,â, Richard Zemelâ ,3, Wieland Brendelâ ,1, Matthias Bethgeâ ,1 & Felix A. Wichmannâ ,1
1University of T¨ubingen, Germany 2International Max Planck Research School for Intelligent Systems, Germany 3University of Toronto, Vector Institute, Canada âJoint first / â joint senior authors §To whom correspondence should be addressed: [email protected]
# Abstract
Deep learning has triggered the current rise of artificial intelligence and is the workhorse of todayâs machine intelligence. Numerous success stories have rapidly spread all over science, industry and society, but its limitations have only recently come into focus. In this perspective we seek to distill how many of deep learningâs problems can be seen as differ- ent symptoms of the same underlying problem: shortcut learning. Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, such as real-world scenarios. Related issues are known in Comparative Psychology, Education and Linguistics, suggesting that shortcut learning may be a com- mon characteristic of learning systems, biological and artificial alike. Based on these obser- vations, we develop a set of recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.
1
# Introduction
If science was a journey, then its destination would be the discovery of simple explanations to complex phenomena. There was a time when the existence of tides, the planetâs orbit around the sun, and the observation that âthings fall downâ were all largely considered to be independent phenomenaâuntil 1687, when Isaac Newton formulated his law of gravitation that provided an elegantly simple explanation to all of these (and many more). Physics has made tremendous progress over the last few centuries, but the thriving field of deep learning is still very much at the beginning of its journeyâoften lacking a detailed understanding of the underlying principles.
For some time, the tremendous success of deep learning has perhaps overshadowed the need to thoroughly understand the behaviour of Deep Neural Networks (DNNs). In an ever-increasing pace, DNNs were reported as having achieved human-level object classifi- cation performance [1], beating world-class human Go, Poker, and Starcraft players [2, 3],
This is the preprint version of an article that has been published by Nature Machine Intelligence (https://doi.org/10.1038/s42256-020-00257-z).
1
âArticle: Super Bow 50 Paragraph: *Peython Mon £38 in Super Bow XXXII? Original Prediction: John EWway Prediction under adversary: Jett Dean Task for DNN Caption image Recognise object Recognise pneumonia Answer question Problem Describes green Hallucinates teapot if cer- Fails on scans from Changes answer if irrelevant hillside as grazing sheep tain patterns are present _ new hospitals information is added Shortcut Uses background to Uses features irrecogni- Looks at hospital token, Only looks at last sentence and recognise primary object sable to humans not lung ignores context
Figure 1. Deep neural networks often solve problems by taking shortcuts instead of learning the intended solution, leading to a lack of generalisation and unintuitive failures. This pattern can be observed in many real-world applications.
detecting cancer from X-ray scans [4], translating text across languages [5], helping com- bat climate change [6], and accelerating the pace of scientific progress itself [7]. Because of these successes, deep learning has gained a strong influence on our lives and society. At the same time, however, researchers are unsatisfied about the lack of a deeper understand- ing of the underlying principles and limitations. Different from the past, tackling this lack of understanding is not a purely scientific endeavour anymore but has become an urgent necessity due to the growing societal impact of machine learning applications. If we are to trust algorithms with our lives by being driven in an autonomous vehicle, if our job ap- plications are to be evaluated by neural networks, if our cancer screening results are to be assessed with the help of deep learningâthen we indeed need to understand thoroughly: When does deep learning work? When does it fail, and why?
In terms of understanding the limitations of deep learning, we are currently observing a large number of failure cases, some of which are visualised in Figure 1. DNNs achieve super-human performance recognising objects, but even small invisible changes [8] or a different background context [9, 10] can completely derail predictions. DNNs can generate a plausible caption for an image, butâworryinglyâthey can do so without ever looking at that image [11]. DNNs can accurately recognise faces, but they show high error rates for faces from minority groups [12]. DNNs can predict hiring decisions on the basis of r´esum´es, but the algorithmâs decisions are biased towards selecting men [13].
How can this discrepancy between super-human performance on one hand and aston- ishing failures on the other hand be reconciled? One central observation is that many failure cases are not independent phenomena, but are instead connected in the sense that DNNs follow unintended âshortcutâ strategies. While superficially successful, these strate- gies typically fail under slightly different circumstances. For instance, a DNN may appear to classify cows perfectly wellâbut fails when tested on pictures where cows appear out- side the typical grass landscape, revealing âgrassâ as an unintended (shortcut) predictor for âcowâ [9]. Likewise, a language model may appear to have learned to reasonâbut drops to chance performance when superficial correlations are removed from the dataset [14]. Worse yet, a machine classifier successfully detected pneumonia from X-ray scans of a number of hospitals, but its performance was surprisingly low for scans from novel hospitals: The model had unexpectedly learned to identify particular hospital systems with near-perfect accuracy (e.g. by detecting a hospital-specific metal token on the scan, see Figure 1). Together with the hospitalâs pneumonia prevalence rate it was able to achieve a
2
reasonably good predictionâwithout learning much about pneumonia at all [15].
At a principal level, shortcut learning is not a novel phenomenon. The field of machine learning with its strong mathematical underpinnings has long aspired to develop a formal understanding of shortcut learning which has led to a variety of mathematical concepts and an increasing amount of work under different terms such as learning under covariate shift [16], anti-causal learning [17], dataset bias [18], the tank legend [19] and the Clever Hans effect [20]. This perspective aims to present a unifying view of the various phenomena that can be collectively termed shortcuts, to describe common themes underlying them, and lay out the approaches that are being taken to address them both in theory and in practice.
The structure of this perspective is as follows. Starting from an intuitive level, we in- troduce shortcut learning across biological neural networks (Section 2) and then approach a more systematic level by introducing a taxonomy (Section 3) and by investigating the origins of shortcuts (Section 4). In Section 5, we highlight how these characteristics affect different areas of deep learning (Computer Vision, Natural Language Processing, Agent- based Learning, Fairness). The remainder of this perspective identifies actionable strate- gies towards diagnosing and understanding shortcut learning (Section 6) as well as current research directions attempting to overcome shortcut learning (Section 7). Overall, our se- lection of examples is biased towards Computer Vision since this is one of the areas where deep learning has had its biggest successes, and an area where examples are particularly easy to visualise. We hope that this perspective facilitates the awareness for shortcut learn- ing and motivates new research to tackle this fundamental challenge we currently face in machine learning.
# 2 Shortcut learning in biological neural networks
Shortcut learning typically reveals itself by a strong discrepancy between intended and actual learning strategy, causing an unexpected failure. Interestingly, machine learning is not alone with this issue: From the way students learn to the unintended strategies rats use in behavioural experimentsâvariants of shortcut learning are also common for biological neural networks. We here point out two examples of unintended learning strategies by natural systems in the hope that this may provide an interesting frame of reference for thinking about shortcut learning within and beyond artificial systems.
# 2.1 Shortcut learning in Comparative Psychology: unintended cue learning
Rats learned to navigate a complex maze apparently based on subtle colour differencesâ very surprising given that the rat retina has only rudimentary machinery to support at best somewhat crude colour vision. Intensive investigation into this curious finding revealed that the rats had tricked the researchers: They did not use their visual system at all in the experiment and instead simply discriminated the colours by the odour of the colour paint used on the walls of the maze. Once smell was controlled for, the remarkable colour dis- crimination ability disappeared ...1
Animals are no strangers to finding simple, unintended solutions that fail unexpectedly: They are prone to unintended cue learning, as shortcut learning is called in Comparative
1Nicholas Rawlins, personal communication with F.A.W. some time in the early 1990s, confirmed via email on 12.11.2019.
3
Psychology and the Behavioural Neurosciences. When discovering cases of unintended cue learning, one typically has to acknowledge that there was a crucial difference between performance in a given experimental paradigm (e.g. rewarding rats to identify different colours) and the investigated mental ability one is actually interested in (e.g. visual colour discrimination). In analogy to machine learning, we have a striking discrepancy between intended and actual learning outcome.
# 2.2 Shortcut learning in Education: surface learning
Alice loves history. Always has, probably always will. At this very moment, however, she is cursing the subject: After spending weeks immersing herself in the world of Hannibal and his exploits in the Roman Empire, she is now faced with a number of exam questions that are (in her opinion) to equal parts dull and difficult. âHow many elephants did Hannibal employ in his armyâ19, 34 or 40?â ... Alice notices that Bob, sitting in front of her, seems to be doing very well. Bob of all people, who had just boasted how he had learned the whole book chapter by rote last night ...
In educational research, Bobâs reproductive learning strategy would be considered surface learning, an approach that relies on narrow testing conditions where simple discriminative generalisation strategies can be highly successful. This fulfils the characteristics of shortcut learning by giving the appearance of good performance but failing immediately under more general test settings. Worryingly, surface learning helps rather than hurts test performance on typical multiple-choice exams [21]: Bob is likely to receive a good grade, and judging from grades alone Bob would appear to be a much better student than Alice in spite of her focus on understanding. Thus, in analogy to machine learning we again have a striking discrepancy between intended and actual learning outcome.
# 3 Shortcuts defined: a taxonomy of decision rules
With examples of biological shortcut learning in mind (examples which we will return to in Section 6), what does shortcut learning in artificial neural networks look like? Figure 2 shows a simple classification problem that a neural network is trained on (distinguishing a star from a moon).2 When testing the model on similar data (blue) the network does very wellâor so it may seem. Very much like the smart rats that tricked the experimenter, the network uses a shortcut to solve the classification problem by relying on the location of stars and moons. When location is controlled for, network performance deteriorates to random guessing (red). In this case (as is typical for object recognition), classification based on object shape would have been the intended solution, even though the difference between intended and shortcut solution is not something a neural network can possibly infer from the training data.
On a general level, any neural network (or machine learning algorithm) implements a decision rule which defines a relationship between input and outputâin this example assigning a category to every input image. Shortcuts, the focus of this article, are one particular group of decision rules. In order to distinguish them from other decision rules, we here introduce a taxonomy of decision rules (visualised in Figure 3), starting from a very general rule and subsequently adding more constraints until we approach the intended solution.
2Code is available from https://github.com/rgeirhos/shortcut-perspective.
4
Categorisation by (typical) human Categorisation by Neural Network i.i.d. test set o.0.d. test set
Figure 2. Toy example of shortcut learning in neural networks. When trained on a simple dataset of stars and moons (top row), a standard neural network (three layers, fully connected) can easily categorise novel similar exemplars (mathematically termed i.i.d. test set, defined later in Section 3). However, testing it on a slightly different dataset (o.o.d. test set, bottom row) reveals a shortcut strategy: The network has learned to associate object location with a category. During training, stars were always shown in the top right or bottom left of an image; moons in the top left or bottom right. This pattern is still present in samples from the i.i.d. test set (middle row) but not in o.o.d. test images (bottom row), exposing the shortcut.
(1) all possible decision rules, including non-solutions Imagine a model that tries to solve the problem of separating stars and moons by predicting âstarâ every time it detects a white pixel in the image. This model uses an uninformative feature (the grey area in Figure 3) and does not reach good performance on the data it was trained on, since it implements a poor decision rule (both moon and star images contain white pixels). Typically, interesting problems have an abundant amount of non-solutions.
(2) training solutions, including overfitting solutions In machine learning it is common practice to split the available data randomly into a train- ing and a test set. The training set is used to guide the model in its selection of a (hopefully useful) decision rule, and the test set is used to check whether the model achieves good per- formance on similar data it has not seen before. Mathematically, the notion of similarity between training and test set commonly referred to in machine learning is the assumption that the samples in both sets are drawn from the same distribution. This is the case if both the data generation mechanism and the sampling mechanism are identical. In practice this is achieved by randomising the split between training and test set. The test set is then called independent and identically distributed (i.i.d.) with regard to the training set. In or- der to achieve high average performance on the test set, a model needs to learn a function that is approximately correct within a subset of the input domain which covers most of the probability of the distribution. If a function is learned that yields the correct output on the training images but not on the i.i.d. test images, the learning machine uses overfitting features (the blue area in Figure 3).
5
All possible decision rules z a S training solution performs well on training set - shortcut solution performs well on training set and i.i.d. test set training Lid. 0.0.d. set test set test set Performance - intended solution performs well on training set, i.i.d. and all relevant 0.0.d. test sets low I I | overfitting shortcut intended features features features
Figure 3. Taxonomy of decision rules. Among the set of all possible rules, only some solve the training data. Among the solutions that solve the training data, only some generalise to an i.i.d. test set. Among those solutions, shortcuts fail to generalise to different data (o.o.d. test sets), but the intended solution does generalise.
(3) i.i.d. test solutions, including shortcuts Decision rules that solve both the training and i.i.d. test set typically score high on standard benchmark leaderboards. However, even the simple toy example can be solved through at least three different decision rules: (a) by shape, (b) by counting the number of white pixels (moons are smaller than stars) or (c) by location (which was correlated with object category in the training and i.i.d. test sets). As long as tests are performed only on i.i.d. data, it is impossible to distinguish between these. However, one can instead test models on datasets that are systematically different from the i.i.d. training and test data (also called out-of- distribution or o.o.d. data). For example, an o.o.d. test set with randomised object size will instantly invalidate a rule that counts white pixels. Which decision rule is the intended solution is clearly in the eye of the beholder, but humans often have clear expectations. In our toy example, humans typically classify by shape. A standard fully connected neural network3 trained on this dataset, however, learns a location-based rule (see Figure 2). In this case, the network has used a shortcut feature (the blue area in Figure 3): a feature that helps to perform well on i.i.d. test data but fails in o.o.d. generalisation tests.
(4) intended solution Decision rules that use the intended features (the red area in Figure 3) work well not only on an i.i.d. test set but also perform as intended on o.o.d. tests, where shortcut solutions fail. In the toy example, a decision rule based on object shape (the intended feature) would generalise to objects at a different location or with a different size. Humans typically have a strong intuition for what the intended solution should be capable of. Yet, for complex prob- lems, intended solutions are mostly impossible to formalise, so machine learning is needed to estimate these solutions from examples. Therefore the choice of examples, among other aspects, influence how closely the intended solution can be approximated.
3A convolutional (rather than fully connected) network would be prevented from taking this shortcut by design.
6
# 4 Shortcuts: where do they come from?
Following this taxonomy, shortcuts are decision rules that perform well on i.i.d. test data but fail on o.o.d. tests, revealing a mismatch between intended and learned solution. It is clear that shortcut learning is to be avoided, but where do shortcuts come from, and what are the defining real-world characteristics of shortcuts that one needs to look out for when assessing a model or task through the lens of shortcut learning? There are two different as- pects that one needs to take into account. First, shortcut opportunities (or shortcut features) in the data: possibilities for solving a problem differently than intended (Section 4.1). Sec- ond, feature combination: how different features are combined to form a decision rule (Section 4.2). Together, these aspects determine how a model generalises (Section 4.3).
# 4.1 Dataset: shortcut opportunities
i â = abe Mes
What makes a cow a cow? To DNNs, a familiar background can be as important for recognition as the object itself, and sometimes even more important: A cow at an unexpected location (such as a beach rather than grassland) is not classified correctly [9]. Conversely, a lush hilly landscape without any animal at all might be labelled as a âherd of grazing sheepâ by a DNN [22].
This example highlights how a systematic relationship between object and background or context can easily create a shortcut opportunity. If cows happen to be on grassland for most of the training data, detecting grass instead of cows becomes a successful strategy for solving a classification problem in an unintended way; and indeed many models base their predictions on context [23, 24, 25, 26, 9, 27, 10]. Many shortcut opportunities are a conse- quence of natural relationships, since grazing cows are typically surrounded by grassland rather than water. These so-called dataset biases have long been known to be problematic for machine learning algorithms [18]. Humans, too, are influenced by contextual biases (as evident from faster reaction times when objects appear in the expected context), but their predictions are much less affected when context is missing [28, 29, 30, 31]. In addition to shortcut opportunities that are fairly easy to recognise, deep learning has led to the dis- covery of much more subtle shortcut features, including high-frequency patterns that are almost invisible to the human eye [32, 33]. Whether easy to recognise or hard to detect, it is becoming more and more evident that shortcut opportunities are by no means disap- pearing when the size of a dataset is simply scaled up by some orders of magnitude (in the hope that this is sufficient to sample the diverse world that we live in [34]). Systematic biases are still present even in âBig Dataâ with large volume and variety, and consequently even large real-world datasets usually contain numerous shortcut opportunities. Overall, it is quite clear that data alone rarely constrains a model sufficiently, and that data cannot replace making assumptions [35]. The totality of all assumptions that a model incorporates (such as, e.g., the choice of architecture) is called the inductive bias of a model and will be discussed in more detail in Section 6.3.
7
# 4.2 Decision rule: shortcuts from discriminative learning
What makes a cat a cat? To standard DNNs, the example image on the left clearly shows an elephant, not a cat. Object textures and other local structures in images are highly useful for object classifi- cation in standard datasets [36], and DNNs strongly rely on texture cues for object classification, largely ignoring global object shape [37, 38].
In many cases, relying on object textures can be sufficient to solve an object categorisation task. Obviously, however, texture is only one of many attributes that define an object. Discriminative learning differs from generative modeling by picking any feature that is sufficient to reliably discriminate on a given dataset but the learning machine has no notion of how realistic examples typically look like and how the features used for discrimination are combined with other features that define an object. In our example, using textures for object classification becomes problematic if other intended attributes (like shape) are ignored entirely. This exemplifies the importance of feature combination: the definition of an object relies on a (potentially highly non-linear) combination of information from different sources or attributes that influence a decision rule.4 In the example of the cat with elephant texture above, a shape-agnostic decision rule that merely relies on texture properties clearly fails to capture the task of object recognition as it is understood for human vision. While the model uses an important attribute (texture) it tends to equate it with the definition of the object missing out other important attributes such as shape. Of course, being aligned with the human decision rule does not always conform to our intention. In medical or safety-critical applications, for instance, we may instead seek an improvement over human performance.
Inferring human-interpretable object attributes like shape or texture from an image requires specific nonlinear computations. In typical end-to-end discriminative learning, this again may be prone to shortcut learning. Standard DNNs do not impose any human- interpretability requirements on intermediate image representations and thus might be severely biased to the extraction of overly simplistic features which only generalise under the specific design of the particular dataset used but easily fail otherwise. Discriminative feature learning goes as far that some decision rules only depend on a single predictive pixel [39, 40, 41] while all other evidence is ignored.5 In principle, ignoring some evi- dence can be beneficial. In object recognition, for example, we want the decision rule to be invariant to an object shift. However, undesirable invariance (sometimes called excessive invariance) is harmful and modern machine learning models can be invariant to almost all features that humans would rely on when classifying an image [41].
4In Cognitive Science, this process is called cue combination. 5In models of animal learning, the blocking effect is a related phenomenon. Once a predictive cue/feature (say, a light flash) has been associated with an outcome (e.g. food), animals sometimes fail to associate a new, equally predictive cues with the same outcome [42, 43, 44].
8
same category for humans same category for DNNs but not for DNNs (intended generalisation) but not for humans (unintended generalisation) iid. domain â_ adversarial excessive _fooling natural _texturised shift examples distortions pose texture background invariance images _â_adversarials__â_ images e.g.Wang'18 Szegedyâ13. e.g. Dodge"19 â Alcorn 19 Geirhos 19 Beery 18 Jacobsen"19 Nguyenâ15 â Hendrycks â19 âBrendel 19 0.0.d.
Figure 4. Both human and machine vision generalise, but they generalise very differently. Left: image pairs that belong to the same category for humans, but not for DNNs. Right: images pairs assigned to the same category by a variety of DNNs, but not by humans.
# 4.3 Generalisation: how shortcuts can be revealed
What makes a guitar a guitar? When tested on this pattern never seen before, standard DNNs predict âguitarâ with high certainty [45]. Exposed by the generalisation test, it seems that DNNs learned to detect certain patterns (curved guitar body? strings?) instead of gui- tars: a successful strategy on training and i.i.d. test data that leads to unintended generalisation on o.o.d. data.
This exemplifies the inherent link between shortcut learning and generalisation. By itself, generalisation is not a part of shortcut learningâbut more often than not, shortcut learning is discovered through cases of unintended generalisation, revealing a mismatch between human-intended and model-learned solution. Interestingly, DNNs do not suffer from a general lack of o.o.d. generalisation (Figure 4) [45, 36, 46, 41]. DNNs recognise guitars even if only some abstract pattern is leftâhowever, this remarkable generalisation perfor- mance is undesired, at least in this case. In fact, the set of images that DNNs classify as âguitarâ with high certainty is incredibly big. To humans only some of these look like gui- tars, others like patterns (interpretable or abstract) and many more resemble white noise or even look like airplanes, cats or food [8, 45, 41]. Figure 4 on the right, for example, highlights a variety of image pairs that have hardly anything in common for humans but belong to the same category for DNNs. Conversely, to the human eye an imageâs category is not altered by innocuous distribution shifts like rotating objects or adding a bit of noise, but if these changes interact with the shortcut features that DNNs are sensitive to, they completely derail neural network predictions [8, 47, 9, 48, 49, 50, 38]. This highlights that generalisation failures are neither a failure to learn nor a failure to generalise at all, but instead a failure to generalise in the intended directionâgeneralisation and robustness can be considered the flip side of shortcut learning. Using a certain set of features creates insensitivity towards other features. Only if the selected features are still present after a distribution shift, a model generalises o.o.d.
9
# 5 Shortcut learning across deep learning
Taken together, we have seen how shortcuts are based on dataset shortcut opportunities and discriminative feature learing that result in a failure to generalise as intended. We will now turn to specific application areas, and discover how this general pattern appears across Computer Vision, Natural Language Processing, Agent-based (Reinforcement) Learning and Fairness / algorithmic decision-making. While shortcut learning is certainly not lim- ited to these areas, they might be the most prominent ones where the problem has been observed.
Computer Vision To humans, for example, a photograph of a car still shows the same car even when the image is slightly transformed. To DNNs, in contrast, innocuous trans- formations can completely change predictions. This has been reported in various cases such as shifting the image by a few pixels [47], rotating the object [49], adding a bit of random noise or blur [51, 50, 52, 53] or (as discussed earlier) by changing background [9] or texture while keeping the shape intact [38] (see Figure 4 for examples). Some key problems in Computer Vision are linked to shortcut learning. For example, transferring model performance across datasets (domain transfer) is challenging because models often use domain-specific shortcut features, and shortcuts limit the usefulness of unsupervised representations [54]. Furthermore, adversarial examples are particularly tiny changes to an input image that completely derail model predictions [8] (an example is shown in Fig- ure 4). Invisible to the human eye, those changes modify highly predictive patterns that DNNs use to classify objects [33]. In this sense, adversarial examplesâone of the most severe failure cases of neural networksâcan at least partly be interpreted as a consequence of shortcut learning.
Natural Language Processing The widely used language model BERT has been found to rely on superficial cue words. For instance, it learned that within a dataset of nat- ural language arguments, detecting the presence of ânotâ was sufficient to perform above chance in finding the correct line of argumentation. This strategy turned out to be very use- ful for drawing a conclusion without understanding the content of a sentence [14]. Natural Language Processing suffers from very similar problems as Computer Vision and other fields. Shortcut learning starts from various dataset biases such as annotation artefacts [55, 56, 57, 58]. Feature combination crucially depends on shortcut features like word length [59, 60, 14, 61], and consequently leads to a severe lack of robustness such as an inability to generalise to more challenging test conditions [62, 63, 64, 65]. Attempts like incorporating a certain degree of unsupervised training as employed in prominent language models like BERT [5] and GPT-2 [66] did not resolve the problem of shortcut learning [14].
Agent-based (Reinforcement) Learning Instead of learning how to play Tetris, an algorithm simply learned to pause the game to evade losing [67]. Systems of Agent-based Learning are usually trained using Reinforcement Learning and related approaches such as evolutionary algorithms. In both cases, designing a good reward function is crucial, since a reward function measures how close a system is to solving the problem. However, they all too often contain unexpected shortcuts that allow for so-called reward hacking [68]. The existence of loopholes exploited by machines that follow the letter (and not the spirit) of the reward function highlight how difficult it is to design a shortcut-free reward function [69]. Reinforcement Learning is also a widely used method in Robotics, where there is a commonly observed generalisation or reality gap between simulated training
10
environment and real-world use case. This can be thought of as a consequence of narrow shortcut learning by adapting to specific details of the simulation. Introducing additional variation in colour, size, texture, lighting, etc. helps a lot in closing this gap [70, 71].
Fairness & algorithmic decision-making Tasked to predict strong candidates on the basis of their r´esum´es, a hiring tool developed by Amazon was found to be biased towards preferring men. The model, trained on previous human decisions, found gender to be such a strong predictor that even removing applicant names would not help: The model always found a way around, for instance by inferring gender from all-woman college names [13]. This exemplifies how someâbut not allâproblems of (un)fair algorithmic decision- making are linked to shortcut learning: Once a predictive feature is found by a model, even if it is just an artifact of the dataset, the modelâs decision rule may depend entirely on the shortcut feature. When human biases are not only replicated, but worsened by a machine, this is referred to as bias amplification [72]. Other shortcut strategies include focusing on the majority group in a dataset while accepting high error rates for underrepresented groups [12, 73], which can amplify existing societal disparities and even create new ones over time [74]. In the dynamical setting a related problem is called disparity amplification [74], where sequential feedback loops may amplify a modelâs reliance on a majority group. It should be emphasised, however, that fairness is an active research area of machine learning closely related to invariance learning that might be useful to quantify and overcome biases of both machine and human decision making.
# 6 Diagnosing and understanding shortcut learning
Shortcut learning currently occurs across deep learning, causing machines to fail unexpect- edly. Many individual elements of shortcut learning have been identified long ago by parts of the machine learning community and some have already seen substantial progress, but currently a variety of approaches are explored without a commonly accepted strategy. We here outline three actionable steps towards diagnosing and analysing shortcut learning.
# Interpreting results carefully
Distinguishing datasets and underlying abilities Shortcut learning is most decep- tive when gone unnoticed. The most popular benchmarks in machine learning still rely on i.i.d. testing which drags attention away from the need to verify how closely this test performance measures the underlying ability one is actually interested in. For example, the ImageNet dataset [75] was intended to measure the ability âobject recognitionâ, but DNNs seem to rely mostly on âcounting texture patchesâ [36]. Likewise, instead of performing ânatural language inferenceâ, some language models perform well on datasets by simply detecting correlated key words [56]. Whenever there is a discrepancy between the simplic- ity with which a dataset (e.g. ImageNet, SQuAD) can be solved and the complexity evoked by the high-level description of the underlying ability (e.g. object recognition, scene under- standing, argument comprehension), it is important to bear in mind that a dataset is useful only for as long as it is a good proxy for the ability one is actually interested in [56, 76]. We would hardly be intrigued by reproducing human-defined labels on datasets per se (a lookup table would do just as well in this case)âit is the underlying generalisation ability that we truly intend to measure, and ultimately improve upon.
11
Morganâs Canon for machine learning Recall the cautionary tale of rats sniffing rather than seeing colour, described in Section 2.1. Animals often trick experimenters by solving an experimental paradigm (i.e., dataset) in an unintended way without using the un- derlying ability one is actually interested in. This highlights how incredibly difficult it can be for humans to imagine solving a tough challenge in any other way than the human way: Surely, at Marrâs implementational level [77] there may be differences between rat and human colour discrimination. But at the algorithmic level there is often a tacit assump- tion that human-like performance implies human-like strategy (or algorithm) [78]. This same strategy assumption is paralleled by deep learning: Surely, DNN units are different from biological neuronsâbut if DNNs successfully recognise objects, it seems natural to assume that they are using object shape like humans do [37, 36, 38].
Comparative Psychology with its long history of comparing mental abilities across species has coined a term for the fallacy to confuse human-centered interpretations of an observed behaviour and the actual behaviour at hand (which often has a much simpler explanation): anthropomorphism, âthe tendency of humans to attribute human-like psy- chological characteristics to nonhumans on the basis of insufficient empirical evidenceâ [79, p. 5]. As a reaction to the widespread occurrence of this fallacy, psychologist Lloyd Morgan developed a conservative guideline for interpreting non-human behaviour as early as 1903. It later became known as Morganâs Canon: âIn no case is an animal activity to be interpreted in terms of higher psychological processes if it can be fairly interpreted in terms of processes which stand lower on the scale of psychological evolution and developmentâ [80, p. 59]. Picking up on a simple correlation, for example, would be considered a pro- cess that stands low on this psychological scale whereas âunderstanding a sceneâ would be considered much higher. It has been argued that Morganâs Canon can and should be applied to interpreting machine learning results [79], and we consider it to be especially relevant in the context of shortcut learning. Accordingly, it might be worth acquiring the habit to confront machine learning models with a âMorganâs Canon for machine learn- ingâ6: Never attribute to high-level abilities that which can be adequately explained by shortcut learning.
Testing (surprisingly) strong baselines In order to find out whether a result may also be explained by shortcut learning, it can be helpful to test whether a baseline model exceeds expectations even though it does not use intended features. Examples include us- ing nearest neighbours for scene completion and estimating geolocation [81, 82], object recognition with local features only [36], reasoning based on single cue words [59, 14] or answering questions about a movie without ever showing the movie to a model [83]. Importantly, this is not meant to imply that DNNs cannot acquire high-level abilities. They certainly do have the potential to solve complex challenges and serve as scientific models for prediction, explanation and exploration [84]âhowever, we must not confuse perfor- mance on a dataset with the acquisition of an underlying ability.
# 6.2 Detecting shortcuts: towards o.o.d. generalisation tests
Making o.o.d. generalisation tests a standard practice Currently, measuring model performance by assessing validation performance on an i.i.d. test set is at the very heart of the vast majority of machine learning benchmarks. Unfortunately, in real-world settings
6Our formulation is adapted from Hanlonâs razor, âNever attribute to malice that which can be adequately explained by stupidityâ.
12
the i.i.d. assumption is rarely justified; in fact, this assumption has been called âthe big lie in machine learningâ [85]. While any metric is typically only an approximation of what we truly intend to measure, the i.i.d. performance metric may not be a good approximation as it can often be misleading, giving a false sense of security. In Section 2.2 we described how Bob gets a good grade on a multiple-choice exam through rote learning. Bobâs repro- ductive approach gives the superficial appearance of excellent performance, but it would not generalise to a more challenging test. Worse yet, as long as Bob continues to receive good grades through surface learning, he is unlikely to change his learning strategy.
Within the field of Education, what is the best practice to avoid surface learning? It has been argued that changing the type of examination from multiple-choice tests to essay questions discourages surface learning, and indeed surface approaches typically fail on these kinds of exams [21]. Essay questions, on the other hand, encourage so-called deep or transformational learning strategies [86, 87], like Aliceâs focus on understanding. This in turn enables transferring the learned content to novel problems and consequently achieves a much better overlap between the educational objectives of the teacher and what the students actually learn [88]. We can easily see the connection to machine learningâtransferring knowledge to novel problems corresponds to testing generalisation beyond the narrowly learned setting [89, 90, 91]. If model performance is assessed only on i.i.d. test data, then we are unable to discover whether the model is actually acquiring the ability we think it is, since exploiting shortcuts often leads to deceptively good results on standard metrics [92]. We, among many others [93, 78, 94, 95, 96], have explored a variety of o.o.d. tests and we hope it will be possible to identify a sufficiently simple and effective test procedure that could replace i.i.d. testing as a new standard method for benchmarking machine learning models in the future.
Designing good o.o.d. tests While a distribution shift (between i.i.d. and o.o.d. data) In has a clear mathematical definition, it can be hard to detect in practice [101, 102]. these cases, training a classifier to distinguish samples in dataset A from samples in dataset Aâ can reveal a distribution shift. We believe that good o.o.d. tests should fullfill at least the following three conditions: First, per definition there needs to be a clear distribution shift, a shift that may or may not be distinguishable by humans. Second, it should have a well-defined intended solution. Training on natural images while testing on white noise would technically constitute an o.o.d. test but lacks a solution. Third, a good o.o.d. test is a test where the majority of current models struggle. Typically, the space of all con- ceivable o.o.d. tests includes numerous uninteresting tests. Thus given limited time and resources, one might want to focus on challenging test cases. As models evolve, gener- alisation benchmarks will need to evolve as well, which is exemplified by the Winograd Schema Challenge [103]. Initially designed to overcome shortcut opportunities caused by the open-ended nature of the Turing test, this common-sense reasoning benchmark was scrutinised after modern language models started to perform suspiciously wellâand it in- deed contained more shortcut opportunities than originally envisioned [104], highlighting the need for revised tests. Fortunately, stronger generalisation tests are beginning to gain traction across deep learning. While o.o.d. tests will likely need to evolve alongside the models they aim to evaluate, a few current encouraging examples are listed in Box I. In summary, rigorous generalisation benchmarks are crucial when distinguishing between the intended and a shortcut solution, and it would be extremely useful if a strong generally applicable testing procedure will emerge from this range of approaches.
13
# Box I. EXAMPLES OF INTERESTING O.O.D. BENCHMARKS
We here list a few selected, encouraging examples of o.o.d. benchmarks.
Adversarial attacks can be seen as testing on model-specific worst-case o.o.d. data, which makes it an interesting diagnostic tool. If a successful adversarial attack [8] can change model predictions without changing semantic content, this is an indication that something akin to shortcut learning may be occurring [33, 97].
ARCT with removed shortcuts is a language argument comprehension dataset that follows the idea of removing known shortcut opportunities from the data itself in order to create harder test cases [14].
Cue conflict stimuli like images with conflicting texture and shape information pitch fea- tures/cues against each other, such as an intended against an unintended cue [38]. This approach can easily be compared to human responses.
ImageNet-A is a collection of natural images that several state-of-the-art models consistently classify wrongly. It thus benchmarks models on worst-case natural images [46].
ImageNet-C applies 15 different image corruptions to standard test images, an approach we find appealing for its variety and usability [52].
ObjectNet introduces the idea of scientific controls into o.o.d. benchmarking, allowing to disentangle the influence of background, rotation and viewpoint [98].
PACS and other domain generalisation datasets require extrapolation beyond i.i.d. data per design by testing on a domain different from training data (e.g. cartoon images) [99].
Shift-MNIST / biased CelebA / unfair dSprites are controlled toy datasets that introduce corre- lations in the training data (e.g. class-predictive pixels or image quality) and record the accuracy drop on clean test data as a way of finding out how prone a given architecture and loss function are to picking up on shortcuts [39, 40, 100, 41].
# 6.3 Shortcuts: why are they learned?
The âPrinciple of Least Effortâ Why are machines so prone to learning shortcuts, detecting grass instead of cows [9] or a metal token instead of pneumonia [15]? Exploit- ing those shortcuts seems much easier for DNNs than learning the intended solution. But what determines whether a solution is easy to learn? In Linguistics, a related phenomenon is called the âPrinciple of Least Effortâ [119], the observation that language speakers gen- erally try to minimise the amount of effort involved in communication. For example, the use of âplaneâ is becoming more common than âairplaneâ, and in pronouncing âcupboardâ, âpâ and âbâ are merged into a single sound [120, 121]. Interestingly, whether a language change makes it easier for the speaker doesnât always simply depend on objective measures like word length. On the contrary, this process is shaped by a variety of different factors, including the anatomy (architecture) of the human speech organs and previous language experience (training data).
14
# Box II. SHORTCUT LEARNING & INDUCTIVE BIASES
The four components listed below determine the inductive bias of a model and dataset: the set of assumptions that influence which solutions are learnable, and how readily they can be learned. Although in theory DNNs can approximate any function (given potentially infinite capacity) [105], their inductive bias plays an important role for the types of patterns they prefer to learn given finite capacity and data.
⢠Structure: architecture. Convolutions make it harder for a model to use locationâa prior [106] that is so powerful for natural images that even untrained networks can be used for tasks like image inpainting and denoising [107]. In Natural Language Processing, transformer architectures [108] use attention layers to understand the context by modelling relationships between words. In most cases, however, it is hard to understand the implicit priors in a DNN and even standard elements like ReLU activations can lead to unexpected effects like unwarranted confidence [109].
⢠Experience: training data. As discussed in Section 4.1, shortcut opportunities are present in most data and rarely disappear by adding more data [32, 69, 56, 38, 33]. Modifying the training data to block specific shortcuts has been demonstrated to work for reducing adversarial vulnerability [110] and texture bias [38].
⢠Goal: loss function. The most commonly used loss function for classification, cross- entropy, encourages DNNs to stop learning once a simple predictor is found; a modifica- tion can force neural networks to use all available information [41]. Regularisation terms that use additional information about the training data have been used to disentangle in- tended features from shortcut features [39, 111].
⢠Learning: optimisation. Stochastic gradient descent and its variants bias DNNs towards learning simple functions [112, 113, 114, 115]. The learning rate influences which patterns networks focus on: Large learning rates lead to learning simple patterns that are shared across examples, while small learning rates facilitate complex pattern learning and mem- orisation [116, 117]. The complex interactions between training method and architecture are poorly understood so far; strong claims can only be made for simple cases [118].
Understanding the influence of inductive biases In a similar vein, whether a solu- tion is easy to learn for machines does not simply depend on the data but on all of the four components of a machine learning algorithm: architecture, training data, loss function, and optimisation. Often, the training process starts with feeding training data to the model with a fixed architecture and randomly initialised parameters. When the modelâs prediction is compared to ground truth, the loss function measures the predictionâs quality. This super- vision signal is used by an optimiser for adapting the modelâs internal parameters such that the model makes a better prediction the next time. Taken together, these four components (which determine the inductive bias of a model) influence how certain solutions are much easier to learn than others, and thus ultimately determine whether a shortcut is learned instead of the intended solution [122]. Box II provides an overview of the connections between shortcut learning and inductive biases.
15
# 7 Beyond shortcut learning
A lack of out-of-distribution generalisation can be observed all across machine learning. Consequently, a significant fraction of machine learning research is concerned with over- coming shortcut learning, albeit not necessarily as a concerted effort. Here we highlight connections between different research areas. Note that an exhaustive list would be out of the scope for this work. Instead, we cover a diverse set of approaches we find promising, each providing a unique perspective on learning beyond shortcut learning.
Domain-specific prior knowledge Avoiding reliance on unintended cues can be achieved by designing architectures and data-augmentation strategies that discourage learning short- cut features. If the orientation of an object does not matter for its category, either data- augmentation or hard-coded rotation invariance [123] can be applied. This strategy can be applied to almost any well-understood transformation of the inputs and finds its proba- bly most general form in auto-augment as an augmentation strategy [124]. Extreme data- augmentation strategies are also the core ingredient of the most successful semi-supervised [125] and self-supervised learning approaches to date [126, 127].
Adversarial examples and robustness Adversarial attacks are a powerful analysis tool for worst-case generalisation [8]. Adversarial examples can be understood as counterfactual explanations, since they are the smallest change to an input that produces a certain output. Achieving counterfactual explanations of predictions aligned with human intention makes the ultimate goals of adversarial robustness tightly coupled to causality research in machine learning [128]. Adversarially robust models are somewhat more aligned with humans and show promising generalisation abilities [129, 130]. While adversarial attacks test model performance on model-dependent worst-case noise, a related line of research focuses on model-independent noise like image corruptions [51, 52].
Domain adaptation, -generalisation and -randomisation These areas are explicitly con- cerned with out-of-distribution generalisation. Usually, multiple distributions are observed during training time and the model is supposed to generalise to a new distribution at test time. Under certain assumptions the intended (or even causal) solution can be learned from multiple domains and environments [131, 39, 111]. In robotics, domain randomisa- tion (setting certain simulation parameters randomly during training) is a very successful approach for learning policies that generalise to similar situations in the real-world [70].
Fairness Fairness research aims at making machine decisions âfairâ according to a cer- Individual fairness aims at treating similar individuals similarly tain definition [132]. while group fairness aims at treating subgroups no different than the rest of the population [133, 134]. Fairness is closely linked to generalisation and causality [135]. Sensitive group membership can be viewed as a domain indicator: Just like machine decisions should not typically be influenced by changing the domain of the data, they also should not be biased against minority groups.
Meta-learning Meta-learning seeks to learn how to learn. An intermediate goal is to learn representations that can adapt quickly to new conditions [136, 137, 138]. This ability is connected to the identification of causal graphs [139] since learning causal features allows for small changes when changing environments.
16
Generative modelling and disentanglement Learning to generate the observed data forces a neural network to model every variation in the training data. By itself, however, this does not necessarily lead to representations useful for downstream tasks [140], let alone out- of-distribution generalisation. Research on disentanglement addresses this shortcoming by learning generative models with well-structured latent representations [141]. The goal is to recover the true generating factors of the data distribution from observations [142] by identifying independent causal mechanisms [128].
# 8 Conclusion
âThe road reaches every place, the short cut only oneâ â James Richardson [143]
Science aims for understanding. While deep learning as an engineering discipline has seen tremendous progress over the last few years, deep learning as a scientific discipline is still lagging behind in terms of understanding the principles and limitations that govern how machines learn to extract patterns from data. A deeper understanding of how to overcome shortcut learning is of relevance beyond the current application domains of machine learn- ing and there might be interesting future opportunities for cross-fertilisation with other disciplines such as Economics (designing management incentives that do not jeopardise long-term success by rewarding unintended âshortcutâ behaviour) or Law (creating laws without âloopholeâ shortcut opportunities). Until the problem is solved, however, we offer the following four recommendations:
(1) Connecting the dots: shortcut learning is ubiquitous Shortcut learning appears to be a ubiquitous characteristic of learning systems, biologi- cal and artificial alike. Many of deep learningâs problems are connected through shortcut learningâmodels exploit dataset shortcut opportunities, select only a few predictive fea- tures instead of taking all evidence into account, and consequently suffer from unexpected generalisation failures. âConnecting the dotsâ between affected areas is likely to facilitate progress, and making progress can generate highly valuable impact across various appli- cations domains.
(2) Interpreting results carefully Discovering a shortcut often reveals the existence of an easy solution to a seemingly com- plex dataset. We argue that we will need to exercise great care before attributing high-level abilities like âobject recognitionâ or âlanguage understandingâ to machines, since there is often a much simpler explanation.
(3) Testing o.o.d. generalisation Assessing model performance on i.i.d. test data (as the majority of current benchmarks do) is insufficient to distinguish between intended and unintended (shortcut) solutions. Conse- quently, o.o.d. generalisation tests will need to become the rule rather than the exception.
(4) Understanding what makes a solution easy to learn DNNs always learn the easiest possible solution to a problem, but understanding which solutions are easy (and thus likely to be learned) requires disentangling the influence of
17
structure (architecture), experience (training data), goal (loss function) and learning (opti- misation), as well as a thorough understanding of the interactions between these factors.
Shortcut learning is one of the key roadblocks towards fair, robust, deployable and trust- worthy machine learning. While overcoming shortcut learning in its entirety may poten- tially be impossible, any progress towards mitigating it will lead to a better alignment be- tween learned and intended solutions. This holds the promise that machines behave much more reliably in our complex and ever-changing world, even in situations far away from their training experience. Furthermore, machine decisions would become more transpar- ent, enabling us to detect and remove biases more easily. Currently, the research on short- cut learning is still fragmented into various communities. With this perspective we hope to fuel discussions across these different communities and to initiate a movement that pushes for a new standard paradigm of generalisation that is able to replace the current i.i.d. tests.
# Acknowledgement
The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS- IS) for supporting R.G. and C.M.; the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) for supporting C.M. via grant EC 479/1-1; the Collaborative Research Center (Pro- jektnummer 276693517âSFB 1233: Robust Vision) for supporting M.B. and F.A.W.; the German Federal Ministry of Education and Research through the T¨ubingen AI Center (FKZ 01IS18039A) for supporting W.B. and M.B.; as well as the Natural Sciences and Engineering Research Council of Canada and the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior/Interior Business Center (DoI/IBC) contract number D16PC00003 for supporting J.J.
The authors would like to thank Judy Borowski, Max Burg, Santiago Cadena, Alexander S. Ecker, Lisa Eisenberg, Roland Fleming, Ingo Fr¨und, Samuel Greiner, Florian GrieÃer, Shaiyan Keshvari, Ruth Kessler, David Klindt, Matthias K¨ummerer, Benjamin Mitzkus, Hendrikje Nien- borg, Jonas Rauber, Evgenia Rusak, Steffen Schneider, Lukas Schott, Tino Sering, Yash Sharma, Matthias Tangemann, Roland Zimmermann and Tom Wallis for helpful discussions.
# Author contributions
The project was initiated by R.G. and C.M. and led by R.G. with support from C.M. and J.J.; M.B. and W.B. reshaped the initial thrust of the perspective and together with R.Z. supervised the machine learning components. The toy experiment was conducted by J.J. with input from R.G. and C.M. Most figures were designed by R.G. and W.B. with input from all other authors. Figure 2 (left) was conceived by M.B. The first draft was written by R.G., J.J. and C.M. with input from F.A.W. All authors contributed to the final version and provided critical revisions from different perspectives.
# References
[1] He, K., Zhang, X., Ren, S. & Sun, J. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In Proceedings of the IEEE International Conference on Computer Vision, 1026â1034 (2015).
[2] Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529, 484 (2016).
[3] MoravËc´ık, M. et al. Deepstack: Expert-level artificial intelligence in heads-up no- limit poker. Science 356, 508â513 (2017).
18
[4] Rajpurkar, P. et al. Chexnet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv:1711.05225 (2017).
[5] Devlin, J., Chang, M.-W., Lee, K. & Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 (2018).
[6] Rolnick, D. et al. Tackling climate change with machine learning. arXiv preprint arXiv:1906.05433 (2019).
[7] Reichstein, M. et al. Deep learning and process understanding for data-driven earth system science. Nature 566, 195 (2019).
[8] Szegedy, C. et al. Intriguing properties of neural networks. arXiv:1312.6199 (2013).
[9] Beery, S., Van Horn, G. & Perona, P. Recognition in terra incognita. In Proceedings of the European Conference on Computer Vision, 456â473 (2018).
[10] Rosenfeld, A., Zemel, R. & Tsotsos, J. K. The elephant in the room. arXiv preprint arXiv:1808.03305 (2018).
[11] Heuer, H., Monz, C. & Smeulders, A. W. Generating captions without looking beyond objects. arXiv preprint arXiv:1610.03708 (2016).
[12] Buolamwini, J. & Gebru, T. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, 77â91 (2018).
[13] Dastin, J. Amazon scraps secret AI recruiting tool that showed bias against women. https://reut.rs/2Od9fPr (2018).
[14] Niven, T. & Kao, H.-Y. Probing neural network comprehension of natural language arguments. arXiv preprint arXiv:1907.07355 (2019).
[15] Zech, J. R. et al. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study. PLoS Medicine 15, e1002683 (2018).
# This study highlights the importance of testing model generalisation in the medical context.
[16] Bickel, S., Br¨uckner, M. & Scheffer, T. Discriminative learning under covariate shift. Journal of Machine Learning Research 10, 2137â2155 (2009).
[17] Sch¨olkopf, B. et al. On causal and anticausal learning. In International Conference on Machine Learning, 1255â1262 ([Sl: sn], 2012).
[18] Torralba, A. & Efros, A. A. Unbiased look at dataset bias. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2011).
# This study provides a comprehensive overview of dataset biases in computer vision.
[19] Branwen, G. The neural net tank urban legend. https://www.gwern.net/Tanks (2011).
[20] Pfungst, O. Clever Hans: (the horse of Mr. Von Osten.) a contribution to experi- mental animal and human psychology (Holt, Rinehart and Winston, 1911).
[21] Scouller, K. The influence of assessment method on studentsâ learning approaches: Multiple choice question examination versus assignment essay. Higher Education 35, 453â472 (1998).
19
[22] Shane, J. Do neural (2018). electric https://aiweirdness.com/post/171451900302/ nets dream of sheep? URL do-neural-nets-dream-of-electric-sheep.
[23] Wichmann, F. A., Drewes, J., Rosas, P. & Gegenfurtner, K. R. Animal detection in natural scenes: Critical features revisited. Journal of Vision 10, 6â6 (2010).
[24] Ribeiro, M. T., Singh, S. & Guestrin, C. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD Interna- tional Conference on Knowledge Discovery and Data Mining, 1135â1144 (ACM, 2016).
[25] Zhu, Z., Xie, L. & Yuille, A. L. Object recognition with and without objects. arXiv preprint arXiv:1611.06596 (2016).
[26] Wang, J. et al. Visual concepts and compositional voting. arXiv preprint arXiv:1711.04451 (2017).
[27] Dawson, M., Zisserman, A. & NellËaker, C. From same photo: Cheating on visual kinship challenges. In Asian Conference on Computer Vision, 654â668 (Springer, 2018).
[28] Biederman, I. On the semantics of a glance at a scene (Hillsdale, NJ: Erlbaum, 1981).
[29] Biederman, I., Mezzanotte, R. J. & Rabinowitz, J. C. Scene perception: Detect- ing and judging objects undergoing relational violations. Cognitive Psychology 14, 143â177 (1982).
[30] Oliva, A. & Torralba, A. The role of context in object recognition. Trends in Cog- nitive Sciences 11, 520â527 (2007).
[31] Castelhano, M. S. & Heaven, C. Scene context influences without scene gist: Eye movements guided by spatial associations in visual search. Psychonomic Bulletin & Review 18, 890â896 (2011).
[32] Jo, J. & Bengio, Y. Measuring the tendency of CNNs to learn surface statistical regularities. arXiv preprint arXiv:1711.11561 (2017).
[33] Ilyas, A. et al. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175 (2019).
# This study shows how learning imperceptible predictive features leads to adversarial vulnerability.
[34] Halevy, A., Norvig, P. & Pereira, F. The unreasonable effectiveness of data. Intelli- gent Systems (2009).
[35] Wolpert, D. H. & Macready, W. G. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1, 67â82 (1997).
[36] Brendel, W. & Bethge, M. Approximating CNNs with bag-of-local-features models In International Conference on Learning works surprisingly well on ImageNet. Representations (2019).
[37] Baker, N., Lu, H., Erlikhman, G. & Kellman, P. J. Deep convolutional networks do not classify based on global object shape. PLoS Computational Biology 14, e1006613 (2018).
20
ImageNet-trained CNNs are biased towards texture; increas- ing shape bias improves accuracy and robustness. In International Conference on Learning Representations (2019).
# This article shows how shortcut feature combination strategies are linked to distortion robustness.
[39] Heinze-Deml, C. & Meinshausen, N. Conditional variance penalties and domain shift robustness. arXiv:1710.11469 (2017).
[40] Malhotra, G. & Bowers, J. What a difference a pixel makes: An empirical exami- nation of features used by CNNs for categorisation. In International Conference on Learning Representations (2019).
[41] Jacobsen, J.-H., Behrmann, J., Zemel, R. & Bethge, M. Excessive invariance causes adversarial vulnerability. In International Conference on Learning Representations (2019).
[42] Kamin, L. J. Predictability, surprise, attention, and conditioning. Punishment and aversive behavior 279â96 (1969).
[43] Dickinson, A. Contemporary animal learning theory, vol. 1 (CUP Archive, 1980).
[44] Bouton, M. E. Learning and behavior: A contemporary synthesis. (Sinauer Asso- ciates, 2007).
[45] Nguyen, A., Yosinski, J. & Clune, J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition, 427â436 (IEEE, 2015).
[46] Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J. & Song, D. Natural adversarial examples. arXiv preprint arXiv:1907.07174 (2019).
[47] Azulay, A. & Weiss, Y. Why do deep convolutional networks generalize so poorly to small image transformations? arXiv:1805.12177 (2018).
[48] Wang, M. & Deng, W. Deep visual domain adaptation: A survey. Neurocomputing 312, 135â153 (2018).
[49] Alcorn, M. A. et al. Strike (with) a pose: Neural networks are easily fooled by strange poses of familiar objects. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition (2019).
[50] Dodge, S. & Karam, L. Human and DNN classification performance on images with quality distortions: A comparative study. ACM Transactions on Applied Perception (TAP) 16, 7 (2019).
[51] Geirhos, R. et al. Generalisation in humans and deep neural networks. In Advances in Neural Information Processing Systems (2018).
[52] Hendrycks, D. & Dietterich, T. Benchmarking neural network robustness to com- mon corruptions and perturbations. In International Conference on Learning Rep- resentations (2019).
[53] Michaelis, C. et al. Benchmarking robustness in object detection: Autonomous driving when winter is coming. arXiv preprint arXiv:1907.07484 (2019).
[54] Minderer, M., Bachem, O., Houlsby, N. & Tschannen, M. Automatic shortcut re- moval for self-supervised representation learning. arXiv preprint arXiv:2002.08822 (2020).
21
[55] Goyal, Y., Khot, T., Summers-Stay, D., Batra, D. & Parikh, D. Making the V in VQA matter: Elevating the role of image understanding in Visual Question An- swering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 6904â6913 (2017).
[56] Gururangan, S. et al. Annotation artifacts in Natural Language Inference data. arXiv preprint arXiv:1803.02324 (2018).
# This article highlights how Natural Language Inference models learn heuristics that exploit superficial cues.
[57] Kaushik, D. & Lipton, Z. C. How much reading does reading comprehen- arXiv preprint sion require? A critical investigation of popular benchmarks. arXiv:1808.04926 (2018).
[58] Geva, M., Goldberg, Y. & Berant, J. Are we modeling the task or the annotator? An investigation of annotator bias in natural language understanding datasets. arXiv preprint arXiv:1908.07898 (2019).
[59] Poliak, A., Naradowsky, J., Haldar, A., Rudinger, R. & Van Durme, B. Hypothesis only baselines in Natural Language Inference. arXiv preprint arXiv:1805.01042 (2018).
[60] Kavumba, P. et al. When choosing plausible alternatives, Clever Hans can be clever. arXiv preprint arXiv:1911.00225 (2019).
[61] McCoy, R. T., Pavlick, E. & Linzen, T. Right for the wrong reasons: Di- arXiv preprint agnosing syntactic heuristics in Natural Language Inference. arXiv:1902.01007 (2019).
[62] Agrawal, A., Batra, D. & Parikh, D. Analyzing the behavior of visual question answering models. arXiv preprint arXiv:1606.07356 (2016).
[63] Belinkov, Y. & Bisk, Y. Synthetic and natural noise both break neural machine translation. arXiv preprint arXiv:1711.02173 (2017).
[64] Jia, R. & Liang, P. Adversarial examples for evaluating reading comprehension systems. arXiv preprint arXiv:1707.07328 (2017).
[65] Glockner, M., Shwartz, V. & Goldberg, Y. Breaking NLI systems with sentences that require simple lexical inferences. arXiv preprint arXiv:1805.02266 (2018).
[66] Radford, A. et al. Language models are unsupervised multitask learners. OpenAI Blog 1 (2019).
[67] Murphy VII, T. The first level of Super Mario Bros. is easy with lexicographic orderings and time travel. The Association for Computational Heresy (SIGBOVIK) 2013 112 (2013).
[68] Amodei, D. et al. Concrete problems in AI safety. arXiv preprint arXiv:1606.06565 (2016).
[69] Lehman, J. et al. The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communi- ties. arXiv preprint arXiv:1803.03453 (2018).
This paper provides a comprehensive collection of anecdotes about shortcut learning / reward hacking in Reinforcement Learning and beyond.
22
[70] Tobin, J. et al. Domain randomization for transferring deep neural networks from simulation to the real world. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 23â30 (IEEE, 2017).
[71] Akkaya, I. et al. Solving Rubikâs Cube with a robot hand. arXiv:1910.07113 (2019).
[72] Zhao, J., Wang, T., Yatskar, M., Ordonez, V. & Chang, K.-W. Men also like shop- ping: Reducing gender bias amplification using corpus-level constraints. arXiv preprint arXiv:1707.09457 (2017).
# This study shows how algorithms amplify social biases to boost per- formance.
[73] Rich, A. S. & Gureckis, T. M. Lessons for artificial intelligence from the study of natural stupidity. Nature Machine Intelligence 1, 174 (2019).
[74] Hashimoto, T. B., Srivastava, M., Namkoong, H. & Liang, P. Fairness without de- mographics in repeated loss minimization. arXiv preprint arXiv:1806.08010 (2018).
[75] Russakovsky, O. et al. ImageNet Large Scale Visual Recognition Challenge. Inter- national Journal of Computer Vision 115, 211â252 (2015).
[76] Zellers, R., Holtzman, A., Bisk, Y., Farhadi, A. & Choi, Y. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830 (2019).
[77] Marr, D. Vision: A computational investigation into the human representation and processing of visual information (W.H. Freeman and Company, San Francisco, 1982).
[78] Borowski, J. et al. The notorious difficulty of comparing human and machine per- ception. In NeurIPS Shared Visual Representations in Human and Machine Intelli- gence Workshop (2019).
# The case studies presented in this article highlight the difficulty of interpreting machine behaviour in the presence of shortcut learning.
[79] Buckner, C. The Comparative Psychology of Artificial Intelligences (2019). URL http://philsci-archive.pitt.edu/16034/.
# This opinionated article points out important caveats when compar- ing human to machine intelligence.
[80] Morgan, C. L. Introduction to Comparative Psychology. (rev. ed.). New York: Scribner (1903).
[81] Hays, J. & Efros, A. A. Scene completion using millions of photographs. ACM Transactions on Graphics (TOG) 26, 4 (2007).
[82] Hays, J. & Efros, A. A. IM2GPS: estimating geographic information from a single In Proceedings of the IEEE Conference on Computer Vision and Pattern image. Recognition, 1â8 (IEEE, 2008).
[83] Jasani, B., Girdhar, R. & Ramanan, D. Are we asking the right questions in MovieQA? In Proceedings of the IEEE International Conference on Computer Vision Workshops, 0â0 (2019).
[84] Cichy, R. M. & Kaiser, D. Deep neural networks as scientific models. Trends in Cognitive Sciences (2019).
[85] Ghahramani, Z. Panel of workshop on advances in Approximate Bayesian In- ference (AABI) 2017 (2017). URL https://www.youtube.com/watch?v= x1UByHT60mQ&feature=youtu.be&t=37m44s.
23
[86] Marton, F. & S¨aalj¨o, R. On qualitative differences in learningâII Outcome as a function of the learnerâs conception of the task. British Journal of Educational Psychology 46, 115â127 (1976).
[87] Biggs, J. Individual differences in study processes and the quality of learning out- comes. Higher Education 8, 381â394 (1979).
[88] Chin, C. & Brown, D. E. Learning in science: A comparison of deep and surface approaches. Journal of Research in Science Teaching 37, 109â138 (2000).
# This article from the field of Education reflects upon ways to achieve a better overlap between educational objectives and the way students learn.
[89] Marcus, G. F. Rethinking eliminative connectionism. Cognitive Psychology 37, 243â282 (1998).
[90] Kilbertus, N., Parascandolo, G. & Sch¨olkopf, B. Generalization in anti-causal learn- ing. arXiv preprint arXiv:1812.00524 (2018).
[91] Marcus, G. Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631 (2018).
[92] Lapuschkin, S. et al. Unmasking Clever Hans predictors and assessing what ma- chines really learn. Nature Communications 10, 1096 (2019).
This study highlights how shortcut learning can lead to deceptively good results on standard metrics.
[93] Lake, B. M., Ullman, T. D., Tenenbaum, J. B. & Gershman, S. J. Building machines that learn and think like people. arXiv preprint arXiv:1604.00289 (2016).
[94] Chollet, F. The measure of intelligence. arXiv preprint arXiv:1911.01547 (2019).
[95] Crosby, M., Beyret, B. & Halina, M. The Animal-AI Olympics. Nature Machine Intelligence 1, 257â257 (2019).
[96] Juliani, A. et al. Obstacle tower: A generalization challenge in vision, control, and planning. arXiv preprint arXiv:1902.01378 (2019).
[97] Engstrom, L. et al. A discussion of âadversarial examples are not bugs, they are featuresâ. Distill (2019).
[98] Barbu, A. et al. ObjectNet: a large-scale bias-controlled dataset for pushing the limits of object recognition models. In Advances in Neural Information Processing Systems, 9448â9458 (2019).
[99] Li, D., Yang, Y., Song, Y.-Z. & Hospedales, T. M. Deeper, broader and artier domain generalization. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, 5542â5550 (2017).
[100] Creager, E. et al. Flexibly fair representation learning by disentanglement. arXiv preprint arXiv:1906.02589 (2019).
[101] Recht, B., Roelofs, R., Schmidt, L. & Shankar, V. Do CIFAR-10 classifiers gener- alize to CIFAR-10? arXiv preprint arXiv:1806.00451 (2018).
[102] Recht, B., Roelofs, R., Schmidt, L. & Shankar, V. Do ImageNet classifiers general- ize to ImageNet? arXiv preprint arXiv:1902.10811 (2019).
[103] Levesque, H., Davis, E. & Morgenstern, L. The Winograd Schema Challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning (2012).
24
[104] Trichelair, P., Emami, A., Trischler, A., Suleman, K. & Cheung, J. C. K. How rea- sonable are common-sense reasoning tasks: A case-study on the Winograd Schema Challenge and SWAG. In Proceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3373â3378 (2019).
[105] Hornik, K., Stinchcombe, M. & White, H. Multilayer feedforward networks are universal approximators. Neural Networks 2, 359â366 (1989).
Finding the needle in the haystack with convolutions: On the benefits of architectural bias. arXiv preprint arXiv:1906.06766 (2019).
[107] Ulyanov, D., Vedaldi, A. & Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 9446â9454 (2018).
[108] Vaswani, A. et al. Attention is all you need. In Advances in Neural Information Processing Systems, 5998â6008 (2017).
[109] Hein, M., Andriushchenko, M. & Bitterwolf, J. Why ReLU networks yield high- confidence predictions far away from the training data and how to mitigate the problem. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 41â50 (2019).
[110] Madry, A., Makelov, A., Schmidt, L., Tsipras, D. & Vladu, A. Towards deep learn- ing models resistant to adversarial attacks. In International Conference on Learning Representations (2018).
[111] Arjovsky, M., Bottou, L., Gulrajani, I. & Lopez-Paz, D. Invariant risk minimization. arXiv preprint arXiv:1907.02893 (2019).
[112] Wu, L., Zhu, Z. & E, W. Towards understanding generalization of deep learning: Perspective of loss landscapes. arXiv preprint arXiv:1706.10239 (2017).
[113] De Palma, G., Kiani, B. T. & Lloyd, S. Deep neural networks are biased towards simple functions. arXiv preprint arXiv:1812.10156 (2018).
[114] Valle-Perez, G., Camargo, C. Q. & Louis, A. A. Deep learning generalizes because In International the parameter-function map is biased towards simple functions. Conference on Learning Representations (2019).
[115] Sun, K. & Nielsen, F. Lightlike neuromanifolds, Occamâs Razor and deep learning. arXiv preprint arXiv:1905.11027 (2019).
[116] Arpit, D. et al. A closer look at memorization in deep networks. In International Conference on Machine Learning (2017).
[117] Li, Y., Wei, C. & Ma, T. Towards explaining the regularization effect of initial large learning rate in training neural networks. arXiv preprint arXiv:1907.04595 (2019).
[118] Bartlett, P. L., Long, P. M., Lugosi, G. & Tsigler, A. Benign overfitting in linear regression. arXiv preprint arXiv:1906.11300 (2019).
[119] Zipf, G. K. Human Behavior and the Principle of Least Effort (Addison-Wesley press, 1949).
[120] Ohala, J. J. The phonetics and phonology of aspects of assimilation. Papers in Laboratory Phonology 1, 258â275 (1990).
[121] Vicentini, A. The economy principle in language. Notes and Observations from early modern English grammars. Mots, Palabras, Words 3, 37â57 (2003).
25
[122] Sinz, F. H., Pitkow, X., Reimer, J., Bethge, M. & Tolias, A. S. Engineering a less artificial intelligence. Neuron 103, 967â979 (2019).
[123] Cohen, T. & Welling, M. Group equivariant convolutional networks. In Interna- tional Conference on Machine Learning, 2990â2999 (2016).
[124] Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V. & Le, Q. V. Autoaugment: Learn- ing augmentation strategies from data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 113â123 (2019).
[125] Berthelot, D. et al. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249 (2019).
[126] Hjelm, R. D. et al. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670 (2018).
[127] Oord, A. v. d., Li, Y. & Vinyals, O. Representation learning with contrastive predic- tive coding. arXiv preprint arXiv:1807.03748 (2018).
[128] Sch¨olkopf, B. Causality for machine learning. arXiv preprint arXiv:1911.10500 (2019).
[129] Schott, L., Rauber, J., Bethge, M. & Brendel, W. Towards the first adversarially robust neural network model on MNIST. In International Conference on Learning Representations (2019).
[130] Engstrom, L. et al. Learning perceptually-aligned representations via adversarial robustness. arXiv:1906.00945 (2019).
[131] Peters, J., B¨uhlmann, P. & Meinshausen, N. Causal inference by using invariant prediction: identification and confidence intervals. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 78, 947â1012 (2016).
[132] Dwork, C., Hardt, M., Pitassi, T., Reingold, O. & Zemel, R. Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214â226 (2012).
[133] Zemel, R., Wu, Y., Swersky, K., Pitassi, T. & Dwork, C. Learning fair representa- tions. In International Conference on Machine Learning, 325â333 (2013).
[134] Hardt, M., Price, E. & Srebro, N. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems, 3315â3323 (2016).
[135] Kusner, M. J., Loftus, J., Russell, C. & Silva, R. Counterfactual fairness. In Ad- vances in Neural Information Processing Systems, 4066â4076 (2017).
[136] Schmidhuber, J. Evolutionary principles in self-referential learning. On learning how to learn: The meta-meta-... hook.) Diploma thesis, Institut f. Informatik, Tech. Univ. Munich 1, 2 (1987).
[137] Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D. & Lillicrap, T. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, 1842â1850 (2016).
[138] Finn, C., Abbeel, P. & Levine, S. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (2017).
[139] Bengio, Y. et al. A meta-transfer objective for learning to disentangle causal mech- anisms. arXiv preprint arXiv:1901.10912 (2019).
26
[140] Fetaya, E., Jacobsen, J.-H., Grathwohl, W. & Zemel, R. Understanding the limi- tations of conditional generative models. In International Conference on Learning Representations (2020).
[141] Higgins, I. et al. Beta-VAE: Learning basic visual concepts with a constrained vari- ational framework. International Conference on Learning Representations (2017).
[142] Hyv¨arinen, A. & Oja, E. Independent component analysis: algorithms and applica- tions. Neural Networks 13, 411â430 (2000).
[143] Richardson, J. Vectors: aphorisms & ten-second essays (Ausable Press, 2001).
27
# Appendix
# A Toy example: method details
The code to reproduce our toy example (Figure 2) is available from https://github.com/rgeirhos/ shortcut-perspective. Two easily distinguishable shapes (star and moon) were placed on a 200 Ã 200 dimensional 2D canvas. The training set is constructed out of 4000 images, where 2000 contain a star shape and 2000 a moon shape. The star shape is randomly placed in the top right and bottom left quarters of the canvas, whereas the moon shape is randomly placed in the top left and bottom right quarters of the canvas. At test time the setup is nearly identical, 1000 images with a star and 1000 images with a moon are presented. However, this time the position of star and moon shapes are randomised over the full canvas, i.e. in test images stars and moons can appear at any location.
We train two classifiers on this dataset: a fully connected network as well as a convolutional network. The classifiers are trained for five epochs with a batch size of 100 on the training set and evaluated on the test set. The training objective is standard crossentropy loss and the optimizer is Adam with a learning rate of 0.00001, β1 = 0.9, β2 = 0.999 and ε = 1e â 08. The fully connected network was a three-layer ReLU MLP (multilayer perceptron) with 1024 units in each layer and two output units corresponding to the two target classes. It reaches 100% accuracy at training time and approximately chance-level accuracy at test time (51.0%). The convolutional network had three convolutional layers with 128 channels, a stride of 2 and filter size of 5 à 5 interleaved with ReLU nonlinearities, followed by a global average pooling and a linear layer mapping the 128 outputs to the logits. It reaches 100% accuracy on train and test set.
# B Image rights & attribution
Figure 1 consists of four images from different sources. The first image from the left was taken from https://aiweirdness.com/post/171451900302/do-neural-nets-dream-of-electric-sheep with permission of the author. The second image from the left was generated by ourselves. The third image from the left is from ref. [15]. It was released under the CC BY 4.0 license as stated here: https://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.1002683 and adapted by us from Figure 2B of the corresponding publication. The image on the right It was released under CC BY 4.0 license as stated here: https: is Figure 1 from ref. [64]. //www.aclweb.org/anthology/D17-1215/(at the bottom) and retrieved by us from .
The image from Section 4.1 was adapted from Figure 1 of ref. [9] with permission from the authors (image cropped from original figure by us). The image from Section 4.2 was adapted from Figure 1 of ref. [38] with permission from the authors (image cropped from original figure by us). The image from Section 4.3 was adapted from Figure 1 of ref. [45] with permission from the authors (image cropped from original figure by us).
Figure 4 consists of a number of images from different sources. The first author of the corre- sponding publication is mentioned in the figure for identification. The images from ref. [8] were released under the CC BY 3.0 license as stated here: https://arxiv.org/abs/1312.6199 and adapted by us from Figure 5a of the corresponding publication (images cropped from original fig- ure by us). The images from ref. [50] were adapted from Figure 1 of the corresponding paper with permission from the authors (images cropped from original figure by us). The images from ref. [49] were adapted from Figure 1 of the corresponding paper with permission from the authors (images cropped from original figure by us). The images from ref. [38] were adapted from Figure 1 of the corresponding paper with permission from the authors (images cropped from original figure by us). The images from ref. [41] were adapted from Figure 1 of the corresponding paper with per- mission from the authors (images cropped from original figure by us). The images from ref. [36] were adapted from Figure 5 of the corresponding paper with permission from the authors (images cropped from original figure by us). The images from ref. [9] were adapted from Figure 1 of the
28
corresponding paper with permission from the authors (images cropped from original figure by us). The images from ref. [45] were adapted from Figure 1 and Figure 2 of the corresponding paper with permission from the authors (images cropped from original figures by us).
29 | {
"id": "1801.00631"
} |
2004.07347 | HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data | Existing question answering datasets focus on dealing with homogeneous
information, based either only on text or KB/Table information alone. However,
as human knowledge is distributed over heterogeneous forms, using homogeneous
information alone might lead to severe coverage problems. To fill in the gap,
we present HybridQA https://github.com/wenhuchen/HybridQA, a new large-scale
question-answering dataset that requires reasoning on heterogeneous
information. Each question is aligned with a Wikipedia table and multiple
free-form corpora linked with the entities in the table. The questions are
designed to aggregate both tabular information and text information, i.e., lack
of either form would render the question unanswerable. We test with three
different models: 1) a table-only model. 2) text-only model. 3) a hybrid model
that combines heterogeneous information to find the answer. The experimental
results show that the EM scores obtained by two baselines are below 20\%, while
the hybrid model can achieve an EM over 40\%. This gap suggests the necessity
to aggregate heterogeneous information in HybridQA. However, the hybrid model's
score is still far behind human performance. Hence, HybridQA can serve as a
challenging benchmark to study question answering with heterogeneous
information. | http://arxiv.org/pdf/2004.07347 | Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, William Wang | cs.CL, cs.AI | Accepted to Proceedings of EMNLP 2020 (Findings) | null | cs.CL | 20200415 | 20210511 | 1 2 0 2
y a M 1 1 ] L C . s c [
3 v 7 4 3 7 0 . 4 0 0 2 : v i X r a
# HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data
Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, William Wang University of California, Santa Barbara, CA, USA {wenhuchen, hwzha, xwhan}@cs.ucsb.edu, {zhiyuchen, hongwang600, william}@cs.ucsb.edu
# Abstract
Existing question answering datasets focus on dealing with homogeneous information, based either only on text or KB/Table information alone. However, as human knowledge is dis- tributed over heterogeneous forms, using ho- mogeneous information alone might lead to in the severe coverage problems. gap, we present HybridQA1, a new large-scale question-answering dataset that requires rea- soning on heterogeneous information. Each question is aligned with a Wikipedia table and multiple free-form corpora linked with the en- tities in the table. The questions are designed to aggregate both tabular information and text information, i.e., lack of either form would ren- der the question unanswerable. We test with three different models: 1) a table-only model. 2) text-only model. 3) a hybrid model that combines heterogeneous information to ï¬nd the answer. The experimental results show that the EM scores obtained by two baselines are below 20%, while the hybrid model can achieve an EM over 40%. This gap suggests the necessity to aggregate heterogeneous in- formation in HybridQA. However, the hybrid modelâs score is still far behind human perfor- mance. Hence, HybridQA can serve as a chal- lenging benchmark to study question answer- ing with heterogeneous information.
# Introduction
Question answering systems aim to answer any form of question of our interests, with evidence provided by either free-form text like Wikipedia passages (Rajpurkar et al., 2016; Chen et al., 2017; Yang et al., 2018) or structured data like Free- base/WikiData (Berant et al., 2013; Kwiatkowski et al., 2013; Yih et al., 2015; Weston et al., 2015) and WikiTables (Pasupat and Liang, 2015). Both
# 1https://github.com/wenhuchen/HybridQA
forms have their advantages, the free-form cor- pus has in general better coverage while structured data has better compositionality to handle complex multi-hop questions. Due to the advantages of dif- ferent representation forms, people like to combine them in real world applications. Therefore, it is sometime not ideal to assume the question has an- swer in a passage. This paper aims to simulate a more realistic setting where the evidences are dis- tributed into heterogeneous data, and the model requires to aggregate information from different forms for answering a question. There has been some pioneering work on building hybrid QA sys- tems (Sun et al., 2019, 2018; Xiong et al., 2019). These methods adopts KB-only datasets (Berant et al., 2013; Yih et al., 2015; Talmor and Berant, 2018) to simulate a hybrid setting by randomly masking KB triples and replace them with text corpus. Experimental results have proved decent improvement, which shed lights on the potential of hybrid question answering systems to integrate heterogeneous information.
Though there already exist numerous valuable questions answering datasets as listed in Table 1, these datasets were initially designed to use either structured or unstructured information during an- notation. There is no guarantee that these ques- tions need to aggregate heterogeneous information to ï¬nd the answer. Therefore, designing hybrid question answering systems would probably yield marginal beneï¬ts over the non-hybrid ones, which greatly hinders the research development in build- ing hybrid question answering systems.
To ï¬ll in the gap, we construct a heterogeneous QA dataset HYBRIDQA, which is collected by crowdsourcing based on Wikipedia tables. During annotation, each crowd worker is presented with a table along with its hyperlinked Wikipedia pas- sages to propose questions requiring aggregating both forms of information. The dataset consists of
The 2016 Summer Olympics officially known as the Games of the XXxI | Olympiad (Portuguese : Jogos da XXXI Olimpiada) and commonly known Rio 2016 , was an international multi-sport event .. a Name Season Flag bearer XXX! summer Yan Naing Soe XXX [2012 summer Zaw Win Thet XxXIX | 2008 summer Phone Myint Tayzar XXvIIL | 2004 Summer Hla Win U XXvIl_ | 2000 summer Maung Maung Nge XX 1972 summer Win Maung n Naing Soe ( born 31 January 1979 ) is a Burmese judoka . He competed at the 2016 Summer Olympics in the men 's 100 kg event, | Zaw Win Thet ( born 1 March 1991 in Kyonpyaw , Pathein District , Ayeyarwady Division , Myanmar ) is a Burmese runner who Q: In which year did the judoka bearer participate in the Olympic opening ceremony? Q: Which event does the does the XXX! Olympic flag bearer participate in? A: menâs 100 kg event Q: Where does the Burmesse jodoka participate in the Olympic opening ceremony as a flag bearer? A: Rio Q: For the Olympic event happening after 2014, what session does the Flag bearer participate? A: Parade of Nations Hardness Q: For the XXXI and XXX Olympic event, which has an older flag bearer? A: XXXI Q: When does the oldest flag Burmese bearer participate in the Olympic ceremony? A: 1972
Figure 1: Examples of annotated question answering pairs from Wikipedia page2. Underlined entities have hyper- linked passages, which are displayed in the boxes. The lower part shows the human-annotated question-answer pairs roughly categorized based on their hardness.
Dataset WebQuestions WebQSP WebQComplex MetaQA WikiTableQA Size 5.8K 4.7K 34K 400k 22K #Docs no no no no no KB/ Table yes yes yes yes yes Multi-Hop yes yes yes yes yes SQuAD-v1 DROP TriviaQA HotpotQA Natural-QA 107K 99K 95K 112K 300K 1 1 >1 >1 >1 no no no no no no yes no yes yes HYBRIDQA 70K >>1 yes yes
Table 1: Comparison of existing datasets, where #docs means the number of documents provided for a spe- ciï¬c question. 1) KB-only datasets: WebQuestions (Be- rant et al., 2013), WebQSP (Yih et al., 2016), Web- Complex (Talmor and Berant, 2018), MetaQA (Zhang et al., 2018), WikiTableQuestion (Pasupat and Liang, 2015). 2) Text-only datasets with single passage: like SQuAD (Rajpurkar et al., 2016), DROP (Dua et al., 2019). 3) open-domain Text-Only dataset: Trivi- aQA (Joshi et al., 2017), HotpotQA (Yang et al., 2018), Natural Questions (Kwiatkowski et al., 2019).
roughly 70K question-answering pairs aligned with 13,000 Wikipedia tables. As Wikitables (Bhaga- vatula et al., 2013) are curated from high-standard professionals to organize a set of information re- garding a given theme, its information is mostly absent in the text. Such a complementary nature makes WikiTables an ideal environment for hybrid question answering. To ensure that the answers cannot be hacked by single-hop or homogeneous
#Table 13,000 #Passage 293,269 #Row/#Column 15.7/4.4 #Words/Passage 103 #Cell 70 #Ques 69,611 #Links/Table 44 #Words/Ques 18.9
Table 2: Statistics of Table and Passage in our dataset.
models, we carefully employ different strategies to calibrate the annotation process. An example is demonstrated in Figure 1. This table is aimed to de- scribe Burmese ï¬ag bearers over different Olympic events, where the second column has hyperlinked passages about the Olympic event, and the fourth column has hyperlinked passages about biography individual bearers. The dataset is both multi-hop and hybrid in the following senses: 1) the question requires multiple hops to achieve the answer, each reasoning hop may utilize either tabular or textual information. 2) the answer may come from either the table or a passage.
In our experiments, we implement three mod- els, namely Table-only model, Passage-only, and a heterogeneous model HYBRIDER, which com- bines both information forms to perform multi-hop reasoning. Our Experiments show that two ho- mogeneous models only achieve EM lower than 20%, while HYBRIDER can achieve an EM over 40%, which concludes the necessity to do multi- hop reasoning over heterogeneous information on HYBRIDQA. As the HYBRIDER is still far behind human performance, we believe that it would be a challenging next-problem for the community.
# 2 Dataset
In this section, we describe how we crawl high- quality tables with their associated passages, and then describe how we collect hybrid questions. The statistics of HYBRIDQA is in Table 2.
Table/Passage Collection To ease the annota- tion process, we apply the following rules during ta- ble crawling. 1) we need tables with rows between 5-20, columns between 3-6, which is appropriate for the crowd-workers to view. 2) we restrain the tables from having hyperlinked cells over 35% of its total cells, which provide an abundant amount of textual information. For each hyperlink in the table, we retrieve its Wikipedia page and crop at most the ï¬rst 12 sentences from its introduction ses- sion as the associated passage. 3) we apply some additional rules to avoid improper tables and ï¬nally collect 13,000 high-quality tables.
Question/Answer Collection We release 13K HITs (human intelligence task) on the Amazon Me- chanical Turk platform, where each HIT presents the crowd-worker with one crawled Wikipedia ta- ble along with its hyperlinked passages. We require the worker to write six questions as well as their answers. The question annotation phase is not triv- ial, as we speciï¬cally need questions that rely on both tabular and textual information. In order to achieve that, we exemplify abundant examples in our Amazon Turker interface with detailed expla- nations to help crowd-workers to understand the essence of the âhybridâ question. The guidelines are described as follows:
⢠The question requires multiple steps over two information forms of reasoning to answer.
in- step cludes (i) ï¬lter our table rows based on equal/greater/less, e.g. âFor the XXXI Olympic eventâ, (ii)) superlative operation over a column, e.g. âthe earliest Olympic eventâ, (iii) hop between two cells, e.g. âWhich event ... participate in ...â, (iv) extract information from table, e.g. âIn which year did the player ... â.
⢠Text reasoning step speciï¬cally includes (i) select passages based on the certain mentions, e.g. âthe judoka bearerâ, (ii) extract a span from the passage as the answer.
⢠The answer should be a minimum text span from either a table cell or a speciï¬c passage.
Based on the above criteria, we hire ï¬ve CS- majored graduate students as our âhuman expertâ to decide the acceptance of a HIT. The average completion time for one HIT is 12 minutes, and payment is $2.3 U.S. dollars/HIT.
Annotation De-biasing As has been suggested in previous papers (Kaushik and Lipton, 2018; Chen and Durrett, 2019; Clark et al., 2019), the ex- isting benchmarks on multi-hop reasoning question answering have annotation biases, which makes de- signing multi-hop models unnecessary. We discuss different biases and our prevention as follows:
⢠Table Bias: our preliminary study observes that the annotators prefer to ask questions re- garding the top part of the table. In order to deal with this issue, we explicitly high- light certain regions in the table to encourage crowd-workers to raise questions regarding the given uniformly-distributed regions.
⢠Passage Bias: the preliminary study shows that the annotators like to ask questions re- garding the ï¬rst few sentences in the passage. In order to deal with such a bias, we use an algorithm to match the answer with linked passages to ï¬nd their span and reject the HITs, which have all the answers centered around the ï¬rst few sentences.
⢠Question Bias: the most difï¬cult bias to deal with is the âfakeâ hybrid question like âwhen is 2012 Olympic Burmese runner ï¬ag bearer born?â for the table listed in Figure 1. Though it seems that â2012 Olympicâ is needed to perform hop operation on the table, the ârun- ner ï¬ag bearerâ already reviews the bearer as âZaw Win Thetâ because there is no other run- ner bearer in the table. With that said, reading the passage of âZaw Win Thetâ alone can sim- ply lead to the answer. In order to cope with such a bias, we ask âhuman expertsâ to spot such questions and reject them.
Statistics After we harvest the human annota- tions from 13K HITs (78K questions), we trace back the answers to its source (table or passage). Then we apply several rules to further ï¬lter out low- quality annotations: 1) the answer cannot be found from either table or passage, 2) the answer is longer
than 20 words, 3) using a TF-IDF retriever can di- rectly ï¬nd the answer passage with high similarity without relying on tabular information.
We ï¬lter the question-answer pairs based on the previous criteria and release the ï¬ltered version. As our goal is to solve multi-hop hybrid questions requiring a deeper understanding of heterogeneous information. We follow HotpotQA (Yang et al., 2018) to construct a more challenging dev/test split in our benchmark. Speciï¬cally, we use some statistical features like the âsize of the tableâ, âsimilarity between answer passage and questionâ, âwhether question directly mentions the ï¬eldâ, etc. to roughly classify the question into two difï¬culty levels: simple (65%) and hard (35%). We con- struct our dev and test set by sampling half-half from the two categories. We match the answer span against all the cells and passages in the table and divide the answer source into three categories: 1) the answer comes from a text span in a table cell, 2) the answer comes from a certain linked pas- sage, 3) the answer is computed by using numerical operation like âcountâ, âaddâ, âaverageâ, etc. The matching process is approximated, not guaranteed to be 100% correct. We summarize our ï¬ndings in Table 3. In the following experiments, we will report the EM/F1 score for these ï¬ne-grained ques- tion types to better understand our results.
Split Train Dev Test Total In-Passage In-Table Computed Total 35,215 26,803 664 62,682 2,025 1,349 92 3,466 20,45 1,346 72 3,463 39,285 (56.4%) 29,498 (42.3%) 828 (1.1%) 69,611
Table 3: Data Split: In-Table means the answer comes from plain text in the table, and In-Passage means the answer comes from certain passage.
# 3 Data Analysis
In this section, we speciï¬cally analyze the differ- ent aspects of the dataset to provide the overall characteristics of the new dataset.
# 3.1 Question Types
We heuristically identiï¬ed question types for each collected question. To identify the question type, we locate the central question word (CQW) in the question and take the neighboring three to- kens (Yang et al., 2018) to determine the question types. We visualize the distribution in Figure 2, which demonstrates the syntactic diversity of the questions in HYBRIDQA.
Figure 2: The type of questions in HYBRIDQA, ques- tion types are extracted using rules starting at the ques- tion words or preposition before them.
# 3.2 Answer Types
We further sample 100 examples from the dataset and present the types of answers in Table 4. As can be seen, it covers a wide range of answer types. Compared to (Yang et al., 2018), our dataset cov- ers more number-related or date-related questions, which reï¬ects the nature of tabular data.
Answer Type % Example(s) Location Number Date Person Group Event Artwork Adjective Other proper noun Common noun 22 22 20 15 3 3 1 4 8 1 Balestier Road, Atlanta 762 metres ( 2,500 ft ), 15,000 April 25 , 1994, 1913 B¨arbel W¨ockel, Jerry Hallmark Entertainment Battle of Hlobane Songmaster second-busiest Space Opera, CR 131 other musicians
Table 4: Types of answers in HYRBIDQA.
# 3.3 Inference Types
We analyze multi-hop reasoning types in Figure 3. According to our statistics, most of the questions require two or three hops to ï¬nd the answer. 1) Type I question (23.4%) uses Table â Pas- sage chain, it ï¬rst uses table-wise operations (equal/greater/less/ï¬rst/last/argmax/argmin) to lo- cate certain cells in the table, and then hop to their neighboring hyperlinked cells within the same row, ï¬nally extracts a text span from the passage of the
Type | (T->P) Name Event year Pl commonly known as Rio 2016, wasan. Q: Where was the XXXI Olympic held? A: Rio XXXI_ âââ 2016 international multi-sport event ...... Type Il (P->T) ee _ _ _ _ _ Q: What was the name of the Olympic event held (EGS Greabyeer + commonly known as Rio 2016 , was an XXXI. <âââJ} 2016 |, | international multi-sport event ...... Type Ill (P->T->P) Q: When was the flag bearer of Rio Olympic born? A: 31 January 1979 Yan Naing Soe (born 1 | 31 January 1979) ... | | ... commonly known as Rio 2016 , wasan international Flag Bearer Yan Naing Soe +} 2016 Event year Type IV (T&S' [Yan Naing Soe ...Menâs | Flag Bearer Gender Q: Which male bearer participated in Menâs 100kg event . . Sed ââ+ Yan Naing Soe +ââ Male 100kg event in the Olympic game? | Zaw Win Thet ... A: Yan Naing Soe Men's 400m running Zaw Win Thet + Male ya, a The a Type V (T-Compare | P-Compare) Yan Naing Soe (born 31 Flag Bearer Event year : i J 1979) .... Q: For the 2012 and 2016 Olympic Event, when (zenuary © ES _ Yan Naing Soe P| 2016 was the younger flag bearer born? Zaw Win Thet (born 1 A: 1 March 1991 | March 1991) Zaw Win Thet â_]2012 Type VI (T-Superlative | P-Superlative) _ __ GiSeiBeStey Suu Yer Q: When did the youngest Burmese flag bearer | Yan ... 31 January 1979)... p-------~ >|YanNaingSoe 7 | 2016 participate in the Olympic opening ceremony? March 1991) ... i>] Zaw Win Thet ââ>| 2012 A: 2012 âââââ oe July 21978) 0 beoooooo> | Phone Myint Tayzar --------- >| 2008
Figure 3: Illustration of different types of multi-hop questions.
hyperlinked cell as the answer. 2) Type II question (20.3%) uses Passage â Table chain, it ï¬rst uses cues present in the question to retrieve related passage, which traces back to cer- tain hyperlinked cells in the table, and then hop to a neighboring cell within the same row, ï¬nally extracts text span from that cell. 3) Type III question (35.1%) uses Passage âTableâPassage chain, it follows the same pat- tern as Type II, but in the last step, it hops to a hy- perlinked cell and extracts answer from its linked passage. This is the most common pattern. 4) Type IV question (17.3%) uses Passage and Ta- ble jointly to identify a hyperlinked cell based on table operations and passage similarity and then extract the plain text from that cell as the answer. 5) Type V question (3.1%) involves two parallel reasoning chain, while the comparison is involved in the intermediate step to ï¬nd the answer. 6) Type VI questions (0.8%) involve multiple rea- soning chains, while superlative in involved in the intermediate step to obtain the correct answer.
# 4 Model
In this section, we propose three models we use to perform question answering on HYBRIDQA.
# 4.1 Table-Only Model
In this setting, we design a model that can only rely on the tabular information to ï¬nd the answer. Our model is based on the SQL semantic parser (Zhong
SELECT AGGREGATOR (TARGET) where (CONDITION1, CONDITION2) t t t BERT Classifier Q: Where was the XXI Olympic held? âcommonly known as Rio .» Beijing 2016 , was an international db Olympic (reaver | multi-sport event event +| Retriever a 4 BERT MRC Passage-Only Q: Where was the XXI Olympic held? Table-Only âl
Figure 4: only baselines, both are based on BERT Encoder.
et al., 2017; Xu et al., 2017), which uses a neural network to parse the given questions into a sym- bolic form and execute against the table. We follow the SQLNet (Xu et al., 2017) to ï¬atten the pre- diction of the whole SQL query into a slot ï¬lling procedure. More speciï¬cally, our parser model ï¬rst encode the input question q using BERT (Devlin et al., 2019) and then decode the aggregation, target, condition separately as described in Figure 4. The aggregation slot can have the following values of âargmax, argmin, argmax- date, argmin-dateâ, the target and condition slots have their potential values based on the ta- ble ï¬eld and its corresponding entries. Though we do not have the ground-truth annotation for these simple SQL queries, we can use heuristics to infer them from the denotation. We use the synthesized question-SQL pairs to train the parser model.
# 4.2 Passage-Only Model
In this setting, we design a model that only uses the hyperlinked passages from the given table to ï¬nd the answer. Our model is based on DrQA (Chen et al., 2017), which ï¬rst uses an ensemble of several retrievers to retrieve related documents and then concatenate several documents together to do read- ing comprehension with the state-of-the-art BERT model (Devlin et al., 2019). The basic architecture is depicted in Figure 4, where we use the retriever to retrieve the top-5 passages from the pool and then concatenate them as a document for the MRC model, and the maximum length of the concate- nated document is set to 512.
# 4.3 HYBRIDER
In order to cope with heterogeneous information, we propose a novel architecture called HYBRIDER. We divide the model into two phases as depicted in Figure 6 and describe them separately below:
Linking This phase is aimed to link questions to their related cells from two sources: - Cell Matching: it aims to link cells explicitly men- tioned by the question. The linking consists of three criteria, 1) the cellâs value is explicitly men- tioned, 2) the cellâs value is greater/less than the mentioned value in question, 3) the cellâs value is maximum/minimum over the whole column if the question involves superlative words. - Passage Retriever: it aims to link cells implic- itly mentioned by the question through its hyper- linked passage. The linking model consists of a TD-IDF retriever with 2-3 gram lexicon and a longest-substring retriever, this ensemble retriever calculates the distances with all the passages in the pool and highlight the ones with cosine distance lower than a threshold Ï . The retrieved passages are mapped back to the linked cell in the table.
Country | Name | School Date us e D) Jul 24 cA a jun27 | Erestien Content [Yan Location | (1, 2) Content _| Jun 27, Descrip | Born in ... Yanis Location (2, 4) Ques Source _ | Longest-String Descrip Score |0.6 Plain Cell Retrieved Cell
Retrieved Cell
Figure 5: (green) and plain cells (orange). Illustration of cell encoder of retrieved
the set of cells from these two sources as âretrieved cellsâ denotes by C. Each
{ Retriever wMen... pnlOO ra 2 TF-IDF 2 (Hengestsubstines| a Min/Max. Greater/Less Yan Ere] » male oe) * male Win Cell Match t Yan Cars male D ⬠| | male Win 5 Co co CAO) ors 8 Rank Py a Yan Hop 12006 | Yan RC male | male XxX
Question: When ... male ... in Men's 100kg ... ?
Figure 6: Illustration of the proposed model to perform multi-hop reasoning over table and passage.
retrieved cell c is encoded by 5-element tuple (content, location, description, source, score). Content represents the string representation in the table, Content refers to the absolute row and column index in the table, description refers to the ev- idence sentence in the hyperlinked passage, which gives highest similarity score to question, source denotes where the entry comes from (e.g. equal/argmax/passage/etc), score denotes the score of linked score normalized to [0, 1].
Reasoning This phase is aimed to model the multi-hop reasoning in the table and passage, we specifically break down the whole process into hree stages, namely the ranking stage p(c\q,C), hoping stage p;,(câ|q, c), and the reading compre- hension stage p,(a|P,q). These three stages are modeled with three different neural networks. We first design a cell encoding scheme to encode each cell in the table as depicted in Figure 5: 1) for âre- trieved cellsââ, it contains information for retrieval source and score, 2) for âplain cellsâ (not retrieved), we set the information in source and score to empty. We concatenate them with their table field and ques- ion, and then fed into a encoder module (BERT) 0 obtain its vector representation H,.
1) Ranking model: As the âretriever cellsâ contain many noises, we leverage a ranker model to pre- dict the âcorrectâ linked cells for the next stage. Speciï¬cally, this model takes each cell c along with its neighboring Nc (cells in the same row) and
feed them all into the cell encoder to obtain their representations {Hc}. The representations are ag- gregated and further fed to a feed-forward neural network to obtain a score sc, which is normalized over the whole set of linked cell C as follows:
exp(se) elec CXP( Ser) Ps(ela,C) = qd)
2) Hop model: this model takes the predicted cell from the previous stage and then decide which neighboring cell or itself to hop to. Specifically, we represent each hop pair (c â câ) using their con- catenated representation H,... = [H., H.]. The representation is fed to a feed-forward neural net- work to obtain a hop score s,..., which is normal- ized over all the possible end cells as follows:
exp(Se,c) ol Neve CUP(Se,c"") Pilla) = 5 (2)
3) RC model: this model ï¬nally takes the hopped cell c from last stage and ï¬nd answer from it. If the cell is not hyperlinked, the RC model will simply output its plain text as the answer, otherwise, the plain text of the cell is prepended to the linked passage P (c) for reading comprehension. The prepended passage P and the question are given as the input to the question answering model to predict the score of answerâs start and end index as gs(P, q, index) and ge(P, q, index), which are normalized over the whole passage |P | to calculate the likelihood pr(a|P, q) as follows:
exp(gs(P,q,4s)) 9s(P, q, @e) Diep] exp(gs(P, q,7)) VielP| ge(P, 4, %) pr(a|P, q)
where as is the start index of answer a and ae is the end index of answer a.
By breaking the reasoning process into three stages, we manage to cover the Type-I/II/III/VI questions well. For example, the Type-III question ï¬rst uses the ranking model to select the most likely cell from retrievers, and then use the hop model to jump to neighboring hyperlinked cell, ï¬nally use the RC model to extract the answer.
Training & Inference The three-stage decom- position breaks the question answering likelihood p(a|q, T ) into the following marginal probability:
Srr(clac) =O cec cENeia⬠P(c') pr(Cle,g)pr(a|P(e'), 4)
where the marginalization is over all the linked cells c, and all the neighboring cell with answer a
in its plain text or linked passages. However, di- rectly maximizing the marginal likelihood is unnec- essarily complicated as the marginalization leads to huge computation cost. Therefore, we propose to train the three models independently and then combine them to do inference.
By using the source location of answers, we are able to 1) infer which cells c in the retrieved set C are valid, which can be applied to train the rank- ing model, 2) infer which cell it hops to get the answer, which we can be applied to train the hop model. Though the synthesized reasoning paths are somewhat noisy, it is still enough to be used for training the separate models in a weakly supervised manner. For the RC model, we use the passages containing the ground-truth answer to train it. The independent training avoids the marginalization computation to greatly decrease the computation and time cost. During inference, we apply these three models sequentially to get the answer. Specif- ically, we use greedy search at ï¬rst two steps to remain only the highest probably cell and ï¬nally extract the answer using the RC model.
# 5 Experiments
# 5.1 Experimental Setting
In the linking phase, we set the retrieval threshold Ï to a speciï¬c value. All the passages having distance lower than Ï will be retrieved and fed as input to the reasoning phase. If there is no passage that has been found with a distance lower than Ï , we will simply use the document with the lowest distance as the retrieval result. Increasing Ï can increase the recall of correct passages, but also increase the difï¬culty of the ï¬lter model in the reasoning step. In the reasoning phase, we mainly utilize BERT (Devlin et al., 2019) as our encoder for the cells and passages due to its strong semantic under- standing. Speciï¬cally, we use four BERT variants provided by huggingface library3, namely base- uncased, based-cased, large-uncased, and large- cased. We train the modules all for 3.0 epochs and save their checkpoint ï¬le at the end of each epoch. The ï¬ltering, hop, and RC models use AdamW (Loshchilov and Hutter, 2017) optimizer with learning rates of 2e-6, 5e-6, and 3e-5. We held out a small development set for model selection on the saved checkpoints and use the most performant ones in inference.
3https://github.com/huggingface/ transformers
Model Table-Only Passage-Only HYBRIDER (BERT-base-uncased, Ï =0.7) HYBRIDER (BERT-base-uncased, Ï =0.8) HYBRIDER (BERT-base-uncased, Ï =0.9) HYBRIDER (BERT-large-uncased, Ï =0.8) In-Table EM/F1 14.7/19.1 9.2/13.5 51.2/58.6 51.3/58.4 51.5/58.6 54.3/61.4 Dev In-Passage EM/F1 2.4/4.5 26.1/32.4 39.6/46.4 40.1/47.6 40.5/47.9 39.1/45.7 Total EM/F1 8.4/12.1 19.5/25.1 42.9/50.0 43.5/50.6 43.7/50.9 44.0/50.7 In-Table EM/F1 14.2/18.8 8.9/13.8 50.9/58.6 51.7/59.1 52.1/59.3 56.2/63.3 Test In-Passage EM/F1 2.6/4.7 25.5/32.0 37.4/45.7 37.8/46.0 38.1/46.3 37.5/44.4 Total EM/F1 8.3/11.7 19.1/25.0 41.8/49.5 42.2/49.9 42.5/50.2 43.8/50.6
Table 5: Experimental results of different models, In-Table refers to the subset of questions which have their answers in the table, In-Passage refers to the subset of questions which have their answer in a certain passage.
# 5.2 Evaluation
Following previous work (Rajpurkar et al., 2016), we use exact match (EM) and F1 as two evaluation metrics. F1 metric measures the average overlap be- tween the prediction and ground-truth answers. We assess human performance on a held-out set from the test set containing 500 instances. To evaluate human performance, we distribute each question along with its table to crowd-workers and com- pare their answer with the ground-truth answer. We obtain an estimated accuracy of EM=88.2 and F1=93.5, which is higher than both SQuAD (Ra- jpurkar et al., 2016) and HotpotQA (Yang et al., 2018). The higher accuracy is due to the In-Table questions (over 40%), which have much lesser am- biguity than the text-span questions.
# 5.3 Experimental Results
We demonstrate the experimental results for differ- ent models in Table 5, where we list ï¬ne-grained accuracy over the questions with answers in the cell and passage separately. The In-Table questions are remarkably simpler than In-Passage question be- cause they do not the RC reasoning step; the overall accuracy is roughly 8-10% higher than its counter- part. With the experimented model variants, the best accuracy is achieved with BERT-large-uncased as backend, which can beat the BERT-base-uncased by roughly 2%. However, its performance is still far lagged behind human performance, leaving am- ple room for future research.
Retriever Threshold We also experiment with a different Ï threshold. Having an aggressive re- triever increases the recall of the mentioned cells, but it increases the burden for the ranking model. Having a passive retriever can guarantee the preci- sion of predicted cells, but it also potentially miss evidence for the following reasoning phase. There exist trade-offs between these different modes. In Table 5, we experiment with different Ï during the retrieval stage and ï¬nd that the model is rather stable, which means the model is quite insensitive regarding different threshold values.
# 5.4 Error Analysis
To analyze the cause of the errors in HYBRIDER, we propose to break down into four types as Fig- ure 7. Concretely, linking error is caused by the retriever/linker, which fails to retrieve the most rel- evant cell in the linking phase. In the reasoning phase: 1) ranking error is caused by the ranking model, which fails to assign a high score to the correct retrieved cell. 2) hop error occurs when the correctly ranked cell couldnât hop to the an- swer cell. 3) RC error refers to the hoped cell is correct, but the RC model fails to extract the correct text span from it. We perform our anal-
Lit (87.4%) Ranking Accuracy (87.9%) >
Heterogeneous Reasoning From Table 5, we can clearly observe that using either Table-Only or Passage-Only model achieves a poor accuracy below 20%. In contrast, the proposed HYBRIDER can achieve up to 50% EM increase by leverag- ing both structured and unstructured data during reasoning. It strongly supports the necessity to do heterogeneous reasoning in HYBRIDQA.
Figure 7: The error of HYBRIDER is based on its stages. Pink cell means the answer cell; green means the modelâs prediction; circle means the current cell.
ysis on the full dev set based on the bert-large- uncased model (Ï =0.8), as indicated in Figure 7, the errors are quite uniformly distributed into the four categories except the reading comprehension step is slightly more erroneous. Based on the
step-wise error, we can compute its product as 87.4% à 87.9% à 89.2% à 61.9% â 42% and ï¬nd that the result consistent well the overall accu- racy, which demonstrates the necessity to perform each reasoning step correctly. Such error cascading makes the problem extremely difï¬cult than the pre- vious homogeneous question answering problems. By breaking down the reasoning into steps, HY- BRIDER layouts strong explainability about its ra- tionale, but it also causes error propagation, i.e., the mistakes made in the earlier stage are non- reversible in the following stage. We believe fu- ture research on building an end-to-end reasoning model could alleviate such an error propagation problem between different stages in HYBRIDER.
# 6 Related Work
Text-Based QA Since the surge of SQuAD (Ra- jpurkar et al., 2016) dataset, there have been numerous efforts to tackle the machine read- ing comprehension problem. Different datasets like DrQA (Chen et al., 2017), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017) and DROP (Dua et al., 2019) have been proposed. As the SQuAD (Rajpurkar et al., 2016) questions that are relatively simple because they usually require no more than one sentence in the paragraph to an- swer. The following datasets further challenge the QA modelâs capability to handle different scenarios like open-domain, long context, multi-hop, discrete operations, etc. There has been a huge success in proving that the deep learning model can show strong competence in terms of understanding text- only evidence. Unlike these datasets, HYBRIDQA leverages structured information in the evidence form, where the existing models are not able to han- dle, which distinguishes it from the other datasets.
KB/Table-Based QA Structured knowledge is known as unambiguous and compositional, which absorbs lots of attention to the QA system built on KB/Tables. There have been multiple datasets like WebQuestion (Berant et al., 2013), ComplexWe- bQuestions (Talmor and Berant, 2018), WebQues- tionSP (Yih et al., 2015) on using FreeBase to an- swer natural questions. Besides KB, structured or semi-structured tables are also an interesting form. Different datasets like WikiTableQuestions (Pa- supat and Liang, 2015), WikiSQL (Zhong et al., 2017), SPIDER (Yu et al., 2018), TabFact (Chen et al., 2020) have been proposed to build models which can interact with such structured information.
However, both KB and tables are known to suffer from low coverage issues. Therefore, HYBRIDQA combine tables with text as complementary infor- mation to answer natural questions.
Information Aggregation There are some pio- neering studies on designing hybrid question an- swering systems to aggregate heterogeneous infor- mation. GRAFT (Sun et al., 2018) proposes to use the early fusion system and use heuristics to build a question-speciï¬c subgraph that contains sentences from corpus and entities, facts from KB. PullNet (Sun et al., 2019) improves over GRAFT to use an integrated framework that dynamically learns to retrieve and reason over heterogeneous information to ï¬nd the best answers. More recently, KAReader (Xiong et al., 2019) proposes to refor- mulate the questions in latent space by reading retrieved text snippets under KB-incomplete cases. These models simulate a âfakeâ KB-incomplete sce- nario by masking out triples from KB. In contrast, the questions in HYBRIDQA are inherently hybrid in the sense that it requires both information forms to reason, which makes our testbed more realistic.
# 7 Conclusion
We present HYBRIDQA, which is collected as the ï¬rst hybrid question answering dataset over both tabular and textual data. We release the data to fa- cilitate the current research on using heterogeneous information to answer real-world questions. We design HYBRIDER as a strong baseline and offer interesting insights about the model. We believe HYBRIDQA is an interesting yet challenging next- problem for the community to solve.
# Acknowledgement
The authors would like to thank the anonymous reviewers and area chairs for their thoughtful com- ments. We would like to thank Amazon AWS Ma- chine Learning Research Award for sponsoring the computing resources.
# References
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 conference on empirical methods in natural lan- guage processing, pages 1533â1544.
Chandra Sekhar Bhagavatula, Thanapon Noraset, and Doug Downey. 2013. Methods for exploring and
mining tables on wikipedia. In Proceedings of the ACM SIGKDD Workshop on Interactive Data Explo- ration and Analytics, pages 18â26.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading wikipedia to answer open- In Proceedings of the 55th An- domain questions. nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870â 1879.
Jifan Chen and Greg Durrett. 2019. Understanding dataset design choices for multi-hop reasoning. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4026â 4032.
Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020. Tabfact: A large- scale dataset for table-based fact veriï¬cation. Inter- national Conference on Learning Representations (ICLR).
Christopher Clark, Mark Yatskar, and Luke Zettle- moyer. 2019. Donât take the easy way out: En- semble based methods for avoiding known dataset biases. arXiv preprint arXiv:1909.03683.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171â4186.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark requir- ing discrete reasoning over paragraphs. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368â2378.
Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with arXiv preprint context from a search engine. arXiv:1704.05179.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1601â1611.
Divyansh Kaushik and Zachary C Lipton. 2018. How much reading does reading comprehension require? In a critical investigation of popular benchmarks.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5010â5015.
Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-ï¬y ontology matching. In Proceedings of the 2013 conference on empirical methods in natural language processing, pages 1545â1556.
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- ï¬eld, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a bench- mark for question answering research. Transactions of the Association for Computational Linguistics, 7:453â466.
Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. ICLR 2019.
Panupong Pasupat and Percy Liang. 2015. Composi- tional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Asso- ciation for Computational Linguistics and the 7th In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1470â 1480.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383â2392.
Haitian Sun, Tania Bedrax-Weiss, and William Cohen. 2019. Pullnet: Open domain question answering with iterative retrieval on knowledge bases and text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 2380â 2390.
Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early In Proceed- fusion of knowledge bases and text. ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4231â4242.
Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641â651.
Jason Weston, Antoine Bordes, Sumit Chopra, Alexan- der M Rush, Bart van Merri¨enboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.
Wenhan Xiong, Mo Yu, Shiyu Chang, Xiaoxiao Guo, Improving ques- and William Yang Wang. 2019. tion answering over incomplete kbs with knowledge- In Proceedings of the 57th Annual aware reader. Meeting of the Association for Computational Lin- guistics, pages 4258â4264.
Xiaojun Xu, Chang Liu, and Dawn Song. 2017. Sqlnet: Generating structured queries from natural language arXiv preprint without reinforcement arXiv:1711.04436.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christo- pher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 2369â2380.
Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of the 53rd Annual Meeting of the Association for Computational Lin- guistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1321â1331.
Wen-tau Yih, Matthew Richardson, Christopher Meek, Ming-Wei Chang, and Jina Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Proceedings of the 54th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201â206.
Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingn- ing Yao, Shanelle Roman, et al. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-sql task. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 3911â3921.
Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexan- der J Smola, and Le Song. 2018. Variational reason- ing for question answering with knowledge graph. In Thirty-Second AAAI Conference on Artiï¬cial In- telligence.
Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries 2017. from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103. | {
"id": "1909.03683"
} |
2004.07320 | Training with Quantization Noise for Extreme Model Compression | We tackle the problem of producing compact models, maximizing their accuracy
for a given model size. A standard solution is to train networks with
Quantization Aware Training, where the weights are quantized during training
and the gradients approximated with the Straight-Through Estimator. In this
paper, we extend this approach to work beyond int8 fixed-point quantization
with extreme compression methods where the approximations introduced by STE are
severe, such as Product Quantization. Our proposal is to only quantize a
different random subset of weights during each forward, allowing for unbiased
gradients to flow through the other weights. Controlling the amount of noise
and its form allows for extreme compression rates while maintaining the
performance of the original model. As a result we establish new
state-of-the-art compromises between accuracy and model size both in natural
language processing and image classification. For example, applying our method
to state-of-the-art Transformer and ConvNet architectures, we can achieve 82.5%
accuracy on MNLI by compressing RoBERTa to 14MB and 80.0 top-1 accuracy on
ImageNet by compressing an EfficientNet-B3 to 3.3MB. | http://arxiv.org/pdf/2004.07320 | Angela Fan, Pierre Stock, Benjamin Graham, Edouard Grave, Remi Gribonval, Herve Jegou, Armand Joulin | cs.LG, stat.ML | null | null | cs.LG | 20200415 | 20210228 | 1 2 0 2
b e F 8 2 ] G L . s c [
3 v 0 2 3 7 0 . 4 0 0 2 : v i X r a
Published as a conference paper at ICLR 2021
# TRAINING WITH QUANTIZATION NOISE FOR EXTREME MODEL COMPRESSION
Angela Fan â Facebook AI Research, LORIA Pierre Stock â â Facebook AI Research, Inria Benjamin Graham Facebook AI Research
Edouard Grave Facebook AI Research Rémi Gribonval â Inria Hervé Jégou Facebook AI Research Armand Joulin Facebook AI Research
# ABSTRACT
We tackle the problem of producing compact models, maximizing their accuracy for a given model size. A standard solution is to train networks with Quantization Aware Training (Jacob et al., 2018), where the weights are quantized during training and the gradients approximated with the Straight-Through Estimator (Bengio et al., 2013). In this paper, we extend this approach to work beyond int8 ï¬xed- point quantization with extreme compression methods where the approximations introduced by STE are severe, such as Product Quantization. Our proposal is to only quantize a different random subset of weights during each forward, allowing for unbiased gradients to ï¬ow through the other weights. Controlling the amount of noise and its form allows for extreme compression rates while maintaining the performance of the original model. As a result we establish new state-of-the-art compromises between accuracy and model size both in natural language processing and image classiï¬cation. For example, applying our method to state-of-the-art Transformer and ConvNet architectures, we can achieve 82.5% accuracy on MNLI by compressing RoBERTa to 14 MB and 80.0% top-1 accuracy on ImageNet by compressing an Efï¬cientNet-B3 to 3.3 MB.1
# INTRODUCTION
Many of the best performing neural network architectures in real-world applications have a large number of parameters. For example, the current standard machine translation architecture, Trans- former (Vaswani et al., 2017), has layers that contain millions of parameters. Even models that are designed to jointly optimize the performance and the parameter efï¬ciency, such as Efï¬cientNets (Tan & Le, 2019), still require dozens to hundreds of megabytes, which limits their applications to domains like robotics or virtual assistants.
Model compression schemes reduce the memory footprint of overparametrized models. Pruning (LeCun et al., 1990) and distillation (Hinton et al., 2015) remove parameters by reducing the number of network weights. In contrast, quantization focuses on reducing the bits per weight. This makes quantization particularly interesting when compressing models that have already been carefully optimized in terms of network architecture. Whereas deleting weights or whole hidden units will inevitably lead to a drop in performance, we demonstrate that quantizing the weights can be performed with little to no loss in accuracy.
Popular postprocessing quantization methods, like scalar quantization, replace the ï¬oating-point weights of a trained network by a lower-precision representation, like ï¬xed-width integers (Vanhoucke et al., 2011). These approaches achieve a good compression rate with the additional beneï¬t of accelerating inference on supporting hardware. However, the errors made by these approximations
âEqual contribution. Corresponding authors: [email protected], [email protected] â Univ Lyon, Inria, CNRS, ENS de Lyon, UCB Lyon 1, LIP UMR 5668, F-69342, Lyon, France 1Code available at https://github.com/pytorch/fairseq/tree/master/examples/
quant_noise
1
Published as a conference paper at ICLR 2021
26F XX w/o Quant-Noise X Training without Quant-Noise O Training with Quant-Noise Training Time Quantization Training Time Quantization = 24 Weight Matrix Weight Matrix Weight Matrix Weight Matrix Ei & 22 ia O w/ Quant-Noise | 20 1g [Meal Original Model A. Size(MB) 10 100 1000
Figure 1: Quant-Noise trains models to be resilient to inference-time quantization by mimicking the effect of the quantization method during training time. This allows for extreme compression rates without much loss in accuracy on a variety of tasks and benchmarks.
accumulate in the computations operated during the forward pass, inducing a signiï¬cant drop in performance (Stock et al., 2019).
A solution to address this drifting effect is to directly quantize the network during training. This raises two challenges. First, the discretization operators have a null gradient â the derivative with respect to the input is zero almost everywhere. This requires special workarounds to train a network with these operators. The second challenge that often comes with these workarounds is the discrepancy that appears between the train and test functions implemented by the network. Quantization Aware Training (QAT) (Jacob et al., 2018) resolves these issues by quantizing all the weights during the forward and using a straight through estimator (STE) (Bengio et al., 2013) to compute the gradient. This works when the error introduced by STE is small, like with int8 quantization, but does not sufï¬ce in compression regimes where the approximation made by the compression is more severe.
In this work, we show that quantizing only a subset of weights instead of the entire network during training is more stable for high compression schemes. Indeed, by quantizing only a random fraction of the network at each forward, most the weights are updated with unbiased gradients. Interestingly, we show that our method can employ a simpler quantization scheme during the training. This is particularly useful for quantizers with trainable parameters, such as Product Quantizer (PQ), for which our quantization proxy is not parametrized. Our approach simply applies a quantization noise, called Quant-Noise, to a random subset of the weights, see Figure 1. We observe that this makes a network resilient to various types of discretization methods: it signiï¬cantly improves the accuracy associated with (a) low precision representation of weights like int8; and (b) state-of-the-art PQ. Further, we demonstrate that Quant-Noise can be applied to existing trained networks as a post-processing step, to improve the performance network after quantization.
In summary, this paper makes the following contributions:
⢠We introduce the Quant-Noise technique to learn networks that are more resilient to a variety of quantization methods such as int4, int8, and PQ;
⢠Adding Quant-Noise to PQ leads to new state-of-the-art trade-offs between accuracy and model size. For instance, for natural language processing (NLP), we reach 82.5% accuracy on MNLI by compressing RoBERTa to 14 MB. Similarly for computer vision, we report 80.0% top-1 accuracy on ImageNet by compressing an Efï¬cientNet-B3 to 3.3 MB;
⢠By combining PQ and int8 to quantize weights and activations for networks trained with Quant-Noise, we obtain extreme compression with ï¬xed-precision computation and achieve 79.8% top-1 accuracy on ImageNet and 21.1 perplexity on WikiText-103.
# 2 RELATED WORK
Model compression. Many compression methods focus on efï¬cient parameterization, via weight pruning (LeCun et al., 1990; Li et al., 2016; Huang et al., 2018; Mittal et al., 2018), weight sharing (Dehghani et al., 2018; Turc et al., 2019; Lan et al., 2019) or with dedicated architectures (Tan & Le, 2019; Zhang et al., 2017; Howard et al., 2019). Weight pruning is implemented during training (Louizos et al., 2017) or as a ï¬ne-tuning post-processing step (Han et al., 2015; 2016). Many pruning methods are unstructured, i.e., remove individual weights (LeCun et al., 1990; Molchanov et al., 2017). On the other hand, structured pruning methods follow the structure of the weights to
2
Published as a conference paper at ICLR 2021
reduce both the memory footprint and the inference time of a model (Li et al., 2016; Luo et al., 2017; Fan et al., 2019). We refer the reader to Liu et al. (2018) for a review of different pruning strategies.
Other authors have worked on lightweight architectures, by modifying existing models (Zhang et al., 2018; Wu et al., 2019; Sukhbaatar et al., 2019a) or developing new networks, such as MobileNet (Howard et al., 2019), Shufï¬eNet (Zhang et al., 2017), and Efï¬cientNet (Tan & Le, 2019) in vision.
# enctn
Finally, knowledge distillation (Hinton et al., 2015) has been applied to sentence representation (Turc et al., 2019; Sanh et al., 2019a; Sun et al., 2019; Zhao et al., 2019; Jiao et al., 2019), to reduce the size of a BERT model (Devlin et al., 2018).
Quantization. There are extensive studies of scalar quantization to train networks with low- precision weights and activations (Courbariaux et al., 2015; Courbariaux & Bengio, 2016; Rastegari et al., 2016; McDonnell, 2018). These methods beneï¬t from specialized hardware to also improve the runtime during inference (Vanhoucke et al., 2011). Other quantization methods such as Vector Quantization (VQ) and PQ (Jegou et al., 2011) quantize blocks of weights simultaneously to achieve higher compression rate (Stock et al., 2019; Gong et al., 2014; Joulin et al., 2016; Carreira-Perpiñán & Idelbayev, 2017). Closer to our work, several works have focused at simultaneously training and quantizing a network (Jacob et al., 2018; Krishnamoorthi, 2018; Gupta et al., 2015; Dong et al., 2019). Gupta et al. (2015) assigns weights to a quantized bin stochastically which is speciï¬c to scalar quantization, but allows training with ï¬xed point arithmetic. Finally, our method can be interpreted as a form of Bayesian compression (Louizos et al., 2017), using the Bayesian interpretation of Dropout (Gal & Ghahramani, 2016). As opposed to their work, we select our noise to match the weight transformation of a target quantization methods without restricting it to a scale mixture prior.
# 3 QUANTIZING NEURAL NETWORKS
In this section, we present the principles of quantization, several standard quantization methods, and describe how to combine scalar and product quantization. For clarity, we focus on the case of a ï¬xed real matrix W â RnÃp. We suppose that this matrix is split into m à q blocks bkl:
b11 ... bm1 · · · b1q ... . . . · · · bmq , W = (1)
where the nature of these blocks is determined by the quantization method. A codebook is a set of K vectors, i.e., C = {c[1],...,c[A]}. Quantization methods compress the matrix W by assigning to each block b,,; an index that points to a codeword c in a codebook C, and storing the codebook C and the resulting indices (as the entries I,; of an index matrix I) instead of the real weights. During the inference, they reconstruct an approximation W of the original matrix W such that Bee = = c[I,]. We distinguish scalar quantization, such as int 8, where each block b,,; consists of a single weight, from vector quantization, where several weights are quantized jointly.
3.1 FIXED-POINT SCALAR QUANTIZATION
Fixed-point scalar quantization methods replace ï¬oating-point number representations by low- precision ï¬xed-point representations. They simultaneously reduce a modelâs memory footprint and accelerate inference by using ï¬xed-point arithmetic on supporting hardware.
Fixed-point scalar quantization operates on blocks that represent a single weight, i.e., bj, = Wy. Floating-point weights are replaced by N bit fixed-point numbers (Gupta et al. 2015p, with the extreme case of binarization where N = 1 (Courbariaux et al.|{2015). More precisely, the weights are rounded to one of 2" possible codewords. These codewords correspond to bins evenly spaced by a scale factor s and shifted by a bias z. Each weight W;, is mapped to its nearest codeword c by successively quantizing with z +> round(W,,/s + z) and dequantizing with the inverse operation: c = (round(Wyi/s + z) â 2) x s, (2)
where we compute the scale and bias as:
s = max W â min W 2N â 1 and z = round(min W/s).
3
Published as a conference paper at ICLR 2021
We focus on this uniform rounding scheme instead of other non-uniform schemes (Choi et al., 2018; Li et al., 2019), because it allows for ï¬xed-point arithmetic with implementations in PyTorch and Tensorï¬ow (see Appendix). The compression rate is Ã32/N . The activations are also rounded to N -bit ï¬xed-point numbers. With int8 for instance, this leads to Ã2 to Ã4 faster inference on dedicated hardware. In this work, we consider both int4 and int8 quantization.
3.2 PRODUCT QUANTIZATION
Several quantization methods work on groups of weights, such as vectors, to beneï¬t from the correlation induced by the structure of the network. In this work, we focus on Product Quantization for its good performance at extreme compression ratio (Stock et al., 2019).
Traditional PQ. In vector quantization methods, the blocks are predeï¬ned groups of weights instead of single weights. The codewords are groups of values, and the index matrix I maps groups of weights from the matrix W to these codewords. In this section, we present the Product Quantization framework as it generalizes both scalar and vector quantization. We consider the case where we apply PQ to the columns of W and thus assume that q = p.
Traditional vector quantization techniques split the matrix W into its p columns and learn a codebook on the resulting p vectors. Instead, Product Quantization splits each column into m subvectors and learns the same codebook for each of the resulting m à p subvectors. Each quantized vector is subsequently obtained by assigning its subvectors to the nearest codeword in the codebook. Learning the codebook is traditionally done using k-means with a ï¬xed number K of centroids, typically K = 256 to store the index matrix I using int8. Thus, the objective function is written as:
Jw âWI2 =P bu âeftull2. ° kl
k,l PQ shares representations between subvectors, which allows for higher compression rates than intN.
Iterative PQ. When quantizing a full network rather than a single matrix, extreme compression with PQ induces a quantization drift as reconstruction error accumulates (Stock et al., 2019). Indeed, subsequent layers take as input the output of preceding layers, which are modiï¬ed by the quantization of the preceding layers. This creates a drift in the network activations, resulting in large losses of performance. A solution proposed by Stock et al. (2019), which we call iterative PQ (iPQ), is to quantize layers sequentially from the lowest to the highest, and ï¬netune the upper layers as the lower layers are quantized, under the supervision of the uncompressed (teacher) model. Codewords of each layer are ï¬netuned by averaging the gradients of their assigned elements with gradient steps:
1 OL Je| (Dele Obp1 e+c~7n (4)
where Jc = {(k, l) | c[Ikl] = c}, L is the loss function and η > 0 is a learning rate. This adapts the upper layers to the drift appearing in their inputs, reducing the impact of the quantization approximation on the overall performance.
3.3 COMBINING FIXED-POINT WITH PRODUCT QUANTIZATION
Fixed-point quantization and Product Quantization are often regarded as competing choices, but can be advantageously combined. Indeed, PQ/iPQ compresses the network by replacing vectors of weights by their assigned centroids, but these centroids are in ï¬oating-point precision. Fixed-point quantization compresses both activations and weights to ï¬xed-point representations. Combining both approaches means that the vectors of weights are mapped to centroids that are compressed to ï¬xed-point representations, along with the activations. This beneï¬ts from the extreme compression ratio of iPQ and the ï¬nite-precision arithmetics of intN quantization.
More precisely, for a given matrix, we store the int8 representation of the K centroids of dimension d along with the log2 K representations of the centroid assignments of the m à p subvectors. The int8 representation of the centroids is obtained with Eq. (2). The overall storage of the matrix and activations during a forward pass with batch size 1 (recalling that the input dimension is n) writes M = 8 à Kd + log2 K à mp + 8 à n bits.
4
Published as a conference paper at ICLR 2021
In particular, when K = 256, the centroid assignments are also stored in int8, which means that every value required for a forward pass is stored in an int8 format. We divide by 4 the float32 overhead of storing the centroids, although the storage requirement associated with the centroids is small compared to the cost of indexing the subvectors for standard networks. In contrast to iPQ alone where we only quantize the weights, we also quantize the activations using int8. We evaluate this approach on both natural language processing and computer vision tasks in Section 5.
# 4 METHOD
Deep networks are not exposed to the noise caused by the quantization drift during training, leading to suboptimal performance. A solution to make the network robust to quantization is to introduce it during training. Quantization Aware Training (QAT) (Jacob et al., 2018) exposes the network during training by quantizing weights during the forward pass. This transformation is not differentiable and gradients are approximated with a straight through estimator (STE) (Bengio et al., 2013; Courbariaux & Bengio, 2016). STE introduces a bias in the gradients that depends on level of quantization of the weights, and thus, the compression ratio. In this section, we propose a simple modiï¬cation to control this induced bias with a stochastic amelioration of QAT, called Quant-Noise. The idea is to quantize a randomly selected fraction of the weights instead of the full network as in QAT, leaving some unbiased gradients ï¬ow through unquantized weights. Our general formulation can simulate the effect of both quantization and of pruning during training.
4.1 TRAINING NETWORKS WITH QUANTIZATION NOISE
We consider the case of a real matrix W as in Section 3. During the training of a network, our proposed Quant-Noise method works as follows: ï¬rst, we compute blocks bkl related to a target quantization method. Then, during each forward pass, we randomly select a subset of these blocks and apply some distortion to them.
More formally, given a set of tuples of indices J â {(k, l)} for 1 ⤠k ⤠m, 1 ⤠l ⤠q and a distortion or noise function Ï acting on a block, we deï¬ne an operator Ï(· | J) such that, for each block bkl, we apply the following transformation:
p(b) if (k,l) ⬠J, De otherwise. W(be | J) = { (6)
The noise function Ï simulates the change in the weights produced by the target quantization method (see Section 4.2 for details). We replace the matrix W by the resulting noisy matrix Wnoise during the forward pass to compute a noisy output ynoise, i.e.,
Wnoise = (Ï(bkl | J))kl and ynoise = xWnoise
where x is an input vector. During the backward pass, we apply STE, which amounts to replacing the distorted weights Wnoise by their non-distorted counterparts. Note that our approach is equivalent to QAT when J containts all the tuples of indices. However, an advantage of Quant-Noise over QAT is that unbiased gradients continue to ï¬ow via blocks unaffected by the noise. As these blocks are randomly selected for each forward, we guarantee that each weight regularly sees gradients that are not affected by the nature of the function Ï. As a side effect, our quantization noise regularizes the network in a similar way as DropConnect (Wan et al., 2013) or LayerDrop (Fan et al., 2019).
Composing quantization noises. As noise operators are compositionally commutative, we can make a network robust to a combination of quantization methods by composing their noise operators:
Ï(bkl | J) = Ï1 ⦠Ï2(bkl | J). (8)
This property is particularly useful to combine quantization with pruning operators during training, as well as combining scalar quantization with product quantization.
4.2 ADDING NOISE TO SPECIFIC QUANTIZATION METHODS
In this section, we propose several implementations of the noise function Ï for the quantization methods described in Section 3. We also show how to handle pruning with it.
5
(7)
Published as a conference paper at ICLR 2021
Quantization Scheme Language Modeling 16-layer Transformer Wikitext-103 Image Classiï¬cation Efï¬cientNet-B3 ImageNet-1k Size Compression PPL Size Compression Uncompressed model 942 à 1 18.3 46.7 à 1 int4 quantization - trained with QAT - trained with Quant-Noise 118 118 118 à 8 à 8 à 8 39.4 34.1 21.8 5.8 5.8 5.8 à 8 à 8 à 8 int8 quantization - trained with QAT - trained with Quant-Noise 236 236 236 à 4 à 4 à 4 19.6 21.0 18.7 11.7 11.7 11.7 à 4 à 4 à 4 iPQ - trained with QAT - trained with Quant-Noise 38 38 38 à 25 à 25 à 25 25.2 41.2 20.7 3.3 3.3 3.3 à 14 à 14 à 14 iPQ & int8 + Quant-Noise 38 à 25 21.1 3.1 à 15 Top-1 81.5 45.3 59.4 67.8 80.7 80.8 80.9 79.0 55.7 80.0 79.8
Table 1: Comparison of different quantization schemes with and without Quant-Noise on language mod- eling and image classiï¬cation. For language modeling, we train a Transformer on the Wikitext-103 benchmark and report perplexity (PPL) on test. For image classiï¬cation, we train a Efï¬cientNet-B3 on the ImageNet-1k benchmark and report top-1 accuracy on validation and use our re-implementation of Efï¬cientNet-B3. The original implementation of Tan & Le (2019) achieves an uncompressed Top-1 accuracy of 81.9%. For both settings, we report model size in megabyte (MB) and the compression ratio compared to the original model.
In intN quantization, the blocks are atomic and weights are Fixed-point scalar quantization. rounded to their nearest neighbor in the codebook. The function Ï replaces weight Wkl with the output of the rounding function deï¬ned in Eq. (2), i.e.,
ÏintN(w) = (round(w/s + z) â z) Ã s, (9)
where s and z are updated during training. In particular, the application of Quant-Noise to int8 scalar quantization is a stochastic amelioration of QAT.
Product quantization. As opposed to intN, codebooks in PQ require a clustering step based on weight values. During training, we learn codewords online and use the resulting centroids to implement the quantization noise. More precisely, the noise function ÏPQ assigns a selected block b to its nearest codeword in the associated codebook C:
gro(v) = argmingeg||b â ¢||3. (10) Updating the codebooks online works well. However, empirically, running k-means once per epoch is faster and does not noticeably modify the resulting accuracy.
Note that computing the exact noise function for PQ is computationally demanding. We propose a simpler and faster alternative approximation Ïproxy to the operational transformation of PQ and iPQ. The noise function simply zeroes out the subvectors of the selected blocks, i.e., Ïproxy(v) = 0. As a sidenote, we considered other alternatives, for instance one where the subvectors are mapped to the mean subvector. In practice, we found that these approximations lead to similar performance, see Section 7.2. This proxy noise function is a form of Structured Dropout and encourages correlations between the subvectors. This correlation is beneï¬cial to the subsequent clustering involved in PQ/iPQ.
Adding pruning to the quantization noise. The speciï¬c form of quantization noise can be ad- justed to incorporate additional noise speciï¬c to pruning. We simply combine the noise operators of quantization and pruning by composing them following Eq. (8). We consider the pruning noise function of Fan et al. (2019) where they randomly drop predeï¬ned structures during training. In particular, we focus on LayerDrop, where the structures are the residual blocks of highway-like layers (Srivastava et al., 2015), as most modern architectures, such as ResNet or Transformer, are composed of this structure. More precisely, the corresponding noise operator over residual blocks v is ÏLayerDrop(v) = 0. For pruning, we do not use STE to backpropagate the gradient of pruned weights, as dropping them entirely during training has the beneï¬t of speeding convergence (Huang
6
Published as a conference paper at ICLR 2021
# > S 3 Ea
Language Modeling MNLI ImageNet Top-1 24 xr XL Ideal 8 85 pe LayerDropx deal XBffNet-B4 x BERT < 22 - -OOurs+Share 5 84 MovileBeRT = 80 5 Ours =) Q urs ft +] Cours Pers Mem| ours B _[iSharex XIN KROavet-s0 E20 urs 5 83 TinyBER 3 75 iPQ ResN Dense-169 Tens core < Ours Haig PPR stil < 22 X x xMobile-v2 18 LayerDropxTr XL 82 Albert *DistiIBERT 70 xShuffle-v2x1 é Ideal Comp Trx 81 ReaBERT 10 30 100 300 1000 10 30 100 300 ©1000 1 3 10 30 100 300 1000 Size (MB) Size (MB) Size (MB)
Figure 2: Performance as a function of model size. We compare models quantized with PQ and trained with the related Quant-Noise to the state of the art. (a) Test perplexity on Wikitext-103 (b) Dev Accuracy on MNLI (c) ImageNet Top-1 accuracy. Model size is shown in megabytes on a log scale. Red and gray coloring indicates existing work, with different colors for visual distinction.
Language modeling Sentence Representation Image Classiï¬cation Comp. Size PPL Comp. Size Acc. Comp. Size Acc. Unquantized models Original model + Sharing + Pruning à 1 à 1.8 à 3.7 942 510 255 18.3 18.7 22.5 à 1 à 1.9 à 3.8 480 250 125 84.8 84.0 81.3 à 1 à 1.4 à 1.6 46.7 34.2 29.5 81.5 80.1 78.5 Quantized models iPQ + Quant-Noise + Sharing + Pruning à 24.8 à 24.8 à 49.5 à 94.2 38 38 19 10 25.2 20.7 22.0 24.7 à 12.6 à 12.6 à 34.3 à 58.5 38 38 14 8 82.5 83.6 82.5 78.8 à 14.1 à 14.1 à 18 à 20 3.3 3.3 2.6 2.3 79.0 80.0 78.9 77.8
Table 2: Decomposing the impact of the different compression schemes. (a) we train Transformers with Adaptive Input and LayerDrop on Wikitext-103 (b) we pre-train RoBERTA base models with LayerDrop and then ï¬netune on MNLI (c) we train an Efï¬cientNet-B3 on ImageNet. We report the compression ratio w.r.t. to the original model (âcomp.â) and the resulting size in MB.
et al., 2016). Once a model is trained with LayerDrop, the number of layers kept at inference can be adapted to match computation budget or time constraint.
# 5 RESULTS
We demonstrate the impact of Quant-Noise on the performance of several quantization schemes in a variety of settings (see Appendix - Sec. 7.5).
IMPROVING COMPRESSION WITH QUANT-NOISE
Quant-Noise is a regularization method that makes networks more robust to the target quantization scheme or combination of quantization schemes during training. We show the impact of Quant-Noise in Table 1 for a variety of quantization methods: int8/int4 and iPQ.
We experiment in 2 different settings: a Transformer network trained for language modeling on WikiText-103 and a Efï¬cientNet-B3 convolutional network trained for image classiï¬cation on ImageNet-1k. Our quantization noise framework is general and ï¬exible â Quant-Noise improves the performance of quantized models for every quantization scheme in both experimental settings. Importantly, Quant-Noise only changes model training by adding a regularization noise similar to dropout, with no impact on convergence and very limited impact on training speed (< 5% slower).
This comparison of different quantization schemes shows that Quant-Noise works particularly well with high performance quantization methods, like iPQ, where QAT tends to degrade the performances, even compared to quantizing as a post-processing step. In subsequent experiments in this section, we focus on applications with iPQ because it offers the best trade-off between model performance and compression, and has little negative impact on FLOPS.
7
Published as a conference paper at ICLR 2021
Language Modeling PPL RoBERTa Acc. Train without Quant-Noise + Finetune with Quant-Noise 25.2 20.9 Train without Quant-Noise + Finetune with Quant-Noise 82.5 83.4 Train with Quant-Noise 20.7 Train with Quant-Noise 83.6
Table 3: Quant-Noise: Finetuning vs training. We report performance after iPQ quantization. We train with the Ïproxy noise and ï¬netune with Quant-Noise, and use it during the transfer to MNLI for each RoBERTa model.
Fixed-Point Product Quantization. Combining iPQ and int8 as described in Section 3.3 allows us to take advantage of the high compression rate of iPQ with a ï¬xed-point representation of both centroids and activations. As shown in Table 1, this combination incurs little loss in accuracy with respect to iPQ + Quant-Noise. Most of the memory footprint of iPQ comes from indexing and not storing centroids, so the compression ratios are comparable.
Complementarity with Weight Pruning and Sharing. We analyze how Quant-Noise is compati- ble and complementary with pruning (â+Pruneâ) and weight sharing (â+Shareâ), see Appendix for details on weight sharing. We report results for Language modeling on WikiText-103, pre-trained sen- tence representations on MNLI and object classiï¬cation on ImageNet-1k in Table 2. The conclusions are remarkably consistent across tasks and benchmarks: Quant-Noise gives a large improvement over strong iPQ baselines. Combining it with sharing and pruning offers additional interesting operating points of performance vs size.
5.2 COMPARISON WITH THE STATE OF THE ART
We now compare our approach on the same tasks against the state of the art. We compare iPQ + Quant-Noise with 6 methods of network compression for Language modeling, 8 state-of-the-art methods for Text classiï¬cation, and 8 recent methods evaluate image classiï¬cation on Imagenet with compressed models. These comparisons demonstrate that Quant-Noise leads to extreme compression rates at a reasonable cost in accuracy. We apply our best quantization setup on competitive models and reduce their memory footprint by Ã20 â 94 when combining with weight sharing and pruning, offering extreme compression for good performance.
Natural Language Processing. In Figure 2, we examine the trade-off between performance and model size. Our quantized RoBERTa offers a competitive trade-off between size and performance with memory reduction methods dedicated to BERT, like TinyBERT, MobileBERT, or AdaBERT.
Image Classiï¬cation. We compress Efï¬cientNet-B3 from 46.7Mb to 3.3Mb (Ã14 compression) while maintaining high top-1 accuracy (78.5% versus 80% for the original model). As shown in Figure 2, our quantized Efï¬cientNet-B3 is smaller and more accurate than architectures dedicated to optimize on-device performance with limited size like MobileNet or Shufï¬eNet. We further evaluate the beneï¬cial effect of Quant-Noise on ResNet-50 to compare directly with Stock et al. (2019). Results shown in Table 4 indicate improvement with Quant-Noise compared to previous work.
Incorporating pruning noise into quantization is also beneï¬cial. For example, with pruning iPQ+Quant-Noise reduces size by Ã25 with only a drop of 2.4 PPL in language modeling. Further, pruning reduces FLOPS by the same ratio as its compression factor, in our case, Ã2. By adding sharing with pruning, in language modeling, we achieve an extreme compression ratio of Ã94 with a drop of 6.4 PPL with FLOPS reduction from pruning entire shared chunks of layers. For comparison, our 10 MB model has the same performance as the 570 MB Transformer-XL base.
5.3 FINETUNING WITH QUANT-NOISE FOR POST-PROCESSING QUANTIZATION
We explore taking existing models and post-processing with Quant-Noise instead of training from scratch. For language modeling, we train for 10 additional epochs. For RoBERTa, we train for 25k additional updates. Finetuning with Quant-Noise incorporates the beneï¬ts and almost matches training from scratch (Table 3). In language modeling, there is only a 0.2 PPL difference. We further examine how to incorporate Quant-Noise more ï¬exibly into pretraining RoBERTa. We take an already
8
Published as a conference paper at ICLR 2021
trained RoBERTa model and incorporate Quant-Noise during sentence classiï¬cation ï¬netuning. This is effective at compressing while retaining accuracy after quantization.
# 6 CONCLUSION
We show that quantizing a random subset of weights during training maintains performance in the high quantization regime. We validate that Quant-Noise works with a variety of different quantization schemes on several applications in text and vision. Our method can be applied to a combination of iPQ and int8 to beneï¬t from extreme compression ratio and ï¬xed-point arithmetic. Finally, we show that Quant-Noise can be used as a post-processing step to prepare already trained networks for subsequent quantization, to improve the performance of the compressed model.
# REFERENCES
A. Adcock, V. Reis, M. Singh, Z. Yan, L. van der Maaten, K. Zhang, S. Motwani, J. Guerin, N. Goyal, I. Misra, L. Gustafson, C. Changhan, and P. Goyal. Classy vision. 2019.
Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. arXiv preprint arXiv:1809.10853, 2018.
Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
James Bradbury, Stephen Merity, Caiming Xiong, and Richard Socher. Quasi-recurrent neural networks. arXiv preprint arXiv:1611.01576, 2016.
Qingqing Cao, Harsh Trivedi, Aruna Balasubramanian, and Niranjan Balasubramanian. Faster and just as accurate: A simple decomposition for transformer models.
Miguel A. Carreira-Perpiñán and Yerlan Idelbayev. Model compression as constrained optimization, with application to neural nets. part ii: quantization, 2017.
Daoyuan Chen, Yaliang Li, Minghui Qiu, Zhen Wang, Bofang Li, Bolin Ding, Hongbo Deng, Jun Huang, Wei Lin, and Jingren Zhou. Adabert: Task-adaptive bert compression with differentiable neural architecture search. arXiv preprint arXiv:2001.04246, 2020.
Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan. Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085, 2018.
Matthieu Courbariaux and Yoshua Bengio. Binarynet: Training deep neural networks with weights and activations constrained to +1 or -1. CoRR, 2016.
Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. CoRR, 2015.
Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a ï¬xed-length context. arXiv preprint arXiv:1901.02860, 2019.
Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. In Proc. of ICML, 2017.
Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Åukasz Kaiser. Universal transformers, 2018.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
9
Published as a conference paper at ICLR 2021
Yinpeng Dong, Renkun Ni, Jianguo Li, Yurong Chen, Hang Su, and Jun Zhu. Stochastic quantization for learning accurate low-bit deep neural networks. International Journal of Computer Vision, 127 (11-12):1629â1642, 2019.
Angela Fan, Edouard Grave, and Armand Joulin. Reducing transformer depth on demand with structured dropout. arXiv preprint arXiv:1909.11556, 2019.
Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050â1059, 2016.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
Edouard Grave, Armand Joulin, Moustapha Cisse, David Grangier, and Herve Jegou. Efï¬cient softmax approximation for gpus. arXiv, abs/1609.04309, 2016.
Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In ICML, 2015.
Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efï¬cient neural network. In NIPS, pp. 1135â1143, 2015.
Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. ICLR, 2016.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, 2015.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam. Searching for mobilenetv3. arXiv e-prints, 2019.
Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Q Weinberger. Deep networks with stochastic depth. In ECCV, 2016.
Gao Huang, Shichen Liu, Laurens Van der Maaten, and Kilian Q Weinberger. Condensenet: An efï¬cient densenet using learned group convolutions. In CVPR, 2018.
Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efï¬cient integer-arithmetic-only inference. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2704â2713, 2018.
Herve Jegou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. PAMI, 2011.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351, 2019.
Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. Fasttext.zip: Compressing text classiï¬cation models. arXiv preprint arXiv:1612.03651, 2016.
Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efï¬cient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018.
Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations, 2019.
10
Published as a conference paper at ICLR 2021
Yann LeCun, John S. Denker, and Sara A. Solla. Optimal brain damage. In NIPS, 1990.
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning ï¬lters for efï¬cient convnets. arXiv preprint arXiv:1608.08710, 2016.
Yuhang Li, Xin Dong, and Wei Wang. Additive powers-of-two quantization: A non-uniform discretization for neural networks. arXiv preprint arXiv:1909.13144, 2019.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. arXiv preprint arXiv:1810.05270, 2018.
Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through l_0 regularization. arXiv preprint arXiv:1712.01312, 2017.
Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. Thinet: A ï¬lter level pruning method for deep neural network compression. In ICCV, 2017.
Ningning Ma, Xiangyu Zhang, Hai-Tao Zheng, and Jian Sun. Shufï¬enet V2: practical guidelines for efï¬cient CNN architecture design. CoRR, 2018.
Xindian Ma, Peng Zhang, Shuai Zhang, Nan Duan, Yuexian Hou, Dawei Song, and Ming Zhou. A tensorized transformer for language modeling. arXiv preprint arXiv:1906.09777, 2019.
Mark D. McDonnell. Training wide residual networks for deployment using a single bit for each weight, 2018.
Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer Sentinel Mixture Models. arXiv, abs/1609.07843, 2016.
Deepak Mittal, Shweta Bhardwaj, Mitesh M Khapra, and Balaraman Ravindran. Recovering from random pruning: On the plasticity of deep convolutional neural networks. In WACV, 2018.
Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsiï¬es deep neural networks. In ICML, 2017.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. How to construct deep recurrent neural networks. In Proceedings of the Second International Conference on Learning Representations (ICLR 2014), 2014.
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in pytorch. 2017.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Jack W Rae, Anna Potapenko, Siddhant M Jayakumar, and Timothy P Lillicrap. Compressive transformers for long-range sequence modelling. arXiv preprint arXiv:1911.05507, 2019.
Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classiï¬cation using binary convolutional neural networks. In ECCV, 2016.
11
Published as a conference paper at ICLR 2021
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mo- bilenetv2: Inverted residuals and linear bottlenecks. In Conference on Computer Vision and Pattern Recognition, pp. 4510â4520, 2018.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. 2019a.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019b.
Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
Pierre Stock, Armand Joulin, Rémi Gribonval, Benjamin Graham, and Hervé Jégou. And the bit goes down: Revisiting the quantization of neural networks. CoRR, abs/1907.05686, 2019.
Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive attention span in transformers. arXiv preprint arXiv:1905.07799, 2019a.
Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Herve Jegou, and Armand Joulin. Aug- menting self-attention with persistent memory. arXiv preprint arXiv:1907.01470, 2019b.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. Patient knowledge distillation for bert model compression. EMNLP, 2019.
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. Mobilebert: Task-agnostic compression of bert for resource limited devices.
Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International conference on machine learning, pp. 1139â1147, 2013.
Mingxing Tan and Quoc V. Le. Efï¬cientnet: Rethinking model scaling for convolutional neural networks, 2019.
Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Well-read students learn better: The impact of student initialization on knowledge distillation. arXiv preprint arXiv:1908.08962, 2019.
Vincent Vanhoucke, Andrew Senior, and Mark Z Mao. Improving the speed of neural networks on cpus. 2011.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
Li Wan, Matthew Zeiler, Sixin Zhang, Yann Le Cun, and Rob Fergus. Regularization of neural networks using DropConnect. In ICML, 2013.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. 2019. ICLR.
Kuan Wang, Zhijian Liu, Yujun Lin andx Ji Lin, and Song Han. HAQ: hardware-aware automated quantization. CoRR, 2018.
Adina Williams, Nikita Nangia, and Samuel R. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of NAACL-HLT, 2018.
Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay less attention with lightweight and dynamic convolutions. In ICLR, 2019.
Biao Zhang, Deyi Xiong, and Jinsong Su. Accelerating neural transformer via an average attention network. arXiv preprint arXiv:1805.00631, 2018.
12
Published as a conference paper at ICLR 2021
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufï¬enet: An extremely efï¬cient convolutional neural network for mobile devices. CoRR, 2017.
Sanqiang Zhao, Raghav Gupta, Yang Song, and Denny Zhou. Extreme language model compression with optimal subwords and shared projections. arXiv preprint arXiv:1909.11687, 2019.
13
Published as a conference paper at ICLR 2021
Setting Model Compression Top-1 Accuracy Small Blocks Stock et al. (2019) Quant-Noise 19x 19x 73.8 74.3 Large Blocks Stock et al. (2019) Quant-Noise 32x 32x 68.2 68.8
Table 4: Compression of ResNet-50 with Quant-Noise. We compare to Stock et al. (2019) in both the small and large blocks regime. For fair comparison, we hold the compression rate constant. Quant-Noise provides improved performance in both settings.
int8 iPQ 21 | âoâ Non-Quantized 35 Fo Non-Quantized z 29 | = Quantized z 30 | C= Quantized z z 2 19 ani 18 20 p= ae 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.1 0.2 0.3 0.4 0.5 Quant-Noise rate Quant-Noise rate
Figure 3: Effect of Quantization Parameters. We report the inï¬uence of the proportion of blocks to which we apply the noise. We focus on Transformer for Wikitext-103 language modeling. We explore two settings: iPQ and int8. For iPQ, we use Ïproxy.
7 APPENDIX
7.1 QUANTIZATION OF ADDITIONAL ARCHITECTURES
ResNet-50. We explore the compression of ResNet-50, a standard architecture used Computer Vision. In Table 4, we compare Quant-Noise to iPQ Compression from Stock et al. (2019) and show that Quant-Noise provide consistent additional improvement.
7.2 ABLATIONS
In this section, we examine the impact of the level of noise during training as well as the impact of approximating iPQ during training.
7.3 IMPACT OF NOISE RATE
We analyze the performance for various values of Quant-Noise in Figure 3 on a Transformer for language modeling. For iPQ, performance is impacted by high rates of quantization noise. For example, a Transformer with the noise function Ïproxy degrades with rate higher than 0.5, i.e., when half of the weights are passed through the noise function Ïproxy. We hypothesize that for large quantities of noise, a larger effect of using proxy rather than the exact PQ noise is observed. For int8 quantization and its noise function, higher rates of noise are slightly worse but not as severe. A rate of 1 for int8 quantization is equivalent to the Quantization Aware Training of (Krishnamoorthi, 2018), as the full matrix is quantized with STE, showing the potential beneï¬t of partial quantization during training.
IMPACT OF APPROXIMATING THE NOISE FUNCTION
We study the impact of approximating quantization noise during training. We focus on the case of iPQ with the approximation described in Section 4.2. In Table 5, we compare the correct noise function for iPQ with its approximation Ïproxy. This approximate noise function does not consider cluster assignments or centroid values and simply zeroes out the selected blocks. For completeness, we include an intermediate approximation where we consider cluster assignments to apply noise
14
Published as a conference paper at ICLR 2021
Noise Blocks ÏPQ ÏPQ Ïproxy Ïproxy Clusters Subvectors Clusters Subvectors PPL Quant PPL 18.3 18.3 21.1 21.2 18.3 18.4 21.0 21.1 Table 5: Exact versus proxy noise function for different block selections with iPQ. We com- pare exact ÏPQ and the approximation Ïproxy with blocks selected from all subvectors or subvectors from the same cluster.
within each cluster, but still zero-out the vectors. These approximations do not affect the performance of the quantized models. This suggests that increasing the correlation between subvectors that are jointly clustered is enough to maintain the performance of a model quantized with iPQ. Since PQ tends to work well on highly correlated vectors, such as activations in convolutional networks, this is not surprising. Using the approximation Ïproxy presents the advantage of speed and practicality. Indeed, one does not need to compute cluster assignments and centroids for every layer in the network after each epoch. Moreover, the approach Ïproxy is less involved in terms of code.
7.5 EXPERIMENTAL SETTING
We assess the effectiveness of Quant-Noise on competitive language and vision benchmarks. We consider Transformers for language modeling, RoBERTa for pre-training sentence representations, and Efï¬cientNet for image classiï¬cation. Our models are implemented in PyTorch (Paszke et al., 2017). We use fairseq (Ott et al., 2019) for language modeling and pre-training for sentence representation tasks and Classy Vision (Adcock et al., 2019) for Efï¬cientNet.
Language Modeling. We experiment on the Wikitext-103 benchmark (Merity et al., 2016) that contains 100M tokens and a vocabulary of 260k words. We train a 16 layer Transformer following Baevski & Auli (2018) with a LayerDrop rate of 0.2 (Fan et al., 2019). We report perplexity (PPL) on the test set.
Pre-Training of Sentence Representations. We pre-train the base BERT model (Devlin et al., 2018) on the BooksCorpus + Wiki dataset with a LayerDrop rate of 0.2. We ï¬netune the pre-trained models on the MNLI task (Williams et al., 2018) from the GLUE Benchmark (Wang et al., 2019) and report accuracy. We follow the parameters in Liu et al. (2019) training and ï¬netuning.
Image Classiï¬cation. We train an Efï¬cientNet-B3 model (Tan & Le, 2019) on the ImageNet object classiï¬cation benchmark (Deng et al., 2009). The Efï¬cientNet-B3 of Classy Vision achieves a Top-1 accuracy of 81.5%, which is slightly below than the performance of 81.9% reported by Tan & Le (2019).
7.6 TRAINING DETAILS
Language Modeling To handle the large vocabulary of Wikitext-103, we follow (Dauphin et al., 2017) and (Baevski & Auli, 2018) in using adaptive softmax (Grave et al., 2016) and adaptive input for computational efï¬ciency. For both input and output embeddings, we use dimension size 1024 and three adaptive bands: 20K, 40K, and 200K. We use a cosine learning rate schedule (Baevski & Auli, 2018; Loshchilov & Hutter, 2016) and train with Nesterovâs accelerated gradient (Sutskever et al., 2013). We set the momentum to 0.99 and renormalize gradients if the norm exceeds 0.1 (Pascanu et al., 2014). During training, we partition the data into blocks of contiguous tokens that ignore document boundaries. At test time, we respect sentence boundaries. We set LayerDrop to 0.2. We set Quant-Noise value to 0.05. During training time, we searched over the parameters (0.05, 0.1, 0.2) to determine the optimal value of Quant-Noise. During training time, the block size of Quant-Noise is 8.
RoBERTa The base architecture is a 12 layer model with embedding size 768 and FFN size 3072. We follow (Liu et al., 2019) in using the subword tokenization scheme from (Radford et al., 2019), which uses bytes as subword units. This eliminates unknown tokens. We train with large batches of size 8192 and maintain this batch size using gradient accumulation. We do not use next sentence prediction (Lample & Conneau, 2019). We optimize with Adam with a polynomial decay learning rate schedule. We set LayerDrop to 0.2. We set Quant-Noise value to 0.1. We did not hyperparameter
15
Published as a conference paper at ICLR 2021
Model MB PPL Trans XL Large (Dai et al., 2019) Compressive Trans (Rae et al., 2019) GCNN (Dauphin et al., 2017) 4 Layer QRNN (Bradbury et al., 2016) Trans XL Base (Dai et al., 2019) Persis Mem (Sukhbaatar et al., 2019b) Tensorized core-2 (Ma et al., 2019) 970 970 870 575 570 506 325 18.3 17.1 37.2 33.0 24.0 20.6 18.9 Quant-Noise Quant-Noise + Share + Prune 38 10 20.7 24.2
Table 6: Performance on Wikitext-103. We report test set perplexity and model size in megabytes. Lower perplexity is better.
search to determine the optimal value of Quant-Noise as training RoBERTa is computationally intensive. During training time, the block size of Quant-Noise is 8.
During ï¬netuning, we hyperparameter search over three learning rate options (1e-5, 2e-5, 3e-5) and batchsize (16 or 32 sentences). The other parameters are set following (Liu et al., 2019). We do single task ï¬netuning, meaning we only tune on the data provided for the given natural language understanding task. We do not perform ensembling. When ï¬netuning models trained with LayerDrop, we apply LayerDrop and Quant-Noise during ï¬netuning time as well.
Efï¬cientNet We use the architecture of Efï¬cientNet-B3 deï¬ned in Classy Vision (Adcock et al., 2019) and follow the default hyperparameters for training. We set Quant-Noise value to 0.1. During training time, we searched over the parameters (0.05, 0.1, 0.2) to determine the optimal value of Quant-Noise. During training time, the block size of Quant-Noise is set to 4 for all 1 à 1 convolutions, 9 for depth-wise 3 à 3 convolutions, 5 for depth-wise 5 à 5 convolutions and 4 for the classiï¬er. For sharing, we shared weights between blocks 9-10, 11-12, 14-15, 16-17, 19-20-21, 22-23 and refer to blocks that share the same weights as a chunk. For LayerDrop, we drop the chunks of blocks deï¬ned previously with probability 0.2 and evaluate only with chunks 9-10, 14-15 and 19-20-21.
7.7 SCALAR QUANTIZATION DETAILS
We closely follow the methodology of PyTorch 1.4. We emulate scalar quantization by quantizing the weights and the activations. The scales and zero points of activations are determined by doing a few forward passes ahead of the evaluation and then ï¬xed. We use the Histogram method to compute s and z, which aims at approximately minimizing the L2 quantization error by adjusting s and z. This scheme is a reï¬nement of the MinMax scheme. Per channel quantization is also discussed in Table 10.
IPQ QUANTIZATION DETAILS
Language Modeling We quantize FFN with block size 8, embeddings with block size 8, and attention with block size 4. We tuned the block size for attention between the values (4, 8) to ï¬nd the best performance. Note that during training with apply Quant-Noise to all the layers.
RoBERTa We quantize FFN with block size 4, embeddings with block size 4, and attention with block size 4. We tuned the block size between the values (4, 8) to ï¬nd the best performance. Note that during training with apply Quant-Noise to all the layers.
Efï¬cientNet We quantize blocks sequentially and end up with the classiï¬er. The block sizes are 4 for all 1 à 1 convolutions, 9 for depth-wise 3 à 3 convolutions, 5 for depth-wise 5 à 5 convolutions and 4 for the classiï¬er. Note that during training with apply Quant-Noise to all the weights in InvertedResidual Blocks (except the Squeeze-Excitation subblocks), the head convolution and the classiï¬er.
16
Published as a conference paper at ICLR 2021
Model MB MNLI RoBERTa Base + LD (Fan et al., 2019) BERT Base (Devlin et al., 2018) PreTrained Distil (Turc et al., 2019) DistilBERT (Sanh et al., 2019b) MobileBERT* (Sun et al.) TinyBERTâ (Jiao et al., 2019) ALBERT Base (Lan et al., 2019) AdaBERTâ (Chen et al., 2020) 480 420 257 250 96 55 45 36 84.8 84.4 82.5 81.8 84.4 82.8 81.6 81.6 Quant-Noise Quant-Noise + Share + Prune 38 14 83.6 82.5
Table 7: Performance on MNLI. We report accuracy and size in megabytes. * indicates distillation using BERT Large. â indicates training with data augmentation. Work from Sun et al. (2019) and Zhao et al. (2019) do not report results on the dev set. Cao et al. do not report model size. Higher accuracy is better.
Model MB Acc. Efï¬cientNet-B7 (Tan & Le, 2019) ResNet-50 (He et al., 2015) DenseNet-169 (Huang et al., 2018) Efï¬cientNet-B0 (Tan & Le, 2019) MobileNet-v2 (Sandler et al., 2018) Shufï¬enet-v2 Ã1 (Ma et al., 2018) 260 97.5 53.4 20.2 13.4 8.7 84.4 76.1 76.2 77.3 71.9 69.4 HAQ 4 bits (Wang et al., 2018) iPQ ResNet-50 (Stock et al., 2019) 12.4 5.09 76.2 76.1 Quant-Noise Quant-Noise + Share + Prune 3.3 2.3 80.0 77.8
Table 8: Performance on ImageNet. We report accuracy and size in megabytes. Higher accuracy is better.
7.9 DETAILS OF PRUNING AND LAYER SHARING
We apply the Every Other Layer strategy from Fan et al. (2019). When combining layer sharing with pruning, we train models with shared layers and then prune chunks of shared layers. When sharing layers, the weights of adjacent layers are shared in chunks of two. For a concrete example, imagine we have a model with layers A, B, C, D, E, F, G, H. We share layers A and B, C and D, E and F, G and H. To prune, every other chunk would be pruned away, for example we could prune A, B, E, F.
7.10 NUMERICAL RESULTS FOR GRAPHICAL DIAGRAMS
We report the numerical values displayed in Figures 2 in Table 6 for language modeling, Table 7 for BERT, and Table 8 for ImageNet.
7.11 FURTHER ABLATIONS
IMPACT OF QUANT-NOISE FOR THE VISION SETUP
We provide another study showing the impact of the proportion of elements on which to apply Quant-Noise in Table 9.
IMPACT OF THE NUMBER OF CENTROIDS
We quantize with 256 centroids which represents a balance between size and representation capacity. The effect of the number of centroids on performance and size is shown in Figure 4 (a). Quantizing
17
Published as a conference paper at ICLR 2021
p 0 0.2 0.4 0.6 0.8 1 Top-1 80.66 80.83 80.82 80.88 80.92 80.64
Table 9: Effect of Quantization Parameters. We report the inï¬uence of the Quant-Noise rate p with Scalar Quantization (int8). We focus on Efï¬cientNet for ImageNet classiï¬cation.
Effect of Number of Centroids
a 0 1000 2000 Number of Centroids. Roo sS o8 y Ss Valid Perplexity Es
Figure 4: Quantizing with a larger number of centroids. Results are shown on Wikitext-103 valid.
with more centroids improves perplexity â this parameter could be adjusted based on the practical storage constraints.
7.11.3 EFFECT OF INITIAL MODEL SIZE
Large, overparameterized models are more easily compressed. In Figure 5, we explore quantizing both shallower and skinnier models. For shallow models, the gap between quantized and non-quantized perplexity does not increase as layers are removed (Figure 5, left). In contrast, there is a larger gap in performance for models with smaller FFN (Figure 5, right). As the FFN size decreases, the weights are less redundant and more difï¬cult to quantize with iPQ.
7.11.4 DIFFICULTY OF QUANTIZING DIFFERENT MODEL STRUCTURES
Quantization is applied to various portions of the Transformer architecture â the embedding, attention, feedforward, and classiï¬er output. We compare the quantizability of various portions of the network in this section.
Is the order of structures important? We quantize speciï¬c network structures ï¬rst â this is important as quantizing weight matrices can accumulate reconstruction error. Some structures of the network should be quantized last so the ï¬netuning process can better adjust the centroids. We ï¬nd that there are small variations in performance based on quantization order (see Figure 6). We choose to quantize FFN, then embeddings, and ï¬nally the attention matrices in Transformer networks.
Which structures can be compressed the most? Finally, we analyze which network structures can be most compressed. During quantization, various matrix block sizes can be chosen as a parameter â the larger the block size, the more compression, but also the larger the potential reduction of performance. Thus, it is important to understand how much each network structure can be compressed to reduce the memory footprint of the ï¬nal model as much as possible. In Figure 6, we quantize two model structures with a ï¬xed block size and vary the block size of the third between 4 and 32. As shown, the FFN and embedding structures are more robust to aggressive compression, while the attention drastically loses performance as larger block sizes are used.
# 7.11.5 APPROACH TO I N TN SCALAR QUANTIZATION
We compare quantizing per-channel to using a histogram quantizer in Table 10. The histogram quantizer maintains a running min/max and minimizes L2 distance between quantized and non- quantized values to ï¬nd the optimal min/max. Quantizing per channel learns scales and offsets as vectors along the channel dimension, which provides more ï¬exibility since scales and offsets can be different.
18
Published as a conference paper at ICLR 2021
Model Depth Model Width w Ss Valid Perplexity vv a Valid Perplexity iS) S a 4 8 10 14 16 1024 2048 3072 ©4096 Number of Layers FEN Size te Valid PPL -O Quant Valid PPL
Figure 5: (a) Effect of Initial Model Size for more shallow models (b) Effect of Initial Model Size more skinny models
Order of Quantization Quantization Block Size 25.0 24.0 2 2 % 22.5 % 23.5 a a 5 20.0 5 23.0 z z a 175 ms 22.5 > > 15.0 22.0 attn att embemb ffn ffn 4 8 16 32 emb ffn att ffn att emb Block Size ffn emb ffn att emb att > Attn -OEmb ~~ FFN
Figure 6: Effect of Quantization on Model Structures. Results are shown on the validation set of Wikitext-103. (a) Quantizing Attention, FFN, and Embeddings in different order. (b) More Extreme compression of different structures.
# 7.11.6 LAYERDROP WITH STE
For quantization noise, we apply the straight through estimator (STE) to remaining weights in the backward pass. We experiment with applying STE to the backward pass of LayerDropâs pruning noise. Results are shown in Table 11 and ï¬nd slightly worse results.
19
Published as a conference paper at ICLR 2021
Quantization Scheme Language Modeling 16-layer Transformer Wikitext-103 Image Classiï¬cation Efï¬cientNet-B3 ImageNet-1K Size Compress Test PPL Size Compress Top-1 Acc. Uncompressed model 942 Ã1 18.3 46.7 Ã1 81.5 Int4 Quant Histogram + Quant-Noise 118 118 Ã8 Ã8 39.4 21.8 5.8 5.8 Ã8 Ã8 45.3 67.8 Int4 Quant Channel + Quant-Noise 118 118 Ã8 Ã8 21.2 19.5 5.8 5.8 Ã8 Ã8 68.2 72.3 Int8 Quant Histogram + Quant-Noise 236 236 Ã4 Ã4 19.6 18.7 11.7 11.7 Ã4 Ã4 80.7 80.9 Int8 Quant Channel + Quant-Noise 236 236 Ã4 Ã4 18.5 18.3 11.7 11.7 Ã4 Ã4 81.1 81.2
Table 10: Comparison of different approaches to int4 and int8 with and without Quant- Noise on language modeling and image classiï¬cation. For language modeling, we train a Transformer on the Wikitext-103 benchmark. We report perplexity (PPL) on the test set. For image classiï¬cation, we train a Efï¬cientNet-B3 on the ImageNet-1K benchmark. We report top-1 accuracy on the validation set. For both setting, we also report model size in megabyte (MB) and the compression ratio compared to the original model.
Model MB PPL Quant-Noise + Share + Prune Quant-Noise + Share + Prune with STE 10 10 24.2 24.5
Table 11: Performance on Wikitext-103 when using STE in the backward pass of the Layer- Drop pruning noise.
20 | {
"id": "1810.04805"
} |
2004.06100 | Pretrained Transformers Improve Out-of-Distribution Robustness | Although pretrained Transformers such as BERT achieve high accuracy on
in-distribution examples, do they generalize to new distributions? We
systematically measure out-of-distribution (OOD) generalization for seven NLP
datasets by constructing a new robustness benchmark with realistic distribution
shifts. We measure the generalization of previous models including bag-of-words
models, ConvNets, and LSTMs, and we show that pretrained Transformers'
performance declines are substantially smaller. Pretrained transformers are
also more effective at detecting anomalous or OOD examples, while many previous
models are frequently worse than chance. We examine which factors affect
robustness, finding that larger models are not necessarily more robust,
distillation can be harmful, and more diverse pretraining data can enhance
robustness. Finally, we show where future work can improve OOD robustness. | http://arxiv.org/pdf/2004.06100 | Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, Dawn Song | cs.CL, cs.LG | ACL 2020 | null | cs.CL | 20200413 | 20200416 | 0 2 0 2
r p A 6 1 ] L C . s c [
2 v 0 0 1 6 0 . 4 0 0 2 : v i X r a
# Pretrained Transformers Improve Out-of-Distribution Robustness
Dan Hendrycks1â Adam Dziedzic3 Xiaoyuan Liu1,2â Rishabh Krishnan1 Eric Wallace1 Dawn Song1 1UC Berkeley 2Shanghai Jiao Tong University 3University of Chicago {hendrycks,ericwallace,dawnsong}@berkeley.edu
# Abstract
Although such as BERT achieve high accuracy on in- distribution examples, do they generalize to new distributions? We systematically measure out-of-distribution (OOD) generalization for seven NLP datasets by constructing a new robustness benchmark with realistic distribu- tion shifts. We measure the generalization of previous models including bag-of-words models, ConvNets, and LSTMs, and we show that pretrained Transformersâ performance declines are substantially smaller. Pretrained transformers are also more effective at de- tecting anomalous or OOD examples, while many previous models are frequently worse than chance. We examine which factors affect robustness, ï¬nding that larger models are not necessarily more robust, distillation can be harmful, and more diverse pretraining data can enhance robustness. Finally, we show where future work can improve OOD robustness.
dependent and identically distributed (IID). In the IID setting, large pretrained Transformer models can attain near human-level performance on nu- merous tasks (Wang et al., 2019). However, high IID accuracy does not necessarily translate to OOD robustness for image classiï¬ers (Hendrycks and Di- etterich, 2019), and pretrained Transformers may embody this same fragility. Moreover, pretrained Transformers can rely heavily on spurious cues and annotation artifacts (Cai et al., 2017; Gururangan et al., 2018) which out-of-distribution examples are less likely to include, so their OOD robustness remains uncertain.
In this work, we systematically study the OOD robustness of various NLP models, such as word embeddings averages, LSTMs, pretrained Trans- formers, and more. We decompose OOD robust- ness into a modelâs ability to (1) generalize and to (2) detect OOD examples (Card et al., 2018).
# Introduction
The train and test distributions are often not iden- tically distributed. Such train-test mismatches occur because evaluation datasets rarely charac- terize the entire distribution (Torralba and Efros, 2011), and the test distribution typically drifts over time (Quionero-Candela et al., 2009). Chasing an evolving data distribution is costly, and even if the training data does not become stale, models will still encounter unexpected situations at test time. Accordingly, models must generalize to OOD ex- amples whenever possible, and when OOD exam- ples do not belong to any known class, models must detect them in order to abstain or trigger a conservative fallback policy (Emmott et al., 2015). Most evaluation in natural language processing (NLP) assumes the train and test examples are in-
To measure OOD generalization, we create a new evaluation benchmark that tests robustness to shifts in writing style, topic, and vocabulary, and spans the tasks of sentiment analysis, textual entail- ment, question answering, and semantic similarity. We create OOD test sets by splitting datasets with their metadata or by pairing similar datasets to- gether (Section 2). Using our OOD generalization benchmark, we show that pretrained Transformers are considerably more robust to OOD examples than traditional NLP models (Section 3). We show that the performance of an LSTM semantic similar- ity model declines by over 35% on OOD examples, while a RoBERTa modelâs performance slightly increases. Moreover, we demonstrate that while pretraining larger models does not seem to improve OOD generalization, pretraining models on diverse data does improve OOD generalization.
# â Equal contribution.
https://github.com/camelop/NLP-Robustness
To measure OOD detection performance, we turn classiï¬ers into anomaly detectors by using their prediction conï¬dences as anomaly scores
(Hendrycks and Gimpel, 2017). We show that many non-pretrained NLP models are often near or worse than random chance at OOD detection. In contrast, pretrained Transformers are far more capable at OOD detection. Overall, our results highlight that while there is room for future robustness improvements, pretrained Transformers are already moderately robust.
# 2 How We Test Robustness
# 2.1 Train and Test Datasets
We evaluate OOD generalization with seven care- fully selected datasets. Each dataset either (1) con- tains metadata which allows us to naturally split the samples or (2) can be paired with a similar dataset from a distinct data generating process. By splitting or grouping our chosen datasets, we can induce a distribution shift and measure OOD generalization. We utilize four sentiment analysis datasets: ⢠We use SST-2, which contains pithy expert movie reviews (Socher et al., 2013), and IMDb (Maas et al., 2011), which contains full- length lay movie reviews. We train on one dataset and evaluate on the other dataset, and vice versa. Models predict a movie reviewâs binary sentiment, and we report accuracy.
⢠The Yelp Review Dataset contains restaurant reviews with detailed metadata (e.g., user ID, restaurant name). We carve out four groups from the dataset based on food type: American, Chi- nese, Italian, and Japanese. Models predict a restaurant reviewâs binary sentiment, and we re- port accuracy.
⢠The Amazon Review Dataset contains product reviews from Amazon (McAuley et al., 2015; He and McAuley, 2016). We split the data into ï¬ve categories of clothing (Clothes, Women Cloth- ing, Men Clothing, Baby Clothing, Shoes) and two categories of entertainment products (Music, Movies). We sample 50,000 reviews for each category. Models predict a reviewâs 1 to 5 star rating, and we report accuracy.
We also utilize these datasets for semantic similar- ity, reading comprehension, and textual entailment: ⢠STS-B requires predicting the semantic simi- larity between pairs of sentences (Cer et al., 2017). The dataset contains text of different genres and sources; we use four sources from two genres: MSRpar (news), Headlines (news); MSRvid (captions), Images (captions). The eval- uation metric is Pearsonâs correlation coefï¬cient.
⢠ReCoRD is a reading comprehension dataset using paragraphs from CNN and Daily Mail news articles and automatically generated ques- tions (Zhang et al., 2018). We bifurcate the dataset into CNN and Daily Mail splits and eval- uate using exact match.
⢠MNLI is a textual entailment dataset using sentence pairs drawn from different genres of text (Williams et al., 2018). We select examples from two genres of transcribed text (Telephone and Face-to-Face) and one genre of written text (Letters), and we report classiï¬cation accuracy.
# 2.2 Embedding and Model Types
We evaluate NLP models with different input rep- resentations and encoders. We investigate three model categories with a total of thirteen models.
Bag-of-words (BoW) Model. We use a bag-of- words model (Harris, 1954), which is high-bias but low-variance, so it may exhibit performance sta- bility. The BoW model is only used for sentiment analysis and STS-B due to its low performance on the other tasks. For STS-B, we use the cosine sim- ilarity of the BoW representations from the two input sentences.
Embedding Models. We Word use word2vec (Mikolov et al., 2013) and GloVe (Pen- nington et al., 2014) word embeddings. These embeddings are encoded with one of three models: word averages (Wieting et al., 2016), LSTMs (Hochreiter and Schmidhuber, 1997), and Convolutional Neural Networks (ConvNets). For classiï¬cation tasks, the representation from the encoder is fed into an MLP. For STS-B and MNLI, we use the cosine similarity of the encoded representations from the two input sentences. For reading comprehension, we use the DocQA model (Clark and Gardner, 2018) with GloVe embeddings. We implement our models in AllenNLP (Gardner et al., 2018) and tune the hyperparameters to maximize validation performance on the IID task.
Pretrained Transformers. We investigate BERT-based models (Devlin et al., 2019) which are pretrained bidirectional Transformers (Vaswani et al., 2017) with GELU (Hendrycks and Gimpel, 2016) activations. In addition to using BERT Base and BERT Large, we also use the large version of RoBERTa (Liu et al., 2019b), which than BERT. is pretrained on a larger dataset
Semantic Textual Similarity (STS-B) Generalization
â@mâ¢_ 11D Data (Images) imâ¢@m_ OOD Data (MSRvid) Pearson Correlation (%) Avg. Avg. ConvNet LSTM BoW w2v w2v â wav BERT Base BERT RoBERTa Large
Figure 1: Pretrained Transformers often have smaller IID/OOD generalization gaps than previous models.
We use ALBERT (Lan et al., 2020) and also a distilled version of BERT, DistilBERT (Sanh et al., 2019). We follow the standard BERT ï¬ne-tuning procedure (Devlin et al., 2019) and lightly tune the hyperparameters for our tasks. We perform our experiments using the HuggingFace Transformers library (Wolf et al., 2019).
# 3 Out-of-Distribution Generalization
In this section, we evaluate OOD generalization of numerous NLP models on seven datasets and provide some upshots. A subset of results are in Figures 1 and 2. Full results are in Appendix A.
Pretrained Transformers are More Robust. In our experiments, pretrained Transformers often have smaller generalization gaps from IID data to OOD data than traditional NLP models. For instance, Figure 1 shows that the LSTM model declined by over 35%, while RoBERTaâs general- ization performance in fact increases. For Amazon, MNLI, and Yelp, we ï¬nd that pretrained Trans- formersâ accuracy only slightly ï¬uctuates on OOD examples. Partial MNLI results are in Table 1. We present the full results for these three tasks in Ap- pendix A.2. In short, pretrained Transformers can generalize across a variety of distribution shifts.
Model Telephone BERT (IID) 81.4% Letters (OOD) 82.3% Face-to-Face (OOD) 80.8%
Table 1: Accuracy of a BERT Base MNLI model trained on Telephone data and tested on three different distributions. Accuracy only slightly ï¬uctuates.
Bigger Models Are Not Always Better. While larger models reduce the IID/OOD generaliza- tion gap in computer vision (Hendrycks and Di- etterich, 2019; Xie and Yuille, 2020; Hendrycks et al., 2019d), we ï¬nd the same does not hold in
IMDb Sentiment Classifier Generalization ° l@m 11D Data (IMDb) @m OOD Data (SST-2) 90 80 Accuracy (%) 70 60 ConvNet LSTM w2v w2v BERT Base BERT RoBERTa Large Avg. Avg. BoW = w2v ReCoRD Reading Comprehension Generalization ° Timm 11D Data (CNN) [Emm OOD Data (Daily Mail) Ss & Exact Match (%) 8 8
DocQA DistiIBERT BERT Base BERT Large RoBERTa
Figure 2: Generalization results for sentiment analysis and reading comprehension. While IID accuracy does not vary much for IMDb sentiment analysis, OOD ac- curacy does. Here pretrained Transformers do best.
SST-2 Model Size vs. Accuracy Drop
Bb ° SST-2 Accuracy - IMDb Accuracy (%) fo} N + 1 A & Ca & a Ca a & ee & rs é KS a eK ST KF EK SS Q N; N2 3) & e wy Y
Figure 3: The IID/OOD generalization gap is not im- proved with larger models, unlike in computer vision.
NLP. Figure 3 shows that larger BERT and AL- BERT models do not reduce the generalization gap. However, in keeping with results from vi- sion (Hendrycks and Dietterich, 2019), we ï¬nd that model distillation can reduce robustness, as evident in our DistilBERT results in Figure 2. This high- lights that testing model compression methods for BERT (Shen et al., 2020; Ganesh et al., 2020; Li et al., 2020) on only in-distribution examples gives a limited account of model generalization, and such narrow evaluation may mask downstream costs.
Detecting OOD Examples for an SST-2 Sentiment Classifier
False Alarm Rate (%) (Lower Is Better) 100 7â _LuoLLWW_______â 80 60 40 20 0 Model Type Random Detector Bag of Words Avg. word2vec LSTM word2vec ConvNet word2vec BERT Large 20 NG Multi30K SNLI WMT16 Average
Figure 4: We feed in OOD examples from out-of-distribution datasets (20 Newsgroups, Multi30K, etc.) to SST-2 sentiment classiï¬ers and report the False Alarm Rate at 95% Recall. A lower False Alarm Rate is better. Classiï¬ers are repurposed as anomaly detectors by using their negative maximum softmax probability as the anomaly scoreâ OOD examples should be predicted with less conï¬dence than IID examples. Models such as BoW, word2vec averages, and LSTMs are near random chance; that is, previous NLP models are frequently more conï¬dent when classifying OOD examples than when classifying IID test examples.
More Diverse Data Improves Generalization. Similar to computer vision (Orhan, 2019; Xie et al., 2020; Hendrycks et al., 2019a), pretraining on larger and more diverse datasets can improve ro- bustness. RoBERTa exhibits greater robustness than BERT Large, where one of the largest differ- ences between these two models is that RoBERTa pretrains on more data. See Figure 2âs results.
# 4 Out-of-Distribution Detection
Since OOD robustness requires evaluating both OOD generalization and OOD detection, we now turn to the latter. Without access to an outlier dataset (Hendrycks et al., 2019b), the state-of- the-art OOD detection technique is to use the modelâs prediction conï¬dence to separate in- and out-of-distribution examples (Hendrycks and Gim- pel, 2017). Speciï¬cally, we assign an example x the anomaly score â maxy p(y | x), the negative prediction conï¬dence, to perform OOD detection. We train models on SST-2, record the modelâs conï¬dence values on SST-2 test examples, and then record the modelâs conï¬dence values on OOD examples from ï¬ve other datasets. For our OOD examples, we use validation examples from 20 Newsgroups (20 NG) (Lang, 1995), the En- glish source side of English-German WMT16 and English-German Multi30K (Elliott et al., 2016), and concatenations of the premise and hypothesis for RTE (Dagan et al., 2005) and SNLI (Bowman et al., 2015). These examples are only used during OOD evaluation not training.
For evaluation, we follow past work (Hendrycks et al., 2019b) and report the False Alarm Rate at 95% Recall (FAR95). The FAR95 is the probability
that an in-distribution example raises a false alarm, assuming that 95% of all out-of-distribution exam- ples are detected. Hence a lower FAR95 is better. Partial results are in Figure 4, and full results are in Appendix A.3.
Previous Models Struggle at OOD Detection. Models without pretraining (e.g., BoW, LSTM word2vec) are often unable to reliably detect OOD examples. In particular, these modelsâ FAR95 scores are sometimes worse than chance because the models often assign a higher probability to out-of-distribution examples than in-distribution examples. The models particularly struggle on 20 Newsgroups (which contains text on diverse topics including computer hardware, motorcycles, space), as their false alarm rates are approximately 100%.
Pretrained Transformers Are Better Detectors. In contrast, pretrained Transformer models are bet- ter OOD detectors. Their FAR95 scores are always better than chance. Their superior detection perfor- mance is not solely because the underlying model is a language model, as prior work (Hendrycks et al., 2019b) shows that language models are not necessarily adept at OOD detection. Also note that in OOD detection for computer vision, higher accuracy does not reliably improve OOD detec- tion (Lee et al., 2018), so pretrained Transformersâ OOD detection performance is not anticipated. De- spite their relatively low FAR95 scores, pretrained Transformers still do not cleanly separate in- and out-of-distribution examples (Figure 5). OOD de- tection using pretrained Transformers is still far from perfect, and future work can aim towards cre- ating better methods for OOD detection.
SST Classifier Confidence Distribution
lam SST (IID) Mmm WMT16 (OOD) Frequency 0.5 0.6 0.7 0.8 0.9 1.0 Maximum Softmax Probability (Confidence)
Figure 5: The conï¬dence distribution for a RoBERTa SST-2 classiï¬er on examples from the SST-2 test set and the English side of WMT16 English-German. The WMT16 histogram is translucent and overlays the SST histogram. The minimum prediction conï¬dence is 0.5. Although RoBERTa is better than previous models at OOD detection, there is clearly room for future work.
# 5 Discussion and Related Work
Why Are Pretrained Models More Robust? An interesting area for future work is to analyze why pretrained Transformers are more robust. A ï¬awed explanation is that pretrained models are simply more accurate. However, this work and past work show that increases in accuracy do not directly translate to reduced IID/OOD generalization gaps (Hendrycks and Dietterich, 2019; Fried et al., 2019). One partial explanation is that Transformer models are pretrained on diverse data, and in computer vision, dataset diversity can improve OOD generalization (Hendrycks et al., 2020) and OOD detection (Hendrycks et al., 2019b). Similarly, Transformer models are pretrained with large amounts of data, which may also aid robustness (Orhan, 2019; Xie et al., 2020; Hendrycks et al., 2019a). However, this is not a complete explanation as BERT is pretrained on roughly 3 billion tokens, while GloVe is trained on roughly 840 billion tokens. Another partial explanation may lie in self-supervised training itself. Hendrycks et al. (2019c) show that com- puter vision models trained with self-supervised objectives exhibit better OOD generalization and far better OOD detection performance. Future work could propose new self-supervised objectives that enhance model robustness.
Domain Adaptation. Other research on robust- ness considers the separate problem of domain adaptation (Blitzer et al., 2007; Daum´e III, 2007), where models must learn representations of a source and target distribution. We focus on testing generalization without adaptation in order to bench- mark robustness to unforeseen distribution shifts. Unlike Fisch et al. (2019); Yogatama et al. (2019), we measure OOD generalization by considering simple and natural distribution shifts, and we also evaluate more than question answering.
Adversarial Examples. Adversarial examples can be created for NLP models by inserting phrases (Jia and Liang, 2017; Wallace et al., 2019), paraphrasing questions (Ribeiro et al., 2018), and reducing inputs (Feng et al., 2018). However, ad- versarial examples are often disconnected from real-world performance concerns (Gilmer et al., 2018). Thus, we focus on an experimental setting that is more realistic. While previous works show that, for all NLP models, there exist adversarial examples, we show that all models are not equally fragile. Rather, pretrained Transformers are overall far more robust than previous models.
Counteracting Annotation Artifacts. Annota- tors can accidentally leave unintended shortcuts in datasets that allow models to achieve high ac- curacy by effectively âcheatingâ (Cai et al., 2017; Gururangan et al., 2018; Min et al., 2019). These annotation artifacts are one reason for OOD brit- tleness: OOD examples are unlikely to contain the same spurious patterns as in-distribution examples. OOD robustness benchmarks like ours can stress test a modelâs dependence on artifacts (Liu et al., 2019a; Feng et al., 2019; Naik et al., 2018).
# 6 Conclusion
We created an expansive benchmark across several NLP tasks to evaluate out-of-distribution robust- ness. To accomplish this, we carefully restructured and matched previous datasets to induce numerous realistic distribution shifts. We ï¬rst showed that pretrained Transformers generalize to OOD ex- amples far better than previous models, so that the IID/OOD generalization gap is often markedly re- duced. We then showed that pretrained Transform- ers detect OOD examples surprisingly well. Over- all, our extensive evaluation shows that while pre- trained Transformers are moderately robust, there remains room for future research on robustness.
# Acknowledgements
We thank the members of Berkeley NLP, Sona Jeswani, Suchin Gururangan, Nelson Liu, Shi Feng, the anonymous reviewers, and especially Jon Cai. This material is in part based upon work supported by the National Science Foundation Frontier Award 1804794. Any opinions, ï¬ndings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reï¬ect the views of the National Science Foundation.
# References
John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classiï¬cation. In ACL.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In EMNLP.
Zheng Cai, Lifu Tu, and Kevin Gimpel. 2017. Pay at- tention to the ending: Strong neural baselines for the roc story cloze task. In ACL.
Dallas Card, Michael Zhang, and Noah A. Smith. 2018. Deep weighted averaging classiï¬ers. In FAT.
Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 Task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation. In SemEval.
Christopher Clark and Matt Gardner. 2018. Simple and effective multi-paragraph reading comprehen- sion. In ACL.
Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges Work- shop.
Hal Daum´e III. 2007. Frustratingly easy domain adap- tation. In ACL.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT.
Desmond Elliott, Stella Frank, Khalil Simaâan, and Lu- cia Specia. 2016. Multi30k: Multilingual english- german image descriptions. In ACL.
Andrew Emmott, Shubhomoy Das, Thomas G. Diet- terich, Alan Fern, and Weng-Keen Wong. 2015. A meta-analysis of the anomaly detection problem.
Shi Feng, Eric Wallace, and Jordan Boyd-Graber. 2019. Misleading failures of partial-input baselines. In ACL.
Shi Feng, Eric Wallace, II Grissom, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Patholo- gies of neural models make interpretations difï¬cult. In EMNLP.
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eu- nsol Choi, and Danqi Chen. 2019. Proceedings of the 2nd workshop on machine reading for question answering. In MRQA Workshop.
Daniel Fried, Nikita Kitaev, and Dan Klein. 2019. Cross-domain generalization of neural constituency parsers. In ACL.
Prakhar Ganesh, Yao Chen, Xin Lou, Mohammad Ali Khan, Yin Yang, Deming Chen, Marianne Winslett, Hassan Sajjad, and Preslav Nakov. 2020. Compress- ing large-scale transformer-based models: A case study on BERT. ArXiv, abs/2002.11985.
Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2018. AllenNLP: a deep semantic natural language In Workshop for NLP Open processing platform. Source Software.
Justin Gilmer, Ryan P. Adams, Ian J. Goodfellow, David Andersen, and George E. Dahl. 2018. Moti- vating the rules of the game for adversarial example research. ArXiv, abs/1807.06732.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural lan- guage inference data. In NAACL-HLT.
Zellig S Harris. 1954. Distributional structure. Word.
Ruining He and Julian J. McAuley. 2016. Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative ï¬ltering. In WWW.
Dan Hendrycks and Thomas Dietterich. 2019. Bench- marking neural network robustness to common cor- ruptions and perturbations. In ICLR.
Dan Hendrycks and Kevin Gimpel. 2016. Gaus- arXiv preprint sian error linear units (GELUs). arXiv:1606.08415.
Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassiï¬ed and out-of-distribution examples in neural networks. In ICLR.
Dan Hendrycks, Kimin Lee, and Mantas Mazeika. 2019a. Using pre-training can improve model ro- bustness and uncertainty. ICML.
Dan Hendrycks, Mantas Mazeika, and Thomas G. Diet- terich. 2019b. Deep anomaly detection with outlier exposure. ICLR.
Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, and Dawn Song. 2019c. Using self-supervised learn- ing can improve model robustness and uncertainty. In NeurIPS.
Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. 2020. AugMix: A simple data processing method to improve robustness and uncertainty. ICLR.
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song. 2019d. Natural adver- sarial examples. ArXiv, abs/1907.07174.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. In Neural Computation.
Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In EMNLP.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: a lite BERT for self-supervised learning of language representations. In ICLR.
Ken Lang. 1995. NewsWeeder: Learning to ï¬lter Net- news. In ICML.
Kimin Lee, Honglak Lee, Kibok Lee, and Jinwoo Shin. 2018. Training conï¬dence-calibrated classiï¬ers for detecting out-of-distribution samples. In ICLR.
Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joseph E Gonzalez. 2020. Train large, then compress: Rethinking model size for efï¬cient training and inference of transform- ers. ArXiv, abs/2002.11794.
Nelson F Liu, Roy Schwartz, and Noah A Smith. 2019a. Inoculation by ï¬ne-tuning: A method for analyzing challenge datasets. In NAACL.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL.
Julian J. McAuley, Christopher Targett, Qinfeng Shi, and Anton van den Hengel. 2015. Image-based rec- ommendations on styles and substitutes. In SIGIR.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS.
Sewon Min, Eric Wallace, Sameer Singh, Matt Gard- ner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In ACL.
Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In COLING.
A. Emin Orhan. 2019. Robustness properties ArXiv, of facebookâs ResNeXt WSL models. abs/1907.07640.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word rep- resentation. In EMNLP.
Joaquin Quionero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D. Lawrence. 2009. Dataset shift in machine learning.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In ACL.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled ver- sion of bert: smaller, faster, cheaper and lighter. In NeurIPS EMC2 Workshop.
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. 2020. Q-BERT: Hessian based ultra low precision quantization of BERT.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In EMNLP.
Antonio Torralba and Alexei A. Efros. 2011. Unbiased look at dataset bias. CVPR.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NIPS.
Eric Wallace, Shi Feng, Nikhil Kandpal, Matthew Gardner, and Sameer Singh. 2019. Universal adver- sarial triggers for attacking and analyzing NLP. In EMNLP.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multitask benchmark and analysis plat- form for natural language understanding. In ICLR.
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sen- tence embeddings. In ICLR.
Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R´emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. HuggingFaceâs Trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.
Cihang Xie and Alan L. Yuille. 2020. Intriguing prop- erties of adversarial training at scale. In ICLR.
Qizhe Xie, Eduard H. Hovy, Minh-Thang Luong, and Quoc V. Le. 2020. Self-training with noisy student improves imagenet classiï¬cation. In CVPR.
Dani Yogatama, Cyprien de Masson dâAutume, Jerome Connor, Tom´as Kocisk´y, Mike Chrzanowski, Ling- peng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence. ArXiv, abs/1901.11373.
Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: bridging the gap between human and ma- chine commonsense reading comprehension. arXiv, abs/1810.12885.
# A Additional Experimental Results
# A.1 Signiï¬cant OOD Accuracy Drops
For STS-B, ReCoRD, and SST-2/IMDb, there is a noticeable drop in accuracy when testing on OOD examples. We show the STS-B results in Table 2, the ReCoRD results in Table 3, and the SST-2/IMDb results in Table 4.
# A.2 Minor OOD Accuracy Drops
We observe more minor performance declines for the Amazon, MNLI, and Yelp datasets. Figure 6 shows the Amazon results for BERT Base, Table 5 shows the MNLI results, and Table 6 shows the Yelp results.
# Generalization of BERT Base on Amazon Product Reviews
Clothes (C)- 52 | 62 | 58 | 54 | 53 Women'sC- 61 | 54 | 53 | 52. 51 Men'sC- 60 | 55 | 52 53 | 53 Baby'sC- 52 | 54 | 50 55 | 52 Train Dataset Shoes -~ Music- 48 50 | 48 | 49 Movies- 47. 49 48 50 ¢ wc mc Bc Ss MS mv Test Dataset
Figure 6: We ï¬netune BERT Base on one category of Amazon reviews and then evaluate it on other cat- egories. Models predict the reviewâs star rating with 5-way classiï¬cation. We use ï¬ve clothing categories: Clothes (C), Womenâs Clothing (WC), Menâs Clothing (MC), Baby Clothing (BC), and Shoes (S); and two entertainment categories: Music (MS), Movies (MV). BERT is robust for closely related categories such as menâs, womenâs, and baby clothing. However, BERT struggles when there is an extreme distribution shift such as Baby Clothing to Music (dark blue region). Note this shift is closer to a domain adaptation setting.
# A.3 OOD Detection
Full FAR95 values are in Table 7. We also report the Area Under the Receiver Operating Charac- teristic (AUROC) (Hendrycks and Gimpel, 2017). The AUROC is the probability that an OOD ex- ample receives a higher anomaly score than an
in-distribution example, viz.,
P(â max y p(y | xout) > â max y p(y | xin)).
A ï¬awless AUROC is 100% while 50% is random chance. These results are in Figure 7 and Table 8.
Train Images MSRvid Headlines MSRpar RoBERTa Test BoW BERT Large 92.8 90.5 (-2.3) 93.9 86.8 (-7.1) 88.3 Average word2vec 61.4 11.3 (-50.1) 38.3 (-37.4) 62.0 (-19.8) 6.1 (-55.2) 68.7 BERT Base 91.8 ConvNet GloVe 81.8 LSTM word2vec 75.7 LSTM GloVe 79.8 43.1 (-36.7) 57.8 (-24.1) 89.5 (-2.3) 85.6 Average GloVe 61.2 ConvNet word2vec 81.8 94.2 94.3 (0.1) 94.9 90.4 (-4.6) 91.3 Images MSRvid MSRvid Images Headlines 26.8 MSRpar MSRpar Headlines -9.7 (-56.7) 39.7 4.4 (-35.4) 60.7 19.3 (-41.4) 23.7 (-44.9) 45.6 (-40.2) 54.3 (-30.7) 11.1 (-55.7) 49.0 (-36.6) 51.9 (-35.4) 85.8 (-6.6) 85.0 92.4 87.4 66.8 85.9 66.2 -1.9 (-68.1) 46.7 67.4 9.8 (-57.6) 49.8 69.6 58.9 87.0 53.4 25.9 (-27.5) 25.4 (-44.5) 10.9 (-58.7) 69.9 (-17.1) 63.6 (-24.7) 75.5 (-15.8) 50.9 69.9 10.1 (-16.7) 19.1 (-39.7) 47.0 46.7 15.6 (-31.1) 30.6 (-15.6) 73.0 (-5.8) 46.2 86.8 83.9 (-2.9) 78.8 81.6 71.7 (-9.9) 27.0 12.7 (-14.4) 10.3 (-36.5) 23.7 (-26.1) 7.0 (-43.9)
Table 2: We train and test models on different STS-B distributions (Images, MSR videos, Headlines, and MSR paraphrase). The severe drop in the Pearson correlation coefï¬cient shows the consequence of a distribution shift. Models such as Average GloVe lose nearly all performance when out-of-distribution. RoBERTa does especially well in comparison to other models.
Test CNN DailyMail DailyMail CNN Document QA 39.0 29.7 (-9.3) 30.8 36.9 (6.2) DistilBERT 45.0 34.8 (-10.2) 36.7 43.9 (7.2) BERT Base 53.2 46.7 (-6.6) 48.2 51.8 (3.6) BERT Large 67.2 59.8 (-7.4) 61.2 65.5 (4.3) RoBERTa 71.5 72.2 (0.7) 73.0 73.0 (0.0)
Table 3: For ReCoRD, the exact match performance is closely tethered to the test dataset, which suggests a dif- ference in the difï¬culty of the two test sets. This gap can be bridged by larger Transformer models pretrained on more data.
Train SST IMDb Test SST IMDb IMDb SST BoW 80.6 73.9 (-6.8) 85.9 78.3 (-7.6) Average word2vec 81.4 76.4 (-5.0) 84.8 68.5 (-16.3) 63.7 (-26.3) 83.0 (-8.0) LSTM word2vec 87.5 78.0 (-9.5) 89.9 ConvNet word2vec 85.3 81.0 (-4.4) 91.0 Average GloVe 80.3 74.5 (-5.8) 83.5 77.5 (-6.1) LSTM GloVe 87.4 82.1 (-5.3) 91.3 79.9 (-11.4) 80.0 (-10.9) 87.6 (-4.3) ConvNet GloVe 84.8 81.0 (-3.8) 91.0 BERT Base 91.9 87.5 (-4.4) 91.8 BERT Large 93.6 88.3 (-5.3) 92.9 88.6 (-4.3) RoBERTa 95.6 92.8 (-2.8) 94.3 91.0 (-3.4)
Table 4: We train and test models on SST-2 and IMDB. Notice IID accuracy is not perfectly predictive of OOD accuracy, so increasing IID benchmark performance does not necessarily yield superior OOD generalization.
Train Telephone Test Telephone Letters Face-to-face DistilBERT 77.5 75.6 (-1.9) 76.0 (-1.4) BERT Base 81.4 82.3 (0.9) 80.8 (-0.7) BERT Large 84.0 85.1 (1.0) 83.2 (-0.8) RoBERTa 89.6 90.0 (0.4) 89.4 (-0.2)
Table 5: We train models on the MNLI Telephone dataset and test on the Telephone, Letters, and Face-to-face datasets. The difference in accuracies are quite small (and sometimes even positive) for all four models. This demonstrates that pretrained Transformers can withstand various types of shifts in the data distribution.
Train Test BoW AM CH IT JA AM 87.2 CH IT JA CH AM 82.2 (0.0) 84.6 (2.4) IT 83.8 (1.6) JA IT 87.2 AM 85.4 (-1.8) 79.6 (-7.6) CH 82.0 (-5.2) JA 85.0 JA AM 83.4 (-1.6) 81.6 (-3.4) CH 84.0 (-1.0) IT 82.4 (-4.8) 81.8 (-5.4) 84.2 (-3.0) 82.2 Average word2vec 85.6 80.4 (-5.2) 82.6 (-3.0) 86.0 (0.4) 84.4 85.4 (1.0) 82.0 (-2.4) 85.8 (1.4) 86.8 83.8 (-3.0) 81.6 (-5.2) 84.6 (-2.2) 87.6 85.0 (-2.6) 83.6 (-4.0) 83.6 (-4.0) LSTM word2vec 88.0 87.2 (-0.8) 86.4 (-1.6) 89.6 (1.6) 87.6 88.0 (0.4) 88.0 (0.4) 88.6 (1.0) 89.6 89.0 (-0.6) 83.8 (-5.8) 87.4 (-2.2) 89.0 87.8 (-1.2) 89.0 (0.0) 88.2 (-0.8) ConvNet word2vec 89.6 88.6 (-1.0) 89.4 (-0.2) 89.4 (-0.2) 88.8 89.2 (0.4) 89.6 (0.8) 89.0 (0.2) 90.8 90.2 (-0.6) 88.4 (-2.4) 88.6 (-2.2) 90.4 87.8 (-2.6) 89.0 (-1.4) 89.4 (-1.0) Average GloVe 85.0 75.1 (-9.9) 82.0 (-3.0) 79.2 (-5.8) 84.4 83.0 (-1.4) 84.6 (0.2) 86.8 (2.4) 86.2 85.6 (-0.6) 78.0 (-8.2) 85.0 (-1.2) 88.0 80.4 (-7.6) 80.6 (-7.4) 83.6 (-4.4) LSTM GloVe 88.0 88.4 (0.4) 89.2 (1.2) 87.8 (-0.2) 89.2 85.6 (-3.6) 88.6 (-0.6) 88.8 (-0.4) 89.6 89.0 (-0.6) 83.2 (-6.4) 86.8 (-2.8) 89.0 88.6 (-0.4) 87.4 (-1.6) 88.0 (-1.0) ConvNet GloVe 91.2 89.6 (-1.6) 89.6 (-1.6) 89.2 (-2.0) 89.0 90.2 (1.2) 90.4 (1.4) 89.6 (0.6) 90.8 90.2 (-0.6) 85.8 (-5.0) 89.4 (-1.4) 89.6 89.4 (-0.2) 89.2 (-0.4) 90.6 (1.0) DistilBERT BERT 90.0 91.8 (1.8) 92.6 (2.6) 92.0 (2.0) 90.2 90.6 (0.4) 91.4 (1.2) 91.6 (1.4) 92.4 90.4 (-2.0) 90.4 (-2.0) 91.8 (-0.6) 91.6 91.2 (-0.4) 92.8 (1.2) 92.6 (1.0) Base 90.8 91.0 (0.2) 91.6 (0.8) 92.0 (1.2) 90.4 88.8 (-1.6) 89.0 (-1.4) 89.4 (-1.0) 91.6 90.6 (-1.0) 89.6 (-2.0) 91.4 (-0.2) 92.2 90.4 (-1.8) 91.4 (-0.8) 90.2 (-2.0) BERT Large 91.0 90.6 (-0.4) 91.2 (0.2) 92.2 (1.2) 90.8 91.8 (1.0) 90.2 (-0.6) 91.6 (0.8) 91.8 89.4 (-2.4) 90.0 (-1.8) 91.2 (-0.6) 93.4 90.6 (-2.8) 90.8 (-2.6) 91.0 (-2.4) RoBERTa 93.0 90.8 (-2.2) 91.8 (-1.2) 93.4 (0.4) 92.4 92.4 (0.0) 92.6 (0.2) 92.2 (-0.2) 94.2 92.0 (-2.2) 92.4 (-1.8) 92.2 (-2.0) 92.6 91.0 (-1.6) 92.4 (-0.2) 92.6 (0.0)
Table 6: We train and test models on American (AM), Chinese (CH), Italian (IT), and Japanese (JA) restaurant reviews. The accuracy drop is smaller compared to SST-2/IMDb for most models and pretrained transformers are typically the most robust.
20 NG Multi30K RTE SNLI WMT16 Mean FAR95 BoW 100 61 100 81 100 88.4 Avg w2v 100 57 100 83 91 86.2 Avg GloVe 100 52 84 72 77 76.9 LSTM w2v 94 92 93 92 90 92.2 LSTM GloVe 90 85 88 82 82 85.4 ConvNet w2v 61 65 75 63 70 66.9 ConvNet GloVe 71 63 56 63 63 63.1 DistilBERT BERT Base 35 22 32 28 48 33.0 39 37 43 38 56 42.5 BERT Large 29 23 29 28 44 30.5 22 61 36 29 65 43.0
Table 7: Out-of-distribution detection FAR95 scores for various NLP models using the maximum softmax prob- ability anomaly score. Observe that while pretrained Transformers are consistently best, there remains room for improvement.
Detecting OOD Examples for an SST-2 Sentiment Classifier
did 20 NG Multi30K SNLI WMT16 Average Model Type Random Detector Bag of Words Avg. word2vec LSTM word2vec ConvNet word2vec BERT Large AUROC (%) (Higher Is Better)
Figure 7: We feed in OOD examples from out-of-distribution datasets (20 Newsgroups, Multi30K, etc.) to SST-2 sentiment classiï¬ers and report the AUROC detection performance. A 50% AUROC is the random chance level.
20 NG Multi30K RTE SNLI WMT16 Mean AUROC BoW 17 77 63 56 58 54.2 Avg w2v 19 75 47 58 60 51.8 Avg GloVe 30 80 72 71 69 64.5 LSTM w2v 44 55 36 53 58 49.3 LSTM GloVe 59 62 54 64 63 60.4 ConvNet w2v 74 71 61 72 69 69.5 ConvNet GloVe 64 73 77 74 74 72.5 DistilBERT BERT Base 83 93 89 92 85 88.1 82 86 83 86 80 83.1 BERT Large 87 91 89 90 85 88.4 90 89 90 92 83 88.7
Table 8: Out-of-distribution detection AUROC scores for various NLP models using the maximum softmax proba- bility anomaly score. An AUROC score of 50% is random chance, while 100% is perfect. | {
"id": "1606.08415"
} |
2004.06089 | Thinking While Moving: Deep Reinforcement Learning with Concurrent Control | We study reinforcement learning in settings where sampling an action from the
policy must be done concurrently with the time evolution of the controlled
system, such as when a robot must decide on the next action while still
performing the previous action. Much like a person or an animal, the robot must
think and move at the same time, deciding on its next action before the
previous one has completed. In order to develop an algorithmic framework for
such concurrent control problems, we start with a continuous-time formulation
of the Bellman equations, and then discretize them in a way that is aware of
system delays. We instantiate this new class of approximate dynamic programming
methods via a simple architectural extension to existing value-based deep
reinforcement learning algorithms. We evaluate our methods on simulated
benchmark tasks and a large-scale robotic grasping task where the robot must
"think while moving". | http://arxiv.org/pdf/2004.06089 | Ted Xiao, Eric Jang, Dmitry Kalashnikov, Sergey Levine, Julian Ibarz, Karol Hausman, Alexander Herzog | cs.LG, cs.AI, cs.RO, stat.ML, I.2.9 | Published as a conference paper at ICLR 2020 | null | cs.LG | 20200413 | 20200425 | 0 2 0 2
r p A 5 2 ] G L . s c [
4 v 9 8 0 6 0 . 4 0 0 2 : v i X r a
Published as a conference paper at ICLR 2020
THINKING WHILE MOVING: DEEP REINFORCEMENT LEARNING WITH CONCURRENT CONTROL
Ted Xiao1, Eric Jang1, Dmitry Kalashnikov1, Sergey Levine1,2, Julian Ibarz1, Karol Hausman1â, Alexander Herzog3â 1Google Brain, 2UC Berkeley, 3X {tedxiao, ejang, dkalashnikov, slevine, julianibarz, karolhausman}@google.com, [email protected]
# ABSTRACT
We study reinforcement learning in settings where sampling an action from the policy must be done concurrently with the time evolution of the controlled system, such as when a robot must decide on the next action while still performing the pre- vious action. Much like a person or an animal, the robot must think and move at the same time, deciding on its next action before the previous one has completed. In order to develop an algorithmic framework for such concurrent control prob- lems, we start with a continuous-time formulation of the Bellman equations, and then discretize them in a way that is aware of system delays. We instantiate this new class of approximate dynamic programming methods via a simple architec- tural extension to existing value-based deep reinforcement learning algorithms. We evaluate our methods on simulated benchmark tasks and a large-scale robotic grasping task where the robot must âthink while movingâ. Videos are available at https://sites.google.com/view/thinkingwhilemoving.
# INTRODUCTION
In recent years, Deep Reinforcement Learning (DRL) methods have achieved tremendous suc- cess on a variety of diverse environments, including video games (Mnih et al., 2015), zero-sum games (Silver et al., 2016), robotic grasping (Kalashnikov et al., 2018), and in-hand manipulation tasks (OpenAI et al., 2018). While impressive, all of these examples use a blocking observe-think- act paradigm: the agent assumes that the environment will remain static while it thinks, so that its actions will be executed on the same states from which they were computed. This assumption breaks in the concurrent real world, where the environment state evolves substantially as the agent processes observations and plans its next actions. As an example, consider a dynamic task such as catching a ball: it is not possible to pause the ball mid-air while waiting for the agent to decide on the next control to command. In addition to solving dynamic tasks where blocking models would fail, thinking and acting concurrently can provide beneï¬ts such as smoother, human-like motions and the ability to seamlessly plan for next actions while executing the current one. Despite these potential beneï¬ts, most DRL approaches are mainly evaluated in blocking simulation environments. Block- ing environments make the assumption that the environment state will not change between when the environment state is observed and when the action is executed. This assumption holds true in most simulated environments, which encompass popular domains such as Atari (Mnih et al., 2013) and Gym control benchmarks (Brockman et al., 2016). The system is treated in a sequential manner: the agent observes a state, freezes time while computing an action, and ï¬nally applies the action and unfreezes time. However, in dynamic real-time environments such as real-world robotics, the syn- chronous environment assumption is no longer valid. After observing the state of the environment and computing an action, the agent often ï¬nds that when it executes an action, the environment state has evolved from what it had initially observed; we consider this environment a concurrent environment.
In this paper, we introduce an algorithmic framework that can handle concurrent environments in the context of DRL. In particular, we derive a modiï¬ed Bellman operator for concurrent MDPs and
# âIndicates equal contribution.
1
Published as a conference paper at ICLR 2020
present the minimal set of information that we must augment state observations with in order to recover blocking performance with Q-learning. We introduce experiments on different simulated environments that incorporate concurrent actions, ranging from common simple control domains to vision-based robotic grasping tasks. Finally, we show an agent that acts concurrently in a real-world robotic grasping task is able to achieve comparable task success to a blocking baseline while acting 49% faster.
2 RELATED WORK
Minimizing Concurrent Effects Although real-world robotics systems are inherently concurrent, it is sometimes possible to engineer them into approximately blocking systems. For example, using low-latency hardware (Abbeel et al., 2006) and low-footprint controllers (Cruz et al., 2017) mini- mizes the time spent during state capture and policy inference. Another option is to design actions to be executed to completion via closed-loop feedback controllers and the system velocity is de- celerated to zero before a state is recorded (Kalashnikov et al., 2018). In contrast to these works, we tackle the concurrent action execution directly in the learning algorithm. Our approach can be applied to tasks where it is not possible to wait for the system to come to rest between deciding new actions.
Algorithmic Approaches Other works utilize algorithmic modiï¬cations to directly overcome the challenges of concurrent control. Previous work in this area can be grouped into ï¬ve approaches: (1) learning policies that are robust to variable latencies (Tan et al., 2018), (2) including past history such as frame-stacking (Haarnoja et al., 2018), (3) learning dynamics models to predict the future state at which the action will be executed (Firoiu et al., 2018; Amiranashvili et al., 2018), (4) using a time- delayed MDP framework (Walsh et al., 2007; Firoiu et al., 2018; Schuitema et al., 2010; Ramstedt & Pal, 2019), and (5) temporally-aware architectures such as Spiking Neural Networks (Vasilaki et al., 2009; Fr´emaux et al., 2013), point processes (Upadhyay et al., 2018; Li et al., 2018), and adaptive skip intervals (Neitz et al., 2018). In contrast to these works, our approach is able to (1) optimize for a speciï¬c latency regime as opposed to being robust to all of them, (2) consider the properties of the source of latency as opposed to forcing the network to learn them from high-dimensional inputs, (3) avoid learning explicit forward dynamics models in high-dimensional spaces, which can be costly and challenging, (4) consider environments where actions are interrupted as opposed to discrete- time time-delayed environments where multiple actions are queued and each action is executed until completion. A recent work Ramstedt & Pal (2019) extends 1-step constant delayed MDPs to actor-critic methods on high dimension image based tasks. The approaches in (5) show promise in enabling asynchronous agents, but are still active areas of research that have not yet been extended to high-dimensional, image-based robotic tasks.
Continuous-time Reinforcement Learning While previously mentioned related works largely operate in discrete-time environments, framing concurrent environments as continuous-time sys- tems is a natural framework to apply. In the realm of continuous-time optimal control, path integral solutions (Kappen, 2005; Theodorou et al., 2010) are linked to different noise levels in system dy- namics, which could potentially include latency that results in concurrent properties. Finite differ- ences can approximate the Bellman update in continuous-time stochastic control problems (Munos & Bourgine, 1998) and continuous-time temporal difference learning methods (Doya, 2000) can utilize neural networks as function approximators (Coulom, 2002). The effect of time-discretization (converting continuous-time environments to discrete-time environments) is studied in Tallec et al. (2019), where the advantage update is scaled by the time discretization parameter. While these approaches are promising, it is untested how these methods may apply to image-based DRL prob- lems. Nonetheless, we build on top of many of the theoretical formulations in these works, which motivate our applications of deep reinforcement learning methods to more complex, vision-based robotics tasks.
2
Published as a conference paper at ICLR 2020
3 VALUE-BASED REINFORCEMENT LEARNING IN CONCURRENT ENVIRONMENTS
In this section, we ï¬rst introduce the concept of concurrent environments, and then describe the preliminaries necessary for discrete- and continuous-time RL formulations. We then describe the MDP modiï¬cations sufï¬cient to represent concurrent actions and ï¬nally, present value-based RL algorithms that can cope with concurrent environments.
The main idea behind our method is simple and can be implemented using small modiï¬cations to standard value-based algorithms. It centers around adding additional information to the learning algorithm (in our case, adding extra information about the previous action to a Q-function) that allows it to cope with concurrent actions. Hereby, we provide theoretical justiï¬cation on why these modiï¬cations are necessary and we specify the details of the algorithm in Alg. 1.
While concurrent environments affect DRL methods beyond model-free value-based RL, we focus our scope on model-free value-based methods due to their attractive sample-efï¬ciency and off-policy properties for real-world vision-based robotic tasks.
3.1 CONCURRENT ACTION ENVIRONMENTS
In blocking environments (Figure 4a in the Appendix), actions are executed in a sequential blocking fashion that assumes the environment state does not change between when state is observed and when actions are executed. This can be understood as state capture and policy inference being viewed as instantaneous from the perspective of the agent. In contrast, concurrent environments (Figure 4b in the Appendix) do not assume a ï¬xed environment during state capture and policy inference, but instead allow the environment to evolve during these time segments.
3.2 DISCRETE-TIME REINFORCEMENT LEARNING PRELIMINARIES
We use standard reinforcement learning formulations in both discrete-time and continuous-time settings (Sutton & Barto} {1998}. In the discrete-time case, at each time step i, the agent receives state s; from a set of possible states S and selects an action a; from some set of possible actions A according to its policy 7, where 7 is a mapping from S to A. The environment returns the next state 8,41 sampled from a transition distribution p(s;41|s;,@;) and a reward r(s;,a;). The return for a given trajectory of states and actions is the total discounted return from time step 7 with discount factor y ⬠(0, 1]: Ri = D9 y*r(si+4, Gi+K)- The goal of the agent is to maximize the expected return from each state s;. The Q-function for a given stationary policy 7 gives the expected return when selecting action a at state s: Q"(s,a) = E[R;|s; = s,a; = a]. Similarly, the value function gives expected return from state s: V"(s) = E[R;|s; = s]. The default blocking environment formulation is detailed in Figure[Ih.
# 3.3 VALUE FUNCTIONS AND POLICIES IN CONTINUOUS TIME
For the continuous-time case, we start by formalizing a continuous-time MDP with the differential equation:
ds(t) = F (s(t), a(t))dt + G(s(t), a(t))dβ (1)
where S = Rd is a set of states, A is a set of actions, F : S à A â S and G : S à A â S describe the stochastic dynamics of the environment, and β is a Wiener process (Ross et al., 1996). In the continuous-time setting, ds(t) is analogous to the discrete-time p, deï¬ned in Section 3.2. Continuous-time functions s(t) and ai(t) specify the state and i-th action taken by the agent. The agent interacts with the environment through a state-dependent, deterministic policy function Ï and the return R of a trajectory Ï = (s(t), a(t)) is given by (Doya, 2000):
R(Ï ) = γtr(s(t), a(t))dt, t=0 (2)
3
Published as a conference paper at ICLR 2020
Discrete-Time MDP Continuous-Time MDP with Time-Delayed Actions tase. tas basiy ) a Sea IS;â1]) â>|s; > Si s(tâH + tasr) s(t + tas) NN a IO +H pa Qj a as Gane = El +e bag) â alag(E+ tas) - {4} } > iieerorine [> contnuoustine i-| i i+] tâH + tase qi t+tas t+H
Figure 1: Shaded nodes represent observed variables and unshaded nodes represent unobserved random variables. (a): In âblockingâ MDPs, the environment state does not change while the agent records the current state and selects an action. (b): In âconcurrentâ MDPs, state and action dynamics are continuous-time stochastic processes s(t) and a;(t). At time t, the agent observes the state of the world s(t), but by the time it selects an action a;(t + tag), the previous continuous-time action function a;_(t â H + t,gv) has ârolled overâ to an unobserved state s(t + tag). An agent that concurrently selects actions from old states while in motion may need to interrupt a previous action before it has finished executing its current trajectory.
which leads to a continuous-time value function (Tallec et al., 2019):
V"(s(t)) = Erna [R(7)|5(¢)] =E pny [. af r(s(t).a(t)a| ; =0 (3)
and similarly, a continuous Q-function:
Q" (s(t), a,t, H) = E, U=t+H ; [ of tr(s(t'), a(t'))dt! + V7 (s(t + ay), (4) =t
where H is the constant sampling period between state captures (i.e. the duration of an action trajectory) and a refers to the continuous action function that is applied between t and t + H. The expectations are computed with respect to stochastic process p deï¬ned in Eq. 1.
3.4 CONCURRENT ACTION MARKOV DECISION PROCESSES
We consider Markov Decision Processes (MDPs) with concurrent actions, where actions are not executed to full completion. More speciï¬cally, concurrent action environments capture system state while the previous action is still executed. After state capture, the policy selects an action that is executed in the environment regardless of whether the previous action has completed, as shown in Figure 4 in the Appendix. In the continuous-time MDP case, concurrent actions can be considered as horizontally translating the action along the time dimension (Walsh et al., 2007), and the effect of concurrent actions is illustrated in Figure 1b. Although we derive Bellman Equations for handling delays in both continuous and discrete-time RL, our experiments extend existing DRL implementa- tions that are based on discrete time.
3.5 VALUE-BASED CONCURRENT REINFORCEMENT LEARNING ALGORITHMS IN CONTINUOUS AND DISCRETE-TIME
We start our derivation from this continuous-time reinforcement learning standpoint, as it allows us to easily characterize the concurrent nature of the system. We then demonstrate that the conclusions drawn for the continuous case also apply to the more commonly-used discrete setting that we then use in all of our experiments.
Continuous Formulation In order to further analyze the concurrent setting, we introduce the following notation. As shown in Figure 1b, an agent selects N action trajectories during an episode, a1, ..., aN , where each ai(t) is a continuous function generating controls as a function of time t. Let tAS be the time duration of state capture, policy inference and any additional communication latencies. At time t, an agent begins computing the i-th trajectory ai(t) from state s(t), while
4
Published as a conference paper at ICLR 2020
concurrently executing the previous selected trajectory aiâ1(t) over the time interval (t â H + tAS, t + tAS). At time t + tAS, where t ⤠t + tAS ⤠t + H, the agent switches to executing actions from ai(t). The continuous-time Q-function for the concurrent case from Eq. 4 can be expressed as following:
t'=t+tas , Q" (s(t), ai-1, ai, t, H) = E, | 7 âro acalea '=t SS
# + Ep
Executing action trajectory a;_(t) until t + tas =t+H
| |
# 7 ârte
# t=t+tas
Executing action trajectory a;(t) until t + H
|
+E,[y"V"(st+H))] ~~ Value function at t + H
The ï¬rst two terms correspond to expected discounted returns for executing the action trajectory aiâ1(t) from time (t, t + tAS) and the trajectory ai(t) from time (t + tAS, t + tAS + H). We can obtain a single-sample Monte Carlo estimator ËQ by sampling random functions values p, which simply correspond to policy rollouts:
=tt+tas Q" (s(t), 51, a, t, H) = i yt tr(s(t!), axa (dt! + st U=t+H vias | yt tas r(s(t!), a;(t!))dt! + y4â-45V" (s(t + A) t/=t+tas (6)
Next, for the continuous-time case, let us deï¬ne a new concurrent Bellman backup operator:
U=at+tas T.Q(s(t), ajâ1,4;,t,tags) = | yt -tn(s(tâ), aj_1(tâ))dt!+ vst 7/48 maxE,Q"(s(t+tas),@i,dis1,t+tas,Hâtas). (7) itl
# T â c
In addition to expanding the Bellman operator to take into account concurrent actions, we demon- strate that this modiï¬ed operator maintain its contraction properties that are crucial for Q-learning convergence. Lemma 3.1. The concurrent continuous-time Bellman operator is a contraction.
Proof. See Appendix A.2.
Discrete Formulation In order to simplify the notation for the discrete-time case where the dis- tinction between the action function ai(t) and the value of that function at time step t, ai(t), is not necessary, we refer to the current state, current action, and previous action as st, at, atâ1 respec- tively, replacing subindex i with t. Following this notation, we deï¬ne the concurrent Q-function for the discrete-time case:
# QÏ(st, atâ1, at, t, tAS, H) =
at-1, a, t, tas, H) = tas 1 (St, 4-1) +9 Ep(s. 444 5|sear1)Q" (Stttas Qt, dt41,t + tas, tasy, H â tas) (8)
Where t 4s is the âspillover durationâ for action a, beginning execution at time t + tg (see Fig- ure[Ip . The concurrent Bellman operator, specified by a subscript c, is as follows:
5
(5)
Published as a conference paper at ICLR 2020
# T â c Q(st, atâ1, at, t, tAS, H) =
Q(st, 1-1, a,t, tas, H) = tas r(s¢,dr-1) + # MaAxEn(s. 44. Israr-1) Q" (Stttas, 4, t41,¢ + tas, tas, H â tas). 41 : (9)
Similarly to the continuous-time case, we demonstrate that this Bellman operator is a contraction. Lemma 3.2. The concurrent discrete-time Bellman operator is a contraction.
Proof. See Appendix A.2.
We refer the reader to Appendix A.1 for more detailed derivations of the Q-functions and Bellman operators. Crucially, Equation 9 implies that we can extend a conventional discrete-time Q-learning framework to handle MDPs with concurrent actions by providing the Q function with values of tAS and atâ1, in addition to the standard inputs st, at, t.
3.6 DEEP Q-LEARNING WITH CONCURRENT KNOWLEDGE
While we have shown that knowledge of the concurrent system properties (tAS and atâ1, as deï¬ned previously for the discrete-time case) is theoretically sufï¬cient, it is often hard to accurately predict tAS during inference on a complex robotics system. In order to allow practical implementation of our algorithm on a wide range of RL agents, we consider three additional features encapsulating concurrent knowledge used to condition the Q-function: (1) Previous action (atâ1), (2) Action selection time (tAS), and (3) Vector-to-go (V T G), which we deï¬ne as the remaining action to be executed at the instant the state is measured. We limit our analysis to environments where atâ1, tAS, and V T G are all obtainable and H is held constant. See Appendix A.3 for details.
# 4 EXPERIMENTS
In our experimental evaluation we aim to study the following questions: (1) Is concurrent knowledge deï¬ned in Section 3.6, both necessary and sufï¬cient for a Q-function to recover the performance of a blocking unconditioned Q-function, when acting in a concurrent environment? (2) Which representations of concurrent knowledge are most useful for a Q-function to act in a concurrent environment? (3) Can concurrent models improve smoothness and execution speed of a real-robot policy in a realistic, vision-based manipulation task?
4.1 TOY FIRST-ORDER CONTROL PROBLEMS
First, we illustrate the effects of a concurrent control paradigm on value-based DRL methods through an ablation study on concurrent versions of the standard Cartpole and Pendulum environments. We use 3D MuJoCo based implementations in DeepMind Control Suite (Tassa et al., 2018) for both tasks. For the baseline learning algorithm implementations, we use the TF-Agents (Guadarrama et al., 2018) implementations of a Deep Q-Network agent, which utilizes a Feed-forward Neural Network (FNN), and a Deep Q-Recurrent Neutral Network agent, which utilizes a Long Short-Term Memory (LSTM) network. To approximate different difï¬culty levels of latency in concurrent envi- ronments, we utilize different parameter combinations for action execution steps and action selection steps (tAS). The number of action execution steps is selected from {0ms, 5ms, 25ms, or 50ms} once at environment initialization. tAS is selected from {0ms, 5ms, 10ms, 25ms, or 50ms} either once at environment initialization or repeatedly at every episode reset. In addition to environment parame- ters, we allow trials to vary across model parameters: number of previous actions to store, number of previous states to store, whether to use VTG, whether to use tAS, Q-network architecture, and number of discretized actions. Further details are described in Appendix A.4.1.
To estimate the relative importance of different concurrent knowledge representations, we conduct an analysis of the sensitivity of each type of concurrent knowledge representations to combinations of the other hyperparameter values, shown in Figure 2a. While all combinations of concurrent knowledge representations increase learning performance over baselines that do not leverage this
6
Published as a conference paper at ICLR 2020
(a) Cartpole (b) Pendulum
Reward FAN, VIG FNN, LAS + prev action FNN, Unconditioned Lstâ¢, vic LSTM, LAS + prev_action LSTM, Unconditioned -+ Blocking Uncenditioned Baseline Percentage of Runs
Reward FN, VIG FNN, # prev_a>0 FNN, # prev_obs > 0 FNN, Unconditioned ++ Blocking Unconditioned Baseline ââââ Percentage of Runs
Figure 2: In concurrent versions of Cartpole and Pendulum, we observe that providing the critic with VTG leads to more robust performance across all hyperparameters. (a) Environment rewards achieved by DQN with different network architectures [either a feedforward network (FNN) or a Long Short-Term Memory (LSTM) network] and different concurrent knowledge features [Uncon- ditioned, Vector-to-go (VTG), or previous action and tAS] on the concurrent Cartpole task for ev- ery hyperparameter in a sweep, sorted in decreasing order. (b) Environment rewards achieved by DQN with a FNN and different frame-stacking and concurrent knowledge parameters on the con- current Pendulum task for every hyperparameter in a sweep, sorted in decreasing order. Larger area-under-curve implies more robustness to hyperparameter choices. Enlarged ï¬gures provided in Appendix A.5.
(a) Simulation
(b) Real
Figure 3: An overview of the robotic grasping task. A static manipulator arm attempts to grasp objects placed in bins front of it. In simulation, the objects are procedurally generated.
information, the clearest difference stems from including VTG. In Figure 2b we conduct a similar analysis but on a Pendulum environment where tAS is ï¬xed every environment; thus, we do not focus on tAS for this analysis but instead compare the importance of VTG with frame-stacking previous actions and observations. While frame-stacking helps nominally, the majority of the performance increase results from utilizing information from VTG.
4.2 CONCURRENT QT-OPT ON LARGE-SCALE ROBOTIC GRASPING
Next, we evaluate scalability of our approach to a practical robotic grasping task. We simulate a 7 DoF arm with an over-the-shoulder camera, where a bin in front of the robot is ï¬lled with
7
Published as a conference paper at ICLR 2020
Table 1: Large-Scale Simulated Robotic Grasping Results
Blocking Actions Timestep Penalty VTG Previous Action Grasp Success Episode Duration Action Completion Yes Yes No No No No No No Yes No Yes Yes Yes Yes No No No No Yes No Yes No No No No No Yes Yes 132.09s ±5.70s 92.72% ± 1.10% 120.81s ±9.13s 91.53% ± 1.04% 122.15s ±14.6s 84.11% ± 7.61% 97.16s ±6.28s 83.77% ± 9.27% 82.98s ± 5.74s 92.55% ± 4.39% 92.70% ± 1.42% 87.15s ±4.80s 93.49% ± 1.04% 90.75s ±4.15s 92.33% ± 1.476% 89.53% ± 2.267% 43.4% ± 22.41% 34.69% ± 16.80% 47.28% ± 14.25% 50.09% ± 14.25% 49.19% ± 14.98%
Table 2: Real-World Robotic Grasping Results.
Blocking Actions VTG Grasp Success Policy Duration Yes No No Yes 81.43% 68.60%
procedurally generated objects to be picked up by the robot. A binary reward is assigned if an object is lifted off a bin at the end of an episode. We train a policy with QT-Opt (Kalashnikov et al., 2018), a deep Q-Learning method that utilizes the cross-entropy method (CEM) to support continuous actions. In the blocking mode, a displacement action is executed until completion: the robot uses a closed-loop controller to fully execute an action, decelerating and coming to rest before observing the next state. In the concurrent mode, an action is triggered and executed without waiting, which means that the next state is observed while the robot remains in motion. Further details of the algorithm and experimental setup are shown in Figure 3 and explained in Appendix A.4.2.
Table 1 summarizes the performance for blocking and concurrent modes comparing unconditioned models against the concurrent knowledge models described in Section 3.6. Our results indicate that the VTG model acting in concurrent mode is able to recover baseline task performance of the blocking execution unconditioned baseline, while the unconditioned baseline acting in concurrent model suffers some performance loss. In addition to the success rate of the grasping policy, we also evaluate the speed and smoothness of the learned policy behavior. Concurrent knowledge models are able to learn faster trajectories: episode duration, which measures the total amount of wall-time used for an episode, is reduced by 31.3% when comparing concurrent knowledge models with blocking unconditioned models, even those that utilize a shaped timestep penalty that reward faster policies. When switching from blocking execution mode to concurrent execution mode, we see a signiï¬cantly lower action completion, measured as the ratio from executed gripper displacement to commanded displacement, which expectedly indicates a switch to a concurrent environment. The concurrent knowledge models have higher action completions than the unconditioned model in the concurrent environment, which suggests that the concurrent knowledge models are able to utilize more efï¬cient motions, resulting in smoother trajectories. The qualitative beneï¬ts of faster, smoother trajectories are drastically apparent when viewing video playback of learned policies1.
Real robot results In addition, we evaluate qualitative policy behaviors of concurrent models compared to blocking models on a real-world robot grasping task, which is shown in Figure 3b. As seen in Table 2, the models achieve comparable grasp success, but the concurrent model is 49% faster than the blocking model in terms of policy duration, which measures the total execution time of the policy (this excludes the infrastructure setup and teardown times accounted for in episode duration, which can not be optimized with concurrent actions). In addition, the concurrent VTG model is able to execute smoother and faster trajectories than the blocking unconditioned baseline, which is clear in video playback1.
# 1https://sites.google.com/view/thinkingwhilemoving
8
Published as a conference paper at ICLR 2020
# 5 DISCUSSION AND FUTURE WORK
We presented a theoretical framework to analyze concurrent systems where an agent must âthink while movingâ. Viewing this formulation through the lens of continuous-time value-based rein- forcement learning, we showed that by considering concurrent knowledge about the time delay tAS and the previous action, the concurrent continuous-time and discrete-time Bellman operators remained contractions and thus maintained Q-learning convergence guarantees. While more infor- mation than tAS and previous action may be helpful, we showed that tAS and previous action (and different representations of this information) are the sole theoretical requirements for good learning performance. In addition, we introduced Vector-to-go (VTG), which incorporates the remaining previous action to be executed, as an alternative representation for information about the concurrent system that previous action and tAS contain.
Our theoretical ï¬ndings were supported by experimental results on Q-learning models acting in simulated control tasks that were engineered to support concurrent action execution. We conducted large-scale ablation studies on toy task concurrent 3D Cartpole and Pendulum environments, across model parameters as well as concurrent environment parameters. Our results indicated that VTG is the least hyperparameter-sensitive representation, and was able to recover blocking learning per- formance in concurrent settings. We extended these results to a complex concurrent large-scale simulated robotic grasping task, where we showed that the concurrent models were able to recover blocking execution baseline model success while acting 31.3% faster. We analyzed the qualitative beneï¬ts of concurrent models through a real-world robotic grasping task, where we showed that a concurrent model with comparable grasp success as a blocking baseline was able to learn smoother trajectories that were 49% faster.
An interesting topic to explore in future work is the possibility of increased data efï¬ciency when training on off-policy data from various latency regimes. Another natural extension of this work is to evaluate DRL methods beyond value-based algorithms, such as on-policy learning and policy gra- dient approaches. Finally, concurrent methods may allow robotic control in dynamic environments where it is not possible for the robot to stop the environment before computing the action. In these scenarios, robots must truly think and act at the same time.
# REFERENCES
Pieter Abbeel, Adam Coates, Morgan Quigley, and Andrew Y. Ng. An application of rein- In Bernhard Schlkopf, John C. Platt, and ISBN 0-262-19568-2. URL forcement learning to aerobatic helicopter ï¬ight. Thomas Hofmann (eds.), NIPS, pp. 1â8. MIT Press, 2006. http://dblp.uni-trier.de/db/conf/nips/nips2006.html#AbbeelCQN06.
Artemij Amiranashvili, Alexey Dosovitskiy, Vladlen Koltun, and Thomas Brox. Motion perception in reinforcement learning with dynamic objects. In CoRL, volume 87 of Proceedings of Machine Learning Research, pp. 156â168. PMLR, 2018. URL http://dblp.uni-trier.de/db/ conf/corl/corl2018.html#AmiranashviliDK18.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016. URL http://arxiv.org/abs/1606.01540. cite arxiv:1606.01540.
R´emi Coulom. Reinforcement learning using neural networks, with applications to motor control. PhD thesis, Institut National Polytechnique de Grenoble-INPG, 2002.
Nicols Cruz, Kenzo Lobos-Tsunekawa, and Javier Ruiz del Solar. Using convolutional neural net- works in robots with limited computational resources: Detecting nao robots while playing soc- cer. CoRR, abs/1706.06702, 2017. URL http://dblp.uni-trier.de/db/journals/ corr/corr1706.html#CruzLR17.
Kenji Doya. Reinforcement learning in continuous time and space. Neural Computation, 12(1): 219â245, 2000. URL http://dblp.uni-trier.de/db/journals/neco/neco12. html#Doya00.
Vlad Firoiu, Tina Ju, and Joshua Tenenbaum. At Human Speed: Deep Reinforcement Learning with Action Delay. arXiv e-prints, October 2018.
9
Published as a conference paper at ICLR 2020
Nicolas Fr´emaux, Henning Sprekeler, and Wulfram Gerstner. Reinforcement learning using a continuous time actor-critic framework with spiking neurons. PLoS computational biology, 9: e1003024, 04 2013. doi: 10.1371/journal.pcbi.1003024.
Sergio Guadarrama, Anoop Korattikara, Oscar Ramirez, Pablo Castro, Ethan Holly, Sam Fishman, Ke Wang, Ekaterina Gonina, Chris Harris, Vincent Vanhoucke, et al. Tf-agents: A library for reinforcement learning in tensorï¬ow, 2018.
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-critic algo- rithms and applications. CoRR, abs/1812.05905, 2018. URL http://dblp.uni-trier. de/db/journals/corr/corr1812.html#abs-1812-05905.
Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, and Sergey Levine. Qt-opt: Scal- able deep reinforcement learning for vision-based robotic manipulation. CoRR, abs/1806.10293, 2018. URL http://dblp.uni-trier.de/db/journals/corr/corr1806.html# abs-1806-10293.
H J Kappen. Path integrals and symmetry breaking for optimal control theory. Journal of Sta- tistical Mechanics: Theory and Experiment, 2005(11):P11011âP11011, nov 2005. doi: 10. 1088/1742-5468/2005/11/p11011. URL https://doi.org/10.1088%2F1742-5468% 2F2005%2F11%2Fp11011.
Shuang Li, Shuai Xiao, Shixiang Zhu, Nan Du, Yao Xie, and Le Song. Learning temporal point processes via reinforcement learning. In Proceedings of the 32Nd International Conference on Neural Information Processing Systems, NIPSâ18, pp. 10804â10814, USA, 2018. Curran Asso- ciates Inc. URL http://dl.acm.org/citation.cfm?id=3327546.3327737.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing atari with deep reinforcement learning. 2013. URL http://arxiv.org/abs/1312.5602. cite arxiv:1312.5602Comment: NIPS Deep Learn- ing Workshop 2013.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Pe- tersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep rein- forcement learning. Nature, 518(7540):529â533, February 2015. ISSN 00280836. URL http://dx.doi.org/10.1038/nature14236.
tic (eds.), 1035. MIT 1404-reinforcement-learning-for-continuous-stochastic-control-problems. pdf.
and Bernhard Sch¨olkopf. Adaptive skip intervals: In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Gar- pp. 9816â nett URL http://papers.nips.cc/paper/ 9826. Curran Associates, 8188-adaptive-skip-intervals-temporal-abstraction-for-recurrent-dynamical-models. pdf.
OpenAI, Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jzefowicz, Bob McGrew, Jakub W. Pachocki, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, Jonas Schneider, Szymon Sidor, Josh Tobin, Peter Welinder, Lilian Weng, and Wojciech Zaremba. Learning dexterous in-hand manipulation. CoRR, abs/1808.00177, 2018. URL http://dblp. uni-trier.de/db/journals/corr/corr1808.html#abs-1808-00177.
10
Published as a conference paper at ICLR 2020
Simon Ramstedt and Chris Pal. Real-time reinforcement learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d ´Alch´e-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems 32, pp. 3073â3082. Curran Associates, Inc., 2019. URL http://papers. nips.cc/paper/8571-real-time-reinforcement-learning.pdf.
Sheldon M Ross, John J Kelly, Roger J Sullivan, William James Perry, Donald Mercer, Ruth M Davis, Thomas Dell Washburn, Earl V Sager, Joseph B Boyce, and Vincent L Bristow. Stochastic processes, volume 2. Wiley New York, 1996.
Erik Schuitema, Lucian Busoniu, Robert Babuka, and Pieter P. Jonker. Control delay in reinforce- ment learning for real-time dynamic systems: A memoryless approach. 2010 IEEE/RSJ Interna- tional Conference on Intelligent Robots and Systems, pp. 3226â3231, 2010.
David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529:484â, January 2016. URL http://dx. doi.org/10.1038/nature16961.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, March 1998. ISBN 0262193981. URL http://www.amazon.ca/exec/obidos/ redirect?tag=citeulike09-20&path=ASIN/0262193981.
Correntin Tallec, Leonard Blier, and Yann Ollivier. Making Deep Q-learning Methods Robust to Time Discretization. arXiv e-prints, January 2019.
Jie Tan, Tingnan Zhang, Erwin Coumans, Atil Iscen, Yunfei Bai, Danijar Hafner, Steven Bohez, and Vincent Vanhoucke. Sim-to-Real: Learning Agile Locomotion For Quadruped Robots. arXiv e-prints, April 2018.
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Bud- den, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy P. Lillicrap, and Martin A. Riedmiller. Deepmind control suite. CoRR, abs/1801.00690, 2018. URL http://dblp. uni-trier.de/db/journals/corr/corr1801.html#abs-1801-00690.
Evangelos Theodorou, Jonas Buchli, and Stefan Schaal. Reinforcement learning of motor skills in high dimensions: A path integral approach. pp. 2397 â 2403, 06 2010. doi: 10.1109/ROBOT. 2010.5509336.
Utkarsh Upadhyay, Abir De, and Manuel Gomez-Rodrizuez. Deep reinforcement learning of marked temporal point processes. In Proceedings of the 32Nd International Conference on Neu- ral Information Processing Systems, NIPSâ18, pp. 3172â3182, USA, 2018. Curran Associates Inc. URL http://dl.acm.org/citation.cfm?id=3327144.3327238.
Eleni Vasilaki, Nicolas Frmaux, Robert Urbanczik, Walter Senn, and Wulfram Gerstner. Spike-based reinforcement learning in continuous state and action space: When policy gradient methods fail. PLoS computational biology, 5:e1000586, 12 2009. doi: 10.1371/journal.pcbi.1000586.
Thomas J. Walsh, Ali Nouri, Lihong Li, and Michael L. Littman. Planning and learning in envi- In Joost N. Kok, Jacek Koronacki, Ramn Lpez de Mntaras, ronments with delayed feedback. Stan Matwin, Dunja Mladenic, and Andrzej Skowron (eds.), ECML, volume 4701 of Lecture Notes in Computer Science, pp. 442â453. Springer, 2007. ISBN 978-3-540-74957-8. URL http://dblp.uni-trier.de/db/conf/ecml/ecml2007.html#WalshNLL07.
11
Published as a conference paper at ICLR 2020
# A APPENDIX
A.1 DEFINING BLOCKING BELLMAN OPERATORS
As introduced in Section 3.5, we deï¬ne a continuous-time Q-function estimator with concurrent actions.
# U=tttas
. U=tttas | Q(s(t), ai-1, ai, t, H) -/ a! ~tr(s(tâ'), a1 (t))dt! + (10) st
# st t/=t+H
y [roe vale ae! +9 V(6(e +H) a) t=t+tas
# U=tttas
| -/ o!'tr(s(t!), aia (t!) t+ a2) st
# st tâ=t+H
tâ=t+H » ates fat een (s(e"),aa(e))ae + V(alt+ HD) 03) =t+las
# stttas
stttas -/ a ~*r(s(tâ), a1 (tâ) dtâ + (14) st
# st t=t+H
t=t+H hn yas yh tase (s(t), ai(t"))dt" + yt '48V (s(t + H)) =t+tags
(15)
We observe that the second part of this equation (after γtAS ) is itself a Q-function at time t + tAS. Since the future state, action, and reward values at t + tAS are not known at time t, we take the following expectation:
/=t+tas Q(s(t), ai1, ai, t, H) = | a! ~*r(s(tâ), ai-1(tâ) dt! + (16) v=t
(17) which indicates that the Q-function in this setting is not just the expected sum of discounted future rewards, but it corresponds to an expected future Q-function.
In order to show the discrete-time version of the problem, we parameterize the discrete-time con- current Q-function as:
ËQ(st, atâ1, at, t, tAS, H) = r(st, atâ1) + γ tAS H Ep(st+tAS |st,atâ1)r(st+tAS , at)+ (18) H γ (19)
H Ep(st+H |st+tAS ,at)V (st+H ) which with tAS = 0, corresponds to a synchronous environment.
Using this parameterization, we can rewrite the discrete-time Q-function with concurrent actions as: ËQ(st, atâ1, at, t, tAS, H) = r(st, atâ1) + γ
# tAS
# H [Ep(st+tAS |st,atâ1)r(st+tAS , at)+
(20)
HâtAS H Ep(st+H |st+tAS ,at)V (st+H )] (21)
# γ
tas . = 7 (81, 4-1) + En(s.4 14 o|sear1) Q(St, Gt, G41, t + tas, tasâ, H â tas) (22)
A.2 CONTRACTION PROOFS FOR THE BLOCKING BELLMAN OPERATORS
# Proof of the Discrete-time Blocking Bellman Update
Lemma A.1. The traditional Bellman operator is a contraction, i.e.:
|T* Q.0(s, a) â T*Qe(s,@)|| < el]Qi(s, a) â Qa(s,@)|], (23) where T* Q(s,a) = r(s,a) + ymaxq E,Q(sâ,aâ) andO<c <1.
12
Published as a conference paper at ICLR 2020
Proof. In the original formulation, we can show that this is the case as following:
T âQ1(s, a) â T âQ2(s, a) = r(s, a) + γ max
(24)
=r(s,a) + ymax Ep[Qi(s', aâ)] â r(s,a) â ymax Ep[Qo(sâ,aâ)] (25)
= ymax E,[Qi(s',a") â Qo(sâ,aâ)] (26)
<
ysup[Q1(sâ,aâ) â Q2(sâ,aâ)], (27)
with 0 ⤠γ ⤠1 and ||f ||â = supx[f (x)].
Similarly, we can show that the updated Bellman operators introduced in Section 3.5 are contractions as well.
# Proof of Lemma 3.2
Proof. c Q1(st, aiâ1, ai, t, tAS, H) â T â T â = r(st, aiâ1) + γ
c Q2(st, aiâ1, ai, t, tAS, H) (28)
= r(se,ai1) + max Ep(sseg|seae1)Q1 (se, a;,di41,¢ + tas, tasâ, H âtas) (29)
â1(s¢,aiâ1) â yar max Ep(s,.; 45 [se,ar-1)Q2(Se, di, @i41,¢ + tas, tasâ, H â tas) (30) ai41
# tAS H max ai+1
# = γ
# =F
Eye... |seai-1)[Q1 (St, Gi, Gip1,¢ + tas, tas, H â tas) â Qo(se, ai, @i41,t + tas, tasâ, H â tas)|
(31)
# tAS H
# ⤠γ
sy sup [Qi(se, @i, @ip1,t + tas, tas, H â tas) â Qo(st, ai, ai41,t + tas, tasâ, H â tas)| 84.0: ,0:41,t+tAs tag: Hâ-tas (32)
(32)
# Proof of Lemma 3.1
Proof. To prove that this the continuous-time Bellman operator is a contraction, we can follow the discrete-time proof, from which it follows: T â c Q1(s(t), aiâ1, ai, t, tAS) â T â = γtAS max ai+1
c Q2(s(t), aiâ1, ai, t, tAS) Ep[Q1(s(t), ai, ai+1, t + tAS, H â tAS) â Q2(s(t), ai, ai+1, t + tAS, H â tAS)] (33) (34) ⤠γtAS sup s(t),ai,ai+1,t+tAS ,HâtAS [Q1(s(t), ai, ai+1, t + tAS, H â tAS) â Q2(s(t), ai, ai+1, t + tAS, H â tAS)] (35)
A.3 CONCURRENT KNOWLEDGE REPRESENTATION
We analyze 3 different representations of concurrent knowledge in discrete-time concurrent envi- ronments, described in Section 3.6. Previous action atâ1 is the action that the agent executed at the previous timestep. Action selection time tAS is a measure of how long action selection takes, which can be represented as either a categorical or continuous variable; in our experiments, which take advantage of a bounded latency regime, we normalize action selection time using these known bounds. Vector-to-go VTG is a feature that combines atâ1 and st by encoding the remaining amount of atâ1 left to execute. See Figure 5 for a visual comparison.
We note that atâ1 is available across the vast majority of environments and it is easy to obtain. Using tAS, which encompasses state capture, communication latency, and policy inference, relies
13
Published as a conference paper at ICLR 2020
Blocking Actions Concurrent Actions > State Capture â > Policy Inference t=t, t=H+t, y t=0||t=H t=2H âP Action Execution before Action Selection t=0 t=H + = -B> Action Execution during Action Selection Associativity â t=2H ion (a) (b)
Figure 4: The execution order of different stages are shown relative to the sampling period H as well as the latency tAS. (a): In âblockingâ environments, state capture and policy inference are assumed to be instantaneous. (b): In âconcurrentâ environments, state capture and policy inference are assumed to proceed concurrently to action execution.
Sit py , . ¢ , 7 , , o Siy Sir .7 ¢ bebe e teen eee > Vector-to-go (VTG) meee eee nee. > Previous Action (a,_,)
Figure 5: Concurrent knowledge representations can be visualized through an example of a 2-D pointmass discrete-time toy task. Vector-to-go represents the remaining action that may be executed when the current state st is observed. Previous action represents the full commanded action from the previous timestep.
14
Published as a conference paper at ICLR 2020
on having some knowledge of the concurrent properties of the system. Calculating V T G requires having access to some measure of action completion at the exact moment when state is observed. When utilizing a ï¬rst-order control action space, such as joint angle or desired pose, V T G is easily computable if proprioceptive state is measured and synchronized with state observation. In these cases, VTG is an alternate representation of the same information encapsulated by atâ1 and the current state.
A.4 EXPERIMENT IMPLEMENTATION DETAILS
A.4.1 CARTPOLE AND PENDULUM ABLATION STUDIES
Here, we describe the implementation details of the toy task Cartpole and Pendulum experiments in Section 4.1.
For the environments, we use the 3D MuJoCo implementations of the Cartpole-Swingup and Pendulum-Swingup tasks in DeepMind Control Suite (Tassa et al., 2018). We use discretized action spaces for ï¬rst-order control of joint position actuators. For the observation space of both tasks, we use the default state space of ground truth positions and velocities.
For the baseline learning algorithms, we use the TensorFlow Agents (Guadarrama et al., 2018) implementations of a Deep Q-Network agent, which utilizes a Feed-forward Neural Network (FNN), and a Deep Q-Recurrent Neutral Network agent, which utilizes a Long Short-Term Memory (LSTM) network. Learning parameters such as learning rate, lstm size, and fc layer size were selected through hyperparameter sweeps.
To approximate different difï¬culty levels of latency in concurrent environments, we utilize different parameter combinations for action execution steps and action selection steps (tAS). The number of action execution steps is selected from {0ms, 5ms, 25ms, or 50ms} once at environment initializa- tion. tAS is selected from {0ms, 5ms, 10ms, 25ms, or 50ms} either once at environment initializa- tion or repeatedly at every episode reset. The selected tAS is implemented in the environment as additional physics steps that update the system during simulated action selection.
Frame-stacking parameters affect the observation space by saving previous observations and actions. The number of previous actions to store as well as the number of previous observations to store are independently selected from the range [0, 4]. Concurrent knowledge parameters, as described in Section 4, include whether to use VTG and whether to use tAS. Including the previous action is already a feature implemented in the frame-stacking feature of including previous actions. Finally, the number of actions to discretize the continuous space to is selected from the range [3, 8].
A.4.2 LARGE SCALE ROBOTIC GRASPING
Simulated Environment We simulate a 7 DoF arm with an over-the-shoulder camera (see Figure 3a). A bin in front of the robot is ï¬lled with procedurally generated objects to be picked up by the robot and a sparse binary reward is assigned if an object is lifted off a bin at the end of an episode. States are represented in form of RGB images and actions are continuous Cartesian displacements of the gripper 3D positions and yaw. In addition, the policy commands discrete gripper open and close actions and may terminate an episode. In blocking mode, a displacement action is executed until completion: the robot uses a closed loop controller to fully execute an action, decelerating and coming to rest before observing the next state. In concurrent mode, an action is triggered and executed without waiting, which means that the next state is observed while the robot remains in motion. It should be noted that in blocking mode, action completion is close to 100% unless the gripper moves are blocked by contact with the environment or objects; this causes average blocking mode action completion to be lower than 100%, as seen in Table 1.
Real Environment Similar to the simulated setup, we use a 7 DoF robotic arm with an over- the-shoulder camera (see Figure 3b). The main difference in the physical setup is that objects are selected from a set of common household objects.
Algorithm We train a policy with QT-Opt (Kalashnikov et al., 2018), a Deep Q-Learning method that utilizes the Cross-Entropy Method (CEM) to support continuous actions. A Convolutional Neural Network (CNN) is trained to learn the Q-function conditioned on an image input along with
15
Published as a conference paper at ICLR 2020
a CEM-sampled continuous control action. At policy inference time, the agent sends an image of the environment and batches of CEM-sampled actions to the CNN Q-network. The highest-scoring action is then used as the policyâs selected action. Compared to the formulation in Kalashnikov et al. (2018), we also add a concurrent knowledge feature of VTG and/or previous action atâ1 as additional input to the Q-network. Algorithm 1 shows the modiï¬ed QT-Opt procedure.
Algorithm 1: QT-Opt with Concurrent Knowledge Initialize replay buffer D; Initialize random start state and receive image o0; Initialize concurrent knowledge features c0 = [V T G0 = 0, atâ1 = 0, tAS = 0]; Initialize environment state st = [o0, c0]; Initialize action-value function Q(s, a) with random weights θ; Initialize target action-value function ËQ(s, a) with weights Ëθ = θ; while training do for t = 1, T do
Initialize random start state and receive image 09; Initialize concurrent knowledge features cy = [VTGo = 0, ar-1 = 0, tas = 0]; Initialize environment state s; = [09, co]; Initialize action-value function Q(s, a) with random weights 6; Initialize target action-value function As, a) with weights 6=0; training do for t= 1, Tdo Select random action a; with probability ¢, else a, = CEM(Q, s:; 0); Execute action in environment, receive 0,41, C:, 713 Process necessary concurrent knowledge features c,, such as VT'G, azâ1, or tas; Set si41 = [ors1, ce]; Store transition (s;, az, $¢41,7¢) in D; if episode terminates then Reset s;41 to a random reset initialization state; Reset c,+1 to 0; end ample batch of transitions from D; for each transition (8;, a;, 8:41, 7i) in batch do if terminal transition then | wars else Select 4:41 = CEM(Q, s:;9); yi = ri + YQ(Si41, Gi41)5 end Perform SGD on (y; â Q(si, ai; 9) with respect to 0; n end Update target parameters Q with Q and 6 periodically; end
# end
For simplicity, the algorithm is described as if run synchronously on a single machine. In practice, episode generation, Bellman updates and Q-fitting are distributed across many machines and done asynchronously; refer to 2018) for more details. Standard DRL hyperparameters such as random exploration probability (â¬), reward discount (7), and learning rate are tuned through a hyperparameter sweep. For the time-penalized baselines in Table[I] we manually tune a timestep penalty that returns a fixed negative reward at every timestep. Empirically we find that a timestep penalty of â0.01, relative to a binary sparse reward of 1.0, encourages faster policies. For the non-penalized baselines, we set a timestep penalty of â0.0.
A.5 FIGURES
See Figure 6 and Figure 7.
16
Published as a conference paper at ICLR 2020
ee â FNN, VTG â NN, t_AS + prev_action â _FNN, Unconditioned â _ LSTM, VTG i â _ LSTM, t_AS + prev_action ââ _ LSTM, Unconditioned Blocking Unconditioned Baseline _ Reward By eo Percentage of Runs
Figure 6: Environment rewards achieved by DQN with different network architectures [either a feedforward network (FNN) or a Long Short-Term Memory (LSTM) network] and different con- current knowledge features [Unconditioned, vector-to-go (VTG), or previous action and tAS] on the concurrent Cartpole task for every hyperparameter in a sweep, sorted in decreasing order. Providing the critic with VTG information leads to more robust performance across all hyperparameters. This ï¬gure is a larger version of 2a.
2000 ~) â . â Reward FNN, VTG FNN, # prev_a>0 â FNN, # prev_obs > 0 â _FNN, Unconditioned Blocking Unconditioned Baseline Percentage of Runs
Figure 7: Environment rewards achieved by DQN with a FNN and different frame-stacking and concurrent knowledge parameters on the concurrent Pendulum task for every hyperparameter in a sweep, sorted in decreasing order.
17 | {
"id": "1606.01540"
} |
2004.05363 | WES: Agent-based User Interaction Simulation on Real Infrastructure | We introduce the Web-Enabled Simulation (WES) research agenda, and describe
FACEBOOK's WW system. We describe the application of WW to reliability,
integrity and privacy at FACEBOOK , where it is used to simulate social media
interactions on an infrastructure consisting of hundreds of millions of lines
of code. The WES agenda draws on research from many areas of study, including
Search Based Software Engineering, Machine Learning, Programming Languages,
Multi Agent Systems, Graph Theory, Game AI, and AI Assisted Game Play. We
conclude with a set of open problems and research challenges to motivate wider
investigation. | http://arxiv.org/pdf/2004.05363 | John Ahlgren, Maria Eugenia Berezin, Kinga Bojarczuk, Elena Dulskyte, Inna Dvortsova, Johann George, Natalija Gucevska, Mark Harman, Ralf Lämmel, Erik Meijer, Silvia Sapora, Justin Spahr-Summers | cs.SE, cs.HC, cs.LG, cs.SI | Author order is alphabetical. Correspondence to Mark Harman
([email protected]). This paper appears in GI 2020: 8th International
Workshop on Genetic Improvement | null | cs.SE | 20200411 | 20200411 | 0 2 0 2
r p A 1 1 ] E S . s c [
1 v 3 6 3 5 0 . 4 0 0 2 : v i X r a
# WES: Agent-based User Interaction Simulation on Real Infrastructureâ
John Ahlgren, Maria Eugenia Berezin, Kinga Bojarczuk, Elena Dulskyte, Inna Dvortsova, Johann George, Natalija Gucevska, Mark Harman, Ralf Lämmel, Erik Meijer, Silvia Sapora, Justin Spahr-Summersâ FACEBOOK Inc.
ABSTRACT We introduce the Web-Enabled Simulation (WES) research agenda, and describe FACEBOOKâs WW system. We describe the application of WW to reliability, integrity and privacy at FACEBOOK1, where it is used to simulate social media interactions on an infrastructure consisting of hundreds of millions of lines of code. The WES agenda draws on research from many areas of study, including Search Based Software Engineering, Machine Learning, Programming Languages, Multi Agent Systems, Graph Theory, Game AI, and AI Assisted Game Play. We conclude with a set of open problems and research challenges to motivate wider investigation.
Social Testing tests usersâ interactions with each other through a platform, while Automated Mechanism Design combines Search Based Software Engineering (SBSE) and Mechanism Design to au- tomatically find improvements to the platforms it simulates.
Like any software testing system, the WES approach helps find and fix any issues, e.g., with software changes and updates. In common with testing systems more generally, WES operates in a safely isolated environment. The primary way in which WES builds on existing testing approaches lies in the way it models behaviour. Traditional testing focuses on system behaviour rather than user behaviour, whereas WES focuses on the interactions between users mediated by the system.
1 INTRODUCTION A Web-Enabled Simulation (WES) is a simulation of the behaviour of a community of users on a software platform. It uses a (typically web-enabled) software platform to simulate real-user interactions and social behaviour on the real platform infrastructure, isolated from production users. Unlike traditional simulation [32, 41], in which a model of reality is created, a WES system is thus built on a realâworld software platform.
In order to model usersâ behaviour on a WES system, a multi agent-based approach is used, in which each agent is essentially a bot that simulates user behaviour. This user behaviour could be captured in a rule-based system, or could be learnt, either supervised from examples, or unsupervised in a reinforcement learning setting. The development of approaches to tackle the challenges posed by WES may thus draw heavily on recent advances in machine learning for software games, a topic that has witnessed many recent breakthroughs [51].
In this paper we set out the general principles for WES systems. We outline the development of FACEBOOKâÄŸs WW simulation of social media user communities, to illustrate principles of and challenges for the WES research agenda. WW is essentially a scaled down simulation of FACEBOOKâs platform, the actions and events of which use the same infrastructure code as the real platform itself. We also introduce two new approaches to testing and optimising systems: Social Testing and Automated Mechanism Design.
It is a subtle shift of emphasis that raises many technical and research challenges. Software systems involve increasing levels of social interaction, thereby elevating the potential for issues and bugs relating to complex interactions between users and software. It is the emergence of these kinds of social bugs and issues that necessitate the need for a WES-style approach, and the research agenda that underpins it. FACEBOOKâÄŹs WW simulation is WES that uses bots that try to break the community standards in a safe isolated environment in order to test and harden the infrastruc- ture that prevents real bad actors from contravening community standards. Widespread Applicability: Community behaviour is increasingly prevalent in software applications, for example for travel, accom- modation, entertainment, and shopping. These systems use social interactions so that each user can benefit from the collective expe- rience of other users. Although this paper focuses on FACEBOOKâs WW system, the concepts and approach could also find application in platforms used by other organisations. Realism: WES interactions between bots are achieved through the real platform infrastructure, whereas a more traditional simulation approach would first model this infrastructure. This is important because the platform infrastructures that mediate user interactions are increasingly complex. For instance, FACEBOOKâs WW simula- tion is built on a social media infrastructure consisting of several hundreds of millions of lines of code. While a traditional simulation modelling approach is applicable, there are many issues that are better understood using a WES approach, as we shall see.
âThis paper appears in GI 2020: 8th International Workshop on Genetic Improvement. â Author order is alphabetical. Correspondence to Mark Harman ([email protected]). 1âFACEBOOKâ, refers to the company, while âFacebookâ refers to the specific product.
GI 2020, October 2020, Korea
Platform realism does not necessarily mean that the interactions between users need to be realistic representations of the end usersâ experience. A WES system could, for instance, allow engineers to experiment with new features for which, by definition, there is no known realistic user behaviour. It can also be used to focus on atypical behaviours of special interest to engineers, such as those of bad actors. A WES system could even be used for counter- factual simulation; modelling what users cannot do. We use the terms âplatform realismâ and âend user realismâ for clarity. The term âplatform realismâ refers to the degree to which the simulation uses the real platform. The term âend user realismâ refers to the degree to which simulated interactions faithfully mimic real usersâ interactions. The former is inherent to the WES approach, while the latter may be desirable, but is not always essential. Opportunities for Researchers: It may not be possible for re- searchers to experiment with WES systems directly (for example, where they are built from proprietary software platforms). Nev- ertheless, many open questions can be tackled using traditional simulation, with results extended or extrapolated for application to WES systems. Researchers can also experiment with and evaluate novel techniques and approaches using WES systems built from open source components.
We report on our plans for the further future development of WW. The WES research agenda resides at an exciting intersection between many topics including, but not limited to, Search Based Software Engineering (SBSE) [24], Multi Agent Systems [54], Ma- chine Learning [47], Software Testing [7], Graph Theory [53], Game Theory [42], and Game AI [55]. We hope that this paper will serve to stimulate interest in activity in the development of research on WES approaches.
The primary contributions of this paper are:
(1) The introduction of the WES approach to on-platform simu- lation of real-world software user communities;
(2) The introduction of the concepts of Automated Mechanism Design and Social Testing, both of which are relevant to WES systems, but also have wider applications;
(3) An outline of the FACEBOOK WW system; an example of a WES system, applied to social media user communities; (4) A list of open problems and challenges for the WES research
agenda.
2 WEB-ENABLED SIMULATION A WES simulation can be seen as a game, in which we have a set of players that operate to fulfil a certain objective on a software platform. This observation connects research on WES systems with research on AI Assisted Game Play [51]. In an AI Assisted Game, reinforcement learning can be used to train a bot to play the game, guided by a reward (such as winning the game or achieving a higher score). Similarly, a WES simulation can also use reinforcement learning to train bots.
For example, with FACEBOOKâs WW simulation, we train bots to behave like bad actors, guided by rewards that simulate their ability to behave in ways that, in the simulation, contravene our commu- nity standards [15]. The users whose behaviour is stimulated by other WES approaches could be end-users of the software platform but, more generally, could also be any software user community.
Ahlgren et al.
For example, a simulation of the users of a continuous integration system, would be a simulation of a developer community, while an App Store WES system may involve both developers and end users. We define the following generic concepts that we believe will be common to many, if not all, WES systems:
Bot: A bot is an autonomous agent. Note that bots can create ânewâ data as they execute. For instance, social networking bots may weave connections in a social graph as they interact, âjust as the Jacquard loom weaves flowers and leavesâ [34].
Action: An action is a potentially state-changing operation that a bot can perform on the platform.
Event: An event is a platform state change that is visible to some set of users.
Observation: An observation of the platformâs state does not change the platform state. It is useful to distinguish actions (state changing), from observations (which are pure functional). This means that some apparent observations need to be decomposed into action and observation pairs. For example, the apparent obser- vation âread a messageâ, may update notifications (that the message has been read). Therefore, it is decomposed into the (state-changing) action of getting the message (which happens once) and the obser- vation of reading the message (which is sideâeffect free and can occur multiple times for those messages that permit multiple reads). Read-only bot: a read-only bot is one that cannot perform any actions, but can observe state. Read-only bots can potentially op- erate on real platform data, because they are sideâeffect free, by construction.
Writer bot: a writer bot that can perform actions and, thereby, may affect the platform state on which it acts (e.g. the social graph in the case of social media applications).
Fully isolated bot: a fully isolated bot can neither read from not write to any part of state that would affect real user experience, by construction of the isolation system in place (See Section 2.2). Mechanism: The mechanism is the abstraction layer through which a bot interacts with the platform. Actions, events and obser- vations may be restricted and/or augmented by the mechanism. For instance, the mechanism might constrain the actions and events so that the bot can only exhibit a subset of real behaviours of particu- lar interest, such as rate limiting, or restricted observability. The mechanism might also augment the available actions and events to explore novel approaches to the platform configuration, products and features, before these are released to end users. This opens up the potential for automated mechanism design, as we discuss in Section 2.3. The mechanism is also one way in which we achieve isolation (see Section 2.2).
One obvious choice for a mechanism would be to provide a bot with a user interface similar to the one that the GUI offers to a real user. However, there is no need for a WES system to interface through the GUI. There may be practical reasons why it is advantageous to implement interactions more deeply within the infrastructure. For example, it is likely to be more efficient for a WES bot to bypass some levels of abstraction between a normal user and the core underlying platform. Of course, there is a natural tension between the twin objectives of platform realism and efficiency, a topic to which we return in Section 5.
WES: Agent-based User Interaction Simulation on Real Infrastructure
Script: A script run is akin to a single game episode. Scripts cap- ture, inter alia, the type of environment created, how bots interact, simulation stopping criteria and measurement choices.
Simulation time: The simulation may compress or expand the apparent time within the simulation in order to simulate a desired use case more efficiently or effectively. However, there will always be a fundamental limitation, because the simulation can only be as fast as the underlying real platform environment will permit. This is another difference between WES and traditional simulation.
Monitor: The monitor captures and records salient aspects of the simulation for subsequent analysis.
2.1 Bot Training Bots behave autonomously, though training to exhibit particular behaviours of interest. In the simplest use case, bots merely explore the platform, randomly choosing from a predefined set of actions and observations. More intelligent bots use algorithmic decision making and/or ML models if behaviour. The bots could also be modelled to cooperate towards a common goal.
2.2 Bot Isolation Bots must be suitably isolated from real users to ensure that the simulation, although executed on real platform code, does not lead to unexpected interactions between bots and real users. This isola- tion could be achieved by a âsandboxâ copy of the platform, or by constraints, e.g., expressed through the mechanism and/or using the platformâs own privacy constraint mechanisms.
Despite this isolation, in some applications bots will need to ex- hibit high end user realism, which poses challenges for the machine learning approaches used to train them. In other applications where read only bots are applicable, isolation need not necessarily prevent the bots from reading user information and reacting to it, but these read only bots cannot take actions (so cannot affect real users).
Bots also need to be isolated from affecting production monitor- ing and metrics. To achieve this aspect of isolation, the underlying platform requires (some limited) awareness of the distinction be- tween bots and real users, so that it can distinguish real user logging data from that accrued by bot execution. At FACEBOOK, we have well-established gate keepers and configurators that allow us to do this with minimal intervention on production code. These gate keepers and configurators essentially denote a Software Product Line [12] that could also, itself, be the subject of optimisation [22]. Finally, isolation also requires protection of the reliability of the underlying platform. We need to reduce the risk that botsâ execution could crash the production system or affect it by the large scale consumption of computational resources.
2.3 Automated Mechanism Design Suppose we want to experiment with the likely end user behavioural response to new privacy restrictions, before implementing the re- strictions on the real platform. We could fork the platform and experiment with the forked version. However, WES offers a more âlight touchâ approach: we simply adjust the mechanism through which bots interact with the underlying platform in order to model the proposed restrictions. The mechanism can thus model a possible future version of the platform.
G12020
# October 2020
Real world i WES users 5 ' QQ} i ae | Mechanism Platform Infrastructure
Figure 1: Generic WES System Infrastructure. Real users and bots reside on the same overall platform infrastructure. There is a conceptual isolation layer between them that de- termines the level of interaction possible, if any, between bots and real users. There is also a mechanism layer that mediates the actions and observations that the bots can per- form through the platform.
Like all models, the mechanism need not capture all implemen- tation details, thereby offering the engineer an agile way to explore such future platform versions. The engineer can now perform A/B tests through different mechanism parameters, exploring the differ- ential behaviours of bot communities under different changes.
Using this intermediate âmechanismâ layer ameliorates two chal- lenges for automated software improvement at industrial scales: build times and developer acceptance. That is, build times can run to minutes [6] or even hours [27], so an automated search that insists on changes to the code may incur a heavy computational cost. Even where cost is manageable, developers may be reluctant to land code changes developed by a machine [45]. We have found that a ârecommender systemâ approach sits well with our develop- ersâ expectations at FACEBOOK [38]. It ensures that the developer retains ultimate control over what lands onto the code base.
The ease with which the mechanism can be adjusted without needing to land changes into the underlying platform code means that this exploration process can be automated. Automated Mecha- nism Design is thus the search for optimal (or near optimal) mech- anisms, according to some fitness criteria of interest. In the domain of AI Assisted Game Play, this is akin to changing the rules of the game as the player plays it, for example to make the game more challenging for an experienced player [33].
Borrowing the terminology of economic game theory [28], we use the term âAutomated Mechanism Designâ to characterise the automated (or semi automated) exploration of the search space of possible mechanisms through which WES bots interact with the underlying infrastructure. Automated Mechanism Design is there- fore also another application of Search Based Software Engineering (SBSE) [23, 24]. As with AI Assisted Game Play, we wish to make the platform more challenging, for example, to frustrate bad actors. However, the applications of Automated Mechanism Design are far wider than this because it offers a new approach to automated A/B testing, at volumes never previously considered.
# Korea
GI 2020, October 2020, Korea
2.4 Social Testing WES systems bear some relationships to testing, in particular, end- to-end system level testing. Indeed, FACEBOOKâs WW traces its origins back to observations of the behaviour of multiple execu- tions of FACEBOOKâs Sapienz automated test design platform [3]. However, even with only a single bot, a WES system differs from traditional testing, because a WES bot is trained, while a traditional test follows a specific sequence of input steps.
Furthermore, unlike end-to-end tests, which typically consider the journey of a single user through the system and avoid test user interaction lest it elevate test flakiness [25], a WES system specifically encourages test user interaction to model community behaviour. Therefore, WES systems can reveal faults that are best explored and debugged at this âcommunityâ level of abstraction. We give several examples of such faults, encountered in our work at FACEBOOK. Our analysis of the most impactful production bugs indicated that as much as 25% were social bugs, of which at least 10% has suitable automated oracles to support a WES approach.
Such social bugs, arising through community interactions, re- quire a new approach to testing: Social Testing; testing emergent properties of a system that manifest when bots interact with one another to simulate real user interactions on the platform. WES sys- tems are clearly well-suited to Social Testing. However, we believe other approaches are also viable; Social Testing is an interesting new level of abstraction (lying above system testing levels of ab- straction). It is worthy of research investigation in its own right.
In theory, all such âsocial faultsâ could be found at the system level. Indeed, all system level faults could, in theory, be found at unit level. In practice, however, it proves necessary to stratify testing. We believe that social testing (at the whole platform level) is just a new level of abstraction; one that is increasingly important as systems themselves become more social.
3 FACEBOOKâS WW At FACEBOOK, we are building a WES system (called WW), ac- cording to the principles set out in Section 2. WW is an environ- ment and framework for simulating social network community behaviours, with which we investigate emergent social properties of FACEBOOKâs platforms. We are using WW to (semi) automati- cally explore and find new improvements to strengthen Reliability, Integrity and Privacy on FACEBOOK platforms. WW is a WES system that uses techniques from Reinforcement Learning [49] to train bots (Multi Agent Reinforcement Learning) and Search Based Software Engineering [24] to search the product design space for mechanism optimisations: Mechanism Design.
Bots are represented by test users that perform different actions on real FACEBOOK infrastructures. In our current implementation, actions execute only the back-end code: bots do not generate HTTP requests, nor do they interact with the GUI of any platform surface; we use direct calls to internal FACEBOOK product libraries. These users are isolated from production using privacy constraints and a well-defined mechanism of actions and observations through which the bots access the underlying platform code. However, when one WW bot interacts with another (e.g., by sending a friend request or message) it uses the production back-end code stack, components and systems to do so, thereby ensuring platform realism.
Ahlgren et al.
3.1 Training WW bots To train bots to simulate real user behaviour, we use Reinforcement Learning (RL) techniques [49]. Our bot training top level approach is depicted in Figure 2. As can be seen from Figure 2 the WW bot training closely models that of a typical RL system [49]. That is, a bot executes an action in the environment, which in turn, returns an observation (or current state), and an eventual reward to the bot. Using this information, the bot decides to take an action, and thus the SARSA (State-Action-Reward-State-Action) loop is executed during a simulation.
However, when considering the environment, we explicitly tease apart the mechanism from the underlying platform. The platform is out of WWâs control: its code can change, since it is under continual development by developers, but WW cannot change the platform code itself. Furthermore, WW cannot determine the behaviour of the platform. The platform may choose to terminate and/or it may choose to allocate different resources on each episode of the simulation. Furthermore, the social graph at the heart of the database is also continually changing.
The mechanism helps to maintain a consistent interface for WW bots, so that their code is insulated from such changes. It also mediates the actions and observations a bot can make and witness, so that many different mechanisms can be explored without any need to change the platform. As can be seen from Figure 2, the mechanism is separated from the platform. Each bot contains its own set of mechanism parameters, so that each can report the fitness of a different mechanism parameter choice. At the same time, the bots seek to achieve their goals, guided by reinforcement learning.
For example, to simulate scammers and their targets, we need at least two bots, one to simulate the scammer and another to simulate the potential target of the scam. The precise technical details of how we impede scammers on WW are obviously sensitive, so we cannot reveal them here. Nevertheless, we are able to outline the technical principles.
The reinforcement learning reward for the scammer bot is based on its ability to find suitable candidate targets, while the candidate targets are simulated by rule-based bots that exhibit likely target behaviours. The mechanism parameters residing in the scammer bots are used to compute fitness in terms of the mechanismâs ability to impede the scammers in their goal of finding suitable targets.
This use case need not involve a large number of bots. However, the ability to simulate at scale gives us automated parallelisation for increased scalability of the underlying search, and also supports av- eraging fitness and reward results over multiple parallel executions. Such averaging can be useful for statistical testing since results are inherently stochastic.
3.2 Top Level Implementation The top level components of the FACEBOOK WW system are de- picted in Figure 3. This is a very high level view; there is, of course, a lot more detail not shown here. We focus on key components to illustrate overall principles. WW consists of two overall sub- systems: the general framework classes, which are the core of our simulation platform (and remain unchanged for each use case), and the per-use-case classes (that are tailored for each use case).
WES: Agent-based User Interaction Simulation on Real Infrastructure
Observation Reward + Mechanism Design Bot Platform Environment
Actions
Figure 2: WW Reinforcement Learning Architecture: bots ex- ecute actions in the environment, which in turn, returns an observation, and an eventual reward. The platformâs code is unchanged by simulation, while the mechanism through which the bots interact with the platform is subject to change during the simulation process (to explore optimisa- tions of the underlying platform).
General framework ScriptRunner Per-use-case Je \ Monitor Script | Objective Bot proses > Model
Figure 3: WW implementation: general framework compo- nents form the core platform, while per-use-case compo- nents are specialised to each use case.
3.2.1 General Framework classes.
ScriptRunner: entry point to the WW simulation. It is responsi- ble for building the environment necessary for a WW script, executing the state machine, and setting up monitoring of the results.
Monitor: responsible for recording events and collecting data for post-analysis, as the Script is run.
Objective: represents an objective that a Script is aiming to achieve. Possible objectives include time, steps, episodes, âresultsâ (such as vulnerabilities found, etc.). The objective is also used to de- termine when to end the simulation.
GI 2020, October 2020, Korea
Algorithm 1: Pseudo-code of the ScriptRunner loop
1 Setup 2 Script creates the environment and the Bots. 3 while Objective is not reached do 4
Advance the virtual-time clock. Execute the action of the next Bot. Observe and log events and data.
5
6
# 7 End 8 Finalize the simulation (cleanup).
Model: a machine learning model for the bot, e.g., a Policy Gradi- ent model with determined parameters.
3.2.2 Per-use-case classes. The core WW platform consists of the general framework class together with a set of components from which the per-use case classes are defined. In order to define each use case, we simply define a script and a bot class, using the com- ponents and deploy them on the general framework.
Script: describes the user community (e.g., the size of the graph), and the environment where the users will interact (e.g., groups with fake news).
Bot: an automated agent (represented by a test user) with a partic- ular behaviour defined by actions. For example, a FACEBOOK Messenger user. A bot interacts with other users (as defined by its behaviour), and can have its own learning model.
4 APPLICATIONS OF WW AT FACEBOOK We believe many of these application use cases for WW may gen- eralise to other WES systems, but we give them here in their FACE- BOOK context to illustrate WES applications with specific concrete use cases. At the time of writing we are focusing our engineering effort on the applications of WW to integrity challenges, but we fully anticipate application to the other areas listed in the section in due course. Indeed, we expect many more use cases to emerge as the development of the WW infrastructure matures.
4.1 Integrity and Privacy In any large scale system, not all user behaviour will be benign; some behaviours are perpetrated by bad actors, intent on exploiting the platform and/or its users. On the Facebook platform such bad actor user behaviour includes any actions that contravene FACE- BOOKâs Community Standards [15].
We are using WW to enhance our ability to detect bad actor behaviours. We are also developing Automated Mechanism Design approaches that search product design space to find ways to harden the platform against such bad actors, thereby promoting the in- tegrity of the platform and its users. In this section, we illustrate both with applications of WW to the challenges of detecting and preventing contravention of integrity constraints.
WW also provides us with a realistic, yet safely isolated, way to investigate potential privacy issues on the real platform. Because the WW bots are isolated from affecting real users, they can be trained to perform potentially privacyâviolating actions on each other.
GI 2020, October 2020, Korea
On the other hand, because the bots use the real infrastructure, the actionability of any potential issues found through such simula- tion is considerably increased (compared to a traditional simulation on a purely theoretical model). Simulating bad actors: Consider the problem of users trying to post violating content on the Facebook platform. Even though we employ state-of-the-art classifiers and algorithms to protect the platform, we need to be proactive in our exploration of the space of potential exploits; WW provides one way to do this. If our bots succeed in finding novel undetected contravening behaviours, the WW simulation has allowed us to head off a potential integrity vulnerability. Search for bad content: Bad actors use our platform to try to find policyâviolating content, or to try to locate and communicate with users who may share their bad intent. Our bots can be used to simulate such bad actors, exploring the social graph. This enables them, for example, to seek out policyâviolating content and the violators that create it. WW bots can also search for clusters of users sharing policyâviolating content. Searching for mechanisms that impede bad actors: We use automated mechanism design to search for ways to frustrate these bad actors from achieving their goals within the simulation. This makes the optimisation problem a multiâobjective one, in which the bots seek to achieve bad actions, while the system simultaneously searches for mechanisms that frustrate their ability to do so. The search is also multi objective because we require new mechanisms that simultaneously frustrate the achievement of bad actions, while having little or no noticeable impact on normal users. In this way we use WW automated mechanism design to explore the space of potential changes that may also lead to greater integrity of the real platform.
Interestingly, this is a problem where preventing bad activity does not require the ability to detect it. Automated search may yield mechanisms that frustrate scamming, for example by hiding potential victims from scammers, without necessarily relying on classifiers that detect such scammers. This is reminiscent of the way in which removing side effects (which may be computable), does not require the ability to detect side effects (which is undecidable, in general) [21]. Bots that break privacy rules: In the Facebook infrastructure, every entity has wellâdefined privacy rules. Creating bots trained to seek to achieve the sole purpose of breaking these privacy rules (e.g., to access another botâs private photos) is thus a way to surface potential bugs, as well as unexpected changes in the systemâs be- haviour. For example, if a bot was never previously able to perform a certain action (e.g., access another botâs message), but becomes able to do so after a code change, this could highlight a change in privacy rules that resulted in unexpected behaviours. Data acquiring bots: Even with the privacy rules currently in place, a Facebook user has the ability to access another usersâ data (with consent, of course). This functionality is a necessary part of the normal usability of the platform. Nevertheless, we need to maintain a constantly vigilant posture against any potential to exploit this fundamentally important ability. By creating bots whose sole purpose is to accrue as much data as possible from each other, we are able to test our preventative measures and their effectiveness against this type of behaviour.
Ahlgren et al.
_â~ Allbugs Bugs \ Bugs \ caught by caughtby =| | traditional WES }} tests /
Figure 4: Traditional testing methods such as unit tests, or end-to-end tests, only succeed in capturing a subset of all possible bugs. WW gives developers the option to test on a new level of abstraction, one that takes community be- haviour and interactions into account.
4.2 Reliability Large organisation like Facebook naturally face challenges of relia- bility at scale. WW is not only a simulation of hundreds of millions of lines of code; it is a software system that runs on top of those very same lines of code. In order to cater for the reliability of the WW system itself, we use a DevOps approach, commonly used throughout the company [16]. WW runs in continuous deployment as a production version, underpinned by suitable maintenance pro- cedures, such as time series monitoring and analysis, alarms and warnings and on-call rotations. However, WW can also be used to explore the reliability of the platform as we outline in this section. Social Bugs: WW provides tools for social testing, whereby failures can be expressed as percentages, rather than more traditional binary success/fail. All traditional tests might execute successfully, yet we still observe an âsocial bugâ issue in production. Examples include drops in top line product metrics, significant changes in machine learning classification outcomes, big jumps in data pipeline line breakages.
These kind of bugs have many causes including code, data and/or configuration changes. While all could, in theory be detected by lower levels of test abstraction, it is useful to have a WES style final âfull ecosystemâ test (as opposed to âfull systemâ test). With WW we can detect such a significant metric change before such a change affects real users, because it tests the whole ecosystem with a simulation of the community that uses the platform. A single user test, even at full system level, would be insufficiently expressive to capture community interaction faults.
We also retain lower levels of testing abstraction. The WW sim- ulation is the most computationally expensive form of testing we have, as well as the highest level of abstraction. Also, although âplat- form realismâ is the goal of all WES systems, there are necessary compromises to achieve scalability, as discussed in Section 5. The WES Test Oracle: These metrics play the role of test oracle [5], thereby ensuring that the platform level testing problem can be entirely automated. Of course, since WW is a scaled down version of the real community, there is a need to tune such metrics and alerts, but this is, itself, an interesting machine learning problem.
WES: Agent-based User Interaction Simulation on Real Infrastructure
5 OPEN PROBLEMS AND SOME RELATED WORK THAT MAY HELP TACKLE THEM In this section we review related work and highlight open prob- lems and challenges for the WES research agenda. Neither our characterisation of related work, nor our list of open problems is comprehensive. We are excited to work with the academic and sci- entific research community to tackle these open problems together using such related work and/or other promising approaches.
Naturally, we can expect research to draw on the extensive body of work on simulation, and in particular, multi agent simulation [41], which is now relatively mature, with the advent of so-called âsophis- botsâ that are hard to distinguish from real users [8].
There are also frameworks for simulation of software systems and communities, but these tend to focus on traditional simulation rather than on-platform simulation, the sine qua non of a WES system. For example, RecSym [29] uses an abstraction of a generic Recommender System (RS) infrastructure to simulate the behaviour of different choices of Reinforcement Learning (RL) for recommend- ing content to users. The most important difference, is that WW uses RL (and other techniques) to train bots to behave like users so that the behaviours of users on the real Facebook infrastructure can be better simulated, whereas RecSym simulates the behaviour of an abstraction on Infra with respect to a given RL.
5.1 Open Problems and Challenges Since WES systems, more generally, rely on training bots to simu- late real users on real software platforms, there is a pressing need for further research on a number of related topics. This section lists 15 areas of related work that can contribute to tackling open WES research agenda challenges. The large number and diversity of topics and challenges underscores the rich opportunities for research. Another Application for MARL: Recent developments in Multi Agent Reinforcement Learning (MARL) [30] may offer techniques that can be adapted and applied to WES systems. One important challenge is to find ways to train bots to behave like specific classes of bad actors. Multi Objective Search: Typically, Software Engineering prob- lems, such as reliability and integrity, will have a multi objective character. For example, it would be insufficient to constrain a mech- anism to frustrate bad actors, if one does not counter-balance this objective against the (potentially competing) objective of not imped- ing normal users in their routine use of the platform. Fortunately, multi objective optimisation algorithms, such as NSGA II [13], are readily available and have been widelyâstudied in the Software Engineering community for over two decades [23]. More research is needed on the application of multi objective search to WES prob- lems. AI Assisted Game Play: The WES agenda naturally draws on previous work on artificial intelligence for software game play. Re- cent advances on competitive game playing behaviour [51] may be adapted to also imbue WES bots with more realistic social inter- actions. In a WES system we do not necessarily need competitive âplayâ, but realistic social interaction; the rewards and fitness func- tions may differ, but key insights may, nevertheless, carry over between these two related application paradigms.
GI 2020, October 2020, Korea
In some WES applications it may also be important to avoid the bots acquiring super-human abilities, such as interacting faster than any human ever could. This is also a problem recently tackled in machine learning for computer game playing optimisation [51]. Automated Mechanism Design:
Mechanism Design is a form of automated game design, a topic that has been studied for over a decade [50], and that remains an active area of research [33]. The challenge is to define techniques for deployment, testing and for efficiently and effectively searching the mechanism space. Tackling these problems may draw on previ- ous work on Genetic Improvement [45], Program Synthesis [18], Constraint Solving [31], and Model Checking [11]. Co-evolutionary Mechanism Learning: Automatically improv- ing the platform mechanism to frustrate some well-known attack from a class of bad actors may yield short term relief from such bad actors. However, in order to continue to remain âahead of the gameâ and to thereby frustrate all possible, as yet unknown attacks, we need a form of co-evolutionary optimisation; the mechanism is optimised to frustrate bad actions, while the bots simultaneously learn counter strategies that allow them to achieve these bad actions despite the improved mechanism.
Co-evolutionary optimisation is well-suited to this form of âarms raceâ. Co-evolutionary optimisation has not yet received widespread attention from the SBSE community, although there is some initial literature on the topic [1, 4, 46]. Co-evolutionary Mechanism Design therefore establishes an new avenue of research that promises to widen the appeal and application of co-evolutionary approaches to software engineering problems. End User Realism and Isolation: In some applications, WES bots will need to be trained to faithfully model the behaviours of the platformâs real users; âend user realismâ. Tackling this may rely on recent advances in machine learning, but will also be constrained by the need for user privacy. There is also interesting research to be done on the degree of end user realism required, and metrics for measuring such realism, a problem only hitherto lightly explored in testing research [2, 9, 14, 37].
Because bots are isolated from real users, we face the research challenge of defining, capturing, measuring, and replicating realistic behaviour. Faithfully replicating every aspect of end user behaviour is seldom necessary. Indeed, in some cases, end user realism is not required at all. For example, for social testing the FACEBOOK Mes- senger system, we found that it was sufficient to have a community of bots regularly sending messages to one another in order to detect some social bugs that manifest through drops in production metrics, such as number of messages sent.
For integrity-facing applications, such as preventing bad actorsâ harmful interaction with normal users, we need reasonably faithful bad actor bot behaviours, and bots that faithfully replicate normal usersâ responses to such bad actors. This is a challenging, but highly impactful research area. Search Based Software Engineering: Many of the applications of WES approaches lie within the remit of software engineering, and it can be expected that software engineering, and in particu- lar, Search Based Software Engineering (SBSE) [23] may also find application to open problems and challenges for WES systems. In common with SBSE, WES systems share the important salient char- acteristic that the simulation is executed on the real system itself.
GI 2020, October 2020, Korea
It is therefore âdirectâ. This directness is one advantage of SBSE over other engineering applications of computational search [20]. We can expect similar advantages for WES systems. By contrast, traditional simulation tends to be highly indirect: The simulation is not built from the real systemâs components, but as an abstraction of a theoretical model of the real system and its environment. Diff Batching: The WES approach has the advantage that it allows engineers to investigate properties of proposed changes to the platform. However, for larger organisations, the volume of changes poses challenges itself. For example, at FACEBOOK, over 100,000 modifications to the repository land in master every week [3]. Faced with this volume of changes, many companies, not just FACEBOOK [43], use Diff batching, with which collections of code changes are clustered together. More work is needed on smarter clustering techniques that group code modifications (aka Diffs) in ways that promote subsequent bisection [43]. Speed up: Simulated clock time is a property under experimental control. It can be artificially sped up, thereby yielding results in orders of magnitude less real time than a production A/B test [48]. However, since a WES system uses real infrastructure, we cannot always speed up behaviour without introducing non-determinism: If bots interact faster (with the system and each other) this may introduce race conditions and other behaviours that would tend to be thought of as flakiness in the testing paradigm [25, 35]. Social Testing: Section 2.4 introduces a new form of software testing, which we call âSocial Testingâ. Testing is generally regarded as an activity that takes place at different levels of abstraction, with unit testing typically being regarded as lowest level, while system level testing is regarded as highest level. Social testing adds a new level of abstraction above system level testing. There are so many interesting problems in social testing that a complete treatment would require a full paper in its own right. In this brief paper we hope we have sufficiently motivated the introduction of this new higher level of abstraction, and that others will be encouraged to take up research on social testing. Predictive Systems: WES systems would benefit from automated prediction (based on the simulation) of the future properties of the real world. This will help translate insights from the simulation to actionable implications for the real world phenomena. Therefore, research on predictive modelling [10, 19] is also highly relevant to the WES research agenda. Causality: To be actionable, changes proposed will also need to cor- relate with improvements in the real world, drawing potentially on advances in causal reasoning [44], another topic of recent interest in the software engineering community [39]. Simulating Developer Communities: Although this paper has focused on WES for social media users, a possible avenue for other WES systems lies in simulation of developer communities. This is a potential new avenue for the Mining Software Repositories (MSR) research community [26]. The challenge is to mine information that can be usefully employed to train bots to behave like developers, thereby exploring emergent developer community properties using WES approaches. This may have applications to and benefit from MSR. It is also related to topics such as App Store analysis [40], for which the community combines developers and users, and to software ecosystems [36], which combine diverse developer sub- communities.
Ahlgren et al.
Synthetic Graph Generation: For WW, we are concerned with the simulation of social media. Read-only bots can operate on the real social network, which is protected by isolation. However, many applications require writer bots. Naturally, we do not want WW writer bots interacting with real users in unexpected ways, so part of our isolation strategy involves large scale generation of large synthetic (but representative) graphs. This is an interesting research problem in its own right. On a synthetic graph it will also be possible to deploy fully isolated bots that can exhibit arbitrary actions and observations, without the need for extra mechanism constraints to enforce isolation. Game Theory: A WES execution is a form of game, in which both the players and the rules of the game can be optimised (possibly in a co-evolutionary âarms raceâ). Formal game theoretic investigation of simulations offers the possibility of underpinning the empiri- cal observations with mathematical analysis. Naturally, empirical game-theoretic analysis [52] is also highly relevant. There has been recent interest in game theoretic formulations in the Software En- gineering community [17]. WES systems may provide a further stimulus for this Game Theoretic Software Engineering research agenda.
6 CONCLUSION In this paper we set out a new research agenda: Web-Enabled Sim- ulation of user communities. This WES agenda draws on rich re- search strands, including machine learning and optimisation, multi agent technologies, reliability, integrity, privacy and security as well as traditional simulation, and topics in user community and emergent behaviour analysis.
The promise of WES is realistic, actionable, on-platform simula- tion of complex community interactions that can be used to better understand and automatically improve deployments of multi-user systems. In this short paper, we merely outline the WES research agenda and some of its open problems and research challenges. Much more remains to be done. We hope that this paper will en- courage further uptake and research on this exciting WES research agenda.
7 ACKNOWLEDGEMENTS The authors would like to thank Facebookâs engineering leadership for supporting this work and also the many Facebook engineers who have provided valuable feedback, suggestions and use cases for the FACEBOOK WW system.
REFERENCES [1] Konstantinos Adamopoulos, Mark Harman, and Robert Mark Hierons. 2004. Mutation Testing Using Genetic Algorithms: A Co-evolution Approach. In Genetic and Evolutionary Computation Conference (GECCO 2004), LNCS 3103. Springer, Seattle, Washington, USA, 1338â1349.
[2] Sheeva Afshan, Phil McMinn, and Mark Stevenson. 2013. Evolving Readable String Test Inputs Using a Natural Language Model to Reduce Human Oracle Cost. In International Conference on Software Testing, Verification and Validation (ICST 2013). 352â361.
[3] Nadia Alshahwan, Xinbo Gao, Mark Harman, Yue Jia, Ke Mao, Alexander Mols, Taijin Tei, and Ilya Zorin. 2018. Deploying Search Based Software Engineering with Sapienz at Facebook (keynote paper). In 10t h International Symposium on Search Based Software Engineering (SSBSE 2018). Montpellier, France, 3â45. Springer LNCS 11036.
[4] Andrea Arcuri and Xin Yao. 2010. Co-evolutionary automatic programming Information Sciences (2010). https://doi.org/doi:
WES: Agent-based User Interaction Simulation on Real Infrastructure
10.1016/j.ins.2009.12.019 To appear. Available on line from Elsevier.
[5] Earl T. Barr, Mark Harman, Phil McMinn, Muzammil Shahbaz, and Shin Yoo. 2015. The Oracle Problem in Software Testing: A Survey. IEEE Transactions on Software Engineering 41, 5 (May 2015), 507â525.
[6] Jonathan Bell, Gail Kaiser, Eric Melski, and Mohan Dattatreya. 2015. Efficient dependency detection for safe Java test acceleration. In Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering. 770â781.
[7] Antonia Bertolino. 2007. Software testing research: Achievements, challenges, dreams. In Future of Software Engineering (FOSEâ07). IEEE, 85â103.
[8] Dan Boneh, Andrew J. Grotto, Patrick McDaniel, and Nicolas Papernot. 2019. How Relevant is the Turing Test in the Age of Sophisbots? arXiv e-prints, Article arXiv:1909.00056 (Aug 2019), arXiv:1909.00056 pages. arXiv:cs.CY/1909.00056
[9] Mustafa Bozkurt and Mark Harman. 2011. Automatically generating realistic test input from web services. In IEEE 6th International Symposium on Service Oriented System Engineering (SOSE 2011), Jerry Zeyu Gao, Xiaodong Lu, Muhammad Younas, and Hong Zhu (Eds.). IEEE, Irvine, CA, USA, 13â24.
[10] Cagatay Catal and Banu Diri. 2009. A systematic review of software fault predic- tion studies. Expert systems with applications 36, 4 (2009), 7346â7354.
[11] Edmund M Clarke Jr, Orna Grumberg, Daniel Kroening, Doron Peled, and Helmut Veith. 2018. Model checking. MIT press.
[12] Paul Clements and Linda Northrop. 2001. Software Product Lines: Practices and Patterns. Addison-Wesley Professional. 608 pages.
[13] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. 2002. A fast and elitist multiobjec- tive genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6 (April 2002), 182â197. Issue 2.
[14] Dirk Draheim, John Grundy, John Hosking, Christof Lutteroth, and Gerald We- ber. 2006. Realistic load testing of web applications. In Conference on Software Maintenance and Reengineering (CSMRâ06). IEEE, 11âpp.
[15] FACEBOOK, INC. 2020. Community Standards. https://www.facebook.com/ communitystandards/
[16] Dror G. Feitelson, Eitan Frachtenberg, and Kent L. Beck. 2013. Development and Deployment at Facebook. IEEE Internet Computing 17, 4 (2013), 8â17.
[17] Carlos Gavidia-Calderon, Federica Sarro, Mark Harman, and Earl T. Barr. To appear. The Assessorâs Dilemma: Improving Bug Repair via Empirical Game Theory. IEEE Transactions on Software Engineering (To appear).
[18] Sumit Gulwani, Oleksandr Polozov, Rishabh Singh, et al. 2017. Program synthesis. Foundations and Trends in Programming Languages 4, 1-2 (2017), 1â119.
[19] Mark Harman. 2010. How SBSE Can Support Construction and Analysis of Predictive Models (Keynote Paper). In 6t h International Conference on Predictive Models in Software Engineering (PROMISE 2010). Timisoara, Romania.
[20] Mark Harman. 2010. Search Based Software Engineering (Keynote Paper). In 13t h International Conference on Fundamental Approaches to Software Engineering (FASE 2010). Paphos, Cyprus.
[21] Mark Harman, Lin Hu, Robert Mark Hierons, Xingyuan Zhang, Malcolm Munro, José Javier Dolado, Mari Carmen Otero, and Joachim Wegener. 2002. A Post- Placement Side-Effect Removal Algorithm. In IEEE International Conference on Software Maintenance (Montreal, Canada). IEEE Computer Society Press, Los Alamitos, California, USA, 2â11.
[22] Mark Harman, Yue Jia, Jens Krinke, Bill Langdon, Justyna Petke, and Yuanyuan Zhang. 2014. Search based software engineering for software product line engi- neering: a survey and directions for future work. In 18t h International Software Product Line Conference (SPLC 14). Florence, Italy, 5â18.
[23] Mark Harman and Bryan F. Jones. 2001. Search Based Software Engineering. Information and Software Technology 43, 14 (Dec. 2001), 833â839.
[24] Mark Harman, Afshin Mansouri, and Yuanyuan Zhang. 2012. Search Based Software Engineering: Trends, Techniques and Applications. Comput. Surveys 45, 1 (November 2012), 11:1â11:61.
[25] Mark Harman and Peter OâHearn. 2018. From Start-ups to Scale-ups: Opportu- nities and Open Problems for Static and Dynamic Program Analysis (keynote paper). In 18t h IEEE International Working Conference on Source Code Analysis and Manipulation (SCAM 2018). Madrid, Spain, 1â23.
[26] Ahmed E Hassan. 2008. The road ahead for mining software repositories. In 2008 Frontiers of Software Maintenance. IEEE, 48â57.
[27] Michael Hilton, Nicholas Nelson, Timothy Tunnell, Darko Marinov, and Danny Dig. 2017. Trade-offs in continuous integration: assurance, security, and flexi- bility. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. 197â207.
[28] Leonid Hurwicz and Stanley Reiter. 2006. Designing Economic Mechanisms. Cam- bridge University Press.
[29] Eugene Ie, Chih-wei Hsu, Martin Mladenov, Vihan Jain, Sanmit Narvekar, Jing Wang, Rui Wu, and Craig Boutilier. 2019. RecSim: A Configurable Simulation Platform for Recommender Systems. arXiv e-prints, Article arXiv:1909.04847 (Sep 2019), arXiv:1909.04847 pages. arXiv:cs.LG/1909.04847
[30] Nicholas R Jennings and Michael J Wooldridge. 1996. Software agents. IEE review (1996), 17â20.
[31] Dongwon Kang, Jinhwan Jung, and Doo-Hwan Bae. 2011. Constraint-based Hu- man Resource Allocation in Software Projects. Software: Practice and Experience
GI 2020, October 2020, Korea
41, 5 (April 2011), 551â577.
[32] Jack PC Kleijnen. 2005. Supply chain simulation tools and techniques: a survey. International journal of simulation and process modelling 1, 1-2 (2005), 82â89. [33] Kamolwan Kunanusont, Raluca D Gaina, Jialin Liu, Diego Perez-Liebana, and Simon M Lucas. 2017. The n-tuple bandit evolutionary algorithm for automatic game improvement. In 2017 IEEE Congress on Evolutionary Computation (CEC). IEEE, 2201â2208.
[34] Ada Augusta Lovelace. 1843. Sketch of the Analytical Engine Invented by Charles Babbage By L. F. Menabrea of Turin, officer of the military engineers, with notes by the translator. (1843). Translation with notes on article in italian in Bibliothèque Universelle de Genève, October, 1842, Number 82.
[35] Qingzhou Luo, Farah Hariri, Lamyaa Eloussi, and Darko Marinov. 2014. An empirical analysis of flaky tests. In 22nd International Symposium on Foundations of Software Engineering (FSE 2014), Shing-Chi Cheung, Alessandro Orso, and Margaret-Anne Storey (Eds.). ACM, Hong Kong, China, 643â653.
[36] Konstantinos Manikas and Klaus Marius Hansen. 2013. Software ecosystemsâ A systematic literature review. Journal of Systems and Software 86, 5 (2013), 1294â1306.
[37] Ke Mao, Mark Harman, and Yue Jia. 2017. Robotic Testing of Mobile Apps for Truly Black-Box Automation. IEEE Software 34, 2 (2017), 11â16.
[38] Alexandru Marginean, Johannes Bader, Satish Chandra, Mark Harman, Yue Jia, Ke Mao, Alexander Mols, and Andrew Scott. 2019. SapFix: Automated End-to- End Repair at Scale. In International Conference on Software Engineering (ICSE) Software Engineering in Practice (SEIP) track. Montreal, Canada.
[39] William Martin, Federica Sarro, and Mark Harman. 2016. Causal Impact Analysis for App Releases in Google Play. In 24th ACM SIGSOFT International Symposium on the Foundations of Software Engineering(FSE 2016). Seattle, WA, USA, 435â446. [40] William Martin, Federica Sarro, Yue Jia, Yuanyuan Zhang, and Mark Harman. 2017. A Survey of App Store Analysis for Software Engineering. IEEE Transactions on Software Engineering 43, 9 (2017).
[41] Fabien Michel, Jacques Ferber, and Alexis Drogoul. 2018. Multi-Agent Systems and Simulation: A Survey from the Agent Commu-nityâÄŹs Perspective. In Multi-Agent Systems. CRC Press, 17â66.
[42] Roger B Myerson. 2013. Game theory. Harvard university press. [43] Armin Najafi, Peter C. Rigby, and Weiyi Shang. 2019. Bisecting Commits and Modeling Commit Risk during Testing. In Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2019). 279âÄÅ289. [44] Judea Pearl. 2000. Causality. Cambridge University Press, Cambridge. [45] Justyna Petke, Saemundur O. Haraldsson, Mark Harman, William B. Langdon, David R. White, and John R. Woodward. 2018. Genetic Improvement of Software: a Comprehensive Survey. IEEE Transactions on Evolutionary Computation 22, 3 (June 2018), 415â432. https://doi.org/doi:10.1109/TEVC.2017.2693219
[46] Jian Ren, Mark Harman, and Massimiliano Di Penta. 2011. Cooperative Co- evolutionary Optimization on Software Project Staff Assignments and Job Sched- uling. In 3r d International Symposium on Search based Software Engineering (SSBSE 2011). 127â141. LNCS Volume 6956.
[47] Luis G. Serrano. [n.d.]. Grokking Machine Learning. Manning Publications. [48] Dan Siroker and Pete Koomen. 2013. A/B testing: The most powerful way to turn
clicks into customers. John Wiley & Sons.
[49] Richard S. Sutton (Ed.). 1992. Reinforcement Learning. SECS, Vol. 173. Kluwer Academic Publishers. Reprinted from volume 8(3â4) (1992) of Machine Learning. [50] Julian Togelius and Jurgen Schmidhuber. 2008. An experiment in automatic game design. In 2008 IEEE Symposium On Computational Intelligence and Games. IEEE, 111â118.
[51] Oriol Vinyals, Igor Babuschkin, Wojciech M. Czarnecki, MichaÃÅl Mathieu, An- drew Dudzik, Junyoung Chung, David H. Choi, Richard Powell, Timo Ewalds, Petko Georgiev, Junhyuk Oh, Dan Horgan, Manuel Kroiss, Ivo Danihelka, Aja Huang, Laurent Sifre, Trevor Cai, John P. Agapiou, Max Jaderberg, Alexander S. Vezhnevets, RÃľmi Leblond, Tobias Pohlen, Valentin Dalibard, David Budden, Yury Sulsky, James Molloy, Tom L. Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Yuhuai Wu, Roman Ring, Dani Yogatama, Dario WÃijnsch, Katrina McKin- ney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Koray Kavukcuoglu, Demis Hassabis, Chris Apps, and David Silver. 2019. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature 575, 7782 (2019), 350â354. [52] Michael P. Wellman. 2006. Methods for Empirical Game-Theoretic Analysis. In
Proceedings of the 21st AAAI Conference. 1552â1556.
[53] Douglas Brent West. 2001. Upper Saddle River. Introduction to graph theory. Vol. 2. Prentice hall
[54] Michael Wooldridge and Nicholas R Jennings. 1994. Agent theories, architec- tures, and languages: a survey. In International Workshop on Agent Theories, Architectures, and Languages. Springer, 1â39.
[55] Geogios N Yannakakis. 2012. Game AI revisited. In Proceedings of the 9th confer- ence on Computing Frontiers. 285â292. | {
"id": "1909.04847"
} |
2004.04696 | BLEURT: Learning Robust Metrics for Text Generation | Text generation has made significant advances in the last few years. Yet,
evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU
and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a
learned evaluation metric based on BERT that can model human judgments with a
few thousand possibly biased training examples. A key aspect of our approach is
a novel pre-training scheme that uses millions of synthetic examples to help
the model generalize. BLEURT provides state-of-the-art results on the last
three years of the WMT Metrics shared task and the WebNLG Competition dataset.
In contrast to a vanilla BERT-based approach, it yields superior results even
when the training data is scarce and out-of-distribution. | http://arxiv.org/pdf/2004.04696 | Thibault Sellam, Dipanjan Das, Ankur P. Parikh | cs.CL | Accepted at ACL 2020 | null | cs.CL | 20200409 | 20200521 | 0 2 0 2
y a M 1 2 ] L C . s c [
5 v 6 9 6 4 0 . 4 0 0 2 : v i X r a
# BLEURT: Learning Robust Metrics for Text Generation
Thibault Sellam Dipanjan Das Ankur P. Parikh Google Research New York, NY {tsellam, dipanjand, aparikh }@google.com
# Abstract
Text generation has made signiï¬cant advances in the last few years. Yet, evaluation met- rics have lagged behind, as the most popu- lar choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation met- ric based on BERT that can model human judgments with a few thousand possibly bi- ased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of- the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior re- sults even when the training data is scarce and out-of-distribution.
# Introduction
evaluation metrics, which provide an acceptable proxy for quality and are very cheap to compute. This paper investigates sentence-level, reference- based metrics, which describe the extent to which a candidate sentence is similar to a reference one. The exact deï¬nition of similarity may range from string overlap to logical entailment.
The ï¬rst generation of metrics relied on hand- crafted rules that measure the surface similarity between the sentences. To illustrate, BLEU (Pa- pineni et al., 2002) and ROUGE (Lin, 2004), two popular metrics, rely on N-gram overlap. Because those metrics are only sensitive to lexical vari- ation, they cannot appropriately reward seman- tic or syntactic variations of a given reference. Thus, they have been repeatedly shown to cor- relate poorly with human judgment, in particular when all the systems to compare have a similar level of accuracy (Liu et al., 2016; Novikova et al., 2017; Chaganty et al., 2018).
In the last few years, research in natural text generation (NLG) has made signiï¬cant progress, driven largely by the neural encoder-decoder paradigm (Sutskever et al., 2014; Bahdanau et al., 2015) which can tackle a wide array of tasks including translation (Koehn, 2009), summariza- tion (Mani, 1999; Chopra et al., 2016), structured- data-to-text generation (McKeown, 1992; Kukich, 1983; Wiseman et al., 2017) dialog (Smith and Hipp, 1994; Vinyals and Le, 2015) and image cap- tioning (Fang et al., 2015). However, progress is increasingly impeded by the shortcomings of ex- isting metrics (Wiseman et al., 2017; Ma et al., 2019; Tian et al., 2019).
Human evaluation is often the best indicator of the quality of a system. However, design- ing crowd sourcing experiments is an expensive and high-latency process, which does not easily ï¬t in a daily model development pipeline. There- fore, NLG researchers commonly use automatic
Increasingly, NLG researchers have addressed those problems by injecting learned components in their metrics. To illustrate, consider the WMT Metrics Shared Task, an annual benchmark in which translation metrics are compared on their ability to imitate human assessments. The last two years of the competition were largely dominated by neural net-based approaches, RUSE, YiSi and ESIM (Ma et al., 2018, 2019). Current approaches largely fall into two categories. Fully learned met- rics, such as BEER, RUSE, and ESIM are trained end-to-end, and they typically rely on handcrafted features and/or learned embeddings. Conversely, hybrid metrics, such as YiSi and BERTscore com- bine trained elements, e.g., contextual embed- dings, with handwritten logic, e.g., as token align- ment rules. The ï¬rst category typically offers great expressivity: if a training set of human ratings data is available, the metrics may take full advantage of it and ï¬t the ratings distribution tightly. Fur-
thermore, learned metrics can be tuned to measure task-speciï¬c properties, such as ï¬uency, faithful- ness, grammar, or style. On the other hand, hybrid metrics offer robustness. They may provide better results when there is little to no training data, and they do not rely on the assumption that training and test data are identically distributed.
And indeed, the IID assumption is particularly problematic in NLG evaluation because of domain drifts, that have been the main target of the metrics literature, but also because of quality drifts: NLG systems tend to get better over time, and therefore a model trained on ratings data from 2015 may fail to distinguish top performing systems in 2019, es- pecially for newer research tasks. An ideal learned metric would be able to both take full advantage of available ratings data for training, and be robust to distribution drifts, i.e., it should be able to extrap- olate.
Our insight is that it is possible to combine ex- pressivity and robustness by pre-training a fully learned metric on large amounts of synthetic data, before ï¬ne-tuning it on human ratings. To this end, we introduce BLEURT,1 a text generation metric based on BERT (Devlin et al., 2019). A key ingre- dient of BLEURT is a novel pre-training scheme, which uses random perturbations of Wikipedia sentences augmented with a diverse set of lexical and semantic-level supervision signals.
To demonstrate our approach, we train BLEURT for English and evaluate it under different gen- eralization regimes. We ï¬rst verify that it pro- vides state-of-the-art results on all recent years of the WMT Metrics Shared task (2017 to 2019, to-English language pairs). We then stress-test its ability to cope with quality drifts with a syn- thetic benchmark based on WMT 2017. Finally, we show that it can easily adapt to a different do- main with three tasks from a data-to-text dataset, WebNLG 2017 (Gardent et al., 2017). Ablations show that our synthetic pretraining scheme in- creases performance in the IID setting, and is crit- ical to ensure robustness when the training data is scarce, skewed, or out-of-domain.
The code and pre-trained models are available online2.
1Bilingual Evaluation Understudy with Representations from Transformers. We refer the intrigued reader to Papineni et al. 2002 for a justiï¬cation of the term understudy.
2http://github.com/google-research/ bleurt
# 2 Preliminaries
Deï¬ne x = (x1, .., xr) to be the reference sen- tence of length r where each xi is a token and let Ëx = (Ëx1, .., Ëxp) be a prediction sentence of length p. Let {(xi, Ëxi, yi)}N n=1 be a training dataset of size N where yi â R is the human rating that in- dicates how good Ëxi is with respect to xi. Given the training data, our goal is to learn a function f : (x, Ëx) â y that predicts the human rating.
# 3 Fine-Tuning BERT for Quality Evaluation
Given the small amounts of rating data available, it is natural to leverage unsupervised representations for this task. In our model, we use BERT (Bidirec- tional Encoder Representations from Transform- ers) (Devlin et al., 2019), which is an unsuper- vised technique that learns contextualized repre- sentations of sequences of text. Given x and Ëx, BERT is a Transformer (Vaswani et al., 2017) that returns a sequence of contextualized vectors:
v[CLS], vx1, ..., vxr , v1, ..., vËxp = BERT(x, Ëx)
where v[CLS] is the representation for the special [CLS] token. As described by Devlin et al. (2019), we add a linear layer on top of the [CLS] vector to predict the rating:
Ëy = f (x, Ëx) = W Ëv[CLS] + b
where W and 6 are the weight matrix and bias vector respectively. Both the above linear layer as well as the BERT parameters are trained (i.e. fine-tuned) on the supervised data which typically numbers in a few thousand examples. We use the regression loss lsupervised = 4 Don=1 lly â al?
Although this approach is quite straightforward, we will show in Section 5 that it gives state-of-the- art results on WMT Metrics Shared Task 17-19, which makes it a high-performing evaluation met- ric. However, ï¬ne-tuning BERT requires a sizable amount of IID data, which is less than ideal for a metric that should generalize to a variety of tasks and model drift.
# 4 Pre-Training on Synthetic Data
The key aspect of our approach is a pre-training technique that we use to âwarm upâ BERT before ï¬ne-tuning on rating data.3 We generate a large
3To clarify, our pre-training scheme is an addition, not a replacement to BERTâs initial training (Devlin et al., 2019) and happens after it.
number of of synthetic reference-candidate pairs (z, Ëz), and we train BERT on several lexical- and semantic-level supervision signals with a multi- task loss. As our experiments will show, BLEURT generalizes much better after this phase, especially with incomplete training data.
Any pre-training approach requires a dataset and a set of pre-training tasks. Ideally, the setup should resemble the ï¬nal NLG evaluation task, i.e., the sentence pairs should be distributed sim- ilarly and the pre-training signals should corre- late with human ratings. Unfortunately, we cannot have access to the NLG models that we will eval- uate in the future. Therefore, we optimized our scheme for generality, with three requirements. (1) The set of reference sentences should be large and diverse, so that BLEURT can cope with a wide range of NLG domains and tasks. (2) The sen- tence pairs should contain a wide variety of lex- ical, syntactic, and semantic dissimilarities. The aim here is to anticipate all variations that an NLG system may produce, e.g., phrase substitu- tion, paraphrases, noise, or omissions. (3) The pre-training objectives should effectively capture those phenomena, so that BLEURT can learn to identify them. The following sections present our approach.
# 4.1 Generating Sentence Pairs
One way to expose BLEURT to a wide variety of sentence differences is to use existing sentence pairs datasets (Bowman et al., 2015; Williams et al., 2018; Wang et al., 2019). These sets are a rich source of related sentences, but they may fail to capture the errors and alterations that NLG systems produce (e.g., omissions, repetitions, non- sensical substitutions). We opted for an automatic approach instead, that can be scaled arbitrarily and at little cost: we generate synthetic sentence pairs (z, Ëz) by randomly perturbing 1.8 million seg- ments z from Wikipedia. We use three techniques: mask-ï¬lling with BERT, backtranslation, and ran- domly dropping out words. We obtain about 6.5 million perturbations Ëz. Let us describe those techniques.
Mask-ï¬lling with BERT: BERTâs initial train- ing task is to ï¬ll gaps (i.e., masked tokens) in to- kenized sentences. We leverage this functional- ity by inserting masks at random positions in the Wikipedia sentences, and ï¬ll them with the lan- guage model. Thus, we introduce lexical alter-
ations while maintaining the ï¬uency of the sen- tence. We use two masking strategiesâwe either introduce the masks at random positions in the sentences, or we create contiguous sequences of masked tokens. More details are provided in the Appendix.
Backtranslation: We generate paraphrases and perturbations with backtranslation, that is, round trips from English to another language and then back to English with a translation model (Bannard and Callison-Burch, 2005; Ganitkevitch et al., 2013; Sennrich et al., 2016). Our primary aim is to create variants of the reference sentence that pre- serves semantics. Additionally, we use the mispre- dictions of the backtranslation models as a source of realistic alterations.
Dropping words: We found it useful in our ex- periments to randomly drop words from the syn- thetic examples above to create other examples. This method prepares BLEURT for âpathologicalâ behaviors or NLG systems, e.g., void predictions, or sentence truncation.
# 4.2 Pre-Training Signals
The next step is to augment each sentence pair (z, Ëz) with a set of pre-training signals {Ïk}, where Ïk is the target vector of pre-training task k. Good pre-training signals should capture a wide variety of lexical and semantic differences. They should also be cheap to obtain, so that the ap- proach can scale to large amounts of synthetic data. The following section presents our 9 pre- training tasks, summarized in Table 1. Additional implementation details are in the Appendix.
Automatic Metrics: We create three signals ÏBLEU, ÏROUGE, and ÏBERTscore with sentence BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and BERTscore (Zhang et al., 2020) re- spectively (we use precision, recall and F-score for the latter two).
Backtranslation Likelihood: The idea behind this signal is to leverage existing translation mod- els to measure semantic equivalence. Given a pair (z, Ëz), this training signal measures the probabil- ity that Ëz is a backtranslation of z, P ( Ëz|z), nor- malized by the length of Ëz. Let Penâfr(zfr|z) be a translation model that assigns probabilities to French sentences zfr conditioned on English sentences z and let Pfrâen(z|zfr) be a trans- lation model that assigns probabilities to English
Task Type BLEU ROUGE BERTscore Backtrans. likelihood Entailment Backtrans. ï¬ag Pre-training Signals ÏBLEU ÏROUGE = (ÏROUGE-P, ÏROUGE-R, ÏROUGE-F) Ïen-fr,z| Ëz, Ïen-fr, Ëz|z, Ïen-de,z| Ëz, Ïen-de, Ëz|z Ïentail = (ÏEntail, ÏContradict, ÏNeutral) Ïbacktran ï¬ag
Loss Type Regression Regression ÏBERTscore = (ÏBERTscore-P, ÏBERTscore-R, ÏBERTscore-F) Regression Regression Multiclass Multiclass
Table 1: Our pre-training signals.
If | Ëz| is the sentences given french sentences. number of tokens in Ëz, we deï¬ne our score as Ïen-fr, Ëz|z = log P ( Ëz|z) , with:
P ( Ëz|z) = Pfrâen( Ëz|zfr)Penâfr(zfr|z) zfr
WÏk Ëv[CLS] + bÏk . If Ïk is a classiï¬cation task, we use a separate linear layer to predict a logit for each class c: ËÏkc = WÏkc Ëv[CLS] + bÏkc, and we use the multiclass cross-entropy loss. We deï¬ne our aggregate pre-training loss function as follows:
Because over all possible in- tractable, we sum using zâ fr = arg max Penâfr(zfr|z) and we as- sume that Penâfr(zâ fr|z) â 1:
P ( Ëz|z) â Pfrâen( Ëz|zâ
M K 1 a Cpre-training = Mu S- So nba (Te FR") dd) m=1k=1
where Ï m k is the target vector for example m, M is number of synthetic examples, and γk are hy- perparameter weights obtained with grid search (more details in the Appendix).
We can trivially reverse the procedure to com- pute P (z| Ëz), thus we create 4 pre-training signals Ïen-fr,z| Ëz, Ïen-fr, Ëz|z, Ïen-de,z| Ëz, Ïen-de, Ëz|z with two pairs of languages (en â de and en â fr) in both directions.
Textual Entailment: The signal Ïentail expresses whether z entails or contradicts Ëz using a clas- siï¬er. We report the probability of three labels: Entail, Contradict, and Neutral, using BERT ï¬ne- tuned on an entailment dataset, MNLI (Devlin et al., 2019; Williams et al., 2018).
# 5 Experiments
In this section, we report our experimental results for two tasks, translation and data-to-text. First, we benchmark BLEURT against existing text gen- eration metrics on the last 3 years of the WMT Metrics Shared Task (Bojar et al., 2017). We then evaluate its robustness to quality drifts with a se- ries of synthetic datasets based on WMT17. We test BLEURTâs ability to adapt to different tasks with the WebNLG 2017 Challenge Dataset (Gar- dent et al., 2017). Finally, we measure the contri- bution of each pre-training task with ablation ex- periments.
Backtranslation ï¬ag: The signal Ïbacktran ï¬ag is a Boolean that indicates whether the perturbation was generated with backtranslation or with mask- ï¬lling.
# 4.3 Modeling
For each pre-training task, our model uses either a regression or a classiï¬cation loss. We then aggre- gate the task-level losses with a weighted sum.
Let 7; describe the target vector for each task, e.g., the probabilities for the classes Entail, Con- tradict, Neutral, or the precision, recall, and F- score for ROUGE. If 7;, is a regression task, then the loss used is the 2 loss ie. ¢, = ||T â Fx|[3/|7x| where |7;.| is the dimension of 7 and T, is computed by using a task-specific linear layer on top of the [CLS] embedding: 7, =
Our Models: Unless speciï¬ed otherwise, all BLEURT models are trained in three steps: reg- ular BERT pre-training (Devlin et al., 2019), pre-training on synthetic data (as explained in Section 4), and ï¬ne-tuning on task-speciï¬c rat- ings (translation and/or data-to-text). We exper- iment with two versions of BLEURT, BLEURT and BLEURTbase, respectively based on BERT- Large (24 layers, 1024 hidden units, 16 heads) and BERT-Base (12 layers, 768 hidden units, 12 heads) (Devlin et al., 2019), both uncased. We use batch size 32, learning rate 1e-5, and 800,000 steps for pre-training and 40,000 steps for ï¬ne- tuning. We provide the full detail of our training setup in the Appendix.
model sentBLEU MoverScore BERTscore w/ BERT BERTscore w/ roBERTa chrF++ BEER BLEURTbase -pre BLEURTbase BLEURT -pre BLEURT cs-en Ï / r 29.6 / 43.2 47.6 / 67.0 48.0 / 66.6 54.2 / 72.6 35.0 / 52.3 34.0 / 51.1 51.5 / 68.2 55.7 / 73.4 56.0 / 74.7 59.3 / 77.3 de-en Ï / r 28.9 / 42.2 51.2 / 70.8 50.3 / 70.1 56.9 / 76.0 36.5 / 53.4 36.1 / 53.0 52.0 / 70.7 56.3 / 75.7 57.1 / 75.7 59.9 / 79.2 ï¬-en Ï / r 38.6 / 56.0 NA 61.4 / 81.4 64.8 / 83.2 47.5 / 67.8 48.3 / 68.1 66.6 / 85.1 68.0 / 86.8 67.2 / 86.1 69.5 / 87.8 lv-en Ï / r 23.9 / 38.2 NA 51.6 / 72.3 56.2 / 75.7 33.3 / 52.0 32.8 / 51.5 60.8 / 80.5 64.7 / 83.3 62.3 / 81.7 64.4 / 83.5 ru-en Ï / r 34.3 / 47.7 53.4 / 73.8 53.7 / 73.0 57.2 / 75.2 41.5 / 58.8 40.2 / 57.7 57.5 / 77.7 60.1 / 80.1 58.4 / 78.3 61.3 / 81.1 tr-en Ï / r 34.3 / 54.0 56.1 / 76.2 55.6 / 76.0 57.9 / 76.1 43.2 / 61.4 42.8 / 60.0 56.9 / 76.0 62.4 / 81.7 61.6 / 81.4 62.9 / 82.4 zh-en Ï / r 37.4 / 51.3 53.1 / 74.4 52.2 / 73.1 58.8 / 78.9 40.5 / 59.3 39.5 / 58.2 52.1 / 72.1 59.5 / 80.5 55.9 / 76.5 60.2 / 81.4 avg Ï / r 32.4 / 47.5 52.3 / 72.4 53.3 / 73.2 58.0 / 76.8 39.6 / 57.9 39.1 / 57.1 56.8 / 75.8 61.0 / 80.2 59.8 / 79.2 62.5 / 81.8
Table 2: Agreement with human ratings on the WMT17 Metrics Shared Task. The metrics are Kendall Tau (Ï ) and the Pearson correlation (r, the ofï¬cial metric of the shared task), divided by 100.
model sentBLEU BERTscore w/ BERT BERTscore w/ roBERTa Meteor++ RUSE YiSi1 YiSi1 SRL 18 BLEURTbase -pre BLEURTbase BLEURT -pre BLEURT cs-en Ï / DA 20.0 / 22.5 29.5 / 40.0 31.2 / 41.1 22.4 / 26.8 27.0 / 34.5 23.5 / 31.7 23.3 / 31.5 33.0 / 39.0 34.5 / 42.9 34.5 / 42.1 35.6 / 42.3 de-en Ï / DA 31.6 / 41.5 39.9 / 53.8 42.2 / 55.5 34.7 / 45.7 36.1 / 49.8 35.5 / 48.8 34.3 / 48.3 41.5 / 54.6 43.5 / 55.6 42.7 / 55.4 44.2 / 56.7 et-en Ï / DA 26.0 / 28.2 34.7 / 39.0 37.0 / 40.3 29.7 / 32.9 32.9 / 36.8 30.2 / 35.1 29.8 / 34.5 38.2 / 39.6 39.2 / 40.5 39.2 / 40.6 40.0 / 41.4 ï¬-en Ï / DA 17.1 / 15.6 26.0 / 29.7 27.8 / 30.8 21.6 / 20.6 25.5 / 27.5 21.5 / 23.1 21.2 / 23.7 30.7 / 31.1 31.5 / 30.9 31.4 / 31.6 32.1 / 32.5 ru-en Ï / DA 20.5 / 22.4 27.8 / 34.7 30.2 / 35.4 22.8 / 25.3 25.0 / 31.1 23.3 / 30.0 22.6 / 30.6 30.7 / 34.9 31.0 / 35.7 31.4 / 34.2 31.9 / 36.0 tr-en Ï / DA 22.9 / 13.6 31.7 / 27.5 32.8 / 30.2 27.3 / 20.4 29.1 / 25.9 26.8 / 23.4 26.1 / 23.3 32.9 / 29.8 35.0 / 29.4 33.4 / 29.3 35.5 / 31.5 zh-en Ï / DA 21.6 / 17.6 27.5 / 25.2 29.2 / 26.3 23.6 / 17.5* 24.6 / 21.5* 23.1 / 20.9 22.9 / 20.7 28.3 / 25.6 29.6 / 26.9 28.9 / 25.6 29.7 / 26.0 avg Ï / DA 22.8 / 23.2 31.0 / 35.7 32.9 / 37.1 26.0 / 27.0 28.6 / 32.4 26.3 / 30.4 25.7 / 30.4 33.6 / 36.4 34.9 / 37.4 34.5 / 37.0 35.6 / 38.1
Table 3: Agreement with human ratings on the WMT18 Metrics Shared Task. The metrics are Kendall Tau (Ï ) and WMTâs Direct Assessment metrics divided by 100. The star * indicates results that are more than 0.2 percentage points away from the ofï¬cial WMT results (up to 0.4 percentage points away).
.
# 5.1 WMT Metrics Shared Task
Datasets and Metrics: We use years 2017 to 2019 of the WMT Metrics Shared Task, to-English language pairs. For each year, we used the of- ï¬cial WMT test set, which include several thou- sand pairs of sentences with human ratings from the news domain. The training sets contain 5,360, 9,492, and 147,691 records for each year. The test sets for years 2018 and 2019 are noisier, as re- ported by the organizers and shown by the overall lower correlations.
We evaluate the agreement between the auto- matic metrics and the human ratings. For each year, we report two metrics: Kendallâs Tau Ï (for consistency across experiments), and the ofï¬cial WMT metric for that year (for completeness). The ofï¬cial WMT metric is either Pearsonâs correla- tion or a robust variant of Kendallâs Tau called DARR, described in the Appendix. All the num- bers come from our own implementation of the benchmark.4 Our results are globally consistent with the ofï¬cial results but we report small differ- ences in 2018 and 2019, marked in the tables.
Models: We experiment with four versions of BLEURT: BLEURT, BLEURTbase, BLEURT -pre and BLEURTbase -pre. The ï¬rst two models are based on BERT-large and BERT-base. In the latter two versions, we skip the pre-training phase and ï¬ne-tune directly on the WMT ratings. For each year of the WMT shared task, we use the test set from the previous years for training and validation. We describe our setup in further detail in the Appendix. We compare BLEURT to partici- pant data from the shared task and automatic met- rics that we ran ourselves. In the former case, we use the the best-performing contestants for each year, that is, chrF++, BEER, Meteor++, RUSE, Yisi1, ESIM and Yisi1-SRL (Mathur et al., 2019). All the contestants use the same WMT training data, in addition to existing sentence or to- ken embeddings. In the latter case, we use Moses sentenceBLEU, BERTscore (Zhang et al., 2020), and MoverScore (Zhao et al., 2019). For BERTscore, we use BERT-large uncased for fairness, and roBERTa (the recommended ver- sion) for completeness (Liu et al., 2019). We run MoverScore on WMT 2017 using the scripts published by the authors.
4The ofï¬cial scripts are public but they suffer from docu- mentation and dependency issues, as shown by a README ï¬le in the 2019 edition which explicitly discourages using them.
Results: Tables 2, 3, 4 show the results. For years 2017 and 2018, a BLEURT-based metric
model sentBLEU BERTscore w/ BERT BERTscore w/ roBERTa ESIM YiSi1 SRL 19 BLEURTbase -pre BLEURTbase BLEURT -pre BLEURT de-en Ï / DA 19.4 / 5.4 26.2 / 17.3 29.1 / 19.3 28.4 / 16.6 26.3 / 19.8 30.1 / 15.8 31.0 / 16.6 31.1 / 16.9 31.2 / 16.9 ï¬-en Ï / DA 20.6 / 23.3 27.6 / 34.7 29.7 / 35.3 28.9 / 33.7 27.8 / 34.6 30.4 / 35.4 31.3 / 36.2 31.3 / 36.5 31.7 / 36.3 gu-en Ï / DA 17.3 / 18.9 25.8 / 29.3 27.7 / 32.4 27.1 / 30.4 26.6 / 30.6 26.8 / 29.7 27.9 / 30.6 27.6 / 31.3 28.3 / 31.9 kk-en Ï / DA 30.0 / 37.6 36.9 / 44.0 37.1 / 43.1 38.4 / 43.3 36.9 / 44.1 37.8 / 41.8 39.5 / 44.6 38.4 / 42.8 39.5 / 44.6 lt-en Ï / DA 23.8 / 26.2 30.8 / 37.4 32.6 / 38.2 33.2 / 35.9 30.9 / 38.0 34.2 / 39.0 35.2 / 39.4 35.0 / 40.0 35.2 / 40.6 ru-en Ï / DA 19.4 / 12.4 25.2 / 20.6 26.3 / 22.7 26.6 / 19.9 25.3 / 22.0 27.0 / 20.7 28.5 / 21.5 27.5 / 21.4 28.3 / 22.3 zh-en Ï / DA 28.7 / 32.2 37.5 / 41.4 41.4 / 43.8 38.7 / 39.6 38.9 / 43.1 40.1 / 39.8 41.7 / 41.6 41.6 / 41.4 42.7 / 42.4 avg Ï / DA 22.7 / 22.3 30.0 / 32.1 32.0 / 33.6 31.6 / 31.3 30.4 / 33.2 32.3 / 31.7 33.6 / 32.9 33.2 / 32.9 33.8 / 33.6
Table 4: Agreement with human ratings on the WMT19 Metrics Shared Task. The metrics are Kendall Tau (Ï ) and WMTâs Direct Assessment metrics divided by 100. All the values reported for Yisi1 SRL and ESIM fall within 0.2 percentage of the ofï¬cial WMT results.
Dataset Test Skew factor 0 05 1.0 15 3.0 Density (rescaled) Ratings
BLEURT No Pretrain. BLEURT w. Pretrain S ° ° ° Kendall Tau w. Human Ratings 0 1 2 30 1 2 3 Test Set skew
Figure 1: Distribution of the human ratings in the train/validation and test datasets for different skew fac- tors.
+-BERTscore BLEU train sk. âtrain sk. 1.0âtrain sk. 3.0 train sk. 0.5 âtrain sk. 1.5
Figure 2: Agreement between BLEURT and human ratings for different skew factors in train and test.
dominates the benchmark for each language pair (Tables 2 and 3). BLEURT and BLEURTbase are also competitive for year 2019: they yield the best results for all language pairs on Kendallâs Tau, and they come ï¬rst for 3 out of 7 pairs on DARR. As expected, BLEURT dominates BLEURTbase in the majority of cases. Pre-training consistently im- proves the results of BLEURT and BLEURTbase. We observe the largest effect on year 2017, where it adds up to 7.4 Kendall Tau points for BLEURTbase (zh-en). The effect is milder on years 2018 and 2019, up to 2.1 points (tr-en, 2018). We explain the difference by the fact that the training data used for 2017 is smaller than the datasets used for the following years, so pre-training is likelier to help. In general pre- training yields higher returns for BERT-base than for BERT-largeâin fact, BLEURTbase with pre- training is often better than BLEURT without.
a series of tasks for which it is increasingly pres- sured to extrapolate. All the experiments that fol- low are based on the WMT Metrics Shared Task 2017, because the ratings for this edition are par- ticularly reliable.5
Methodology: We create increasingly challeng- ing datasets by sub-sampling the records from the WMT Metrics shared task, keeping low-rated translations for training and high-rated translations for test. The key parameter is the skew factor α, that measures how much the training data is left- skewed and the test data is right-skewed. Figure 1 demonstrates the ratings distribution that we used in our experiments. The training data shrinks as α increases: in the most extreme case (α = 3.0), we use only 11.9% of the original 5,344 training records. We give the full detail of our sampling methodology in the Appendix.
Pre-training delivers consis- tent improvements, especially for BLEURT-base. BLEURT yields state-of-the art performance for all years of the WMT Metrics Shared task.
We use BLEURT with and without pre-training and we compare to Moses sentBLEU and BERTscore. We use BERT-large uncased for both BLEURT and BERTscore.
# 5.2 Robustness to Quality Drift
We assess our claim that pre-training makes BLEURT robust to quality drifts, by constructing
5The organizers managed to collect 15 adequacy scores for each translation, and thus the ratings are almost perfectly repeatable (Bojar et al., 2017)
Split by System Split by Input © ES ° iS dldla Aouany / 1 rPririfl Metric BLEU rITIritl Jalal | TER || Meteor sewer SS 2N 68 aS Kentall Tau w. Human Ratings csoosos cs ddidld didi {0 sonuewes 0/9 systems 2/9 systems 3/9 systems __5/9 systems Orecords 1,174 records 1,317 records 2,424 records Orecords 0/224 inputs 38/224 inputs 66/224 inputs 122/224 inputs 836 records 1,445 records 2,689 records Num. Systems/Inputs Used for Training and Validation
# BERTscore
# BLEURT -pre -wmt
# BLEURT -wmt
# steurt
Figure 3: Absolute Kendall Tau of BLEU, Meteor, and BLEURT with human judgements on the WebNLG dataset, varying the size of the data used for training and validation.
Results: Figure 2 presents BLEURTâs perfor- mance as we vary the train and test skew inde- pendently. Our ï¬rst observation is that the agree- ments fall for all metrics as we increase the test skew. This effect was already described is the 2019 WMT Metrics report (Ma et al., 2019). A common explanation is that the task gets more dif- ï¬cult as the ratings get closerâit is easier to dis- criminate between âgoodâ and âbadâ systems than to rank âgoodâ systems.
pairs in total (we removed null values). Each in- put comes with 1 to 3 reference descriptions. The submissions are evaluated on 3 aspects: semantics, grammar, and ï¬uency. We treat each type of rat- ing as a separate modeling task. The data has no natural split between train and test, therefore we experiment with several schemes. We allocate 0% to about 50% of the data to training, and we split on both the evaluated systems or the RDF inputs in order to test different generalization regimes.
Training skew has a disastrous effect on BLEURT without pre-training: is below BERTscore for α = 1.0, and it falls under sentBLEU for α ⥠1.5. Pre-trained BLEURT is much more robust: the only case in which it falls under the baselines is α = 3.0, the most extreme drift, for which incorrect translations are used for train while excellent ones for test.
Takeaways: Pre-training makes BLEURT sig- niï¬cantly more robust to quality drifts.
# 5.3 WebNLG Experiments
In this section, we evaluate BLEURTâs perfor- mance on three tasks from a data-to-text dataset, the WebNLG Challenge 2017 (Shimorina et al., 2019). The aim is to assess BLEURTâs capacity to adapt to new tasks with limited training data.
Dataset and Evaluation Tasks: The WebNLG challenge benchmarks systems that produce natu- ral language description of entities (e.g., buildings, cities, artists) from sets of 1 to 5 RDF triples. The organizers released the human assessments for 9 systems over 223 inputs, that is, 4,677 sentence
Baselines: BLEURT -pre Systems -wmt, is a public BERT-large uncased checkpoint directly trained on the WebNLG ratings. BLEURT -wmtwas ï¬rst pre-trained on synthetic data, BLEURT then ï¬ne-tuned on WebNLG data. was trained in three steps: ï¬rst on synthetic data, then on WMT data (16-18), and ï¬nally on WebNLG data. When a record comes with several references, we run BLEURT on each reference and report the highest value (Zhang et al., 2020). BLEU, TER, four baselines: Meteor, and BERTscore. The ï¬rst three were computed by the WebNLG competition organiz- ers. We ran the latter one ourselves, using BERT- large uncased for a fair comparison.
Results: Figure 3 presents the correlation of the metrics with human assessments as we vary the share of data allocated to training. The more pre- trained BLEURT is, the quicker it adapts. The vanilla BERT approach BLEURT -pre -wmt requires about a third of the WebNLG data to dom- inate the baselines on the majority of tasks, and it still lags behind on semantics (split by system). In
1 task N-1 tasks 0%: no pre-training 0%: all pre-training tasks ° lis |' -5 © Bd a6 923. cd CE o® 923, 9 ce ha O9, HY OO" He hMO, WY OO" â oe Seer here) oS Satyr X20) Relative Improv./Degradation (%) Pretraining Task BLEURT[JBLEURTbase
Figure 4: Improvement in Kendall Tau on WMT 17 varying the pre-training tasks.
contrast, BLEURT -wmt is competitive with as little as 836 records, and BLEURT is comparable with BERTscore with zero ï¬ne-tuning.
Takeaways: Thanks to pre-training, BLEURT can quickly adapt to the new tasks. BLEURT ï¬ne- tuned twice (ï¬rst on synthetic data, then on WMT data) provides acceptable results on all tasks with- out training data.
# 5.4 Ablation Experiments
Figure 4 presents our ablation experiments on WMT 2017, which highlight the relative impor- tance of each pre-training task. On the left side, we compare BLEURT pre-trained on a single task to BLEURT without pre-training. On the right side, we compare full BLEURT to BLEURT pre- trained on all tasks except one. Pre-training on BERTscore, entailment, and the backtranslation scores yield improvements (symmetrically, ablat- ing them degrades BLEURT). Oppositely, BLEU and ROUGE have a negative impact. We con- clude that pre-training on high quality signals helps BLEURT, but that metrics that correlate less well with human judgment may in fact harm the model.6
# 6 Related Work
The WMT shared metrics competition (Bojar et al., 2016; Ma et al., 2018, 2019) has inspired
6Do those results imply that BLEU and ROUGE should be removed from future versions of BLEURT? Doing so may indeed yield slight improvements on the WMT Metrics 2017 shared task. On the other hand the removal may hurt future tasks in which BLEU or ROUGE actually correlate with hu- man assessments. We therefore leave the question open.
the creation of many learned metrics, some of which use regression or deep learning (Stanoje- vic and Simaan, 2014; Ma et al., 2017; Shimanaka et al., 2018; Chen et al., 2017; Mathur et al., 2019). Other metrics have been introduced, such as the recent MoverScore (Zhao et al., 2019) which com- bines contextual embeddings and Earth Moverâs Distance. We provide a head-to-head compari- son with the best performing of those in our ex- periments. Other approaches do not attempt to estimate quality directly, but use information ex- traction or question answering as a proxy (Wise- man et al., 2017; Goodrich et al., 2019; Eyal et al., 2019). Those are complementary to our work.
There has been recent work that uses BERT for evaluation. BERTScore (Zhang et al., 2020) pro- poses replacing the hard n-gram overlap of BLEU with a soft-overlap using BERT embeddings. We use it in all our experiments. Bertr (Mathur et al., 2019) and YiSi (Mathur et al., 2019) also make use of BERT embeddings to capture similarity. Sum- QE (Xenouleas et al., 2019) ï¬ne-tunes BERT for quality estimation as we describe in Section 3. Our focus is differentâwe train metrics that are not only state-of-the-art in conventional IID ex- perimental setups, but also robust in the presence of scarce and out-of-distribution training data. To our knowledge no existing work has explored pre- training and extrapolation in the context of NLG. Previous studies have used noising for refer- enceless evaluation (DuËsek et al., 2019). Noisy pre-training has also been proposed before for other tasks such as paraphrasing (Wieting et al., 2016; Tomar et al., 2017) but generally not with synthetic data. Generating synthetic data via para- phrases and perturbations has been commonly used for generating adversarial examples (Jia and Liang, 2017; Iyyer et al., 2018; Belinkov and Bisk, 2018; Ribeiro et al., 2018), an orthogonal line of research.
# 7 Conclusion
We presented BLEURT, a reference-based text generation metric for English. Because the metric is trained end-to-end, BLEURT can model human assessment with superior accuracy. Furthermore, pre-training makes the metrics robust particularly robust to both domain and quality drifts. Future re- search directions include multilingual NLG evalu- ation, and hybrid methods involving both humans and classiï¬ers.
# Acknowledgments
Thanks to Eunsol Choi, Nicholas FitzGerald, Ja- cob Devlin, and to the members of the Google AI Language team for the proof-reading, feedback, and suggestions. We also thank Madhavan Ki- dambi and Ming-Wei Chang, who implemented blank-ï¬lling with BERT.
# References
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly In Proceedings of learning to align and translate. ICLR.
Colin Bannard and Chris Callison-Burch. 2005. Para- In Pro- phrasing with bilingual parallel corpora. ceedings of ACL.
Yonatan Belinkov and Yonatan Bisk. 2018. Synthetic and natural noise both break neural machine transla- tion. In Proceedings of ICLR.
OndËrej Bojar, Yvette Graham, and Amir Kamran. 2017. Results of the wmt17 metrics shared task. In Proceedings of WMT.
OndËrej Bojar, Yvette Graham, Amir Kamran, and MiloËs Stanojevi´c. 2016. Results of the wmt16 met- rics shared task. In Proceedings of WMT.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large anno- tated corpus for learning natural language inference. Proceedings of EMNLP.
Arun Tejasvi Chaganty, Stephen Mussman, and Percy Liang. 2018. The price of debiasing automatic met- rics in natural language evaluation. Proceedings of ACL.
Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced lstm for natural language inference. Proceedings of ACL.
Sumit Chopra, Michael Auli, and Alexander M Rush. 2016. Abstractive sentence summarization with at- tentive recurrent neural networks. In Proceedings of NAACL HLT.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of NAACL HLT.
OndËrej DuËsek, Karin Sevegnani, Ioannis Konstas, and Verena Rieser. 2019. Automatic quality estimation for natural language generation: Ranting (jointly rat- ing and ranking). Proceedings of INLG.
Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation met- ric for news article summarization. In Proceedings of NAACL HLT.
Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Doll´ar, Jianfeng Gao, Xi- aodong He, Margaret Mitchell, John C Platt, et al. 2015. From captions to visual concepts and back. In Proceedings of CVPR.
Juri Ganitkevitch, Benjamin Van Durme, and Chris Ppdb: The paraphrase Callison-Burch. 2013. database. In Proceedings NAACL HLT.
Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. 2017. The webnlg In Pro- challenge: Generating text from rdf data. ceedings of INLG.
Ben Goodrich, Mohammad Ahmad Saleh, Peter Liu, and Vinay Rao. 2019. Assessing the factual ac- curacy of text generation. In Proceedings of ACM SIGKDD.
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. Proceedings of NAACL HLT.
Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. Proceedings of EMNLP.
Philipp Koehn. 2009. Statistical machine translation. Cambridge University Press.
Karen Kukich. 1983. Design of a knowledge-based re- port generator. In Proceedings of ACL.
Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Workshop on Text Sum- marization Branches Out.
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation met- rics for dialogue response generation. Proceedings of EMNLP.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv:1907.11692.
Qingsong Ma, OndËrej Bojar, and Yvette Graham. 2018. Results of the wmt18 metrics shared task: Both char- acters and embeddings achieve good performance. In Proceedings of WMT.
Qingsong Ma, Yvette Graham, Shugen Wang, and Qun Liu. 2017. Blend: a novel combined mt metric based on direct assessmentâcasict-dcu submission to wmt17 metrics task. In Proceedings of WMT.
Qingsong Ma, Johnny Wei, OndËrej Bojar, and Yvette Graham. 2019. Results of the wmt19 metrics shared task: Segment-level and strong mt systems pose big challenges. In Proceedings of WMT.
Inderjeet Mani. 1999. Advances in automatic text sum- marization. MIT press.
Nitika Mathur, Timothy Baldwin, and Trevor Cohn. 2019. Putting evaluation in context: Contextual em- beddings improve machine translation evaluation. In Proceedings of ACL.
Kathleen McKeown. 1992. Text generation. Cam- bridge University Press.
Jekaterina Novikova, OndËrej DuËsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need new evaluation metrics for nlg. Proceedings of EMNLP.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- In Proceedings of uation of machine translation. ACL.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversar- ial rules for debugging nlp models. In Proceedings of ACL.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. Proceedings of ACL.
Hiroki Shimanaka, Tomoyuki Kajiwara, and Mamoru Komachi. 2018. Ruse: Regressor using sentence embeddings for automatic machine translation eval- uation. In Proceedings of WMT.
Anastasia Shimorina, Claire Gardent, Shashi Narayan, and Laura Perez-Beltrachini. 2019. Webnlg chal- lenge: Human evaluation results. Technical report.
Ronnie W Smith and D Richard Hipp. 1994. Spoken natural language dialog systems: A practical ap- proach. Oxford University Press.
Milos Stanojevic and Khalil Simaan. 2014. Beer: Bet- ter evaluation as ranking. In Proceedings of WMT.
Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Proceedings of NIPS.
Ran Tian, Shashi Narayan, Thibault Sellam, and Ankur P Parikh. 2019. Sticking to the facts: Con- ï¬dent decoding for faithful data-to-text generation. arXiv:1910.08684.
Thyago Duque, Oscar T¨ackstr¨om, Jakob Uszkoreit, and Dipanjan Das. 2017. Neural paraphrase identiï¬cation of questions with noisy pretraining. Proceedings of the First Workshop on Subword and Character Level Models in NLP.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of NIPS.
Oriol Vinyals and Quoc Le. 2015. A neural conversa- tional model. Proceedings of ICML.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Glue: A multi-task benchmark and analysis platform for natural language understanding. Proceedings of ICLR.
John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sen- tence embeddings. Proceedings of ICLR.
Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2018. A broad-coverage challenge corpus for sentence understanding through inference. Proceed- ings of NAACL HLT.
Sam Wiseman, Stuart M Shieber, and Alexander M Rush. 2017. Challenges in data-to-document gen- eration. Proceedings of EMNLP.
Stratos Xenouleas, Prodromos Malakasiotis, Marianna Apidianaki, and Ion Androutsopoulos. 2019. Sum- qe: a bert-based summary quality estimation model supplementary material. In Proceedings of EMNLP.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. Proceedings of ICLR.
Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M Meyer, and Steffen Eger. 2019. Moverscore: Text generation evaluating with contextualized em- beddings and earth mover distance. Proceedings of EMNLP.
# A Implementation Details of the Pre-Training Phase
This section provides implementation details for some of the pre-training techniques described in the main paper.
# A.1 Data Generation
Random Masking: We use two masking strate- gies. The ï¬rst strategy samples random words in the sentence and it replaces them with masks (one for each token). Thus, the masks are scat- tered across the sentence. The second strategy cre- ates contiguous sequences: it samples a start po- sition s, a length l (uniformly distributed), and it masks all the tokens spanned by words between positions s and s + l. In both cases, we use up to 15 masks per sentence. Instead of running the language model once and picking the most likely token at each position, we use beam search (the beam size 8 by default). This enforces consistency and avoids repeated sequences, e.g., â,,,â.
Backtranslation: Consider and Given a forward translation model French. Penâfr(zfr|zen) and backward translation model Pfrâen(zen|zfr), we generate Ëz as follows:
Ëz = arg max zen (Pfrâen(zen|zâ fr))
where zâ fr = arg maxzfr (Pfrâen(zfr|z)). For the translations, we use a Transformer model (Vaswani et al., 2017), trained on English- German with the tensor2tensor framework.7
Word dropping: Given a synthetic example (z,Z) we generate a pair (z, 2â), by randomly dropping words from Z. We draw the number of words to drop uniformly, up to the length of the sentence. We apply this transformation on about 30% of the data generated with the previous method.
# A.2 Pre-Training Tasks
We now provide additional details on the signals we used for pre-training.
Automatic Metrics: As shown in the table, we use three types of signals: BLEU, ROUGE, and BERTscore. For BLEU, we used the original Moses SENTENCEBLEU8 implementation, using the Moses tokenizer and the default parameters. For ROUGE, we used the seq2seq implemen- tation of ROUGE-N.9 We used a custom imple- mentation of BERTSCORE, based on BERT-large uncased. ROUGE and BERTscore return three scores: precision, recall, and F-score. We use all three quantities.
Backtranslation Likelihood: We compute custom Transformer all trained on two model (Vaswani et al., 2017), language pairs (English-French and English- German) with the tensor2tensor framework.
Normalization: All normalized before training. the regression labels are
# A.3 Modeling
Setting the weights of the pre-training tasks: We set the weights γk with grid search, opti- mizing BLEURTâs performance on WMT 17âs
7https://github.com/tensorflow/ tensor2tensor
8https://github.com/moses-smt/ mosesdecoder/blob/master/mert/ sentence-bleu.cpp
9https://github.com/google/seq2seq/ blob/master/seq2seq/metrics/rouge.py
validation set. To reduce the size of the grid, we make groups of pre-training tasks that share (ÏBLEU, ÏROUGE, ÏBERTscore), the same weights: (Ïen-fr,z|Ëz, Ïen-fr,Ëz|z, Ïen-de,z|Ëz, Ïen-de,Ëz|z), and (Ïentail, Ïbacktran ï¬ag).
# B ExperimentsâSupplementary Material
B.1 Training Setup for All Experiments We user BERTâs public checkpoints10 with Adam (the default optimizer), learning rate 1e-5, and batch size 32. Unless speciï¬ed otherwise, we use 800,00 training steps for pre-training and 40,000 steps for ï¬ne-tuning. We run training and evalua- tion in parallel: we run the evaluation every 1,500 steps and store the checkpoint that performs best on a held-out validation set (more details on the data splits and our choice of metrics in the follow- ing sections). We use Google Cloud TPUs v2 for learning, and Nvidia Tesla V100 accelerators for evaluation and test. Our code uses Tensorï¬ow 1.15 and Python 2.7.
# B.2 WMT Metric Shared Task
Metrics. The metrics used to compare the eval- uation systems vary across the years. The organiz- ers use Pearsonâs correlation on standardized hu- man judgments across all segments in 2017, and a custom variant of Kendallâs Tau named âDARRâ on raw human judgments in 2018 and 2019. The latter metrics operates as follows. The organiz- ers gather all the translations for the same ref- erence segment, they enumerate all the possible pairs (translation1, translation2), and they discard all the pairs which have a âsimilarâ score (less than 25 points away on a 100 points scale). For each remaining pair, they then determine which trans- lation is the best according both human judgment and the candidate metric. Let |Concordant| be the number of pairs on which the NLG metrics agree and |Discordant| be those on which they disagree, then the score is computed as follows:
|Concordant| â |Discordant| |Concordant| + |Discordant|
The idea behind the 25 points ï¬lter is to make the evaluation more robust, since the judgments collected for WMT 2018 and 2019 are noisy. Kendallâs Tau is identical, but it does not use the ï¬lter.
10https://github.com/google-research/ bert
BLEURT ----BLEURTbase 0 Rel. Kendall Tau Improvement (%) 0 200 400 600 800 Number of Pretraining Steps (*1,000)
Figure 5: Improvement in Kendall Tau accuracy on all language pairs of the WMT Metrics Shared Task 2017, varying the number of pre-training steps. 0 steps cor- responds to 0.555 Kendall Tau for BLEURTbase and 0.580 for BLEURT.
Training setup. To separate training and vali- dation data, we set aside a ï¬xed ratio of records in such a way that there is no âleakâ between the datasets (i.e., train and validation records that share the same source). We use 10% of the data for validation for years 2017 and 2018, and 5% for year 2019. We report results for the models that yield the highest Kendall Tau across all records on validation data. The weights associated to each pretraining task (see our Modeling section) are set with grid search, using the train/validation setup of WMT 2017.
Baselines. we use three metrics: the Moses sentenceBLEU,11 implementation BERTscore,12 and MoverScore,13 which are all available online. We run the Moses tokenizer on the reference and candidate segments before computing sentenceBLEU.
# B.3 Robustness to Quality Drift
Data Re-sampling Methodology: We sample the training and test separately, as follows. We split the data in 10 bins of equal size. We then sample each record in the dataset with probabili- ties 1 (11âB)α for train and test respectively, where B is the bin index of the record between 1 and 10, and α is a predeï¬ned skew factor. The skew factor α controls the drift: a value of 0 has no effect (the ratings are centered around 0), and value of 3.0 yields extreme differences. Note that
11https://github.com/moses-smt/ mosesdecoder/blob/master/mert/ sentence-bleu.cpp 12https://github.com/Tiiiger/bert_score 13https://github.com/AIPHES/ emnlp19-moverscore
the sizes of the datasets decrease as α increases: we use 50.7%, 30.3%, 20.4%, and 11.9% of the original 5,344 training records for α = 0.5, 1.0, 1.5, and 3.0 respectively.
# B.4 Ablation ExperimentâHow Much Pre-Training Time is Necessary?
To understand the relationship between pre- training time and downstream accuracy, we pre- train several versions of BLEURT and we ï¬ne-tune them on WMT17 data, varying the number of pre- training steps. Figure 5 presents the results. Most gains are obtained during the ï¬rst 400,000 steps, that is, after about 2 epochs over our synthetic dataset. | {
"id": "1907.11692"
} |
2004.04124 | LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression | BERT is a cutting-edge language representation model pre-trained by a large
corpus, which achieves superior performances on various natural language
understanding tasks. However, a major blocking issue of applying BERT to online
services is that it is memory-intensive and leads to unsatisfactory latency of
user requests, raising the necessity of model compression. Existing solutions
leverage the knowledge distillation framework to learn a smaller model that
imitates the behaviors of BERT. However, the training procedure of knowledge
distillation is expensive itself as it requires sufficient training data to
imitate the teacher model. In this paper, we address this issue by proposing a
hybrid solution named LadaBERT (Lightweight adaptation of BERT through hybrid
model compression), which combines the advantages of different model
compression methods, including weight pruning, matrix factorization and
knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various
public datasets while the training overheads can be reduced by an order of
magnitude. | http://arxiv.org/pdf/2004.04124 | Yihuan Mao, Yujing Wang, Chufan Wu, Chen Zhang, Yang Wang, Yaming Yang, Quanlu Zhang, Yunhai Tong, Jing Bai | cs.CL, cs.LG | COLING2020 | null | cs.CL | 20200408 | 20201021 | 0 2 0 2
t c O 1 2 ] L C . s c [
2 v 4 2 1 4 0 . 4 0 0 2 : v i X r a
# LadaBERT: Lightweight Adaptation of BERT through Hybrid Model Compression
Yihuan Mao1,â, Yujing Wang2,3,â , Chufan Wu1, Chen Zhang2, Yang Wang2 Quanlu Zhang2, Yaming Yang2, Yunhai Tong3, Jing Bai2 1Tsinghua University 2Microsoft Research Asia 3Key Laboratory of Machine Perception, MOE, School of EECS, Peking University [email protected], {yujwang,yhtong}@pku.edu.cn, [email protected] {yujwang,zhac,t-yangwa,yayaming,quzha,jbai}@microsoft.com
# Abstract
BERT is a cutting-edge language representation model pre-trained by a large corpus, which achieves superior performances on various natural language understanding tasks. However, a major blocking issue of applying BERT to online services is that it is memory-intensive and leads to unsatisfactory latency of user requests. Existing solutions leverage knowledge distilla- tion frameworks to learn smaller models that imitate the behaviors of BERT. However, the train- ing procedure of knowledge distillation is expensive itself as it requires sufï¬cient training data to imitate the teacher model. In this paper, we address this issue by proposing a hybrid solution named LadaBERT (Lightweight adaptation of BERT through hybrid model compression), which combines the advantages of different model compression methods, including weight pruning, matrix factorization and knowledge distillation. LadaBERT achieves state-of-the-art accuracy on various public datasets while the training overheads can be reduced by an order of magnitude.
# Introduction
The pre-trained language model, BERT (Devlin et al., 2018) has led to a big breakthrough in various kinds of natural language understanding tasks. Ideally, people can start from a pre-trained BERT check- point and ï¬ne-tune it on a speciï¬c downstream task. However, the original BERT models are memory- exhaustive and latency-prohibitive to be served in embedded devices or CPU-based online environments. As the memory and latency constraints vary in different scenarios, the pre-trained BERT model should be adaptive to different requirements with accuracy retained to the largest extent. Existing BERT-oriented model compression solutions largely depend on knowledge distillation (Hinton et al., 2015), which is inefï¬cient and resource-consuming because a large training corpus is required to learn the behaviors of a teacher. For example, DistilBERT (Sanh et al., 2019) is re-trained on the same corpus as pre-training a vanilla BERT from scratch; and TinyBERT (Jiao et al., 2019) utilizes expensive data augmentation to ï¬t the distillation target. The costs of these model compression methods are as large as pre-training, which are unaffordable for low-resource settings. Therefore, it is straight-forward to ask, can we design a lightweight method to generate adaptive models with comparable accuracy using signiï¬cantly less time and resource consumption?
In this paper, we propose LadaBERT (Lightweight adaptation of BERT through hybrid model com- pression) to tackle this problem. Speciï¬cally, LadaBERT is based on an iterative hybrid model com- pression framework consisting of weighting pruning, matrix factorization and knowledge distillation. Initially, the architecture and weights of student model are inherited from the BERT teacher. In each iteration, the student model is ï¬rst compressed by a small ratio based on weight pruning and matrix factorization, and is then ï¬ne-tuned under the guidance of teacher model through knowledge distillation. Because weight pruning and matrix factorization help to generate better initial and intermediate status for knowledge distillation, both accuracy and efï¬ciency can be greatly improved.
We conduct extensive experiments on ï¬ve public datasets of natural language understanding. As an example, the performance comparison of LadaBERT and state-of-the-art models on MNLI-m dataset is illustrated in Figure 1.
âThe work was done when the author visited Microsoft Research Asia. â Corresponding Author
UadaBERT 3 % TinyBERT © Distilled. BEST. ae voy ye, oe : [adaBERT * / ~ * âWeight pruning ra ~â-+-~ Matrix factorization 7 â © âHybrid pruning 60 +--+ . TimBERT. © _Distilled-BiLSTM to. 1s (0)CO«Stia*SSCtStCiSC«S parameter size (millions)
Figure 1: Accuracy comparison on MNLI-m dataset
We can see that LadaBERT outperforms other BERT-oriented model compression baselines at various model compression ratios. Espe- cially, LadaBERT outperforms BERT-PKD sig- niï¬cantly under 2.5à compression ratio and out- performs TinyBERT under 7.5à compression ratio while the training speed is accelerated by an order of magnitude.
The rest of this paper is organized as follows. First, we summarize the related works of model compression and their applications to BERT in Section 2. Then, the methodology of LadaBERT is introduced in Section 3, and experimental re- sults are presented in Section 4. At last, we con- clude this work and discuss future works in Sec- tion 5.
# 2 Related Work
Deep Neural Networks (DNNs) have achieved great success in many areas in recent years, but the mem- ory consumption and computational cost expand greatly with the growing complexity of models. Thus, model compression has become an indispensable technique in practice, especially for low-resource sce- narios. Here we review the current progresses of model compression techniques brieï¬y, and present their application to pre-trained BERT models.
# 2.1 Model compression algorithms
Existing model compression algorithms can be divided into four categories, namely weight pruning, matrix factorization, weight quantization and knowledge distillation.
Numerous researches have shown that removing a large portion of connections or neurons does not cause signiï¬cant performance drop in deep neural networks. For example, Han et al. (2015) proposed a method to reduce the storage and computation of neural networks by removing unimportant connec- tions, resulting in sparse networks without affecting the model accuracy. Li et al. (2016) presented an acceleration method for convolution neural network by pruning whole ï¬lters together with their connect- ing ï¬lter maps. This approach does not generate sparse connectivity patterns and brings a much larger acceleration ratio with existing BLAS libraries for dense matrix multiplications.
Matrix factorization was also widely studied in the deep learning domain, the goal of which is to decompose a matrix into the product of two matrices in lower dimensions. Sainath et al (2013) explored a low-rank matrix factorization method of DNN layers for acoustic modeling. Xu et al. (2013; 2014) applied singular value decomposition to deep neural network acoustic models and achieved comparable performances with state-of-the-art models through much fewer parameters. GroupReduce (Chen et al., 2018) focused on the compression of neural language models and applied low-rank matrix approximation to vocabulary-partition. Winata et al. (2019) carried out experiments for low-rank matrix factorization on different NLP tasks and demonstrated that it was more effective in general than weight pruning.
Weight quantization is another common technique for compressing deep neural networks, which aims to reduce the number of bits to represent every weight in the model. With weight quantization, the weights can be reduced to at most 1-bit binary value from 32-bits ï¬oating point numbers. Zhou et al. (2016) showed that quantizing weights to 8-bits does not hurt the performance; Binarized Neural Networks (Hubara et al., 2016) contained binary weights and activations of only one bit; and Incremental Network Quantization (Zhou et al., 2017) converted a pre-trained full-precision neural network into low- precision counterpart through three interdependent operations: weight partition, groupwise quantization and re-training.
Knowledge distillation (Hinton et al., 2015) trains a compact and smaller model to approximate the
Pre-trained Tea Model Knowledge Knowledge Knowledge Knowledge distillation distillation distillation distillation Iterative Student (initial model) 1-4 (1-A)? Target size
Figure 2: Overview of LadaBERT framework
function learned by a large and complex model. A preliminary step of knowledge distillation is to train a deep network (the teacher model) that automatically generates soft labels for training instances. This âsyntheticâ label is then used to train a smaller network (the student model), which assimilates the function learned by the teacher model. Chen et al. (2017) successfully applied knowledge distillation to object detection tasks by introducing several modiï¬cations, including a weighted cross-entropy loss, a teacher bounded loss, and adaptation layers to model intermediate teacher distributions. Li et al. (2017) developed a framework to learn from noisy labels, where the knowledge learned from a clean dataset and semantic knowledge graph were leveraged to correct the wrong labels.
To improve the performance of model compression, there are also numerous attempts to develop hy- brid model compression methods that combine more than one category of algorithms. Han et al. (2016) combined quantization, hamming coding and weight pruning to conduct model compression on image classiï¬cation tasks. Yu et al. (2017) proposed a uniï¬ed framework for low-rank and sparse decomposi- tion of weight matrices with feature map reconstructions. Polino et al. (2018) advocated a combination of distillation and quantization techniques and proposed two hybrid models, i.e., quantiï¬ed distillation and differentiable quantization. Li et al., (2018) compressed DNN-based acoustic models through knowledge distillation and pruning.
# 2.2 BERT model compression
In the natural language processing community, there is a growing interest recently to study BERT- oriented model compression for shipping its performance gain into latency-critical or low-resource sce- narios. Most existing works focus on knowledge distillation. For instance, BERT-PKD (Sun et al., 2019) is a patient knowledge distillation approach that compresses the original BERT model into a lightweight shallow network. Different from traditional knowledge distillation methods, BERT-PKD enables an exploitation of rich information in the teacherâs hidden layers by utilizing a layer-wise distillation con- straint. DistillBERT (Sanh et al., 2019) pre-trains a smaller general-purpose language model on the same corpus as vanilla BERT. Distilled BiLSTM (Tang et al., 2019) adopts a single-layer BiLSTM as the student model and achieves comparable results with ELMo (Peters et al., 2018) through much fewer parameters and less inference time. TinyBERT (Jiao et al., 2019) exploits a novel attention-based distil- lation schema that encourages the linguistic knowledge in teacher to be well transferred into the student model. It adopts a two-stage learning framework, including general distillation (pre-training from scratch via distillation loss) and task-speciï¬c distillation with data augmentation. Both procedures require huge resources and long training times (from several days to weeks), which is cumbersome for industrial applications. Therefore, we are aiming to explore more lightweight solutions in this paper.
# 3 Lightweight Adaptation of BERT
# 3.1 Overview
The overall pipeline of LadaBERT (Lightweight Adaptation of BERT) is illustrated in Figure 2. As shown in the ï¬gure, the pre-trained BERT model (e.g., BERT-Base) is served as the teacher as well as the initial status of the student model. Then, the student model is compressed towards smaller parameter size iteratively through a hybrid model compression approach until the target size is reached. Concretely, in each iteration, the parameter size of student model is ï¬rst reduced by 1 â â based on weight pruning and matrix factorization, and then the parameters are ï¬ne-tuned by the loss function of knowledge distillation. The motivation behind is that matrix factorization and weight pruning are complementary to each other. Matrix factorization calculates the optimal approximation under a certain rank, while weight pruning introduces additional sparsity to the decomposed matrices. Moreover, both weight pruning and matrix factorization generate better initial and intermediate status of the student model, which improve the efï¬ciency and effectiveness of knowledge distillation. In the following subsections, we will introduce the algorithms in detail.
# 3.2 Matrix factorization
We use Singular Value Decomposition (SVD) for matrix factorization. All parameter matrices, includ- ing the embedding ones, are compressed by SVD. Without loss of generality, we assume a matrix of parameters W â RmÃn, the singular value decomposition of which can be written as:
W = UΣVT (1) where U â RmÃp and V â RpÃn. Σ = diag(Ï1, Ï2, . . . , Ïp) is a diagonal matrix composed of singular values and p is the full rank of W satisfying p ⤠min(m, n).
To compress this weight matrix, we select a lower rank r < p. The diagonal matrix Σ is truncated by selecting the top r singular values. i.e., Σr = diag(Ï1, Ï2, . . . , Ïr), while U and V are also truncated by selecting the top r columns and rows respectively, resulting in Ur â RmÃr and Vr â RrÃn.
Then, low-rank matrix approximation of W can be formulated as:
W =U,=, V2 = (U,V/5;)(V>/5,)7 = ABT (2)
In this way, the original weight matrix W is decomposed to two smaller matrices, where A = â Σr â RmÃr. These two matrices are initialized by SVD and will â Σr â RnÃr and B = Vr Ur be further ï¬ne-tuned during training.
Given a rank r ⤠min(m, n), the compression ratio of matrix factorization is deï¬ned as:
Psvd = (m + n)r mn (3)
Therefore, for a target model compression ratio Psvd, the desired rank r can be calculated by:
r = mn m + n Psvd (4)
# 3.3 Weight pruning
Weight pruning (Han et al., 2015) is an unstructured compression method that induces desirable sparsity for a neural network model. For a neural network f (x; θ) with parameters θ, weight pruning ï¬nds a binary mask M â {0, 1}|θ| subject to a given sparsity ratio, Pweight. The neural network after pruning will be f (x; M · θ), where the non-zero parameter size is ||M||1 = Pweight · |θ|, where |θ| is the number of parameters in θ. For example, when Pm = 0.3, there are 70% zeros and 30% ones in the mask m. In our implementation, we adopt a simple pruning strategy (Frankle and Carbin, 2018): the binary mask is generated by setting the smallest weights to zeros.
To combine the beneï¬ts of weight pruning and matrix factorization, we leverage a hybrid approach that applies weight pruning on the basis of decomposed matrices generated by SVD. Following Equation
(2), SVD-based matrix factorization for any weight matrix W can be written as: Wsvd = AmÃrBT nÃr. Then, weight pruning is applied on the decomposed matrices A â RmÃr and B â RnÃr separately. The weight matrix after hybrid compression is formulated as:
Whybrid = (MA · A)(MB · B)T (5)
where MA and MB are binary masks derived by the weight pruning algorithm with compression ratio Pweight. The compression ratio after hybrid compression can be calculated by:
Phybrid = Psvd · Pweight = (m + n)r mn Pweight (6)
In LadaBERT, the hybrid compression produce is applied to each layer of the pre-trained BERT model. Given an overall model compression target P , the following constraint should be satisï¬ed:
P · |θ| = Pembd · |θembd| + Phybrid|θencd| + |θcls| (7)
where |θ| is the total number of model parameters and P is the target compression ratio; |θembd| denotes the parameter number of embedding layer, which has a relative compression ratio of Pembd, and |θencd| denotes the number of parameters of all layers in BERT encoder, which have a compression ratio of Phybrid. The classiï¬cation layers (MLP layers with Softmax activation) have a relative small number of parameters (|θcls|), so they are not modiï¬ed in model compression. In general, we have three ï¬exible hyper-parameters for ï¬ne-grained compression: Pembed, Psvd and Pweight, which can be optimized by random search on the validation data.
# 3.4 Knowledge distillation
Knowledge distillation (KD) has been widely used to transfer knowledge from a large teacher model to a smaller student model. In other words, the student model mimics the behavior of the teacher model by minimizing the knowledge distillation loss functions. Various types of knowledge distillation can be employed at different sub-layers. Generally, all types of knowledge distillation can be modeled as minimizing the following loss function:
Lev = S21 (f%), £9) 8) xeX
Where X denotes the training set and x is a sample input in the set. f (s)(x) and f (t)(x) represent intermediate outputs or weight matrices for the student model and teacher model respectively. L(·) rep- resents for a loss function which can be carefully designed for different types of knowledge distillation. We partly follow the recent technique proposed by TinyBERT (Jiao et al., 2019), which applies knowl- edge distillation constraints upon embedding, self-attention, hidden representation and prediction levels. Concretely, there are four types of knowledge distillation constraints as follows:
⢠Embedding-layer distillation is performed upon the embedding layer. f (x) â RnÃd represents for the word embedding output for input x, where n is the input word length and d is the dimension of word embedding. Mean Squared Error (MSE) is adopted as the loss function L(·).
⢠Attention-layer distillation is performed upon the self-attention sub-layer. f (x) = {aij} â RnÃn represents the attention output for each self-attention sub-layer, and L(·) denotes MSE loss function.
⢠Hidden-layer distillation is performed at each fully-connected sub-layer in the Transformer ar- chitectures. f (x) denotes the output representation of the corresponding sub-layer, and L(·) also adopts MSE loss function.
⢠Prediction-layer distillation makes the student model to learns the predictions from a teacher It is identical to a vanilla form of knowledge distillation (Hinton et al., 2015). model directly. It takes soft cross-entropy loss function, which can be formulated as:
Lpred = âÏ(f t(x)) · log (Ï(f s(x)/t)) (9)
where Ï(·) denotes Softmax function, f t(x) and f s(x) are the predictive logits of teacher and student models respectively. t is a temperature value, which generally works well at t = 1 (Jiao et al., 2019).
# 4 Experiments
# 4.1 Datasets & Baselines
We compare LadaBERT with state-of-the-art model compression approaches on ï¬ve public datasets of different tasks of natural language understanding, including sentiment classiï¬cation (SST-2), natural lan- guage inference (MNLI-m, MNLI-mm, QNLI) and pairwise semantic equivalence (QQP). The statistics of these datasets are described in Table 1.
#Train 67,350 363,871 MNLI-m 392,703 MNLI-mm 392,703 104,744 Task SST-2 QQP QNLI #Dev. 873 40,432 9,816 9,833 5,464 #Test 1,822 390,965 9,797 9,848 5,464 #Class 2 2 3 3 2
Table 1: Dataset Statistics
The baseline approaches are summarized below.
⢠Weight pruning and Matrix factorization are two simple baselines described in Section 3.3. We evaluate both pruning methods in an iterative manner until the target compression ratio is reached.
⢠Hybrid pruning is a combination of matrix factorization and weight pruning, which conducts it- erative weight pruning on the basis of SVD-based matrix factorization. It is performed iteratively until the desired compression ratio is achieved.
⢠BERT-FT, BERT-KD and BERT-PKD are reported in (Sun et al., 2019), where BERT-FT directly ï¬ne-tunes the model via supervision labels, BERT-KD is the vanilla knowledge distillation algo- rithm (Hinton et al., 2015), and BERT-PKD stands for Patient Knowledge Distillation proposed in (Sun et al., 2019). The student model is composed of 3 Transformer layers, resulting in a 2.5à compression ratio. Each layer has the same hidden size as the pre-trained teacher, so the initial parameters of student model can be inherited from the corresponding teacher.
⢠TinyBERT (Jiao et al., 2019) instantiates a tiny student model, which has totally 14.5M parameters (7.5à compression ratio) composed of 4 layers, 312 hidden units, 1200 intermediate size and 12 heads. For a fair comparison, we reproduce the TinyBERT pipeline1 without general distillation and data augmentation, which is time-exhaustive and resource-consuming.
⢠BERT-Small has the same model architecture as TinyBERT, but is directly pre-trained by the ofï¬- cial BERT pipeline. The performance values are copied from (Jiao et al., 2019) for reference.
⢠Distilled-BiLSTM (Tang et al., 2019) leverages a single-layer bidirectional-LSTM as the student model, where the hidden units and intermediate size are set to be 300 and 400 respectively, resulting in a 10.8à compression ratio. This model requires an expensive training process similar to vanilla BERT.
4.2 Setup We leverage the pre-trained checkpoint of base-bert-uncased2 as the initial model for compression, which contains 12 layers, 12 heads, 110M parameters, and 768 hidden units per layer. Hyper-parameter
1https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/TinyBERT 2https://storage.googleapis.com/bert models/2018 10 18/uncased L12 H768 A12.zip
selection is conducted on the validation data for each dataset. After training, the prediction results are submitted to the GLUE-benchmark evaluation platform3 to get the evaluation performance on test data. For a comprehensive evaluation, we experiment with four settings of LadaBERT, namely LadaBERT- 1, -2, -3 and -4, which reduce the model parameters of BERT-Base by 2.5, 5.0, 7.5 and 10.0 times respectively. In our experiment, we set the batch size as 32 and learning rate as 2e-5. The optimizer is BertAdam with the default setting (Devlin et al., 2018). Fine-grained compression ratios are optimized by random search on SST dataset and transferred to other datasets (shown in Table 2). Following (Jiao et al., 2019), the temperature value in distillation loss function is set as 1 in all experiments without tuning.
Table 2: Fine-grained compression ratios
Model Overall Embedding layer Matrix factorization Weight pruning Ã2.5 LadaBERT-1 Ã5.0 LadaBERT-2 Ã7.5 LadaBERT-3 LadaBERT-4 Ã10.0 Ã1.43 Ã2.05 Ã5.0 Ã5.0 Ã2.0 Ã2.0 Ã2.0 Ã2.5 Ã1.56 Ã3.41 Ã4.33 Ã5.45
# 4.3 Performance Comparison
# Table 3: Performance comparison on various model sizes
Algorithm MNLI-m MNLI-mm SST-2 QQP QNLI #Params Ratio BERT-Base 84.6 83.4 93.5 71.2/- 90.5 110M Ã1.0 LadaBERT-1 BERT-FT BERT-KD BERT-PKD Weight pruning matrix factorization Hybrid pruning 83.5 74.8 75.4 76.7 82.8 77.7 81.2 82.5 74.3 74.8 76.3 81.6 77.4 80.0 92.8 86.4 86.9 87.5 92.3 87.6 90.0 70.7/88.9 65.8/86.9 67.3/87.6 68.1/87.8 70.1/88.5 65.7/87.2 68.0/87.5 89.6 84.3 84.0 84.7 88.9 84.3 83.3 44M 44M 44M 44M 44M 44M 44M Ã2.5 Ã2.5 Ã2.5 Ã2.5 Ã2.5 Ã2.5 Ã2.5 LadaBERT-2 Weight pruning matrix factorization Hybrid pruning 83.1 75.9 71.8 76.1 82.2 75.6 71.8 75.3 91.8 84.8 82.8 85.4 69.9/87.9 60.3/83.5 60.3/83.5 64.9/85.8 88.2 81.7 75.4 80.6 22M 22M 22M 22M Ã5.0 Ã5.0 Ã5.0 Ã5.0 LadaBERT-3 TinyBERT BERT-Small Weight pruning matrix factorization Hybrid pruning 82.1 80.9 75.4 69.1 60.2 71.9 81.8 79.5 74.9 68.8 60.0 71.0 89.9 89.5 87.6 81.8 81.3 83.5 69.4/87.8 65.4/87.5 66.5/- 59.7/82.9 58.5/82.0 62.3/84.7 84.5 77.9 84.8 76.4 62.2 73.8 15M 15M 15M 15M 15M 15M Ã7.5 Ã7.5 Ã7.5 Ã7.5 Ã7.5 Ã7.5
The evaluation results of LadaBERT and state-of-the-art approaches are listed in Table 3, where the models are ranked by parameter sizes for feasible comparison. As shown in the table, LadaBERT con- sistently outperforms the strongest baselines under similar model sizes. In addition, the performance of LadaBERT demonstrates the superiority of a combination of SVD-based matrix factorization, weight pruning and knowledge distillation.
3https://gluebenchmark.com/
09 0.8 Venere ea 0.5 0.4 J - 0 50000 100000 150000 200000 steps a accuracy
09 0.85 5 0.75 0.7 0.65 0.6 2000 12000 22000 32000 42000 steps 30 accuracy
09 09 0.8 Venere ea 0.85 5 0.75 0.5 0.7 0.4 J - 0.65 0.6 0 50000 100000 150000 200000 2000 12000 22000 32000 42000 steps steps a 30 accuracy accuracy
Figure 3: Learning curve on MNLI-m dataset. Figure 4: Learning curve on QQP dataset.
With model size of 2.5à reduction, LadaBERT-1 performs signiï¬cantly better than BERT-PKD, boost- ing the performance by relative 8.9, 8.1, 6.1, 3.8 and 5.8 percentages on MNLI-m, MNLI-mm, SST-2, QQP and QNLI datasets respectively. Recall that BERT-PKD initializes the student model by selecting 3 of 12 layers in the pre-trained BERT-Base model. It turns out that the discarded layers have a huge impact on the model performance, which is hard to be recovered by knowledge distillation. On the other hand, LadaBERT generates the student model by iterative pruning on the pre-trained teacher. In this way, the original knowledge in the teacher model can be preserved to the largest extent, and the beneï¬t is complementary to knowledge distillation.
LadaBERT-3 has a comparable size as TinyBERT with a 7.5à compression ratio. As shown in the results, TinyBERT does not work well without expensive data augmentation and general distillation, hindering its application to low-resource settings. The reason is that the student model of TinyBERT is distilled from scratch, so it requires much more data to mimic the teacherâs behaviors. Instead, Lad- aBERT has better initial and intermediate status calculated by hybrid model compression, which is much more light-weighted and achieves competitive performances with much faster learning speed (learning curve comparison is shown in Section 4.4). Moreover, LadaBERT-3 outperforms BERT-Small on most of the datasets, which is pre-trained from scratch by the ofï¬cial BERT pipeline. This means that Lad- aBERT can quickly adapt to smaller model sizes and achieve competitive performance without expansive re-training on a large corpus.
Moreover, Distilled-BiLSTM performs well on SST-2 dataset with more than 10à compression ratio, owing to good generalization ability of LSTM model on small datasets. Nevertheless, the performance of LadaBERT-4 is competitive on larger datasets such as MNLI and QQP. This is impressive as LadaBERT is much more efï¬cient without exhaustive re-training on a large corpus. In addition, the inference speed of BiLSTM is slower than transformer-based models with similar parameter sizes.
# 4.4 Learning curve comparison
To further demonstrate the efï¬ciency of LadaBERT, we visualize the learning curves on MNLI-m and QQP datasets in Figure 3 and 4, where LadaBERT-3 is compared to the strongest baseline, TinyBERT, under 7.5à compression ratio. As shown in the ï¬gures, LadaBERT-3 achieves good performances much faster and results in a better convergence point. After training 2Ã104 steps (batches) on MNLI-m dataset, the performance of LadaBERT-3 is already comparable to TinyBERT after convergence (approximately 2 à 105 steps), achieving nearly 10 times acceleration. And on QQP dataset, both performance improve- ment and training speed acceleration are very signiï¬cant. This clearly shows the superiority of combining matrix factorization, weight pruning and knowledge distillation in a collaborative manner. On the other hand, TinyBERT is based on pure knowledge distillation, so the learning speed is much slower.
# 4.5 Effect of low-rank + sparsity
In this paper, we demonstrate that a combination of matrix factorization and weight pruning is better than single solutions for BERT-oriented model compression. Similar phenomena has been reported in computer vision, showing that low-rank and sparsity are complementary to each other (Yu et al., 2017). Here we provide another explanation to support our observation.
In Figure 5, we visualize the distribution of element biases for a weight matrix in the neu- ral network after pruning to 20% of its orig- inal parameter size. For illustration, we con- sider the matrix initialized by real pretrained BERT weights, and the pruning process is done at once. We deï¬ne the biases to be calculated by Biasij = ËMij â Mij, where ËM denotes the weight matrix after pruning.
Hybrid pruning Weisht pruning I probability density Ss
The yellow line in Figure 5 shows the distribu- tion of biases generated by pure weight pruning, which has a sudden drop at the pruning thresh- old. The orange line represents for pure SVD pruning, which turns out to be smoother and is aligned with Gaussian distribution. The blue line shows the result of hybrid pruning, which conducts weight pruning on the decomposed matrices. First, we apply SVD-based matrix factorization to reduce 60% of total parameters. Then, weight pruning is applied on the decomposed matrices by 50%, resulting in 20% parameters while the bias distribution changes slightly. As visualized in Figure 5, it has smaller mean and deviation of bias distribution than that of pure matrix factorization. In addition, it seems that a smoother weight distribution is more feasi- ble for the ï¬ne-tuning procedure. Therefore, it is reasonable that a hybrid model compression approach is advantageous than pure weight pruning.
# 5 Conclusion
Model compression is a common way to deal with latency-critical or memory-intensive scenarios. Exist- ing model compression methods for BERT are expansive as they require re-training on a large corpus to reserve the original performance. In this paper, we propose LadaBERT, a lightweight model compression pipeline that generates an adaptive BERT model efï¬ciently based on a given task and speciï¬c constraint. It is based on a hybrid solution, which conducts matrix factorization, weight pruning and knowledge distillation in a collaborative fashion. The experimental results demonstrate that LadaBERT is able to achieve comparable performance with other state-of-the-art solutions using much less training data and computation budget. Therefore, LadaBERT can be easily plugged into various applications to achieve competitive performances with little training overheads. In the future, we would like to apply LadaBERT to large-scale industrial applications, such as search relevance and query recommendation.
# References
Guobin Chen, Wongun Choi, Xiang Yu, Tony Han, and Manmohan Chandraker. 2017. Learning efï¬cient object detection models with knowledge distillation. In Advances in Neural Information Processing Systems, pages 742â751.
Patrick Chen, Si Si, Yang Li, Ciprian Chelba, and Cho-Jui Hsieh. 2018. Groupreduce: Block-wise low-rank approximation for neural language model shrinking. In Advances in Neural Information Processing Systems, pages 10988â10998.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirec- tional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Jonathan Frankle and Michael Carbin. 2018. The lottery ticket hypothesis: Finding sparse, trainable neural networks. arXiv preprint arXiv:1803.03635.
Song Han, Jeff Pool, John Tran, and William J Dally. 2015. Learning both weights and connections for efï¬cient neural networks. pages 1135â1143.
Song Han, Huizi Mao, and William J Dally. 2016. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.
Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. 2016. Binarized neural networks. In Advances in neural information processing systems, pages 4107â4115.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351.
Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. 2016. Pruning ï¬lters for efï¬cient convnets. arXiv: Computer Vision and Pattern Recognition.
Yuncheng Li, Jianchao Yang, Yale Song, Liangliang Cao, Jiebo Luo, and Li-Jia Li. 2017. Learning from noisy labels with distillation. In Proceedings of the IEEE International Conference on Computer Vision, pages 1910â 1918.
Chenxing Li, Lei Zhu, Shuang Xu, Peng Gao, and Bo Xu. 2018. Compression of acoustic model via knowledge distillation and pruning. In 2018 24th International Conference on Pattern Recognition (ICPR), pages 2785â 2790. IEEE.
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettle- moyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Antonio Polino, Razvan Pascanu, and Dan Alistarh. 2018. Model compression via distillation and quantization. arXiv preprint arXiv:1802.05668.
Tara N Sainath, Brian Kingsbury, Vikas Sindhwani, Ebru Arisoy, and Bhuvana Ramabhadran. 2013. Low-rank In 2013 IEEE matrix factorization for deep neural network training with high-dimensional output targets. international conference on acoustics, speech and signal processing, pages 6655â6659. IEEE.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model compression. arXiv preprint arXiv:1908.09355.
Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task-speciï¬c knowledge from bert into simple neural networks. arXiv preprint arXiv:1903.12136.
Genta Indra Winata, Andrea Madotto, Jamin Shin, Elham J Barezi, and Pascale Fung. 2019. On the effectiveness of low-rank matrix factorization for lstm model compression. arXiv preprint arXiv:1908.09982.
Jian Xue, Jinyu Li, and Yifan Gong. 2013. Restructuring of deep neural network acoustic models with singular value decomposition. In Interspeech, pages 2365â2369.
Jian Xue, Jinyu Li, Dong Yu, Mike Seltzer, and Yifan Gong. 2014. Singular value decomposition based low- footprint speaker adaptation and personalization for deep neural network. In 2014 IEEE International Confer- ence on Acoustics, Speech and Signal Processing (ICASSP), pages 6359â6363. IEEE.
Xiyu Yu, Tongliang Liu, Xinchao Wang, and Dacheng Tao. 2017. On compressing deep models by low rank and sparse decomposition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7370â7379.
Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. 2016. Dorefa-net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160.
Aojun Zhou, Anbang Yao, Yiwen Guo, Lin Xu, and Yurong Chen. 2017. Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044. | {
"id": "1810.04805"
} |
2004.03844 | On the Effect of Dropping Layers of Pre-trained Transformer Models | Transformer-based NLP models are trained using hundreds of millions or even
billions of parameters, limiting their applicability in computationally
constrained environments. While the number of parameters generally correlates
with performance, it is not clear whether the entire network is required for a
downstream task. Motivated by the recent work on pruning and distilling
pre-trained models, we explore strategies to drop layers in pre-trained models,
and observe the effect of pruning on downstream GLUE tasks. We were able to
prune BERT, RoBERTa and XLNet models up to 40%, while maintaining up to 98% of
their original performance. Additionally we show that our pruned models are on
par with those built using knowledge distillation, both in terms of size and
performance. Our experiments yield interesting observations such as, (i) the
lower layers are most critical to maintain downstream task performance, (ii)
some tasks such as paraphrase detection and sentence similarity are more robust
to the dropping of layers, and (iii) models trained using a different objective
function exhibit different learning patterns and w.r.t the layer dropping. | http://arxiv.org/pdf/2004.03844 | Hassan Sajjad, Fahim Dalvi, Nadir Durrani, Preslav Nakov | cs.CL, cs.LG | null | Computer Speech and Language, 2022 | cs.CL | 20200408 | 20220813 | 2 2 0 2
g u A 3 1 ] L C . s c [
3 v 4 4 8 3 0 . 4 0 0 2 : v i X r a
# On the Eï¬ect of Dropping Layers of Pre-trained Transformer Models
Hassan Sajjadâ£1 Fahim Dalvi⦠Nadir Durrani⦠Preslav Nakovâ 1 â£Faculty of Computer Science, Dalhousie University, Canada â¦Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar â Mohamed bin Zayed University of Artiï¬cial Intelligence, Abu Dhabi, UAE [email protected],{faimaduddin, ndurrani}@hbku.edu.qa, [email protected]
# Abstract
Transformer-based NLP models are trained using billions of parameters, lim- iting their applicability in computationally constrained environments. While the number of parameters generally correlates with performance, it is not clear whether the entire network is required for a downstream task. Motivated by the recent work on pruning and distilling pre-trained models, we explore strate- gies to drop layers in pre-trained models, and observe the eï¬ect of pruning on downstream GLUE tasks. We were able to prune BERT, RoBERTa and XLNet models up to 40%, while maintaining up to 98% of their original performance. Additionally we show that our pruned models are at par with those built using knowledge distillation, both in terms of size and performance. Our experiments yield interesting observations such as: (i) the lower layers are most critical to maintain downstream task performance, (ii) some tasks such as paraphrase de- tection and sentence similarity are more robust to the dropping of layers, and (iii) models trained using diï¬erent objective function exhibit diï¬erent learning patterns and w.r.t the layer dropping.1
Keywords: pre-trained transformer models, eï¬cient transfer learning, interpretation and analysis
1The work was done while the author was at QCRI 1The code is available at https://github.com/hsajjad/transformers/.
Preprint submitted to Journal of Computer Speech and Language
August 16, 2022
# 1. Introduction
Pre-trained Transformer models have achieved state-of-the-art performance on natural language processing tasks and have been adopted as feature extrac- tors for solving downstream tasks such as question answering, natural language inference, and sentiment analysis. The current state-of-the-art Transformer- based pre-trained models consist of dozens of layers and millions of parameters. While deeper and wider models yield better performance, they also need large GPU/TPU memory. For example, BERT-large [1] is trained with 335 million parameters, and requires at least 24 GB of GPU memory to load. The larger size of these models limits their applicability in time- and memory-constrained environments.
Several methods have been proposed to reduce the size of pre-trained mod- els. Notable approaches include pruning parts of the network after training [2, 3, 4], reduction through weight factorization and sharing [5], compression via knowledge-distillation [6] and quantization [7, 8]. Our work falls under the class of pruning methods.
The central argument governing pruning methods is that deep neural models are over-parameterized and that not all parameters are strictly needed, espe- cially at the inference time. For example, previous research has shown that most of the attention heads can be removed [9, 3] or reallocated [10] without signif- icantly impacting performance. Gordon et al. [11] pruned the least important weights in the network. We build our work based on similar observations, but we are interested in (i ) whether it is necessary to use all layers of a pre-trained model for downstream tasks, and if not, (ii ) which layers are necessary to keep in order to maintain good task-speciï¬c performance while achieving eï¬ciency in transfer learning.
Motivated by recent ï¬ndings in representation learning, we propose novel strategies to drop layers in pre-trained models. Voita et al. [12] showed that the top layers are biased towards the pre-training objective, leading us to ques- tion whether they are necessary for downstream tasks. Michel et al. [9], Dalvi
2
et al. [13] discussed over-parameterization and the redundancy in pre-trained
models, leading us to question whether adjacent layers contain redundant infor- mation. More concretely, we drop top, bottom, middle, or alternate layers in the network. We additionally present methods to ï¬nd layers that contribute least in the network by using their activation patterns and weights. We apply our strategies on four state-of-the-art pre-trained models, BERT [1], RoBERTa [14], ALBERT [5] and XLNet [15]. The ï¬rst three are auto-encoders, while XLNet is an auto-regressive model. ALBERT presents an interesting case in the mix as its layers share parameters. We additionally experiment using DistilBERT to analyze whether a distilled model can be pruned further. We evaluate against GLUE benchmark [16] a suite of language understanding tasks. Our ï¬ndings are summarized below:
We propose practical strategies to drop layers in pre-trained models for
eï¬cient transfer learning.
⢠We show that dropping top layers works consistently well across diï¬erent tasks and pre-trained models, e.g., yielding 40% reduction in size while preserving up to 98.2% of the performance.
Our reduced models perform on par with models built using knowledge
distillation in terms of accuracy, model size and inference speed, without requiring costly training of a new model.
One-third of a distilled models can also be pruned successfully with an
average loss of 0.75 points
Despite having cross-layer parameter sharing, ALBERT can still be pruned
for eï¬cient inference with a small drop in performance.
⢠Certain downstream tasks require as few 3 layers to maintain performance within 1% threshold.
⢠Comparing architectures, models show diï¬erent learning dynamics. For example, compared to BERT, RoBERTa and XLNet learn task-speciï¬c
3
knowledge earlier in the network and are thus more robust to layer-dropping.
Contribution. While a number of studies partially overlap with the strategies and the ï¬ndings presented in this work, this is the ï¬rst work that thoroughly investigates the eï¬ect of various layer-dropping methods using a variety of pre- trained models and on a large number of tasks. We showed that i) models have diï¬erent learning dynamics, ii) a smaller close to optimal network can be achieved by optimizing the number of layers to drop with respect to the task at hand, iii) a distilled model can also beneï¬t from layer-dropping. Our work recommends to use top layer-dropping as an essential baseline when building distilled models. Moreover, it provides a cheap way to get smaller models of any architecture rapidly, that are both memory and speed eï¬cient.
# 2. Related Work
Eï¬cient Pre-trained Models: Work done on exploring methods to down- scale pre-trained models can be categorized into architecture-invariant compres- sion [5, 17, 8], knowledge distillation [18, 6], and pruning [11, 19].
Quantization [7, 8], an architecture-invariant method, reduces the numerical precision of the weights of the model to fewer bits. Knowledge distillation (KD) also known as student-teacher model [20] trains a smaller model that mimics the behavior of the larger model. Researchers have experimented with learning from the outputs of the encoder layers [21, 22], from the output logits [6, 23], and from the attention maps [22, 24]. Another distinction is between general- purpose distilled models [6, 24] and task-speciï¬c ones [22, 25, 23, 21, 26].
Pruning methods involve removing some parts of the networks that are either redundant or less relevant to the task at hand. [11, 19, 27] pruned the least important weights in the network. Michel et al. [9], Voita et al. [3] demonstrated that most of the attention heads can be pruned at test time, which reduces the computation, and speeds up inference. Fan et al. [28] introduced LayerDrop during training that resulted in pre-trained models that are robust towards dropping of layers at inference time. Our work is similar to them as we also
4
remove layers from the network. But we show that layers can be dropped safely from the pre-trained models without the need for additional training using LayerDrop. Nevertheless our strategies can also be applied to a model trained using LayerDrop.
Recently, Peer et al. [29] proposed a greedy layer pruning method that drops layers based on their independent performance on the end task. Their assump- tion is that a local decision about a layer aligns with a globally correct selection of layers. We demonstrate that our results are comparable to theirs, but we need no additional training to ï¬nd an optimal set of layers.
Sun et al. [21], Xu et al. [30] used the bottom six layers of the BERT-base model to initialize the student model. This is similar to one of our strategies. However, their performance is much lower compared to our method. Moreover, we provide a comprehensive evaluation of our strategies on four pre-trained models to prove their eï¬cacy in reducing the size of the network.
Liu et al. [31], Schwartz et al. [32], Xin et al. [33], Zhou et al. [34] speed up the inference time by introducing dynamic exiting strategies. The limitation of their work are the memory footprints of the model that remain identical to the original model.
Representation analysis: A number of studies have analyzed representations of pre-trained models at layer-level and showed that they learn linguistic in- formation [35, 36, 37, 38, 39, 40, 41, 42, 43, 44]. Belinkov et al. [45], Sajjad et al. [46] provided a comprehensive literature review of such work. While the representation analysis uncovers, what linguistic properties diï¬erent layers cap- ture, they do not reï¬ect which layers are important for transfer learning to a downstream task. Recently, Tamkin et al. [47], Merchant et al. [48], Durrani et al. [49] attempted to address this by analyzing layer-wise transferability of features during ï¬ne-tuning. Tamkin et al. [47] reinitialized individual layers of pre-trained model and observed the eï¬ect on the ï¬ne-tuning performance. Mer- chant et al. [48] used probing classiï¬er, layer-wise similarity and layer-ablation for their analysis. Our work is similar to their layer-ablation study which they carried out to understand the diï¬culty of a downstream task, but the premise of
5
DeEmbedding Layer Encoder layer Task-Specific Layer Full Top-Layer Even Alternate Odd Alternate Parameter/Contribution Symmetric Bottom-Layer Network Dropping Dropping Dropping based Dropping Dropping Dropping SS SoS Sa. oC SS SS, â ââ ââ ââ â a 3 ( ââ=ââ_ â =, âââ- ; oT | S635 ca \ } L¢ 2 > =: ââ â| [| on oss \, | === ¢ 2 â J â » ââJâ| Ss â=a / & > _âââ _ââââ| | Ss. â at / â ol â x Sse ¢ = 1s = ââ| \ ââ|} = = = == ad =: = = = i =; â_ axâ ( } â ââ ââ âââ \ â â| | â â ese ses ef \ / = =] es) =) =) || a | =) ==) K=4 K=6 K=3 K=5 K=2 K=5 [2,3] [4,9,10] K=2 K=6 K=4 K=7
Figure 1: Illustration of layer-dropping strategies. K represents the number of layers that are dropped. For example, K = 4 in the top-layer strategy means top four layers of the model are dropped. In the contribution-based dropping, we select layers based on a similarity threshold. The number mentioned in the ï¬gure e.g. [2,3] shows the layers which are dropped based on the similarity threshold.
our work is very diï¬erent. We gauge the importance of various subsets of layers with respect to the performance on downstream tasks, to achieve eï¬cient mod- els. Durrani et al. [49] used layer-wise and neuron probing classiï¬ers [50, 51] and showed that core-linguistic knowledge is preserved in the lower layers of ï¬ne-tuned models. This resonates with our empirical ï¬nding that shows that higher layers can be safely pruned for eï¬cient transfer learning.
# 3. Methodology
Consider a pre-trained language model M with an embedding layer E0 and L encoder layers: {l1, l2, . . . , lL}. We probe whether it is necessary to keep all layers of the network for downstream tasks. We explore six strategies, that we describe below (also shown in Figure 1), to drop encoder layers from the model. Each pruning regime is followed by task-speciï¬c ï¬ne-tuning to analyze the eï¬ect of layer-dropping on the performance of the task.
# 3.1. Top-Layer Dropping
The top layers in pre-trained models are specialized towards the underlying objective function [12]. Zhang et al. [52] reinitialized the upper layers when ï¬ne-tuning towards GLUE task. We hypothesize that the top layers may not be
6
important when ï¬ne-tuning towards the a downstream task. In this strategy, we drop top K layers from the model. The output of layer lLâK serves as the last layer of the reduced network. Then, a task-speciï¬c layer is added on top of this layer to perform task-speciï¬c ï¬ne-tuning. Figure 1 shows an example with dropping top 4 and 6 layers.
# 3.2. Alternate Dropping
Deep neural networks are innately redundant. Sun et al. [21] and Jiao et al. [22] amalgamated information from adjacent layers of the teacher model into a single layer of the student model. We hypothesize that neighbouring layers preserve similar information and may be dropped safely without any substantial loss of information. We drop N alternating odd or even layers from top to bottom of the network. For example in a 12-layer model with K = 4, we consider two sets of alternate layers: Odd-alternate Dropping â {5,7,9,11} and Even-alternate Dropping â {6,8,10,12}, see Figure 1 for illustration. When dropping an in-between layer li, the output of the previous layer liâ1 becomes the input of the next layer li+1, causing a mismatch in the expected input to li+1. However, we hope that during task-speciï¬c ï¬ne-tuning, the model will recover from this discrepancy.
# 3.3. Parameter-Based Dropping
In this approach, we estimate the importance of a given layer based on the model parameters. More speciï¬cally, we rank the layers based on their weights. We tested two hypotheses: (i ) higher magnitude of the weights signals higher layer importance, (ii ) higher variance of the weights corresponds to higher layer importance. We refer to the former as Aggregation Method, where we aggre- gate the weights of a layer, and we call the latter a Variance Method, where we calculate the variance of each layer. We drop the layers with the lowest aggre- gation or variance scores. Note that a transformer block has various sub-layers, but in our experiments we only used the ï¬nal weights. We leave experiments with other layers within a transformer block as a possible direction for future work.
7
# 3.4. Contribution-Based Dropping
Our next strategy is based on the idea that a layer contributing below a certain threshold might be a good candidate for dropping. We deï¬ne the con- tribution of a layer li in terms of the cosine similarity between its input and its output representations. A layer li with a high similarity (above a certain threshold) indicates that its output has not changed much from its input, and therefore it can be dropped from the network. More concretely, in the forward pass, we calculate the cosine similarity between the representation of the sen- tence token (CLS) before and after each layer. We average the similarity scores of each layer over the development set, and select layers that have an aver- age similarity above a certain threshold for dropping. This contribution-based strategy can be seen as a principled variation of alternate dropping.
# 3.5. Symmetric Dropping
The bottom layers are closer to the input while the top layers are closer to the output. It is possible that both the top layers and the bottom layers are more important than the middle layers. The Symmetric dropping strategy retains the top and the bottom X layers, and drop K middle layers, where 2X + K = L. For example, in a 12-layer model, if K = 6, we retain three top and three bottom layers, dropping layers 4â9. The output of layer 3 would then serve as an input to layer 10.
# 3.6. Bottom-Layer Dropping
Previous work on analyzing layers in Neural Networks [35, 41, 39, 53, 54] has shown that the lower layers model local interactions between words (which is important for morphology and lexical semantics), thus providing essential input to the higher layers. Removing lower layers could be therefore catastrophic. We still perform these experiments for the sake of completeness. We remove the bottom K layers of the model. The output of the embedding layer l0 serves as an input to layer lK+1 of the original model.
8
Task Description Train Dev SST-2 Sentiment analysis 67349 872 MRPC Microsoft Research paraphrase corpus 3668 408 MNLI Natural language inference 392702 9815 QNLI Question natural language inference 104743 5463 QQP Quora question pairs 363846 40430 RTE Recognizing textual entailment 2490 277 STS-B Semantic textual similarity 5749 1500
Table 1: Data statistics of the GLUE tasks. All tasks are binary classiï¬cation tasks, except for STS-B which is a regression task. Recall that the test sets are not publicly available, and hence we use development set to report results.
# 4. Experimental Setup
Datasets. We evaluated our strategies on General Language Understanding Evaluation (GLUE) tasks [16] tasks, which serves as a defacto standard to eval- uate pre-trained language models. Table 1 provides statistics of each dataset. More speciï¬cally, we evaluated on the following tasks: SST-2 for sentiment anal- ysis with the Stanford sentiment treebank [55], MNLI for natural language in- ference [56], QNLI for Question NLI [57], QQP for Quora Question Pairs,2 RTE for recognizing textual entailment [58], MRPC for Microsoft Research para- phrase corpus [59], and STS-B for the semantic textual similarity benchmark [60]. We left out WNLI, due to the irregularities in its dataset, as also reported by others,3 as well as CoLA due to large variance and unstable results across ï¬ne-tuning runs.
Models. We experimented with three state-of-the-art 12-layered pre-trained mod- els 4 BERT [1], RoBERTa [14] and XLNet [15]. We additionally experimented using a 12-layered ALBERT [5] model and a distilled model, DistilBERT [6].
2http://data.quora.com/First-Quora-Dataset-Release-Question-Pairs 3http://gluebenchmark.com/faq 4For the sake of clarity when the trends are similar across models, we present the results
of selected models only.
9
Our selection of models encourage interesting comparison between diï¬erent types of models such as auto-regressive vs. auto-encoder and a large model vs. its distilled version. All experiments are conducted using the transformers library [61]. We used the default settings and did not optimize the parameters. We limit our experiments to the base versions of the transformers as we could not experiment with BERT-large or XLNet-large due to memory limitations.5 However, our strategies are straightforward to apply to models of any depth.
End-to-End Procedure. Given a pre-trained model, we drop layers using one of the strategies described in Section 3. We then performed task-speciï¬c ï¬ne- tuning using GLUE training sets for three epochs as prescribed by [1]6 and evaluated on the oï¬cial devsets.
# 5. Evaluation Results
We experimented with dropping K number of layers where K = 2, 4, 6 in BERT, RoBERTa and XLNet, and K = 1, 2, 3 in DistilBERT (a 6-layer model). As an example, for K = 2 on a 12-layer model, we drop the following layers: top strategy â {11, 12}; bottom strategy â {1, 2}; even-alternate â {10, 12}; odd-alternate â {9, 11}; symmetric â {6, 7}. For the parameter-based strategy, we calculate the score of every layer based on the aggregated weights and the variance in the weights, and we drop the layers with the lowest score. In the contribution-based strategy, the dropping of layers is dependent on a similarity threshold. We calculate the similarity between input and output of each layer and remove layers with similarity above the threshold values of 0.95, 0.925 and 0.9. These values were chosen empirically. A threshold value below 0.9 or above
5In order to ï¬t large models in our TitanX 12GB GPU cards, we tried to reduce the batch size, but this yielded poor performance, see https://github.com/google-research/ bert#out-of-memory-issues.
6We experimented with using more epochs, especially for dropping strategies that exclude in-between layers, in order to let the weight matrix adapt to the changes. However, we did not see any beneï¬t in going beyond three epochs.
10
0.95 resulted in either more than half of the network being considered as similar, or none of the layers to be similar.
# 5.1. Comparing Strategies
Figure 2 presents average classiï¬cation performance of BERT and XLNet using various layer-dropping strategies. We observe similar trends for RoBERTa and DistilBERT and limit the presentation of results to two models here.
Top-layer dropping consistently outperforms other strategies when dropping 6 layers. We dropped half of the top layers (yellow bars in the top strategy) with an average loss of only 2.91 and 1.81 points for BERT and XLNet respectively. The Bottom-layer dropping strategy performed the worst across all models, as expected, showing that it is more damaging to remove information from the lower layers of the network. The behavior of top and bottom dropping is consistent across all models. It nicely connects with ï¬ndings in representation learning, i.e. lower layers learn core-linguistic phenomena and our results show that they are important to maintain task-speciï¬c performance.
Parameter-based strategy using variance is the second best strat-
egy at K = 6. Compared to most of the other strategies presented in this work, the parameter-based strategy makes a more informed decision based on the parameters of the model, i.e., the weights. We found the variance-based strategy to outperform the aggregation-based one, and thus we limit our discus- sion to the former only. The variance-based method selected diï¬erent layers to drop for each model. The order of the six layers to drop is {1, 12, 8, 9, 11, 2} for BERT, {11, 12, 6, 7, 5, 10} for RoBERTa and {11, 12, 7, 8, 9, 10} for XLNet. One common observation here is that the last 2â3 layers and the middle layers of the models can be removed safely with a small drop in performance (see the results of the variance-based method in Figure 2). Moreover, BERT is an exception where the ï¬rst two contextualized layers {1, 2} are also selected to be removed. This resulted in a huge loss in performance (see the results for BERT when dropping 6 layers based on the variance-based method). Interestingly, dropping 6-layers of XLNet resulted in a model that was identical to that of the top-layer
11
strategy, i.e., removing the top-6 layers. RoBERTa presents an interesting case where the parameter-based strategy resulted in a drop of the middle layers and of the top layers, while keeping the lower and the higher middle layers. The average results for RoBERTa when using the variance-based method are lower by 0.73 point only compared to the top-layer method. The promising results of the parameter-based method on two out of three models show its eï¬cacy. Note that our current exploration is limited to the parameters of the base models. Fine-tuning substantially changes the parameters [49], which may result in a task-wise informed dropping of layers. We did not try task-speciï¬c pruning as the focus of our work is on task-agnostic eï¬cient models.
Dropping top alternate layers is better than dropping top consec- utive layers. The Odd-alternate dropping strategy gave better results than the top at K = 2 (blue bars in the Odd-alternate strategy), across all the tasks. Looking at the layers that were dropped: top â {11, 12}; even-alternate
â {10, 12}; odd-alternate â {9, 11}, we can say that (i ) dropping last two con- secutive layers {11, 12} is more harmful than removing alternate layers, and (ii ) keeping the last layer {9, 11} is more important than keeping the second last layer with its alternate pair. At K = 6, the Alternate dropping strategies show a large drop in the performance, perhaps due to removal of lower lay- ers. Recall that our results from the bottom strategy showed lower layers to be critical for transfer learning.
The Symmetric strategy gives importance to both top and bottom layers and drops the middle layers. Dropping two middle layers from BERT degrades the performance by 0.97 points and makes it the second best strategy at K = 2. However, on XLNet the performance degrades drastically when dropping the same set of layers. Comparing these two models, XLNet is sensitive to the dropping of middle layers while BERT shows competitive results to the Top- layer dropping strategy even after removing 4 middle layers. We analyze the diï¬erence in the behavior of models in Section 6.
For Contribution-based strategy, we chose layers {3, 5} at threshold 0.95 and {3, 5, 8, 9} at threshold 0.925 for BERT, and layers {9, 10, 11} at threshold 0.925
12
= BERT base (12 layers) 90 85 80 75 oo I 4 layers dropped m6 cd 65 Score Ti 2iayers dropped
(a) BERT (b) XLNet
. XLNet base (12 layers) Score i 2 layers dropped IB 4 layers dropped HS Slayers âSymmetric Parameter Aggregation
Figure 2: Average classiï¬cation performance on GLUE tasks when using diï¬erent layer- dropping strategies and when removing diï¬erent numbers of layers for BERT and XLNet. Note that the contribution-based strategy selects layers based on the similarity threshold. In some cases it does not select (2,4 or 6) number of layers, which results in some missing bars in the ï¬gure. The horizontal red line represents the results using the full model.
and {8, 9, 10, 11} at threshold 0.9 for XLNet. Using a lower or a higher similarity threshold resulted in dropping none or more than half of the layers in the network respectively. For BERT, the contribution-based dropping did not work well since the method chose a few lower layers for dropping. On the contrary, it worked quite well on XLNet where higher layers were selected. This is in-line with the ï¬ndings of top and bottom strategy that all models are robust to dropping of higher layers compared to dropping of lower layers.
The contribution-based strategy is based on the activations of each layer, which is an input-dependent process. Depending on the nature of the input or the task, the activation patterns will change. We suspect that this is one of the reasons for the failure of the strategy. A strategy based on task-speciï¬c
13
Drop. SST-2 MNLI QNLI QQP STS-B RTE MRPC BERT 0/12 92.43 84.04 91.12 91.07 88.79 67.87 87.99 2/12 92.20 (0.23â) 83.26 (0.78â) 89.84 (1.28â) 90.92 (0.15â) 88.70 (0.09â) 62.82 (5.05â) 86.27 (1.72â) 4/12 90.60 (1.83â) 82.51 (1.53â) 89.68 (1.44â) 90.63 (0.44â) 88.64 (0.15â) 67.87 (0.00) 79.41 (8.58â) 6/12 90.25 (2.18â) 81.13 (2.91â) 87.63 (3.49â) 90.35 (0.72â) 88.45 (0.34â) 64.98 (2.89â) 80.15 (7.84â) RoBERTa 0/12 92.20 86.44 91.73 90.48 89.87 68.95 88.48 2/12 93.46 (1.26â) 86.53 (0.09â) 91.23 (0.50â) 91.02 (0.54â) 90.21 (0.34â) 71.84 (2.89â) 89.71 (1.23â) 4/12 93.00 (0.80â) 86.20 (0.24â) 90.57 (1.16â) 91.12 (0.64â) 89.77 (0.10â) 70.40 (1.45â) 87.50 (0.98â) 6/12 91.97 (0.23â) 84.44 (2.00â) 90.00 (1.73â) 90.91 (0.43â) 88.92 (0.95â) 64.62 (4.33â) 85.78 (2.70â) XLNET 0/12 93.92 85.97 90.35 90.55 88.01 65.70 88.48 2/12 93.35 (0.57â) 85.67 (0.30â) 89.35 (1.00â) 90.69 (0.14â) 87.59 (0.42â) 66.06 (0.36â) 86.52 (1.96â) 4/12 92.78 (1.14â) 85.46 (0.51â) 89.51 (0.84â) 90.75 (0.20â) 87.74 (0.27â) 67.87 (2.17â) 87.25 (1.23â) 6/12 92.20 (1.72â) 83.48 (2.49â) 88.03 (2.32â) 90.62 (0.07â) 87.45 (0.56â) 65.70 (0.00) 82.84 (5.64â) DistilBERT 0/6 90.37 81.78 88.98 90.40 87.14 60.29 85.05 1/6 90.37 (0.00) 80.41 (1.37â) 88.50 (0.48â) 90.33 (0.07â) 86.21 (0.93â) 59.93 (0.36â) 84.80 (0.25â) 2/6 90.25 (0.12â) 79.41 (2.37â) 86.60 (2.38â) 90.19 (0.21â) 86.91 (0.23â) 62.82 (2.53â) 82.60 (2.45â) 3/6 87.50 (2.87â) 77.07 (4.71â) 85.78 (3.20â) 89.59 (0.81â) 85.19 (1.95â) 58.48 (1.81â) 77.45 (7.60â)
Table 2: Task-wise performance for the top-layer dropping strategy using the oï¬cial GLUE development sets. Drop. represents the number of layers that are dropped in comparison to the total number of layers in the model. The red numbers with downward arrow shows the drop in performance in comparison to using the full model i.e. 0/12 and the blue numbers with upward arrow shows the gain in performance.
contribution might yield a better performance. However, in this work we focused on task-independent eï¬cient models, leaving task-dependent models for future work.
5.2. Task-wise Results
Top-layer strategy works consistently well for all models at K = 6. In the rest of the paper, we discuss the results for the Top-layer strategy only, unless speciï¬ed otherwise. Table 27 presents the results for the individual GLUE tasks
7We use default settings provided in the Transformer library. This causes a slight mismatch between some numbers mentioned in the original papers of each models and our paper.
14
using the Top-layer strategy on three pre-trained models and a distilled model. We observe the same trend as for the averaged results: for most of the tasks, we can safely drop half of the top layers in BERT, RoBERTa and XLNet losing only 1-3 points.
The paraphrase task (QQP) and sentence similarity task (STS-B) are least aï¬ected by the dropping of layers. When dropping half of the layers, there was no loss in performance for QQP on XLNet and RoBERTa, and a loss of 0.72 only for BERT. Similarly, for STS-B we observed a decrease of only 0.56, 0.95 and 0.34 points for XLNet, RoBERTa and BERT respectively. In contrast, RTE and MRPC tasks show substantial change (gain/drop) in the performance with layer-dropping when compared with using the full model (see BERT and RoBERTa 0/12,2/12,4/12 results). This is due to the small size of the dev sets, 408 and 277 instances for MRPC and RTE respectively. A few right and wrong predictions cause a large variation in the overall score. We use McNemarâs test at p=value=0.05, and we found these diï¬erences, such as 5.05 points drop in the performance of BERT for RTE, statistically insigniï¬cant.
Dropping top two layers for RoBERTa resulted in better perfor- mance and stability. Interestingly, in several cases for RoBERTa, dropping two layers resulted in better performance than using the full model. Moreover, we observed that layer-dropping resulted in stable runs and was less prone to initialization seed and batch size. We used default settings for all the model and did not investigate the eï¬ect of parameter optimization on the performance of the pre-trained and reduced models to have comparable results.
A distilled model can also be pruned successfully. We observed a similar trend, dropping layers in DistilBERT compared to BERT model. It is interesting to see that an already distilled version of the model can be further pruned by a third, with an average loss of 0.75 points only. However, dropping half of its layers drastically degrades the performance on several tasks. Schwartz et al. [32] also showed that pruning is orthogonal to model distillation.
15
5.3. Memory and Speed Comparison
Dropping layers reduces the number of parameters in the network, signiï¬- cantly speeding up the task-speciï¬c ï¬ne-tuning and the inference time. Table 3 compares the number of parameters, and the speed up in the ï¬ne-tuning and decoding time, versus the loss in performance. We see that dropping top half of the layers of the network, reduced the number of parameters by 40%, speed- ing up ï¬ne-tuning and inference by 50% with average performance loss between 0.89â2.91 points. The results for RoBERTa are even remarkable; as with all the memory and speed improvements, the average performance dropped by only 0.89 points. Dropping 4 layers (which gives a speed-up of 33%), RoBERTa achieved a performance close to dropping no layers. XLNet also showed robustness to the drop of top 4 layers and the performance dropped by only 0.23 points. It is worth noting that a better trade-oï¬ between computational eï¬ciency and loss in performance can be achieved by optimizing for a speciï¬c task. For example QQP maintained performance within 1% on XLNet when 9 layers were dropped (See Table 4). This corresponds to 60% reduction in the number of parameters
# and 80% reduction in terms of inference time.
# 6. Discussion
Now we perform further analysis and discuss variations of our methodology. We limit the results to 5 most stable tasks (SST-2, MNLI, QNLI, QQP, STS-B).
# 6.1. Task-speciï¬c optimal number of layers to drop.
The variation in the amount of loss for each task with the dropping of lay- ers in Table 2 suggests that the task-speciï¬c optimal number of layers would result in a better balance between the size of the pruned model and the loss in performance. In this section, we present the results of the optimal number of layers for each task. For these experiments, we split the standard development set into equal-sized hold-out set and dev set. We ï¬nd the minimum number of layers required to maintain 1%, 2%, and 3% performance on the dev set using
16
Drop. Loss Param. Fine-tuning Inference speedup seconds BERT || RoBERTa 0/12 | 0.00 || 0.00 | 110M 1.00 - 2/12 | 1.33 || -0.42 | 94M 1.24 17% 4/12 | 2.00 || 0.01 | 80M 1.48 33% 6/12 | 2.91 || 0.89 | 66M 1.94 50% XLNET 0/12 0.00 116M 1.00 - 2/12 0.54 101M 1.20 16% 4/12 0.23 86M 1.49 32% 6/12 1.81 71M 1.96 49%
Table 3: Comparing the number of parameters (Param.), the speed up in the ï¬ne-tuning step, and the inference time for diï¬erent models. Fine-tuning speedup shows how many times the model speeds up compared to the original network. We report inference time on the QQP devset consisting of 40.4k instances with a batch size of 32.
our top-layer strategy and we verify that the ï¬ndings generalize to the hold-out test. Table 4 shows the optimal number of layers on dev and the corresponding percentage of performance drop on the hold-out set (in parentheses). For most of the cases, the optimal number of layers found using the dev set aligns well with the hold-out set. For example, BERT QNLI with 1% loss in performance showed that one layer can be dropped safely and this results in a loss of 0.84 points absolute compared to using the full model.
Overall, RoBERTa and XLNet showed most robustness towards the dropping of layers while maintaining performance threshold of 1%. For example, QQP maintained performance within 1 point even when the top 9 and 8 layers of XLNet and RoBERTa respectively were dropped. Essentially, the model consists of only three layers â {1, 2, 3}. On the contrary, dropping 9 layers in BERT resulted in a loss of 3% points for the QQP task.
17
SST-2 MNLI QNLI QQP STS-B 1% Loss Threshold BERT 7(1.6) 3(1.04) 1(0.84) 6(0.75) 7(1.16) RoBERTa 4(0.00) 4(0.20) 5(0.87) 8(0.77) 5(1.22) XLNet 8(1.38) 5(1.22) 4(0.51) 9(0.60) 7(0.05) 2% Loss Threshold BERT 7(1.60) 5(1.26) 3(1.68) 8(1.60) 7(1.16) RoBERTa 4(0.00) 5(1.26) 6(1.42) 9(1.51) 6(2.31) XLNet 8(1.38) 5(1.22) 6(1.46) 9(0.60) 8(1.22) 3% Loss Threshold BERT 8(2.06) 6(2.42) 5(2.60) 9(2.27) 8(2.61) RoBERTa 5(0.69) 6(2.73) 7(2.37) 10(3.21) 7(3.00) XLNet 8(1.38) 6(1.55) 7(1.61) 9(0.60) 9(2.46)
Table 4: Number of layers dropped from the network while maintaining performance within a pre-deï¬ned threshold. The numbers outside brackets are the optimal number of layers found using the dev set and the numbers within brackets report the performance loss on the hold-out set. For example in 7(1.6), 7 are the optimal number of layers that can be dropped based on the dev set and 1.6 is the performance loss when 7 layers are dropped on the hold-out set.
# 6.2. Comparing Pre-trained Models
Our pruning strategies illuminate model-speciï¬c peculiarities that help us in comparing and understanding the learning dynamics of these models. RoBERTa and XLNet learn task-speciï¬c knowledge earlier in the network com- pared to BERT. Figure 3 shows the average layer-wise performance of each model. RoBERTa learns task-level information much earlier in the model (see the steep slope of the yellow line for lower layers). Although XLNet starts similar to BERT but in the lower-middle layers, it learns the task information relatively faster than BERT. For both RoBERTa and XLNet, the performance matures close to the 7th layer of the model while BERT improves with every higher layer until the 11th layer. Since XLNet and RoBERTa mature much earlier in the network, this suggests that top layers in these networks might be redundant for downstream tasks and are a good candidate for dropping in exchange for a small
18
0.938 7 @ 0.875 8 § E 0.813 : g ' & 07 + © & BERT B 1 @ 0.688 S ' âO XLNET 2 o625 : & RoBERTa 1 0.563 1 ' 0.5 = 0 1 2 3 4 5 6 7 8 9 10 14 12 Layer #
Figure 3: Average layer-wise classiï¬cation results.
loss in performance. This observation is in line with the results presented in Table 2. For example, we showed that the drop of top two layers of RoBERTa resulted in either marginal drop in performance or improvement in performance.
The diï¬erence between the learning dynamics of BERT and RoBERTa en- courages further investigation into what caused RoBERTa to learn task-speciï¬c knowledge earlier in the network. Is it because of the large amount of training data used for RoBERTa or because of better pre-training procedures such as dynamic masking, and exclusion of next sentence prediction loss? Does early learning of task-speciï¬c knowledge as in XLNet and RoBERTa reï¬ect towards a better and robust pre-trained model? Answering these questions is important for improving the design of pre-trained models and require future exploration.
# 6.3. Pruning the ALBERT Model
ALBERT is based on the cross-layer parameter sharing. Because of this, our layer dropping strategies do not save any memory as opposed to using BERT and other transformers. However, it still makes the inference faster by speeding up the forward pass. Table 5 presents the results on ï¬ve GLUE tasks. Interestingly, dropping the top-6 layers did not result in drastic degradation of the model performance and, in some cases, the results even improved compared to using the baseline model. For example, in the case of SST-2, the performance of a
19
Drop SST-2 MNLI QNLI QQP STS-B 0/12 89.79 83.39 90.24 90.29 89.61 2/12 91.40 83.82 89.55 89.64 89.54 4/12 91.63 82.73 90.24 88.51 87.00 6/12 90.14 81.64 89.11 90.08 88.21
Table 5: ALBERT: task-wise performance for the top-layer dropping strategy using the oï¬cial GLUE dev-sets. Drop shows the number of layer dropped/the total layers in the model.
6-layered model is 90.14, which is 0.35 points absolute better than the baseline. Compared to the 6-layered BERT model (Table 2), the drop in the performance of ALBERT-6 is relatively small. We hypothesize that the parameter sharing in the case of ALBERT make the model learn much richer representation in the shared contextualized layers of the model, which yields a model that is robust towards layer-dropping. These results are encouraging and show that the model that was designed to be space-eï¬cient can be further improved towards run-time eï¬ciency by simply pruning some of its layers.
# 6.4. Comparing against Distilled Models
We now compare the performance of our pruned models when applying the
top-layer dropping strategy to distilled and pruned models built using various so- phisticated architectures and training procedures. In particular, we compare to previous work [6, 25, 23] that used KD to build 6-layered distilled models. More speciï¬cally, we present the result of the following distilled models; Vanilla-KD â a distilled model built using the original KD method [18], BERT-PKD [21] â patient knowledge distillation method that encourages a student model to learn from various layers of the teacher model, and BERT-TH â a theseus compression method that gradually distill layers of a large model. Additionally, we compare with the pruned RoBERTa model of [28] that used layer-level dropout during training of a pre-trained model and showed that it enables robust dropping of layers at test time. We also compare to the greedy layer pruning method [29],
20
which creates task-speciï¬c smaller-size models by dropping layers in a greedy fashion. All these models are identical in size to our smaller models obtained by dropping the top-6 layers in BERT and RoBERTa. We refer to them as BERT-6 and RoBERTa-6. Table 6 compares the results.8
Our pruned models (BERT-6 and RoBERTa-6) showed competi- tive performance compared to their distilled versions built using KD. This result is quite surprising, given that our pruned models do not require any additional training, while building a distilled model using KD requires training from scratch, which is a time consuming and computation expensive process. The top layer-dropping works consistently for all model types including distilled models and a large set of language understanding tasks. Moreover, our setup oï¬ers the ï¬exibility to choose diï¬erent sizes of the model based on the com- putational requirements and the speciï¬cs of a downstream task. The result of preserving bottom layers of the model suggests selective compression applied to pre-trained models. For example, in KD while combining information from various layers of the large model, it is advisable to preserve the bottom layers and distilled the top layers. Similarly, pruning methods such as weight and attention-head pruning, and quantization can be aggressively applied to top layers of the models while preserving the bottom layers.
Our RoBERTa-6 has comparable results to the 6-layer pruned model trained using LayerDrop and Greedy layer pruning. Fan et al. [28] used layer-level dropout during training of a pre-trained model and showed that it enables robust dropping of layers at test time. Similar to us, they directly pruned top 6-layers of their large model and ï¬ne-tuned it for speciï¬c tasks. Ta- ble 6 (row 7 and 10) compares top-layer dropping using their model and the original RoBERTa model. On two out of three tasks, dropping top-layers from the original RoBERTa model outperformed training a new model using Layer- Drop. This shows that the current models are already robust and the top-layer
8There is an exhaustive list of task-speciï¬c distilled models but we show the results for a few for comparison.
21
No. Model SST-2 MNLI QNLI QQP STS-B 1. Vanilla-KD 90.50 80.10 88.00 88.10 84.90 2. BERT-PKD 91.30 81.30 88.40 88.40 86.20 3. BERT-TH 91.80 82.10 88.80 88.80 87.80 4. GLP6 91.20 81.30 87.60 86.80 87.60 5. DistilBERT 90.37 81.78 88.98 90.40 87.14 6. BERT-6 90.25 81.13 87.63 90.35 88.45 7. Fan et al. RoBERTa-6 92.50 82.90 89.40 - - 8. GLP6 92.00 85.60 90.80 87.80 86.60 9. DistilRoBERTa 92.50 84.00 90.80 89.40 88.30 10. RoBERTa-6 91.97 84.44 90.00 90.91 88.92
Table 6: Comparing 6-layered BERT and RoBERTa models. Results of Vanilla-KD, BERT- PKD and BERT-TH are taken from Xu et al. [30]. Fan et al. results and GLP6 are taken from [28, 29]. BERT-6 and RoBERTa-6 represent our models achieved by pruning top 6 layers.
dropping strategy can be directly applied to the available pre-trained models. Similarly, we found that despite optimizing the model towards a downstream GLUE task, the greedy layer pruning (GLP6) did not show a clear advantage over our 6-layered model. For example, compared to BERT (rows 4 and 6), our BERT-6 model yields better or comparable performance to GLP6 on the QQP, STS-B, MNLI and QNLI tasks, and performs worse only on the SST-2 task.
# 6.5. Layer-Dropping using Fine-tuned Models
Here, we question whether dropping layers from a ï¬ne-tuned model is more eï¬ective than dropping them from a base model? To answer this, we ï¬rst ï¬ne- tune the model, drop the layers, and then ï¬ne-tune the reduced model again. Table 7 presents the results on BERT and XLNet. We found this setup to be comparable to dropping layers directly from the pre-trained model in most of the cases. This shows that dropping top layers directly from a pre-trained model does not lose any critical information which was essential for a speciï¬c task. However, we think that pruning a ï¬ne-tuned model may lose task-speciï¬c information because the model is optimized for the task. Dropping layers may
22
Model SST-2 MNLI QNLI QQP STS-B BERT-6 92.25 81.13 87.63 90.35 88.45 BERT-FT-6 90.02 80.85 87.24 90.34 88.16 XLNet-6 92.20 83.48 88.03 90.62 87.45 XLNet-FT-6 92.43 83.75 86.80 90.77 87.60
Table 7: Layer-dropping using task-speciï¬c models. XLNet-FT-6 ï¬rst ï¬ne-tunes the pre- trained model, removes the layers and performs ï¬ne-tuning again.
have severe eï¬ect. This is reï¬ected in some of the results of BERT-6.
Gradual Dropping:. In another attempt to preserve the modelâs performance during the dropping process, we iteratively drop one layer after every two epochs of the ï¬ne-tuning process. This did not yield any improvement over dropping layers directly from the model.
# 7. Conclusion
We proposed strategies to drop layers in pre-trained models and analyzed the
model behavior on downstream tasks. We conducted experiments using a variety of pre-trained models and using a diverse set of natural language understanding tasks and showed that one can reduce the model size by up to 40%, while maintaining up to 98% of their original performance on downstream tasks. Our pruned models performed on par with distilled models building using knowledge distillation. However, unlike distilled models, our approach does not require re- training, is applicable to a large set of pre-trained models including distilled models, and provides the ï¬exibility to balance the trade-oï¬ between accuracy and model size. Moreover, we made several interesting observations such as, i) the lower layers are most critical to maintain downstream task performance, ii) certain downstream tasks require as few as only 3 layers out of 12 layers to maintain within 1% performance threshold, iii) networks trained using diï¬erent objective functions have diï¬erent learning patterns e.g. XLNet and RoBERTa learns task-speciï¬c information much earlier in the network compared to BERT.
23
# References
[1] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, 2019.
[2] P. Michel, O. Levy, G. Neubig, Are sixteen heads really better than one?, in: Advances in Neural Information Processing Systems 32, Cur- ran Associates, Inc., 2019, pp. 14014â14024. URL: http://papers.nips. cc/paper/9551-are-sixteen-heads-really-better-than-one.pdf.
[3] E. Voita, D. Talbot, F. Moiseev, R. Sennrich, I. Titov, Analyzing multi- head self-attention: Specialized heads do the heavy lifting, the rest can be pruned, in: Proceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics, Florence, Italy, 2019. doi:10.18653/ v1/P19-1580.
[4] J. S. McCarley, Pruning a bert-based question answering model, 2019. arXiv:1910.06360.
[5] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, R. Soricut, Albert: A lite bert for self-supervised learning of language representations, 2019. arXiv:1909.11942.
[6] V. Sanh, L. Debut, J. Chaumond, T. Wolf, Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter, 2019. arXiv:1910.01108.
[7] O. Zafrir, G. Boudoukh, P. Izsak, M. Wasserblat, Q8bert: Quantized 8bit bert, 2019. arXiv:1910.06188.
[8] S. Shen, Z. Dong, J. Ye, L. Ma, Z. Yao, A. Gholami, M. W. Mahoney, K. Keutzer, Q-bert: Hessian based ultra low precision quantization of bert, 2019. arXiv:1909.05840.
24
[9] P. Michel, O. Levy, G. Neubig, Are sixteen heads really better than one?, CoRR abs/1905.10650 (2019). URL: http://arxiv.org/abs/1905.10650.
[10] H. Peng, R. Schwartz, D. Li, N. A. Smith, A mixture of h â 1 heads is better than h heads, 2020. arXiv:2005.06537.
[11] M. A. Gordon, K. Duh, N. Andrews, Compressing BERT: Studying the ef- fects of weight pruning on transfer learning, ArXiv abs/2002.08307 (2019).
[12] E. Voita, R. Sennrich, I. Titov, The bottom-up evolution of representations in the transformer: A study with machine translation and language model- ing objectives, in: Proceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th International Joint Con- ference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China, 2019.
[13] F. Dalvi, H. Sajjad, N. Durrani, Y. Belinkov, Analyzing redundancy in pretrained transformer models, in: Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 4908â 4926. URL: https://aclanthology.org/2020.emnlp-main.398. doi:10. 18653/v1/2020.emnlp-main.398.
[14] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, V. Stoyanov, Roberta: A robustly optimized BERT pre- training approach, CoRR abs/1907.11692 (2019). URL: http://arxiv. org/abs/1907.11692. arXiv:1907.11692.
[15] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, Q. V. Le, Xlnet: Generalized autoregressive pretraining for language understanding, 2019. arXiv:1906.08237.
[16] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, S. Bowman, GLUE: A multi-task benchmark and analysis platform for natural language under- standing, in: Proceedings of the 2018 EMNLP Workshop BlackboxNLP:
25
Analyzing and Interpreting Neural Networks for NLP, Association for Com- putational Linguistics, Brussels, Belgium, 2018, pp. 353â355. URL: https: //www.aclweb.org/anthology/W18-5446. doi:10.18653/v1/W18-5446.
[17] Q. Cao, H. Trivedi, A. Balasubramanian, et al., Faster and just as accurate: A simple decomposition for transformer models, ICLR Openreview (2020).
[18] G. E. Hinton, S. Osindero, Y. W. Teh, A fast learning algorithm for deep belief nets, Neural Computation 18 (2006) 1527â1554.
[19] I. Guyon, A. Elisseeï¬, An introduction to variable and feature selection, Machine Learning Research 3 (2003).
[20] G. Hinton, O. Vinyals, J. Dean, Distilling the knowledge in a neural network, 2015. URL: http://arxiv.org/abs/1503.02531, cite arxiv:1503.02531Comment: NIPS 2014 Deep Learning Workshop.
[21] S. Sun, Y. Cheng, Z. Gan, J. Liu, Patient knowledge distillation for BERT model compression, in: Proceedings of the 2019 Conference on Empir- ical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics, Hong Kong, China, 2019, pp. 4322â4331. URL: https://www.aclweb.org/anthology/D19-1441. doi:10.18653/v1/D19-1441.
[22] X. Jiao, Y. Yin, L. Shang, X. Jiang, X. Chen, L. Li, F. Wang, Q. Liu, Tinybert: Distilling bert for natural language understanding, 2019. arXiv:1909.10351.
[23] R. Tang, Y. Lu, L. Liu, L. Mou, O. Vechtomova, J. Lin, Distilling task-speciï¬c knowledge from BERT into simple neural networks, CoRR abs/1903.12136 (2019). URL: http://arxiv.org/abs/1903.12136.
[24] Z. Sun, H. Yu, X. Song, R. Liu, Y. Yang, D. Zhou, âmobilebert: Task- agnostic compression of bert by progressive knowledge transferâ, in: In-
26
ternational Conference on Learning Representations, 2020. URL: https: //openreview.net/forum?id=SJxjVaNKwB.
[25] I. Turc, M.-W. Chang, K. Lee, K. Toutanova, Well-read students learn better: On the importance of pre-training compact models, 2019. arXiv:1908.08962.
[26] H. Tsai, J. Riesa, M. Johnson, N. Arivazhagan, X. Li, A. Archer, Small and practical bert models for sequence labeling, 2019. arXiv:1909.00100.
[27] A. Renda, J. Frankle, M. Carbin, Comparing rewinding and ï¬ne-tuning in neural network pruning, in: ICLR, 2020.
[28] A. Fan, E. Grave, A. Joulin, Reducing transformer depth on demand with structured dropout, 2019. arXiv:1909.11556.
[29] D. Peer, S. Stabinger, S. Engl, A. J. Rodr´ıguez-S´anchez, Greedy layer pruning: Decreasing inference time of transformer models, CoRR abs/2105.14839 (2021). URL: https://arxiv.org/abs/2105. 14839. arXiv:2105.14839.
[30] C. Xu, W. Zhou, T. Ge, F. Wei, M. Zhou, Bert-of-theseus: Compressing bert by progressive module replacing, ArXiv abs/2002.02925 (2020).
[31] W. Liu, P. Zhou, Z. Zhao, Z. Wang, H. Deng, Q. Ju, Fastbert: a self- distilling bert with adaptive inference time, 2020. arXiv:2004.02178.
[32] R. Schwartz, G. Stanovsky, S. Swayamdipta, J. Dodge, N. A. Smith, The right tool for the job: Matching model and instance complexities, 2020. arXiv:2004.07453.
[33] J. Xin, R. Tang, J. Lee, Y. Yu, J. Lin, Deebert: Dynamic early exiting for accelerating bert inference, 2020. arXiv:2004.12993.
[34] W. Zhou, C. Xu, T. Ge, J. McAuley, K. Xu, F. Wei, BERT loses patience: Fast and robust inference with early exit, 2020. arXiv:2006.04152.
27
[35] Y. Belinkov, N. Durrani, F. Dalvi, H. Sajjad, J. Glass, What do Neu- ral Machine Translation Models Learn about Morphology?, in: Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Association for Computational Linguistics, Vancou- ver, 2017. URL: https://aclanthology.coli.uni-saarland.de/pdf/P/ P17/P17-1080.pdf.
[36] F. Dalvi, N. Durrani, H. Sajjad, Y. Belinkov, S. Vogel, Understanding and improving morphological learning in the neural machine translation decoder, in: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Asian Federation of Natural Language Processing, Taipei, Taiwan, 2017, pp. 142â151. URL: https://aclanthology.org/I17-1015.
[37] Y. Belinkov, L. M`arquez, H. Sajjad, N. Durrani, F. Dalvi, J. Glass, Evaluat- ing layers of representation in neural machine translation on part-of-speech and semantic tagging tasks, in: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Pa- pers), Asian Federation of Natural Language Processing, Taipei, Taiwan, 2017, pp. 1â10. URL: https://aclanthology.org/I17-1001.
[38] A. Conneau, G. Kruszewski, G. Lample, L. Barrault, M. Baroni, What you can cram into a single vector: Probing sentence embeddings for linguistic properties, in: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL), 2018.
[39] N. F. Liu, M. Gardner, Y. Belinkov, M. E. Peters, N. A. Smith, Lin- guistic knowledge and transferability of contextual representations, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), Association for Computa- tional Linguistics, Minneapolis, Minnesota, 2019, pp. 1073â1094. URL: https://www.aclweb.org/anthology/N19-1112.
28
[40] I. Tenney, P. Xia, B. Chen, A. Wang, A. Poliak, R. T. McCoy, N. Kim, B. V. Durme, S. R. Bowman, D. Das, E. Pavlick, What do you learn from context? probing for sentence structure in contextualized word representa- tions, 2019. arXiv:1905.06316.
[41] I. Tenney, D. Das, E. Pavlick, BERT rediscovers the classical NLP pipeline, in: Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, Association for Computational Linguistics, Florence, Italy, 2019, pp. 4593â4601. URL: https://www.aclweb.org/anthology/ P19-1452. doi:10.18653/v1/P19-1452.
[42] N. Durrani, F. Dalvi, H. Sajjad, Y. Belinkov, P. Nakov, One size does not ï¬t all: Comparing NMT representations of diï¬erent granularities, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), Association for Computa- tional Linguistics, Minneapolis, Minnesota, 2019, pp. 1504â1516. URL: https://aclanthology.org/N19-1154. doi:10.18653/v1/N19-1154. in:
[43] Y. Belinkov, N. Durrani, F. Dalvi, H. Sajjad, J. Glass, On the linguis- tic representational power of neural machine translation models, Compu- tational Linguistics 46 (2020) 1â52. URL: https://aclanthology.org/ 2020.cl-1.1. doi:10.1162/coli_a_00367.
[44] D. Arps, Y. Samih, L. Kallmeyer, H. Sajjad, Probing for constituency struc- ture in neural language models, 2022. doi:10.48550/ARXIV.2204.06201.
[45] Y. Belinkov, S. Gehrmann, E. Pavlick, Interpretability and analysis in neu- ral NLP, in: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts, Association for Computa- tional Linguistics, Online, 2020, pp. 1â5. URL: https://aclanthology. org/2020.acl-tutorials.1. doi:10.18653/v1/2020.acl-tutorials.1.
[46] H. Sajjad, N. Durrani, F. Dalvi, Neuron-level Interpretation of Deep NLP
29
Models: A Survey, CoRR abs/2108.13138 (2021). URL: https://arxiv. org/abs/2108.13138. arXiv:2108.13138.
[47] A. Tamkin, T. Singh, D. Giovanardi, N. D. Goodman, Investigating trans- ferability in pretrained language models, ArXiv abs/2004.14975 (2020).
[48] A. Merchant, E. Rahimtoroghi, E. Pavlick, I. Tenney, What happens to bert embeddings during ï¬ne-tuning?, ArXiv abs/2004.14448 (2020).
[49] N. Durrani, H. Sajjad, F. Dalvi, How transfer learning impacts lin- guistic knowledge in deep NLP models?, in: Findings of the Associa- tion for Computational Linguistics: ACL-IJCNLP 2021, Association for Computational Linguistics, Online, 2021, pp. 4947â4957. URL: https: //aclanthology.org/2021.findings-acl.438. doi:10.18653/v1/2021. findings-acl.438.
[50] F. Dalvi, N. Durrani, H. Sajjad, Y. Belinkov, D. A. Bau, J. Glass, What is one grain of sand in the desert? analyzing individual neurons in deep nlp models, in: Proceedings of the Thirty-Third AAAI Conference on Artiï¬cial Intelligence (AAAI, Oral presentation), 2019.
[51] N. Durrani, H. Sajjad, F. Dalvi, Y. Belinkov, Analyzing individual neu- rons in pre-trained language models, in: Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Online, 2020, pp. 4865â 4880. URL: https://aclanthology.org/2020.emnlp-main.395. doi:10. 18653/v1/2020.emnlp-main.395.
[52] T. Zhang, F. Wu, A. Katiyar, K. Q. Weinberger, Y. Artzi, Revisiting few- sample bert ï¬ne-tuning, 2020. arXiv:2006.05987.
[53] F. Dalvi, A. R. Khan, F. Alam, N. Durrani, J. Xu, H. Sajjad, Discov- ering latent concepts learned in BERT, in: International Conference on Learning Representations, 2022. URL: https://openreview.net/forum? id=POTMtpYI1xH.
30
[54] H. Sajjad, N. Durrani, F. Dalvi, F. Alam, A. R. Khan, J. Xu, Analyzing encoded concepts in transformer language models, in: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL â22, Association for Computational Linguistics, Seattle, Washington, USA, 2022.
[55] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, C. Potts, Recursive deep models for semantic compositionality over a sentiment treebank, in: Proceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, Association for Computa- tional Linguistics, Seattle, Washington, USA, 2013, pp. 1631â1642. URL: https://www.aclweb.org/anthology/D13-1170.
[56] A. Williams, N. Nangia, S. Bowman, A broad-coverage challenge cor- pus for sentence understanding through inference, in: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Association for Computational Linguistics, New Or- leans, Louisiana, 2018, pp. 1112â1122. URL: https://www.aclweb.org/ anthology/N18-1101. doi:10.18653/v1/N18-1101.
[57] P. Rajpurkar, J. Zhang, K. Lopyrev, P. Liang, SQuAD: 100,000+ questions for machine comprehension of text, in: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, Association for Computational Linguistics, Austin, Texas, 2016, pp. 2383â2392. URL: https://www.aclweb.org/anthology/D16-1264. doi:10.18653/v1/D16-1264.
[58] L. Bentivogli, I. Dagan, H. T. Dang, D. Giampiccolo, B. Magnini, The ï¬fth pascal recognizing textual entailment challenge, in: In Proc Text Analysis Conference (TACâ09, 2009.
[59] W. B. Dolan, C. Brockett, Automatically constructing a corpus of sen- tential paraphrases, in: Proceedings of the Third International Work-
31
shop on Paraphrasing (IWP2005), 2005. URL: https://www.aclweb.org/ anthology/I05-5002.
[60] D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, L. Specia, SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation, in: Proceedings of the 11th International Workshop on Se- mantic Evaluation (SemEval-2017), Association for Computational Lin- guistics, Vancouver, Canada, 2017, pp. 1â14. URL: https://www.aclweb. org/anthology/S17-2001. doi:10.18653/v1/S17-2001.
[61] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Brew, Huggingfaceâs transformers: State-of-the-art natural language processing, ArXiv abs/1910.03771 (2019).
32 | {
"id": "2004.02178"
} |
2004.03329 | MedDialog: Two Large-scale Medical Dialogue Datasets | Medical dialogue systems are promising in assisting in telemedicine to
increase access to healthcare services, improve the quality of patient care,
and reduce medical costs. To facilitate the research and development of medical
dialogue systems, we build two large-scale medical dialogue datasets:
MedDialog-EN and MedDialog-CN. MedDialog-EN is an English dataset containing
0.3 million conversations between patients and doctors and 0.5 million
utterances. MedDialog-CN is an Chinese dataset containing 1.1 million
conversations and 4 million utterances. To our best knowledge,
MedDialog-(EN,CN) are the largest medical dialogue datasets to date. The
dataset is available at https://github.com/UCSD-AI4H/Medical-Dialogue-System | http://arxiv.org/pdf/2004.03329 | Xuehai He, Shu Chen, Zeqian Ju, Xiangyu Dong, Hongchao Fang, Sicheng Wang, Yue Yang, Jiaqi Zeng, Ruisi Zhang, Ruoyu Zhang, Meng Zhou, Penghui Zhu, Pengtao Xie | cs.LG, cs.AI, cs.CL, stat.ML | null | null | cs.LG | 20200407 | 20200707 | 0 2 0 2
l u J 7 ] G L . s c [
2 v 9 2 3 3 0 . 4 0 0 2 : v i X r a
1â5
MedDialog: Two Large-scale Medical Dialogue Datasets
Xuehai He, Shu Chen, Zeqian Ju, Xiangyu Dong, Hongchao Fang, Sicheng Wang, Yue Yang, Jiaqi Zeng, Ruisi Zhang, Ruoyu Zhang, Meng Zhou, Penghui Zhu, Pengtao Xie UC San Diego [email protected]
Abstract Medical dialogue systems are promising in assisting in telemedicine to increase access to healthcare services, improve the quality of patient care, and reduce medical costs. To facilitate the research and development of medical dialogue systems, we build two large-scale medical dialogue datasets: MedDialog-EN and MedDialog-CN. MedDialog-EN is an English dataset containing 0.3 million conversations between patients and doctors and 0.5 million utterances. MedDialog-CN is an Chinese dataset containing 1.1 million conversations and 4 million utterances. To our best knowledge, MedDialog-(EN,CN) are the largest medical dialogue datasets to date. The dataset is available at https://github.com/UCSD-AI4H/ Medical-Dialogue-System
# 1. Introduction
Telemedicine refers to the practice of delivering patient care remotely, where doctors pro- vide medical consultations to patients using HIPAA compliant video-conferencing tools. As an important complement to traditional face-to-face medicine practiced physically in hos- pitals and clinics, telemedicine has a number of advantages. First, it increases access to care. For people living in medically under-served communities (e.g., rural areas) that are in shortage of clinicians, telemedicine enables them to receive faster and cheaper care com- pared with traveling over a long distance to visit a clinician. Second, it reduces healthcare cost. In a study1 by Jeï¬erson Health, it is shown that diverting patients from emergency departments with telemedicine can save more than $1,500 per visit. Third, telemedicine can improve quality of care. The study in (Pande and Morris, 2015) shows that telemedicine patients score lower for depression, anxiety, and stress, and have 38% fewer hospital admis- sions. Other advantages include improving patient engagement and satisfaction, improving provider satisfaction, etc. Please refer to (Wootton et al., 2017) for a more comprehensive review.
While telemedicine is promising, it has several limitations. First, it puts additional burden to physicians. In additional to practicing face-to-face medicine which already makes physicians highly occupied, physicians need to provide remote consultations in telemedicine, which further increases the risk of physician burnout. Second, diï¬erent from in-hospital patients, the progression of whose medical conditions can be easily tracked by clinicians,
1. https://www.healthleadersmedia.com/clinical-care/cost-savings-telemedicine-estimated-19-120-patient- visit
© X. He et al.
# MedDialog: Two Large-scale Medical Dialogue Datasets
remote patients are diï¬cult to track and monitor. To address such problems, there has been increasing research interests in developing artiï¬cial intelligence (AI) methods to assist in telemedicine. In particular, medical dialogue systems are being developed to server as âvirtual doctorsâ. These âvirtual doctorsâ are aimed to interact with patients via natural dialogues, asking about the medical conditions and history of patients and providing clinical advice. They can also proactively reach out to patients to ask about the progression of patientsâ conditions and provide timely interventions accordingly.
To build medical dialogue systems, a large collection of conversations between patients and doctors are needed as training data. Due to data privacy concerns, such data is very diï¬cult to obtain. The existing medical dialogue datasets are limited in size or biased to certain diseases, which cannot adequately serve the purpose to train medical dialogue systems that can achieve doctor-level intelligence and cover all specialities in medicine.
To address the limitations of existing datasets, we build two large-scale medical dialogue datasets: MedDialog-EN in English and MedDialog-CN in Chinese. MedDialog-EN contains 0.3 million patient-doctor consultations and 0.5 million utterances. MedDialog-CN contains 1.1 million consultations and 4 million utterances. Dialogs in these two datasets cover almost all specialities in medicine, ranging from internal medicine to family medicine and covers a wide spectrum of diseases, including cancer, pneumonia, etc. To our best knowledge, they are the largest medical dialogue datasets to date. The data is open to the public.
# 2. Datasets
# 2.1. MedDialog-EN
The MedDialog-EN dataset contains 257,454 English consultations between patients and doctors. The total number of utterances is 514,908: 257,454 from doctors and 257,454 from patients. Each consultation consists of two parts: (1) description of patientâs medical conditions; (2) conversation between patient and doctor. Figure 1 shows an exemplar con- sultation. The data is crawled from iclinic.com2 and healthcaremagic.com3, which are two online platforms of healthcare services, including symptom self-checker, video consultation, online chat with doctors, etc.
The consultations cover 51 categories of communities including diabetes, elderly prob- lems, pain management, etc. and 96 specialties including andrology, cardiology, nephrology, pharmacology, etc. The consultations are conducted from 2008 to 2020.
# 2.2. MedDialog-CN
The MedDialog-CN dataset contains 1,145,231 Chinese consultations between patients and doctors. The total number of utterances is 3,959,333: 2,179,008 from doctors and 1,780,325 from patients. Each consultation consists of three parts: (1) description of patientâs medical condition and history; (2) conversation between patient and doctor; (3) (optional) diagnosis and treatment suggestions given by the doctor. In the description of patientâs medical condition and history, the following ï¬elds are included: present disease, detailed description of present disease, what help is needed from the doctor, how long the disease has been,
2. https://www.icliniq.com/ 3. https://www.healthcaremagic.com/
2
MedDialog: Two Large-scale Medical Dialogue Datasets
# Description
| get mild left-sided chest pain with low Hb and vitamin B12 levels. Please help.
# Dialogue
Patient:
Hello doctor,
| am a 39-year-old woman. | have mild pain in the left side of the chest (below the neck and above the breast) and then sensation in the upper back for four days. It comes and goes. Sometimes it goes to the right side of the chest also. | had my ECG and blood test 6 months ago. ECG and blood sugar were normal. No hypertension but hemoglobin was 10 and Vitamin B 12 was below average. What can | do as in lockdown it is not possible to see the doctor as a person. Please help. Doctor:
Hello. | would like to ask you some more questions, do you have symptoms of acid peptic disease or GERD? Or any burning sensation in the epigastric region, the center of your chest (heartburn). If you do, it could also present as chest pain so | will guide you accordingly. | would also like to rule out any muscular pain, for which | will encourage you to take a muscle relaxant and see if it helps. Take tablet Muscoril (Thiocolchicoside 4 mg) one tablet once a day when you experience pain and let me know in the follow up in a couple of days to see if it relieves the pain. Like you said your Vitamin B12 levels are below normal, it could also be neuropathic pain for which | will only advise vitamin B12 supplements (tablet Methylcobalamin once a day for three months) or B complex supplements. You can also improve these levels with diet. Take a diet enriched with beef, liver, and chicken. Fish and shellfish such as trout, salmon, tuna fish, and clams. Fortified breakfast cereal. Low-fat milk, yogurt, cheese, and eggs. Since your Hb is also below the ideal levels, take iron-rich foods which are usually Vitamin B12 rich as well. So | will advise diet modification to incorporate these in your everyday routine or take supplements with iron and Vitamin B12 as well. Because anemia can also present with the said symptoms. Lastly, | will encourage you to reduce weight with exercise and changing your diet and switching to a low-fat diet with the addition of more fruits and vegetables to your diet. Try to start with at least 30-40 minutes of cardio workout every day and work it up from there according to your stamina and you will see the visible change that you feel. You will feel more active and fresh. Try to bring your BMI (body mass index) as close to the normal range as you can. Because obesity itself can cause countless problems as well. | hope this helps. ECG (electrocardiography). Do not lift heavyweights. Take a low fat, high fiber diet. After two days.
Figure 1: An exemplar consultation, which includes (1) description of medical conditions of the patient, (2) dialogue between doctor and patient.
3
# MedDialog: Two Large-scale Medical Dialogue Datasets
medications, allergies, and past disease. Figure 2 shows an exemplar consultation. In the conversation, there are cases that multiple consecutive utterances are from the same person (either doctor or patient) and these utterances were posted at diï¬erent time points. If we combine consecutive utterances from the same person into a single one, there are 3,209,660 utterances: 1,981,844 from doctors and 1,227,816 from patients. The data is crawled from haodf.com4, which is an online platform of healthcare services, including medical consultation, scheduling appointment with doctors, etc.
The consultations cover 29 broad categories of specialties including internal medicine, pediatrics, dentistry, etc. and 172 ï¬ne-grained specialties including cardiology, neurology, gastroenterology, urology, etc. The consultations are conducted from 2010 to 2020.
# 3. Advantages of our datasets
To our best knowledge, MedDialog-EN and MedDialog-CN are the largest English and Chinese medical dialog dataset respectively. They have the following advantages.
⢠Large number of conversations and utterances. MedDialog-EN has about 0.3 million conversations and 0.5 million utterances. MedDialog-CN has about 1.1 million conversations and 4 million utterances.
⢠Broad coverage of medical specialities. Consultations in MedDialog-EN are about 96 categories of specialties. Consultations in MedDialog-CN are about 29 broad categories of specialties and 172 ï¬ne-grained specialties.
⢠Diversity of the patients. The patients in MedDialog-EN are from all over the world, with diï¬erent nationalities, ethics, age, gender, occupation, education, income, etc. The patients in MedDialog-CN are from 31 provincial-level administrative divi- sions in China, with diï¬erent ethics, age, gender, occupation, education, income, etc. Such diversity greatly minimizes population biases in these two datasets.
# 4. Related Works
Table 1 shows a comparison of our dataset with several other medical dialogue datasets. The number of dialogs and diseases in our dataset are both much larger than those in other datasets.
Dataset Muzhi (Wei et al., 2018) Dxy (Xu et al., 2019) COVID-EN (Yang et al., 2020) COVID-CN (Yang et al., 2020) MedDialog-EN MedDialog-CN #dialogs #diseases 710 527 603 1,088 257,454 3,407,494 4 5 1 1 96 172
Table 1: Comparison with other datasets.
4. https://www.haodf.com/
4
MedDialog: Two Large-scale Medical Dialogue Datasets
# 5. Conclusions
To facilitate the research and development of medical dialogue systems that can potentially assist in telemedicine, we build two large-scale medical dialogue datasets. MedDialog-EN contains 0.3 million conversations between patients and doctors and 0.5 million utterances. MedDialog-CN contains 1.1 million conversations and 4 million utterances. The datasets are publicly available and are continuously growing.
# References
Reena L Pande and Michael Morris. Leveraging remote behavioral health interventions to improve medical outcomes and reduce costs. Am J Manag Care, 21(2):e000âe000, 2015.
Zhongyu Wei, Qianlong Liu, Baolin Peng, Huaixiao Tou, Ting Chen, Xuan-Jing Huang, Kam-Fai Wong, and Xiang Dai. Task-oriented dialogue system for automatic diagnosis. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 201â207, 2018.
Richard Wootton, John Craig, and Victor Patterson. Introduction to telemedicine. CRC Press, 2017.
Lin Xu, Qixian Zhou, Ke Gong, Xiaodan Liang, Jianheng Tang, and Liang Lin. End-to-end knowledge-routed relational dialogue system for automatic diagnosis. In Proceedings of the AAAI Conference on Artiï¬cial Intelligence, volume 33, pages 7346â7353, 2019.
Wenmian Yang, Guangtao Zeng, Bowen Tan, Zeqian Ju, Subrato Chakravorty, Xuehai He, Shu Chen, Xingyi Yang, Qingyang Wu, Zhou Yu, et al. On the generation of medical dialogues for covid-19. arXiv preprint arXiv:2005.05442, 2020.
5
# MedDialog: Two Large-scale Medical Dialogue Datasets
# Description of medical conditions and history
# Bua: RRR AAA, NAS BURKE.
(Disease: The baby's eyes are red and slightly ulcerated when becoming severe.) PMA > RRA, AFR, PENBACRE, BABS. AT REBRRAAUMARURT (Description of medical condition: The baby's eyes are red and itchy, scratched with hand, and slightly ulcerated when becoming severe. After using Burt's bee Res-Q ointment, it disappeared quickly but came out after two days.) PLR : ZRIRGAAEZS (Help needed: What's wrong with baby's red eyes?) RAZA: â-AW (Hong long the condition has been: Less than one month) We: (Allergies: No) BES | 7 (Past medical history: No) oI
# Dialogue
RA: RAE, MARANON. RIRCULANAN, BM: BAA, BARRE. (Doctor: Thank you for your trust. | have read the medical information in detail. Based on the existing information, is blepharitis. The picture is not very clear. Scratch it often, right?) A MESES BR, BBZATTN, bMS Rok (Patient: Drinks little amount of milk since birth, and the babyâs lips are always dry, and not drooling like other BE : DRAB ap aE (Doctor: Eyes have local arthritis.) iA : (Patient: Yes) RA : AP RDB ABR (Doctor: Use Tobramycin and Dexamethasone eye ointment twice a day) BA RVEZAS (Patient: What's going on?)B RA: SRAM BMRA (Doctor: Consider blepharitis or blepharitis) 1% it severe?) HA, PAPE. ALARAWAS (Doctor: At present, it is not severe. Try to take the medications for a few days first.) A : (Patient: OK) BRA: Ayre
ARSE?
# the diagnosis
# babies.)
(Doctor: Let me know how it works.)
# Diagnosis and suggestions
# PPB ERMH OR : MARA
(Summary of the condition and initial impressions: Blepharitis) BEBW BRA, ATRWRE-ABK, URKEER, DENNER. (Summary of recommendations: For local inflammation, use Tobramycin and Dexamethasone eye ointment eye ointment twice day, monitor the recovery, and go to the hospital if necessary.)
# a
Figure 2: An exemplar consultation, which includes (1) description of medical conditions and history of the patient, (2) dialogue between doctor and patient, and (3) diagnosis and treatment suggestions given by the doctor.
6 | {
"id": "2005.05442"
} |
2004.03685 | Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness? | With the growing popularity of deep-learning based NLP models, comes a need
for interpretable systems. But what is interpretability, and what constitutes a
high-quality interpretation? In this opinion piece we reflect on the current
state of interpretability evaluation research. We call for more clearly
differentiating between different desired criteria an interpretation should
satisfy, and focus on the faithfulness criteria. We survey the literature with
respect to faithfulness evaluation, and arrange the current approaches around
three assumptions, providing an explicit form to how faithfulness is "defined"
by the community. We provide concrete guidelines on how evaluation of
interpretation methods should and should not be conducted. Finally, we claim
that the current binary definition for faithfulness sets a potentially
unrealistic bar for being considered faithful. We call for discarding the
binary notion of faithfulness in favor of a more graded one, which we believe
will be of greater practical utility. | http://arxiv.org/pdf/2004.03685 | Alon Jacovi, Yoav Goldberg | cs.CL, cs.LG | Accepted to ACL 2020 | null | cs.CL | 20200407 | 20200427 | 0 2 0 2
r p A 7 2 ] L C . s c [
3 v 5 8 6 3 0 . 4 0 0 2 : v i X r a
# Towards Faithfully Interpretable NLP Systems: How should we deï¬ne and evaluate faithfulness?
# Alon Jacovi Bar Ilan University [email protected]
# Yoav Goldberg Bar Ilan University and Allen Institute for AI [email protected]
# Abstract
With the growing popularity of deep-learning based NLP models, comes a need for inter- pretable systems. But what is interpretability, and what constitutes a high-quality interpreta- tion? In this opinion piece we reï¬ect on the current state of interpretability evaluation re- search. We call for more clearly differentiat- ing between different desired criteria an inter- pretation should satisfy, and focus on the faith- fulness criteria. We survey the literature with respect to faithfulness evaluation, and arrange the current approaches around three assump- tions, providing an explicit form to how faith- fulness is âdeï¬nedâ by the community. We provide concrete guidelines on how evaluation of interpretation methods should and should not be conducted. Finally, we claim that the current binary deï¬nition for faithfulness sets a potentially unrealistic bar for being considered faithful. We call for discarding the binary no- tion of faithfulness in favor of a more graded one, which we believe will be of greater prac- tical utility.
# Introduction
Fueled by recent advances in deep-learning and language processing, NLP systems are increasingly being used for prediction and decision-making in many ï¬elds (Vig and Belinkov, 2019), including sensitive ones such as health, commerce and law (Fort and Couillault, 2016). Unfortunately, these highly ï¬exible and highly effective neural models are also opaque. There is therefore a critical need for explaining learning-based modelsâ decisions.
The emerging research topic of interpretability or explainability1 has grown rapidly in recent years. Unfortunately, not without growing pains.
1Despite ï¬ne-grained distinctions between the terms, within the scope of this work we use the terms âinterpretabilityâ and âexplainabilityâ interchangeably.
One such pain is the challenge of deï¬ningâand evaluatingâwhat constitutes a quality interpreta- tion. Current approaches deï¬ne interpretation in a rather ad-hoc manner, motivated by practical use- cases and applications. However, this view often fails to distinguish between distinct aspects of the interpretationâs quality, such as readability, plausi- bility and faithfulness (Herman, 2017).2 We argue (§2, §5) such conï¬ation is harmful, and that faith- fulness should be deï¬ned and evaluated explicitly, and independently from plausibility.
Our main focus is the evaluation of the faith- fulness of an explanation. Intuitively, a faithful interpretation is one that accurately represents the reasoning process behind the modelâs prediction. We ï¬nd this to be a pressing issue in explainabil- ity: in cases where an explanation is required to be faithful, imperfect or misleading evaluation can have disastrous effects.
While literature in this area may implicitly or explicitly evaluate faithfulness for speciï¬c expla- nation techniques, there is no consistent and for- mal deï¬nition of faithfulness. We uncover three assumptions that underlie all these attempts. By making the assumptions explicit and organizing the literature around them, we âconnect the dotsâ be-
2Unfortunately, the terms in the literature are not yet stan- dardized, and vary widely. âReadabilityâ and âplausibilityâ are also referred to as âhuman-interpretabilityâ and âpersuasive- nessâ, respectively (e.g., Lage et al. (2019); Herman (2017)). To our knowledge, the term âfaithful interpretabilityâ was coined in Harrington et al. (1985), reinforced by Ribeiro et al. (2016), and is, we believe, most commonly used (e.g., Gilpin et al. (2018); Wu and Mooney (2018); Lakkaraju et al. (2019)). Chakraborty et al. (2017) refers to this issue (more or less) as âaccountabilityâ. Sometimes referred to as how âtrustworthyâ (Camburu et al., 2019) or âdescriptiveâ (Carmona et al., 2015; Biecek, 2018) the interpretation is, or as âdescriptive accuracyâ (Murdoch et al., 2019). Also related to the âtransparencyâ (Baan et al., 2019), the âï¬delityâ (Guidotti et al., 2018) or the ârobustnessâ (Alvarez-Melis and Jaakkola, 2018) of the interpretation method. And frequently, simply âexplainabilityâ is inferred to require faithfulness by default.
tween seemingly distinct evaluation methods, and also provide a basis for discussion regarding the desirable properties of faithfulness (§6).
Finally, we observe a trend by which faithfulness is treated as a binary property, followed by showing that an interpretation method is not faithful. We claim that this is unproductive (§7), as the assump- tions are nearly impossible to satisfy fully, and it is all too easy to disprove the faithfulness of an interpretation method via a counter-example. What can be done? We argue for a more practical view of faithfulness, calling for a graded criteria that mea- sures the extent and likelihood of an interpretation to be faithful, in practice (§8). While we started to work in this area, we pose the exact formalization of these criteria, and concrete evaluations methods for them, as a central challenge to the community for the coming future.
# 2 Faithfulness vs. Plausibility
There is considerable research effort in attempting to deï¬ne and categorize the desiderata of a learned systemâs interpretation, most of which revolves around speciï¬c use-cases (Lipton, 2018; Guidotti et al., 2018, inter alia).
Two particularly notable criteria, each useful for a different purposes, are plausibility and faithful- ness. âPlausibilityâ refers to how convincing the interpretation is to humans, while âfaithfulnessâ refers to how accurately it reï¬ects the true reason- ing process of the model (Herman, 2017; Wiegreffe and Pinter, 2019).
Naturally, it is possible to satisfy one of these properties without the other. For example, con- sider the case of interpretation via post-hoc text generationâwhere an additional âgeneratorâ com- ponent outputs a textual explanation of the modelâs decision, and the generator is learned with super- vision of textual explanations (Zaidan and Eisner, 2008; Rajani et al., 2019; Strout et al., 2019). In this case, plausibility is the dominating property, while there is no faithfulness guarantee.
Despite the difference between the two criteria, many authors do not clearly make the distinction, and sometimes conï¬ate the two.3 Moreoever, the majority of works do not explicitly name the cri- teria under consideration, even when they clearly belong to one camp or the other.4
3E.g., Lundberg and Lee (2017); P¨orner et al. (2018); Wu and Mooney (2018).
4 E.g., Mohseni and Ragan (2018); Arras et al. (2016);
We argue that this conï¬ation is dangerous. For example, consider the case of recidivism prediction, where a judge is exposed to a modelâs prediction and its interpretation, and the judge believes the interpretation to reï¬ect the modelâs reasoning pro- cess. Since the interpretationâs faithfulness carries legal consequences, a plausible but unfaithful inter- pretation may be the worst-case scenario. The lack of explicit claims by research may cause misinfor- mation to potential users of the technology, who are not versed in its inner workings.5 Therefore, clear distinction between these terms is critical.
# Inherently Interpretable?
A distinction is often made between two methods of achieving interpretability: (1) interpreting existing models via post-hoc techniques; and (2) designing inherently interpretable models. Rudin (2018) ar- gues in favor of inherently interpretable models, which by design claim to provide more faithful in- terpretations than post-hoc interpretation of black- box models.
We warn against taking this argumentation at face-value: a method being âinherently inter- pretableâ is merely a claim that needs to be veriï¬ed before it can be trusted. Indeed, while attention mechanisms have been considered as âinherently in- terpretableâ (Ghaeini et al., 2018; Lee et al., 2017), recent work cast doubt regarding their faithfulness (Serrano and Smith, 2019; Jain and Wallace, 2019; Wiegreffe and Pinter, 2019).
# 4 Evaluation via Utility
While explanations have many different use-cases, such as model debugging, lawful guarantees or health-critical guarantees, one other possible use- case with particularly prominent evaluation litera- ture is Intelligent User Interfaces (IUI), via Human- Computer Interaction (HCI), of automatic models assisting human decision-makers. In this case, the goal of the explanation is to increase the degree of trust between the user and the system, giving the user more nuance towards whether the systemâs decision is likely correct, or not. In the general case, the ï¬nal evaluation metric is the performance of the user at their task (Abdul et al., 2018). For example, Feng and Boyd-Graber (2019) evaluate
Xiong et al. (2018); Weerts et al. (2019).
5As Kaur et al. (2019) concretely show, even experts are prone to overly trust the faithfulness of explanations, despite no guarantee.
various explanations of a model in a setting of trivia question answering.
However, in the context of faithfulness, we must warn against HCI-inspired evaluation, as well: in- creased performance in this setting is not in- dicative of faithfulness; rather, it is indicative of correlation between the plausibility of the ex- planations and the modelâs performance.
To illustrate, consider the following ï¬ctional case of a non-faithful explanation system, in an HCI evaluation setting: the explanation given is a heat-map of the textual input, attributing scores to various tokens. Assume the system explanations behave in the following way: when the output is correct, the explanation consists of random content words; and when the output is incorrect, it consists of random punctuation marks. In other words, the explanation is more likely to appear plausible when the model is correct, while at the same time not re- ï¬ecting the true decision process of the model. The user, convinced by the nicer-looking explanations, performs better using this system. However, the explanation consistently claimed random tokens to be highly relevant to the modelâs reasoning process. While the system is concretely useful, the claims given by the explanation do not reï¬ect the modelâs decisions whatsoever (by design).
While the above scenario is extreme, this misun- derstanding is not entirely unlikely, since any de- gree of correlation between plausibility and model performance will result in increased user perfor- mance, regardless of any notion of faithfulness.
# 5 Guidelines for Evaluating Faithfulness
We propose the following guidelines for evaluating the faithfulness of explanations. These guidelines address common pitfalls and sub-optimal practices we observed in the literature.
Be explicit in what you evaluate. Conï¬ating plausability and faithfulness is harmful. You should be explicit on which one of them you evaluate, and use suitable methodologies for each one. Of course, the same applies when designing interpre- tation techniquesâbe clear about which properties are being prioritized.
Faithfulness evaluation should not involve human-judgement on the quality of interpreta- tion. We note that: (1) humans cannot judge if an interpretation is faithful or not: if they understood the model, interpretation would be unnecessary;
(2) for similar reasons, we cannot obtain supervi- sion for this problem, either. Therefore, human judgement should not be involved in evaluation for faithfulness, as human judgement measures plaus- ability.
Faithfulness evaluation should not involve human-provided gold labels. We should be able to interpret incorrect model predictions, just the same as correct ones. Evaluation methods that rely on gold labels are inï¬uenced by human priors on what should the model do, and again push the evaluation in the direction of plausability.
Do not trust âinherent interpretabilityâ claims. Inherent interpretability is a claim until proven oth- erwise. Explanations provided by âinherently in- terpretableâ models must be held to the same stan- dards as post-hoc interpretation methods, and be evaluated for faithfulness using the same set of evaluation techniques.
Faithfulness evaluation of IUI systems should not rely on user performance. End-task user performance in HCI settings is merely indicative of correlation between plausibility and model perfor- mance, however small this correlation is. While im- portant to evaluate the utility of the interpretations for some use-cases, it is unrelated to faithfulness.
# 6 Deï¬ning Faithfulness
What does it mean for an interpretation method to be faithful? Intuitively, we would like the provided interpretation to reï¬ect the true reasoning process of the model when making a decision. But what is a reasoning process of a model, and how can reasoning processes be compared to each other?
Lacking a standard deï¬nition, different works evaluate their methods by introducing tests to mea- sure properties that they believe good interpreta- tions should satisfy. Some of these tests measure aspects of faithfulness. These ad-hoc deï¬nitions are often unique to each paper and inconsistent with each other, making it hard to ï¬nd commonalities. We uncover three assumptions that underlie all these methods, enabling us to organize the litera- ture along standardized axes, and relate seemingly distinct lines of work. Moreover, exposing the underlying assumptions enables an informed dis- cussion regarding their validity and merit (we leave such a discussion for future work, by us or others). These assumptions, to our knowledge, encapsu- late the current working deï¬nitions of faithfulness
used by the research community.
Assumption 1 (The Model Assumption). Two models will make the same predictions if and only if they use the same reasoning process.
An interpretation system is un- Corollary 1.1. faithful if it results in different interpretations of models that make the same decisions.
As demonstrated by a recent example concerning NLP models, it can be used for proof by counter- example. Theoretically, if all possible models which can perfectly mimic the modelâs decisions also provide the same interpretations, then they could be deemed faithful. Conversely, showing that two models provide the same results but dif- ferent interpretations, disprove the faithfulness of the method. Wiegreffe and Pinter (2019) show how these counter-examples can be derived with adversarial training of models which can mimic the original model, yet provide different explanations.6 Corollary 1.2. An interpretation is unfaithful if it results in different decisions than the model it interprets.
A more direct application of the Model Assump- tion is via the notion of ï¬delity (Guidotti et al., 2018; Lakkaraju et al., 2019). For cases in which the explanation is itself a model capable of making decisions (e.g., decision trees or rule lists (Sushil et al., 2018)), ï¬delity is deï¬ned as the degree to which the explanation model can mimic the original modelâs decisions (as an accuracy score). For cases where the explanation is not a computable model, Doshi-Velez and Kim (2017) propose a simple way of mapping explanations to decisions via crowd- sourcing, by asking humans to simulate the modelâs decision without any access to the model, and only access to the input and explanation (termed for- ward simulation). This idea is further explored and used in practice by Nguyen (2018).
Assumption 2 (The Prediction Assumption). On similar inputs, the model makes similar deci- sions if and only if its reasoning is similar.
Corollary 2. An interpretation system is unfaith- ful if it provides different interpretations for similar inputs and outputs.
Since the interpretation serves as a proxy for the modelâs âreasoningâ, it should satisfy the same con- straints. In other words, interpretations of similar
6We note that in context, Wiegreffe and Pinter also utilize the model assumption to show that some explanations do carry useful information on the modelâs behavior.
decisions should be similar, and interpretations of dissimilar decisions should be dissimilar.
This assumption is more useful to disprove the faithfulness of an interpretation rather than prove it, since a disproof requires ï¬nding appropriate cases where the assumption doesnât hold, where a proof would require checking a (very large) satisfactory quantity of examples, or even the entire input space. One recent discussion in the NLP community (Jain and Wallace, 2019; Wiegreffe and Pinter, 2019) concerns the use of this underlying assump- tion for evaluating attention heat-maps as expla- nations. The former attempts to provide different explanations of similar decisions per instance. The latter critiques the former and is based more heavily on the model assumption, described above.
Additionally, Kindermans et al. (2019) propose to introduce a constant shift to the input space, and evaluate whether the explanation changes signiï¬- cantly as the ï¬nal decision stays the same. Alvarez- Melis and Jaakkola (2018) formalize a generaliza- tion of this technique under the term interpretabil- ity robustness: interpretations should be invariant to small perturbations in the input (a direct conse- quence of the prediction assumption). Wolf et al. (2019) further expand on this notion as âconsis- tency of the explanation with respect to the modelâ. Unfortunately, robustness measures are difï¬cult to apply in NLP settings due to the discrete input.
Assumption 3 (The Linearity Assumption).7 Certain parts of the input are more important to the model reasoning than others. Moreover, the contributions of different parts of the input are in- dependent from each other.
Corollary 3. Under certain circumstances, heat- map interpretations can be faithful.
This assumption is employed by methods that consider heat-maps8 (e.g., attention maps) over the input as explanations, particularly popular in NLP. Heat-maps are claims about which parts of the in- put are more relevant than others to the modelâs decision. As such, we can design âstress testsâ to verify whether they uphold their claims.
One method proposed to do so is erasure, where the âmost relevantâ parts of the inputâaccording to the explanationâare erased from the input, in
7This assumption has gone through justiï¬ed scrutiny in recent work. As mentioned previously, we do not necessarily endorse it. Nevertheless, it is used in parts of the literature.
8Also referred to as feature-attribution explanations (Kim
et al., 2017).
expectation that the modelâs decision will change (Arras et al., 2016; Feng et al., 2018; Serrano and Smith, 2019). Otherwise, the âleast relevantâ parts of the input may be erased, in expectation that the modelâs decision will not change (Jacovi et al., 2018). Yu et al. (2019); DeYoung et al. (2019) propose two measures of comprehensiveness and sufï¬ciency as a formal generalization of erasure: as the degree by which the model is inï¬uenced by the removal of the high-ranking features, or by inclusion of solely the high-ranking features.
# Is Faithful Interpretation Impossible?
The aforementioned assumptions are currently uti- lized to evaluate faithfulness in a binary manner, whether an interpretation is strictly faithful or not. Speciï¬cally, they are most often used to show that a method is not faithful, by constructing cases in which the assumptions do not hold for the sug- gested method.9 In other words, there is a clear trend of proof via counter-example, for various interpretation methods, that they are not glob- ally faithful.
We claim that this is unproductive, as we ex- pect these various methods to consistently result in negative (not faithful) results, continuing the cur- rent trend. This follows because an interpretation functions as an approximation of the model or de- cisionâs true reasoning process, so it by deï¬nition loses information. By the pigeonhole principle, there will be inputs with deviation between inter- pretation and reasoning.
This is observed in practice, in numerous work that show adversarial behavior, or pathological be- haviours, that arise from the deeply non-linear and high-dimensional decision boundaries of current models.10 Furthermore, because we lack super- vision regarding which models or decisions are indeed mappable to human-readable concepts, we cannot ignore the approximation errors.
This poses a high bar for explanation methods to fulï¬ll, a bar which we estimate will not be over- come soon, if at all. What should we do, then, if we desire a system that provides faithful explanations?
9Whether for attention (Baan et al., 2019; Pruthi et al., 2019; Jain and Wallace, 2019; Serrano and Smith, 2019; Wiegreffe and Pinter, 2019), saliency methods (Alvarez-Melis and Jaakkola, 2018; Kindermans et al., 2019), or others (Ghor- bani et al., 2019; Feng et al., 2018).
10Kim et al. (2017); Feng et al. (2018, §6) discuss this point in the context of heat-map explanations.
# 8 Towards Better Faithfulness Criteria
We argue that a way out of this standstill is in a more practical and nuanced methodology for deï¬n- ing and evaluating faithfulness. We propose the fol- lowing challenge to the community: We must de- velop formal deï¬nition and evaluation for faith- fulness that allows us the freedom to say when a method is sufï¬ciently faithful to be useful in practice.
We note two possible approaches to this end:
1. Across models and tasks: The degree (as grayscale) of faithfulness at the level of spe- ciï¬c models and tasks. Perhaps some models or tasks allow sufï¬ciently faithful interpreta- tion, even if the same is not true for others.11 For example, the method may not be faithful for some question-answering task, but faithful for movie review sentiment, perhaps based on various syntactic and semantic attributes of those tasks.
2. Across input space: The degree of faithful- ness at the level of subspaces of the input space, such as neighborhoods of similar in- puts, or singular inputs themselves. If we are able to say with some degree of conï¬dence whether a speciï¬c decisionâs explanation is faithful to the model, even if the interpretation method is not considered universally faithful, it can be used with respect to those speciï¬c areas or instances only.
# 9 Conclusion
The opinion proposed in this paper is two-fold:
First, interpretability evaluation often conï¬ates evaluating faithfulness and plausibility together. We should tease apart the two deï¬nitions and focus solely on evaluating faithfulness without any super- vision or inï¬uence of the convincing power of the interpretation.
Second, faithfulness is often evaluated in a bi- nary âfaithful or not faithfulâ manner, and we be- lieve strictly faithful interpretation is a âunicornâ which will likely never be found. We should instead evaluate faithfulness on a more nuanced âgrayscaleâ that allows interpretations to be useful even if they are not globally and deï¬nitively faithful.
11As noted by Wiegreffe and Pinter (2019); Vashishth et al. (2019), although in the context of attention solely.
# Acknowledgements
We thank Yanai Elazar for welcome input on the presentation and organization of the paper. We also thank the reviewers for additional feedback and pointing to relevant literature in HCI and IUI.
This project has received funding from the Eu- ropoean Research Council (ERC) under the Eu- ropoean Unionâs Horizon 2020 research and inno- vation programme, grant agreement No. 802774 (iEXTRACT).
# References
Ashraf M. Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, and Mohan S. Kankanhalli. 2018. Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. In Proceedings of the 2018 CHI Conference on Hu- man Factors in Computing Systems, CHI 2018, Mon- treal, QC, Canada, April 21-26, 2018, page 582. ACM.
David Alvarez-Melis and Tommi S. Jaakkola. 2018. interpretability methods. On the robustness of CoRR, abs/1806.08049.
Leila Arras, Franziska Horn, Gr´egoire Montavon, Klaus-Robert M¨uller, and Wojciech Samek. 2016. âwhat is relevant in a text document?â: An in- CoRR, terpretable machine learning approach. abs/1612.07843.
Joris Baan, Maartje ter Hoeve, Marlies van der Wees, Anne Schuth, and Maarten de Rijke. 2019. Do trans- former attention heads provide transparency in ab- stractive summarization? CoRR, abs/1907.00570.
Przemyslaw Biecek. 2018. DALEX: explainers for complex predictive models in R. J. Mach. Learn. Res., 19:84:1â84:5.
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob Foerster, Thomas Lukasiewicz, and Phil Blunsom. 2019. Can i trust the explainer? verifying post-hoc explanatory methods.
Vicente Iv´an S´anchez Carmona, Tim Rockt¨aschel, Se- bastian Riedel, and Sameer Singh. 2015. Towards extracting faithful and descriptive representations of latent variable models. In AAAI Spring Symposia.
Supriyo Chakraborty, Richard Tomsett, Ramya Raghavendra, Daniel Harborne, Moustafa Alzantot, Federico Cerutti, Mani Srivastava, Alun Preece, Simon Julier, Raghuveer M Rao, et al. 2017. In- terpretability of deep learning models: a survey In 2017 IEEE SmartWorld, Ubiquitous of results. Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communica- Internet tions, Cloud & Big Data Computing,
of People and Smart City Innovation (Smart- World/SCALCOM/UIC/ATC/CBDCom/IOP/SCI), pages 1â6. IEEE.
Jay DeYoung, Sarthak Jain, Nazneen Fatema Rajani, Eric Lehman, Caiming Xiong, Richard Socher, and Byron C. Wallace. 2019. Eraser: A benchmark to evaluate rationalized nlp models.
Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.
Shi Feng and Jordan Boyd-Graber. 2019. What can ai do for me? evaluating machine learning interpreta- tions in cooperative play. In Proceedings of the 24th International Conference on Intelligent User Inter- faces, IUI 19, page 229239, New York, NY, USA. Association for Computing Machinery.
Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan L. Boyd-Graber. 2018. Pathologies of neural models make interpretation In Proceedings of the 2018 Conference difï¬cult. on Empirical Methods in Natural Language Process- ing, Brussels, Belgium, October 31 - November 4, 2018, pages 3719â3728. Association for Computa- tional Linguistics.
Kar¨en Fort and Alain Couillault. 2016. Yes, we care! results of the ethics and natural language processing surveys. In LREC.
Reza Ghaeini, Xiaoli Z. Fern, and Prasad Tadepalli. 2018. Interpreting recurrent and attention-based neural models: a case study on natural language in- ference. CoRR, abs/1808.03894.
Amirata Ghorbani, Abubakar Abid, and James Zou. 2019. Interpretation of neural networks is fragile. In Proceedings of the AAAI Conference on Artiï¬cial Intelligence, volume 33, pages 3681â3688.
Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Ba- jwa, Michael Specter, and Lalana Kagal. 2018. Ex- plaining explanations: An overview of interpretabil- ity of machine learning. In 2018 IEEE 5th Interna- tional Conference on data science and advanced an- alytics (DSAA), pages 80â89. IEEE.
Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A survey of methods for explaining black box models. ACM Comput. Surv., 51(5):93:1â93:42.
L.A. Harrington, M.D. Morley, A. ËScedrov, and S.G. Simpson. 1985. Harvey Friedmanâs Research on the Foundations of Mathematics. Studies in Logic and the Foundations of Mathematics. Elsevier Science.
Bernease Herman. 2017. The promise and peril of hu- man evaluation for model interpretability. CoRR, abs/1711.07414. Withdrawn.
Alon Jacovi, Oren Sar Shalom, and Yoav Goldberg. 2018. Understanding convolutional neural networks In Proceedings of the 2018 for text classiï¬cation. EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 56â65.
Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, NAACL-HLT 2019, Minneapo- lis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 3543â3556. Association for Computational Linguistics.
Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna M. Wallach, and Jennifer Wortman Vaughan. 2019. Interpreting interpretability: Under- standing data scientists use of interpretability tools for machine learning.
Been Kim, Martin Wattenberg, Justin Gilmer, Car- rie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. 2017. Interpretability beyond feature attri- bution: Quantitative testing with concept activation vectors (tcav).
Julius Ade- bayo, Maximilian Alber, Kristof T. Sch¨utt, Sven D¨ahne, Dumitru Erhan, and Been Kim. 2019. The (un)reliability of saliency methods. In Woj- ciech Samek, Gr´egoire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert M¨uller, edi- tors, Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, volume 11700 of Lec- ture Notes in Computer Science, pages 267â280. Springer.
Jeffrey He, Menaka Narayanan, Been Kim, Sam Gershman, and Fi- nale Doshi-Velez. 2019. An evaluation of the CoRR, human-interpretability of explanation. abs/1902.00006.
Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. 2019. Faithful and customizable ex- planations of black box models. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019, Honolulu, HI, USA, January 27- 28, 2019, pages 131â138. ACM.
Jaesong Lee, Joong-Hwi Shin, and Jun-Seok Kim. Interactive visualization and manipulation 2017. of attention-based neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 121â126, Copenhagen, Den- mark. Association for Computational Linguistics.
Zachary C. Lipton. 2018. The mythos of model inter- pretability. Commun. ACM, 61(10):36â43.
Scott M. Lundberg and Su-In Lee. 2017. A uniï¬ed In Ad- approach to interpreting model predictions. vances in Neural Information Processing Systems
30: Annual Conference on Neural Information Pro- cessing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 4765â4774.
Sina Mohseni and Eric D. Ragan. 2018. A human- grounded evaluation benchmark for local explana- tions of machine learning. CoRR, abs/1801.05075.
W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. 2019. Interpretable machine learning: deï¬nitions, methods, and applica- tions. ArXiv, abs/1901.04592.
Dong Nguyen. 2018. Comparing automatic and hu- man evaluation of local explanations for text clas- In Proceedings of the 2018 Conference siï¬cation. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1069â 1078, New Orleans, Louisiana. Association for Com- putational Linguistics.
Nina P¨orner, Hinrich Sch¨utze, and Benjamin Roth. 2018. Evaluating neural network explanation meth- ods using hybrid documents and morphological pre- diction. CoRR, abs/1801.06422.
Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Gra- ham Neubig, and Zachary C. Lipton. 2019. Learn- ing to deceive with attention-based explanations. CoRR, abs/1909.07913.
Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense rea- soning. CoRR, abs/1906.02361.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. âWhy Should I Trust You?â: Ex- plaining the predictions of any classiï¬er. In Proceed- ings of the 22Nd ACM SIGKDD International Con- ference on Knowledge Discovery and Data Mining, KDD â16, pages 1135â1144, New York, NY, USA. ACM.
Cynthia Rudin. 2018. Please stop explaining black CoRR, box models for high stakes decisions. abs/1811.10154.
Soï¬a Serrano and Noah A. Smith. 2019. Is attention In Proceedings of the 57th Confer- interpretable? ence of the Association for Computational Linguis- tics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 2931â2951.
Julia Strout, Ye Zhang, and Raymond J. Mooney. 2019. Do human rationales improve machine expla- nations? CoRR, abs/1905.13714.
Madhumita Sushil, Simon ËSuster, and Walter Daele- mans. 2018. Rule induction for global explana- tion of trained models. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 82â97, Brussels, Belgium. Association for Computational Linguistics.
Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Atten- and Manaal Faruqui. 2019. Tomar, CoRR, tion interpretability across NLP tasks. abs/1909.11218.
Jesse Vig and Yonatan Belinkov. 2019. Analyzing the structure of attention in a transformer language model. CoRR, abs/1906.04284.
Hilde J. P. Weerts, Werner van Ipenburg, and Mykola A human-grounded evalu- CoRR, Pechenizkiy. 2019. ation of SHAP for alert processing. abs/1907.03324.
Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not explanation. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing, EMNLP- IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 11â20. Association for Computational Linguistics.
Lior Wolf, Tomer Galanti, and Tamir Hazan. 2019. A formal approach to explainability. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, AIES 2019, Honolulu, HI, USA, January 27- 28, 2019, pages 255â261. ACM.
Jialin Wu and Raymond J. Mooney. 2018. Faithful mul- timodal explanation for visual question answering. CoRR, abs/1809.02805.
Wenting Xiong, Iftitahu Niâmah, Juan M. G. Huesca, Werner van Ipenburg, Jan Veldsink, and Mykola Pechenizkiy. 2018. Looking deeper into deep learning model: Attribution-based explanations of textcnn. CoRR, abs/1811.03970.
Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S. Jaakkola. 2019. Rethinking cooperative rationaliza- tion: Introspective extraction and complement con- trol. CoRR, abs/1910.13294.
Omar Zaidan and Jason Eisner. 2008. Modeling anno- tators: A generative approach to learning from an- notator rationales. In Proceedings of the 2008 Con- ference on Empirical Methods in Natural Language Processing, pages 31â40, Honolulu, Hawaii. Associ- ation for Computational Linguistics. | {
"id": "1702.08608"
} |
2004.02709 | Evaluating Models' Local Decision Boundaries via Contrast Sets | Standard test sets for supervised learning evaluate in-distribution
generalization. Unfortunately, when a dataset has systematic gaps (e.g.,
annotation artifacts), these evaluations are misleading: a model can learn
simple decision rules that perform well on the test set but do not capture a
dataset's intended capabilities. We propose a new annotation paradigm for NLP
that helps to close systematic gaps in the test data. In particular, after a
dataset is constructed, we recommend that the dataset authors manually perturb
the test instances in small but meaningful ways that (typically) change the
gold label, creating contrast sets. Contrast sets provide a local view of a
model's decision boundary, which can be used to more accurately evaluate a
model's true linguistic capabilities. We demonstrate the efficacy of contrast
sets by creating them for 10 diverse NLP datasets (e.g., DROP reading
comprehension, UD parsing, IMDb sentiment analysis). Although our contrast sets
are not explicitly adversarial, model performance is significantly lower on
them than on the original test sets---up to 25\% in some cases. We release our
contrast sets as new evaluation benchmarks and encourage future dataset
construction efforts to follow similar annotation processes. | http://arxiv.org/pdf/2004.02709 | Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, Ben Zhou | cs.CL | null | null | cs.CL | 20200406 | 20201001 | 0 2 0 2
t c O 1 ] L C . s c [
2 v 9 0 7 2 0 . 4 0 0 2 : v i X r a
Evaluating Modelsâ Local Decision Boundaries via Contrast Sets Matt Gardner*® Yoav Artziâ âVictoria Basmova®* Jonathan Berant®* Ben Bogin® Sihao Chenâ Pradeep Dasigi® Dheeru Dua7 Yanai Elazar°* Ananth Gottumukkala® Nitish Guptaâ Hanna Hajishirzi®* Gabriel Iharco* Daniel Khashabi® Kevin Lin* Jiangming Liu®? Nelson F. Liu! Phoebe Mulcaire* Qiang Ning® Sameer Singhâ Noah A. Smith°* Sanjay Subramanian® Reut Tsarfaty°* Eric Wallacet Ally Zhang" Ben Zhou? Allen Institute for AI "Cornell University *Bar-Ilan University *Tel-Aviv University âUniversity of Pennsylvania âUniversity of Washington °UC Irvine +UC Berkeley âUniversity of Edinburgh âStanford University
# â¦Allen Institute for AI
# â£Bar-Ilan University
# +UC Berkeley
# â University of Edinburgh [email protected]
# Abstract
Standard test sets for supervised learning eval- uate in-distribution generalization. Unfortu- nately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple deci- sion rules that perform well on the test set but do not capture the abilities a dataset is intended to test. We propose a more rigorous annotation paradigm for NLP that helps to close system- atic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test in- stances in small but meaningful ways that (typ- ically) change the gold label, creating contrast sets. Contrast sets provide a local view of a modelâs decision boundary, which can be used to more accurately evaluate a modelâs true lin- guistic capabilities. We demonstrate the efï¬- cacy of contrast sets by creating them for 10 di- verse NLP datasets (e.g., DROP reading com- prehension, UD parsing, and IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is signiï¬cantly lower on them than on the origi- nal test setsâup to 25% in some cases. We re- lease our contrast sets as new evaluation bench- marks and encourage future dataset construc- tion efforts to follow similar annotation pro- cesses.
Original Example: Two similarly-colored and similarly-posed chow dogs are face to face in one image. Example Textual Perturbations: Two similarly-colored and similarly-posed cats are face to face in one image. Three similarly-colored and similarly-posed chow dogs are face to face in one image. Two differently-colored but similarly-posed chow dogs are face to face in one image. Example Image Perturbation: Two similarly-colored and similarly-posed chow dogs are face to face in one image.
Figure 1: An example contrast set for NLVR2 (Suhr and Artzi, 2019). The label for the original example is TRUE and the label for all of the perturbed exam- ples is FALSE. The contrast set allows probing of a modelâs decision boundary local to examples in the test set, which better evaluates whether the model has cap- tured the relevant phenomena than standard metrics on i.i.d. test data.
# Introduction
Progress in natural language processing (NLP) has long been measured with standard benchmark datasets (e.g., Marcus et al., 1993). These bench- marks help to provide a uniform evaluation of new modeling developments. However, recent work shows a problem with this standard evaluation paradigm based on i.i.d. test sets: datasets often
have systematic gaps (such as those due to various kinds of annotator bias) that (unintentionally) al- low simple decision rules to perform well on test data (Chen et al., 2016; Gururangan et al., 2018; Geva et al., 2019). This is strikingly evident when models achieve high test accuracy but fail on sim- ple input perturbations (Jia and Liang, 2017; Feng et al., 2018; Ribeiro et al., 2018a), challenge ex- amples (Naik et al., 2018), and covariate and label shifts (Ben-David et al., 2010; Shimodaira, 2000; Lipton et al., 2018).
Matt Gardner led the project. All other authors are
listed in alphabetical order.
To more accurately evaluate a modelâs true ca- pabilities on some task, we must collect data that
ï¬lls in these systematic gaps in the test set. To ac- complish this, we expand on long-standing ideas of constructing minimally-constrastive examples (e.g. Levesque et al., 2011). We propose that dataset authors manually perturb instances from their test set, creating contrast sets which characterize the correct decision boundary near the test instances (Section 2). Following the dataset construction process, one should make small but (typically) label-changing modiï¬cations to the existing test instances (e.g., Figure 1). These perturbations should be small, so that they preserve whatever lexical/syntactic artifacts are present in the original example, but change the true label. They should be created without a model in the loop, so as not to bias the contrast sets towards quirks of particular models. Having a set of contrasting perturbations for test instances allows for a consistency metric that measures how well a modelâs decision bound- ary aligns with the âcorrectâ decision boundary around each test instance.
Perturbed test sets only need to be large enough to draw substantiated conclusions about model be- havior and thus do not require undue labor on the original dataset authors. We show that using about a person-week of work can yield high-quality per- turbed test sets of approximately 1000 instances for many commonly studied NLP benchmarks, though the amount of work varies greatly (Section 3).
We apply this annotation paradigm to a diverse set of 10 existing NLP datasetsâincluding visual reasoning, reading comprehension, sentiment anal- ysis, and syntactic parsingâto demonstrate its wide applicability and efï¬cacy (Section 4). Al- though contrast sets are not intentionally adversar- ial, state-of-the-art models perform dramatically worse on our contrast sets than on the original test sets, especially when evaluating consistency. We believe that contrast sets provide a more accurate reï¬ection of a modelâs true performance, and we re- lease our datasets as new benchmarks.1 We recom- mend that creating contrast sets become standard practice for NLP datasets.
# 2 Contrast Sets
# 2.1 The Problem
We ï¬rst give a sketch of the problem that contrast sets attempt to solve in a toy two-dimensional clas- siï¬cation setting as shown in Figure 2. Here, the
1All of our new test sets are available at https://allennlp. org/contrast-sets.
(a) A two-dimensional dataset that requires a complex decision boundary to achieve high accuracy.
(b) If the same data distribution is instead sampled with systematic gaps (e.g., due to annotator bias), a simple decision boundary can perform well on i.i.d. test data (shown outlined in pink).
(c) Since ï¬lling in all gaps in the distribution is infeasi- ble, a contrast set instead ï¬lls in a local ball around a test instance to evaluate the modelâs decision boundary.
Figure 2: An illustration of how contrast sets provide a more comprehensive model evaluation when datasets have systematic gaps.
true underlying data distribution requires a com- plex decision boundary (Figure 2a). However, as is common in practice, our toy dataset is rife with sys- ematic gaps (e.g., due to annotator bias, repeated patterns, etc.). This causes simple decision bound- aries to emerge (Figure 2b). And, because our biased dataset is split i.i.d. into train and test sets, this simple decision boundary will perform well on est data. Ideally, we would like to fill in all of a datasetâs systematic gaps, however, this is usually impossible. Instead, we create a contrast set: a col- lection of instances tightly clustered in input space around a single test instance, or pivot (Figure 2c; an e-ball in our toy example). This contrast set allows us to measure how well a modelâs decision bound- ary aligns with the correct decision boundary local to the pivot. In this case, the contrast set demon- strates that the modelâs simple decision boundary is incorrect. We repeat this process around numerous pivots to form entire evaluation datasets.
When we move from toy settings to complex NLP tasks, the precise nature of a âsystematic gapâ in the data becomes harder to deï¬ne. Indeed, the geometric view in our toy examples does not corre- spond directly to expertsâ perception of data; there are many ways to âlocally perturbâ natural lan-
# Dataset
# Dataset
# Original Instance
# Contrastive Instance (color = edit)
IMDb Hardly one to be faulted for his ambition or his vi- sion, it is genuinely unexpected, then, to see all Parkâs effort add up to so very little. . . . The premise is promising, gags are copious and offbeat humour abounds but it all fails miserably to create any mean- ingful connection with the audience. (Label: Negative) Hardly one to be faulted for his ambition or his vision, here we see all Parkâs effort come to fruition. . . . The premise is perfect, gags are hilarious and offbeat humour abounds, and it creates a deep connection with the audience. (Label: Positive) MATRES Colonel Collins followed a normal progression once she was picked as a NASA astronaut. (âpickedâ was before âfollowedâ) Colonel Collins followed a normal progression before she was picked as a NASA astronaut. (âpickedâ was after âfollowedâ) UD English They demanded talks with local US commanders. I attach a paper on gas storage value modeling. I need to get a job at the earliest opportunity. They demanded talks with great urgency. I attach a paper on my own initiative. I need to get a job at House of Pies. PERSPECTRUM DROP Claim: Should uniforms be worn at school. Perspective: School uniforms emphasize the socio-economic divisions they are supposed to eliminate. Label: Against Context: In the spring of 1625 the Spanish re- gained Bahia in Brazil and Breda in the Nether- lands from the Dutch. In the autumn they repulsed the English at Cadiz. Question: What event happened ï¬rst, the Span- ish repulsed the English at Cadiz or the Spanish regained Bahia? Claim: Should uniforms be banned at school. Perspective: School uniforms emphasize the socio-economic divisions they are supposed to eliminate. Label: For Context: In the spring of 1625 the Spanish re- gained Bahia in Brazil and Breda in the Nether- lands from the Dutch. In winter the year earlier they had repulsed the English at Cadiz. Question: What event happened ï¬rst, the Span- ish repulsed the English at Cadiz or the Spanish regained Bahia? QUOREF Context: Matt Helm is a secret agent. His assign- ment is to stop the sinister Tung-Tze, armed with spy gadgets. Helm prevails with Gail by his side as he destroys Tung-Tze. Question: Who is armed with spy gadgets? Context: Matt Helm is a secret agent. His assign- ment is to stop the sinister Tung-Tze, even though he is armed with spy gadgets. Helm prevails with Gail by his side as he destroys Tung-Tze. Question: Who is armed with spy gadgets? MC-TACO Context: She renews in Ranchipur an acquain- tance with a former lover, Tom Ransome, now a dissolute alcoholic. Question: How frequently does Tom drink? Candidate Answer: Every other night Label: Likely Context: She renews in Ranchipur an acquain- tance with a former lover, Tom Ransome, who keeps very healthy habits. Question: How frequently does Tom drink? Candidate Answer: Every other night Label: Unlikely
Table 1: We create contrast sets for 10 datasets and show instances from seven of them here.
guage. We do not expect intuition, even of experts, to exhaustively reveal gaps.
Nevertheless, the presence of these gaps is well- documented (Gururangan et al., 2018; Poliak et al., 2018; Min et al., 2019), and Niven and Kao (2019) give an initial attempt at formally characterizing them. In particular, one common source is annota- tor bias from data collection processes (Geva et al., 2019). For example, in the SNLI dataset (Bowman et al., 2015), Gururangan et al. (2018) show that the words sleeping, tv, and cat almost never appear in an entailment example, either in the training set or the test set, though they often appear in contra- diction examples. This is not because these words are particularly important to the phenomenon of entailment; their absence in entailment examples is a systematic gap in the data that can be exploited by models to achieve artiï¬cially high test accuracy.
This is but one kind of systematic gap; there are also biases due to the writing styles of small groups of annotators (Geva et al., 2019), the distributional biases in the data that was chosen for annotation, as well as numerous other biases that are more subtle and harder to discern (Shah et al., 2020).
Completely removing these gaps in the initial data collection process would be ideal, but is likely impossibleâlanguage has too much inherent vari- ability in a very high-dimensional space. Instead, we use contrast sets to ï¬ll in gaps in the test data to give more thorough evaluations than what the original data provides.
# 2.2 Deï¬nitions
We begin by deï¬ning a decision boundary as a par- tition of some space into labels.2 This partition can be represented by the set of all points in the space with their associated labels: {(x, y)}. This deï¬ni- tion differs somewhat from the canonical deï¬nition, which is a collection of hypersurfaces that separate labels. There is a bijection between partitions and these sets of hypersurfaces in continuous spaces, however, so they are equivalent deï¬nitions. We choose to use the partition to represent the decision boundary as it makes it very easy to deï¬ne a local decision boundary and to generalize the notion to discrete spaces, which we deal with in NLP.
A local decision boundary around some pivot x is the set of all points xâ and their associated la- bels yâ that are within some distance ¢ of x. That is, a local decision boundary around x is the set {(x',yâ) | d(x,xâ) < e}. Note here that even though a âboundaryâ or âsurfaceâ is hard to visu- alize in a discrete input space, using this partition representation instead of hypersurfaces gives us a uniform definition of a local decision boundary in any input space; all that is needed is a distance function d.
A contrast set C(x) is any sample of points from a local decision boundary around x. In other words, C(«) consists of inputs xâ that are similar to x ac- cording to some distance function d. Typically these points are sampled such that yâ 4 y. To eval- uate a model using these contrast sets, we define the contrast consistency of a model to be whether it makes correct predictions y on every element in the set: all({g = yâ V(xâ, yâ) ⬠C(x)}). Since the points xâ were chosen from the local decision boundary, we expect contrast consistency on expert- built contrast sets to be a significantly more accu- rate evaluation of whether model predictions match the task definition than a random selection of input / output pairs.
# 2.3 Contrast sets in practice
Given these deï¬nitions, we now turn to the actual construction of contrast sets in practical NLP set- tings. There were two things left unspeciï¬ed in the deï¬nitions above: the distance function d to use in discrete input spaces, and the method for sampling from a local decision boundary. While there has been some work trying to formally characterize dis-
2In this discussion we are talking about the true decision boundary, not a modelâs decision boundary.
tances for adversarial robustness in NLP (Michel et al., 2019; Jia et al., 2019), we find it more useful in our setting to simply rely on expert judgments to generate a similar but meaningfully different xâ given x, addressing both the distance function and the sampling method.
Future work could try to give formal treatments of these issues, but we believe expert judgments are sufï¬cient to make initial progress in improving our evaluation methodologies. And while expert- crafted contrast sets can only give us an upper bound on a modelâs local alignment with the true decision boundary, an upper bound on local align- ment is often more informative than a potentially biased i.i.d. evaluation that permits artiï¬cially sim- ple decision boundaries. To give a tighter upper bound, we draw pivots x from some i.i.d. test set, and we do not provide i.i.d. contrast sets at training time, which could provide additional artiï¬cially simple decision boundaries to a model.
Figure 1 displays an example contrast set for the NLVR2 visual reasoning dataset (Suhr and Artzi, 2019). Here, both the sentence and the image are modiï¬ed in small ways (e.g., by changing a word in the sentence or ï¬nding a similar but different image) to make the output label change.
A contrast set is not a collection of adversarial examples (Szegedy et al., 2014). Adversarial ex- amples are almost the methodological opposite of contrast sets: they change the input such that a modelâs decision changes but the gold label does not (Jia and Liang, 2017; Wallace et al., 2019a). On the other hand, contrast sets are model-agnostic, constructed by experts to characterize whether a modelâs decision boundary locally aligns to the true decision boundary around some point. Doing this requires input changes that also induce changes to the gold label.
We recommend that the original dataset authorsâ the experts on the linguistic phenomena intended to be reï¬ected in their datasetâconstruct the contrast sets. This is best done by ï¬rst identifying a list of phenomena that characterize their dataset. In syntactic parsing, for example, this list might in- clude prepositional phrase attachment ambiguities, coordination scope, clausal attachment, etc. After the standard dataset collection process, the authors should sample pivots from their test set and perturb them according to the listed phenomena.
# 2.4 Design Choices of Contrast Sets
Here, we discuss possible alternatives to our ap- proach for constructing contrast sets and our rea- sons for choosing the process we did.
Post-hoc Construction of Contrast Sets Im- proving the evaluation for existing datasets well after their release is usually too late: new mod- els have been designed, research papers have been published, and the community has absorbed po- tentially incorrect insights. Furthermore, post-hoc contrast sets may be biased by existing models. We instead recommend that new datasets include contrast sets upon release, so that the authors can characterize beforehand when they will be satisï¬ed that a model has acquired the datasetâs intended ca- pabilities. Nevertheless, contrast sets constructed post-hoc are still better than typical i.i.d. test sets, and where feasible we recommend creating con- trast sets for existing datasets (as we do in this work).
Crowdsourcing Contrast Sets We recommend that the dataset authors construct contrast sets them- selves rather than using crowd workers. The orig- inal authors are the ones who best understand their datasetâs intended phenomena and the distinc- tion between in-distribution and out-of-distribution examplesâthese ideas can be difï¬cult to distill to non-expert crowd workers. Moreover, the effort to create contrast sets is a small fraction of the effort required to produce a new dataset in the ï¬rst place.
Automatic Construction of Contrast Sets Au- tomatic perturbations, such as paraphrasing with back-translation or applying word replacement rules, can ï¬ll in some parts of the gaps around a pivot (e.g., Ribeiro et al., 2018b, 2019). However, it is very challenging to come up with rules or other automated methods for pushing pivots across a de- cision boundaryâin most cases this presupposes a model that can already perform the intended task. We recommend annotators spend their time con- structing these types of examples; easier examples can be automated.
Adversarial Construction of Contrast Sets Some recent datasets are constructed using base- line models in the data collection process, either to ï¬lter out examples that existing models answer correctly (e.g., Dua et al., 2019; Dasigi et al., 2019) or to generate adversarial inputs (e.g., Zellers et al., 2018, 2019; Wallace et al., 2019b; Nie et al., 2019).
Unlike this line of work, we choose not to have a model in the loop because this can bias the data to the failures of a particular model (cf. Zellers et al., 2019), rather than generally characterizing the local decision boundary. We do think it is acceptable to use a model on a handful of initial perturbations to understand which phenomena are worth spending time on, but this should be separate from the ac- tual annotation processâobserving model outputs while perturbing data creates subtle, undesirable biases towards the idiosyncrasies of that model.
# 2.5 Limitations of Contrast Sets
Solely Negative Predictive Power Contrast sets only have negative predictive power: they reveal if a model does not align with the correct local deci- sion boundary but cannot conï¬rm that a model does align with it. This is because annotators cannot ex- haustively label all inputs near a pivot and thus a contrast set will necessarily be incomplete. How- ever, note that this problem is not unique to contrast setsâsimilar issues hold for the original test set as well as adversarial test sets (Jia and Liang, 2017), challenge sets (Naik et al., 2018), and input pertur- bations (Ribeiro et al., 2018a; Feng et al., 2018). See Feng et al. (2019) for a detailed discussion of how dataset analysis methods only have negative predictive power.
Dataset-Speciï¬c Instantiations The process for creating contrast sets is dataset-speciï¬c: although we present general guidelines that hold across many tasks, experts must still characterize the type of phenomena each individual dataset is intended to capture. Fortunately, the original dataset authors should already have thought deeply about such phenomena. Hence, creating contrast sets should be well-deï¬ned and relatively straightforward.
# 3 How to Create Contrast Sets
Here, we walk through our process for creating con- trast sets for three datasets. Examples are shown in Figure 1 and Table 1.
DROP DROP (Dua et al., 2019) is a reading com- prehension dataset that is intended to cover com- positional reasoning over numbers in a paragraph, including ï¬ltering, sorting, and counting sets, and doing numerical arithmetic. The data has three main sources of paragraphs, all from Wikipedia articles: descriptions of American football games, descriptions of census results, and summaries of
wars. There are many common patterns used by the crowd workers that make some questions ar- tiï¬cially easy: 2 is the most frequent answer to How many. . . ? questions, questions asking about the ordering of events typically follow the linear order of the paragraph, and a large fraction of the questions do not require compositional reasoning.
Our strategy for constructing contrast sets for DROP was three-fold. First, we added more com- positional reasoning steps. The questions about American football passages in the original data very often had multiple reasoning steps (e.g., How many yards difference was there between the Bron- cosâ ï¬rst touchdown and their last?), but the ques- tions about the other passage types did not. We drew from common patterns in the training data and added additional reasoning steps to questions in our contrast sets. Second, we inverted the seman- tics of various parts of the question. This includes perturbations such as changing shortest to longest, later to earlier, as well as changing questions ask- ing for counts to questions asking for sets (How many countries. . . to Which countries. . . ). Finally, we changed the ordering of events. A large num- ber of questions about war paragraphs ask which of two events happened ï¬rst. We changed (1) the order the events were asked about in the question, (2) the order that the events showed up in the pas- sage, and (3) the dates associated with each event to swap their temporal order.
NLVR2 We next consider NLVR2, a dataset where a model is given a sentence about two pro- vided images and must determine whether the sen- tence is true (Suhr et al., 2019). The data collection process encouraged highly compositional language, which was intended to require understanding the re- lationships between objects, properties of objects, and counting. We constructed NLVR2 contrast sets by modifying the sentence or replacing one of the images with freely-licensed images from web searches. For example, we might change The left image contains twice the number of dogs as the right image to The left image contains three times the number of dogs as the right image. Similarly, given an image pair with four dogs in the left and two dogs in the right, we can replace individual im- ages with photos of variably-sized groups of dogs. The textual perturbations were often changes in quantiï¬ers (e.g., at least one to exactly one), enti- ties (e.g., dogs to cats), or properties thereof (e.g.,
orange glass to green glass). An example contrast set for NLVR2 is shown in Figure 1.
UD Parsing Finally, we discuss dependency parsing in the universal dependencies (UD) formal- ism (Nivre et al., 2016). We look at dependency parsing to show that contrast sets apply not only to modern âhigh-levelâ NLP tasks but also to long- standing linguistic analysis tasks. We ï¬rst chose a speciï¬c type of attachment ambiguity to target: the classic problem of prepositional phrase (PP) attachment (Collins and Brooks, 1995), e.g. We ate spaghetti with forks versus We ate spaghetti with meatballs. We use a subset of the English UD tree- banks: GUM (Zeldes, 2017), the English portion of LinES (Ahrenberg, 2007), the English portion of ParTUT (Sanguinetti and Bosco, 2015), and the dependency-annotated English Web Treebank (Sil- veira et al., 2014). We searched these treebanks for sentences that include a potentially structurally ambiguous attachment from the head of a PP to either a noun or a verb. We then perturbed these sentences by altering one of their noun phrases such that the semantics of the perturbed sentence required a different attachment for the PP. We then re-annotated these perturbed sentences to indicate the new attachment(s).
Summary While the overall process we recom- mend for constructing contrast sets is simple and uniï¬ed, its actual instantiation varies for each dataset. Dataset authors should use their best judg- ment to select which phenomena they are most interested in studying and craft their contrast sets to explicitly test those phenomena. Care should be taken during contrast set construction to ensure that the phenomena present in contrast sets are similar to those present in the original test set; the purpose of a contrast set is not to introduce new challenges, but to more thoroughly evaluate the original intent of the test set.
# 4 Datasets and Experiments
# 4.1 Original Datasets
We create contrast sets for 10 NLP datasets (full descriptions are provided in Section A):
NLVR2 (Suhr et al., 2019) ⢠IMDb sentiment analysis (Maas et al., 2011) ⢠MATRES Temporal RE (Ning et al., 2018) ⢠English UD parsing (Nivre et al., 2016) ⢠PERSPECTRUM (Chen et al., 2019) ⢠DROP (Dua et al., 2019)
Dataset # Examples # Sets Model Original Test Contrast Consistency NLVR2 994 479 LXMERT 76.4 61.1 (â15.3) 30.1 IMDb 488 488 BERT 93.8 84.2 (â9.6) 77.8 MATRES 401 239 CogCompTime2.0 73.2 63.3 (â9.9) 40.6 UD English 150 150 Biafï¬ne + ELMo 64.7 46.0 (â18.7) 17.3 PERSPECTRUM 217 217 RoBERTa 90.3 85.7 (â4.6) 78.8 DROP 947 623 MTMSN 79.9 54.2 (â25.7) 39.0 QUOREF 700 415 XLNet-QA 70.5 55.4 (â15.1) 29.9 ROPES 974 974 RoBERTa 47.7 32.5 (â15.2) 17.6 BoolQ 339 70 RoBERTa 86.1 71.1 (â15.0) 59.0 MC-TACO 646 646 RoBERTa 38.0 14.0 (â24.0) 8.0
Table 2: Models struggle on the contrast sets compared to the original test sets. For each dataset, we use a (sometimes near) state-of-the-art model and evaluate it on the â# Examplesâ examples in the contrast sets (not including the original example). We report percentage accuracy for NLVR2, IMDb, PERSPECTRUM, MATRES, and BoolQ; F1 scores for DROP and QUOREF; Exact Match (EM) scores for ROPES and MC-TACO; and unla- beled attachment score on modiï¬ed attachments for the UD English dataset. We also report contrast consistency: the percentage of the â# Setsâ contrast sets for which a modelâs predictions are correct for all examples in the set (including the original example). More details on datasets, models, and metrics can be found in §A and §B.
Quoref (Dasigi et al., 2019) ⢠ROPES (Lin et al., 2019) ⢠BoolQ (Clark et al., 2019) ⢠MC-TACO (Zhou et al., 2019) We choose these datasets because they span a variety of tasks (e.g., reading comprehension, sen- timent analysis, visual reasoning) and input-output formats (e.g., classiï¬cation, span extraction, struc- tured prediction). We include high-level tasks for which dataset artifacts are known to be prevalent, as well as longstanding formalism-based tasks, where data artifacts have been less of an issue (or at least have been less well-studied).
# 4.2 Contrast Set Construction
The contrast sets were constructed by NLP re- searchers who were deeply familiar with the phe- nomena underlying the annotated dataset; in most cases, these were the original dataset authors. Our contrast sets consist of up to about 1,000 total ex- amples and average 1â5 examples per contrast set (Table 2). We show representative examples from the different contrast sets in Table 1. For most datasets, the average time to perturb each exam- ple was 1â3 minutes, which translates to approxi- mately 17â50 hours of work to create 1,000 exam- ples. However, some datasets, particularly those with complex output structures, took substantially
longer: each example for dependency parsing took an average of 15 minutes (see Appendix B for more details).
# 4.3 Models Struggle on Contrast Sets
For each dataset, we use a model that is at or near state-of-the-art performance. Most models in- volve ï¬ne-tuning a pretrained language model (e.g., ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), ROBERTA (Liu et al., 2019), XLNet (Yang et al., 2019), etc.) or applying a task-speciï¬c archi- tecture on top of one (e.g., Hu et al. (2019) add a DROP-speciï¬c model on top of BERT). We train each model on the original training set and evaluate it on both the original test set and our contrast sets. Existing models struggle on the contrast sets (Ta- ble 2), particularly when evaluating contrast con- sistency. Model performance degrades differently across datasets; however, note that these numbers are not directly comparable due to differences in dataset size, model architecture, contrast set design, etc. On IMDb and PERSPECTRUM, the model achieves a reasonably high consistency, suggesting that, while there is deï¬nitely still room for improve- ment, the phenomena targeted by those datasets are already relatively well captured by existing models. Of particular note is the very low consistency score for dependency parsing. The parser that we
use achieves 95.7% unlabeled attachment score on the English Penn Treebank (Dozat and Manning, 2017, trained with ELMo embeddings). A con- sistency score of 17.3 on a common attachment ambiguity suggests that this parser may not be as strong as common evaluations lead us to believe. Overall, our results suggest that models have âover- ï¬tâ to artifacts that are present in existing datasets; they achieve high test scores but do not completely capture a datasetâs intended phenomena.
# 4.4 Humans Succeed On Contrast Sets
An alternative explanation for why models fail on the contrast sets is that they are simply harder or noisier than regular test sets, i.e., humans would also perform worse on the contrast sets. We show that this is not the case. For four datasets, we choose at least 100 test instances and one corre- sponding contrast set instance (i.e., an example be- fore and after perturbation). We (the authors) test ourselves on these examples (ensuring that those who were tested were different from those who cre- ated the examples). Human performance is com- parable across the original test and contrasts set examples on these datasets (Table 3).
Dataset Original Test Contrast Set IMDb 94.3 93.9 (â0.4) PERSPECTRUM 91.5 90.3 (â1.2) QUOREF 95.2 88.4 (â6.8) ROPES 76.0 73.0 (â3.0)
Table 3: Humans achieve similar performance on the contrast sets and the original test sets. The metrics here are the same as those in Table 2.
# 4.5 Fine-Grained Analysis of Contrast Sets
Each example in the contrast sets can be labeled ac- cording to which particular phenomenon it targets. This allows automated error reporting. For exam- ple, for the MATRES dataset we tracked whether a perturbation changed appearance order, tense, or temporal conjunction words. These ï¬ne-grained la- bels show that the model does comparatively better at modeling appearance order (66.5% of perturbed examples correct) than temporal conjunction words (60.0% correct); see Appendix B.3 for full details. A similar analysis on DROP shows that MTMSN does substantially worse on event re-ordering (47.3 F1) than on adding compositional reasoning steps (67.5 F1). We recommend authors categorize their
perturbations up front in order to simplify future analyses and bypass some of the pitfalls of post-hoc error categorization (Wu et al., 2019).
Additionally, itâs worth discussing the depen- dency parsing result. The attachment decision that we targeted was between a verb, a noun, and a preposition. With just two reasonable attachment choices, a contrast consistency of 17.3 means that the model is almost always unable to change its attachment based on the content of the preposi- tional phrase. Essentially, in a trigram such as demanded talks with (Table 1), the model has a bias for whether demanded or talks has a stronger afï¬nity to with, and makes a prediction accordingly. Given that trigrams are rare and annotating parse trees is expensive, it is not clear that traditional evaluation metrics with i.i.d test sets would ever ï¬nd this problem. By robustly characterizing local decision boundaries, contrast sets surface errors that are very challenging to ï¬nd with other means.
# 5 Related Work
The fundamental idea of ï¬nding or creating data that is âminimally differentâ has a very long history. In linguistics, for instance, the term minimal pair is used to denote two words with different meaning that differ by a single sound change, thus demon- strating that the sound change is phonemic in that language (Pike, 1946). Many people have used this idea in NLP (see below), creating challenge sets or providing training data that is âminimally differentâ in some sense, and we continue this tradition. Our main contribution to this line of work, in addition to the resources that we have created, is giving a simple and intuitive geometric interpretation of âbiasâ in dataset collection, and showing that this long-standing idea of minimal data changes can be effectively used to solve this problem on a wide variety of NLP tasks. We additionally generalize the idea of a minimal pair to a set, and use a con- sistency metric, which we contend more closely aligns with what NLP researchers mean by âlan- guage understandingâ.
Training on Perturbed Examples Many previ- ous works have provided minimally contrastive examples on which to train models. Selsam et al. (2019), Tafjord et al. (2019), Lin et al. (2019), and Khashabi et al. (2020) designed their data collec- tion process to include contrastive examples. Data augmentation methods have also been used to miti- gate gender (Zhao et al., 2018), racial (Dixon et al.,
2018), and other biases (Kaushik et al., 2020) dur- ing training, or to introduce useful inductive bi- ases (Andreas, 2020).
Challenge Sets The idea of creating challeng- ing contrastive evaluation sets has a long his- tory (Levesque et al., 2011; Ettinger et al., 2017; Glockner et al., 2018; Naik et al., 2018; Isabelle et al., 2017). Challenge sets exist for various phe- nomena, including ones with âminimalâ edits sim- ilar to our contrast sets, e.g., in image caption- ing (Shekhar et al., 2017), machine translation (Sen- nrich, 2017; Burlot and Yvon, 2017; Burlot et al., 2018), and language modeling (Marvin and Linzen, 2018; Warstadt et al., 2019). Minimal pairs of ed- its that perturb gender or racial attributes are also useful for evaluating social biases (Rudinger et al., 2018; Zhao et al., 2018; Lu et al., 2018). Our key contribution over this prior work is in grouping per- turbed instances into a contrast set, for measuring local alignment of decision boundaries, along with our new, related resources. Additionally, rather than creating new data from scratch, contrast sets augment existing test examples to ï¬ll in systematic gaps. Thus contrast sets often require less effort to create, and they remain grounded in the original data distribution of some training set.
Since the initial publication of this paper, Shmid- man et al. have further demonstrated the utility of contrast sets by applying these ideas to the evalua- tion of morphological disambiguation in Hebrew.
Recollecting Test Sets Recht et al. (2019) cre- ate new test sets for CIFAR and ImageNet by closely following the procedure used by the origi- nal datasets authors; Yadav and Bottou (2019) per- form similar for MNIST. This line of work looks to evaluate whether reusing the exact same test set in numerous research papers causes the community to adaptively âoverï¬tâ its techniques to that test set. Our goal with contrast sets is differentâwe look to eliminate the biases in the original annota- tion process to better evaluate models. This cannot be accomplished by simply collecting more data because the new data will capture similar biases.
# 6 Conclusion
We presented a new annotation paradigm, based on long-standing ideas around contrastive examples, for constructing more rigorous test sets for NLP. Our procedure maintains most of the established processes for dataset creation but ï¬lls in some of
the systematic gaps that are typically present in datasets. By shifting evaluations from accuracy on i.i.d. test sets to consistency on contrast sets, we can better examine whether models have learned the desired capabilities or simply captured the id- iosyncrasies of a dataset. We created contrast sets for 10 NLP datasets and released this data as new evaluation benchmarks.
We recommend that future data collection efforts create contrast sets to provide more comprehensive evaluations for both existing and new NLP datasets. While we have created thousands of new test exam- ples across a wide variety of datasets, we have only taken small steps towards the rigorous evaluations we would like to see in NLP. The last several years have given us dramatic modeling advancements; our evaluation methodologies and datasets need to see similar improvements.
# Acknowledgements
We thank the anonymous reviewers for their help- ful feedback on this paper, as well as many others who gave constructive comments on a publicly- available preprint. Various authors of this paper were supported in part by ERC grant 677352, NSF grant 1562364, NSF grant IIS-1756023, NSF CA- REER 1750499, ONR grant N00014-18-1-2826 and DARPA grant N66001-19-2-403.
# References
Lars Ahrenberg. 2007. LinES: an English-Swedish par- allel treebank. In NODALIDA.
Jacob Andreas. 2020. Good-enough compositional data augmentation. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7556â7566, Online. Association for Computational Linguistics.
Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. 2010. A theory of learning from different domains. Machine Learning.
Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large anno- tated corpus for learning natural language inference. In EMNLP.
Franck Burlot, Yves Scherrer, Vinit Ravishankar, OndËrej Bojar, Stig-Arne Gr¨onroos, Maarit Ko- ponen, Tommi Nieminen, and Franc¸ois Yvon. test suites for 2018. English-Czech, English-German, English-Finnish In Proceedings of the Third and Turkish-English. Conference on Machine Translation: Shared Task
Papers, Belgium, Brussels. Association for Compu- tational Linguistics.
Franck Burlot and Franc¸ois Yvon. 2017. Evaluating the morphological competence of machine transla- tion systems. In Proceedings of the Second Confer- ence on Machine Translation, pages 43â55, Copen- hagen, Denmark. Association for Computational Linguistics.
Danqi Chen, Jason Bolton, and Christopher D. Man- the In ning. 2016. A thorough examination of CNN/Daily Mail reading comprehension task. ACL.
Sihao Chen, Daniel Khashabi, Wenpeng Yin, Chris Callison-Burch, and Dan Roth. 2019. Seeing things from a different angle: Discovering diverse perspec- tives about claims. In NAACL.
Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difï¬culty of natural Yes/No questions. In NAACL.
Prepo- sitional phrase attachment through a backed-off model. In Third Workshop on Very Large Corpora.
Pradeep Dasigi, Nelson F Liu, Ana Marasovic, Noah A Smith, and Matt Gardner. 2019. Quoref: A read- ing comprehension dataset with questions requiring coreferential reasoning. In EMNLP.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In NAACL.
Lucas Dixon, John Li, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2018. Measuring and mitigat- In ACM ing unintended bias in text classiï¬cation. AIES.
Timothy Dozat and Christopher D Manning. 2017. Deep biafï¬ne attention for neural dependency pars- ing. In ICLR.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requir- ing discrete reasoning over paragraphs. In NAACL.
Allyson Ettinger, Sudha Rao, Hal Daum´e III, and Emily M. Bender. 2017. Towards linguistically gen- eralizable NLP systems: A workshop and shared In Proceedings of the First Workshop on task. Building Linguistically Generalizable NLP Systems, Copenhagen, Denmark. Association for Computa- tional Linguistics.
Shi Feng, Eric Wallace, and Jordan Boyd-Graber. 2019. Misleading failures of partial-input baselines. In ACL.
Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. 2018. Pathologies of neural models make interpretations difï¬cult. In EMNLP.
Mor Geva, Yoav Goldberg, and Jonathan Berant. 2019. Are we modeling the task or the annotator? An in- vestigation of annotator bias in natural language un- derstanding datasets. In EMNLP.
Max Glockner, Vered Shwartz, and Yoav Goldberg. 2018. Breaking NLI systems with sentences that re- quire simple lexical inferences. In ACL.
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel R. Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural lan- guage inference data. In NAACL.
Minghao Hu, Yuxing Peng, Zhen Huang, and Dong- sheng Li. 2019. A multi-type multi-span network for reading comprehension that requires discrete rea- soning. In EMNLP.
Pierre Isabelle, Colin Cherry, and George Foster. 2017. A challenge set approach to evaluating machine translation. In EMNLP.
Robin Jia and Percy Liang. 2017. Adversarial exam- ples for evaluating reading comprehension systems. In EMNLP.
Robin Jia, Aditi Raghunathan, Kerem G¨oksel, and Percy Liang. 2019. Certiï¬ed robustness to adver- In Proceedings of the sarial word substitutions. 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China. Association for Computational Linguistics.
Divyansh Kaushik, Eduard Hovy, and Zachary C Lip- ton. 2020. Learning the difference that makes a dif- In ference with counterfactually-augmented data. ICLR.
Daniel Khashabi, Tushar Khot, and Ashish Sabhwawal. 2020. More bang for your buck: Natural perturba- tion for robust question answering. In EMNLP.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale read- ing comprehension dataset from examinations. In EMNLP.
Hector J. Levesque, Ernest Davis, and Leora Morgen- stern. 2011. The winograd schema challenge. In KR.
Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gard- ner. 2019. Reasoning over paragraph effects in situ- ations. In EMNLP MRQA Workshop.
Zachary Lipton, Yu-Xiang Wang, and Alexander Smola. 2018. Detecting and correcting for label shift with black box predictors. In ICML.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Aman- charla, and Anupam Datta. 2018. Gender bias in neural natural language processing. arXiv preprint arXiv:1807.11714.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL.
Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated In Compu- corpus of english: The Penn treebank. tational Linguistics.
Rebecca Marvin and Tal Linzen. 2018. Targeted syn- tactic evaluation of language models. In EMNLP.
and Juan Miguel Pino. 2019. On evaluation of ad- versarial perturbations for sequence-to-sequence models. In NAACL.
Sewon Min, Eric Wallace, Sameer Singh, Matt Gard- ner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In ACL.
Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018. Stress test evaluation for natural language inference. In COLING.
Yixin Nie, Adina Williams, Emily Dinan, Mo- hit Bansal, Jason Weston, and Douwe Kiela. 2019. Adversarial NLI: A new benchmark for arXiv preprint natural arXiv:1910.14599.
Qiang Ning, Sanjay Subramanian, and Dan Roth. 2019. An Improved Neural Baseline for Temporal Relation Extraction. In EMNLP.
Qiang Ning, Hao Wu, and Dan Roth. 2018. A Multi- Axis Annotation Scheme for Event Temporal Rela- tions. In ACL.
Timothy Niven and Hung-Yu Kao. 2019. Probing neu- ral network comprehension of natural language ar- guments. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, Florence, Italy. Association for Computational Lin- guistics.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Gin- ter, Yoav Goldberg, Jan HajiËc, Christopher D. Man- ning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal dependencies v1: A multilingual treebank collection. In LREC.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In NAACL.
K.L. Pike. 1946. Phonemics: A Technique for Reduc- ing Languages to Writing. Number v. 2 in Phone- mics: A Technique for Reducing Languages to Writ- ing. Summer Institute of Linguistics.
Adam Poliak, Jason Naradowsky, Aparajita Haldar, Rachel Rudinger, and Benjamin Van Durme. 2018. Hypothesis only baselines in natural language infer- ence. In *SEM.
Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. 2019. Do ImageNet classi- ï¬ers generalize to ImageNet? In ICML.
Marco Tulio Ribeiro, Carlos Guestrin, and Sameer Singh. 2019. Are red roses red? Evaluating con- sistency of question-answering models. In ACL.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018a. Semantically equivalent adversar- ial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856â865, Melbourne, Australia. Association for Computational Linguistics.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018b. Semantically equivalent adversar- ial rules for debugging NLP models. In ACL.
Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. Gender bias in coreference resolution. In NAACL.
Manuela Sanguinetti and Cristina Bosco. 2015. Part- TUT: The Turin university parallel treebank. In Harmonization and Development of Resources and Tools for Italian Natural Language Processing within the PARLI Project.
Daniel Selsam, Matthew Lamm, Benedikt B¨unz, Percy Liang, Leonardo de Moura, and David L. Dill. 2019. Learning a SAT solver from single-bit supervision. In International Conference on Learning Represen- tations.
Rico Sennrich. 2017. How grammatical is character- level neural machine translation? Assessing MT quality with contrastive translation pairs. In EACL.
Deven Shah, H. Andrew Schwartz, and Dirk Hovy. 2020. Predictive biases in natural language process- ing models: A conceptual framework and overview. In ACL.
Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aur´elie Herbelot, Moin Nabi, Enver Sangineto, and Raffaella Bernardi. 2017. Foil it! Find One mis- match between image and language caption. In ACL.
Hidetoshi Shimodaira. 2000. Improving predictive in- ference under covariate shift by weighting the log- In Journal of Statistical Plan- likelihood function. ning and Inference.
Avi Shmidman, Joshua Guedalia, Shaltiel Shmidman, Moshe Koppel, and Reut Tsarfaty. A novel chal- lenge set for hebrew morphological disambiguation and diacritics restoration. In Findings of EMNLP.
Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A gold stan- dard dependency corpus for English. In LREC.
Alane Suhr and Yoav Artzi. 2019. NLVR2 visual bias analysis. arXiv preprint arXiv:1909.10411.
Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. 2019. A corpus for reasoning about natural language grounded in pho- tographs. In ACL.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfellow, and Intriguing properties of neural Rob Fergus. 2014. networks. In ICLR.
Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019. QuaRTz: An open-domain dataset of qualitative relationship questions. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5941â5946, Hong Kong, China. Association for Computational Linguistics.
Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from trans- formers. In EMNLP.
Naushad UzZaman, Hector Llorens, Leon Derczyn- ski, James Allen, Marc Verhagen, and James Puste- jovsky. 2013. SemEval-2013 Task 1: TempEval-3: Evaluating time expressions, events, and temporal relations. In *SEM.
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gard- ner, and Sameer Singh. 2019a. Universal adversar- ial triggers for attacking and analyzing NLP. In EMNLP.
Eric Wallace, Pedro Rodriguez, Shi Feng, Ikuya Ya- mada, and Jordan Boyd-Graber. 2019b. Trick me if you can: Human-in-the-loop generation of adver- sarial question answering examples. In TACL.
Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mo- hananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2019. BLiMP: A benchmark of lin- guistic minimal pairs for english. arXiv preprint arXiv:1912.00582.
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2019. Errudite: Scalable, repro- ducible, and testable error analysis. In ACL.
Chhavi Yadav and L´eon Bottou. 2019. Cold case: The lost MNIST digits. In NeurIPS.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS.
Amir Zeldes. 2017. The GUM corpus: Creating multi- layer resources in the classroom. In LREC.
Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversar- ial dataset for grounded commonsense inference. In EMNLP.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really ï¬nish your sentence? In ACL.
Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing methods. In NAACL.
Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. âGoing on a vacationâ takes longer than âgoing for a walkâ: A study of temporal common- sense understanding. In EMNLP.
# A Dataset Details
Here, we provide details for the datasets that we build contrast sets for.
2 Natural Language Visual Reasoning (NLVR2) Given a natural language sentence about two photographs, the task is to determine if the sentence is true (Suhr et al., 2019). The dataset has highly compositional language, e.g., The left image contains twice the number of dogs as the right image, and at least two dogs in total are standing. To succeed at NLVR2, a model is supposed to be able to detect and count objects, recognize spatial relationships, and understand the natural language that describes these phenomena.
Internet Movie Database (IMDb) The task is to predict the sentiment (positive or negative) of a movie review (Maas et al., 2011). We use the same set of reviews from Kaushik et al. (2020) in order to analyze the differences between crowd-edited reviews and expert-edited reviews.
Temporal relation extraction (MATRES) The task is to determine what temporal relationship ex- ists between two events, i.e., whether some event happened before or after another event (Ning et al., 2018). MATRES has events and temporal rela- tions labeled for approximately 300 news articles. The event annotations are taken from the data pro- vided in the TempEval3 workshop (UzZaman et al., 2013) and the temporal relations are re-annotated based on a multi-axis formalism. We assume that the events are given and only need to classify the relation label between them.
English UD Parsing We use a combination of four English treebanks (GUM, EWT, LinES, Par- TUT) in the Universal Dependencies parsing frame- work, covering a range of genres. We focus on the problem of prepositional phrase attachment: whether the head of a prepositional phrase attaches to a verb or to some other dependent of the verb. We manually selected a small set of sentences from these treebanks that had potentially ambiguous at- tachments.
Reasoning about perspectives (PERSPEC- TRUM) Given a debate-worthy natural language claim, the task is to identify the set of relevant argumentative sentences that represent perspectives for/against the claim (Chen et al., 2019). We focus on the stance prediction sub-task: a binary
prediction of whether a relevant perspective is for/against the given claim.
Discrete Reasoning Over Paragraphs (DROP) A reading comprehension dataset that requires nu- merical reasoning, e.g., adding, sorting, and count- ing numbers in paragraphs (Dua et al., 2019). In order to compute the consistency metric for the span answers of DROP, we report the average num- ber of contrast sets in which F1 for all instances is above 0.8.
QUOREF A reading comprehension task with span selection questions that require coreference resolution (Dasigi et al., 2019). In this dataset, most questions can be localized to a single event in the passage, and reference an argument in that event that is typically a pronoun or other anaphoric ref- erence. Correctly answering the question requires resolving the pronoun. We use the same deï¬nition for consistency for QUOREFas we did for DROP.
Reasoning Over Paragraph Effects in Situa- tions (ROPES) A reading comprehension dataset that requires applying knowledge from a back- ground passage to new situations (Lin et al., 2019). This task has background paragraphs drawn mostly from science texts that describe causes and effects (e.g., that brightly colored ï¬owers attract insects), and situations written by crowd workers that in- stantiate either the cause (e.g., bright colors) or the effect (e.g., attracting insects). Questions are writ- ten that query the application of the statements in the background paragraphs to the instantiated situa- tion. Correctly answering the questions is intended to require understanding how free-form causal lan- guage can be understood and applied. We use the same consistency metric for ROPES as we did for DROP and QUOREF.
BoolQ A dataset of reading comprehension in- stances with Boolean (yes or no) answers (Clark et al., 2019). These questions were obtained from organic Google search queries and paired with para- graphs from Wikipedia pages that are labeled as sufï¬cient to deduce the answer. As the questions are drawn from a distribution of what people search for on the internet, there is no clear set of âintended phenomenaâ in this data; it is an eclectic mix of different kinds of questions.
MC-TACO A dataset of reading comprehension questions about multiple temporal common-sense
phenomena (Zhou et al., 2019). Given a short para- graph (often a single sentence), a question, and a collection of candidate answers, the task is to determine which of the candidate answers are plau- sible. For example, the paragraph might describe a storm and the question might ask how long the storm lasted, with candidate answers ranging from seconds to weeks. This dataset is intended to test a systemâs knowledge of typical event durations, orderings, and frequency. As the paragraph does not contain the information necessary to answer the question, this dataset is largely a test of background (common sense) knowledge.
# B Contrast Set Details
# B.1 NLVR2
Text Perturbation Strategies We use the fol- lowing text perturbation strategies for NLVR2:
⢠Perturbing quantiï¬ers, e.g., There is at least one dog â There is exactly one dog.
⢠Perturbing numbers, e.g., There is at least one dog â There are at least two dogs.
⢠Perturbing entities, e.g., There is at least one dog â There is at least one cat.
⢠Perturbing properties of entities, e.g., There is at least one yellow dog â There is at least one green dog.
Image Perturbation Strategies For image per- turbations, the annotators collected images that are perceptually and/or conceptually close to the hy- pothesized decision boundary, i.e., they represent a minimal change in some concrete aspect of the image. For example, for an image pair with 2 dogs on the left and 1 dog on the right and the sentence There are more dogs on the left than the right, a reasonable image change would be to replace the right-hand image with an image of two dogs.
Model We use LXMERT (Tan and Bansal, 2019) trained on the NLVR2 training dataset.
Contrast Set Statistics Five annotators created 983 perturbed instances that form 479 contrast sets. Annotation took approximately thirty seconds per textual perturbation and two minutes per image perturbation.
# B.2 IMDb
Perturbation Strategies We minimally perturb reviews to ï¬ip the label while ensuring that the review remains coherent and factually consistent. Here, we provide example revisions:
Original (Negative): I had quite high hopes for this ï¬lm, even though it got a bad review in the paper. I was extremely tolerant, and sat through the entire ï¬lm. I felt quite sick by the end. New (Positive): I had quite high hopes for this ï¬lm, even though it got a bad review in the paper. I was extremely amused, and sat through the entire ï¬lm. I felt quite happy by the end. Original (Positive): This is the greatest ï¬lm I saw in 2002, whereas Iâm used to mainstream movies. It is rich and makes a beautiful artistic act from these 11 short ï¬lms. From the technical info (the chosen directors), I feared it would have an anti-American basis, but ... itâs a kind of (11 times) personal tribute. The weakest point comes from Y. Chahine : he does not manage to âswallow his prideâ and considers this event as a well- merited punishment ... It is really the weakest part of the movie, but this testiï¬es of a real freedom of speech for the whole piece. New (Negative): This is the most horrendous ï¬lm I saw in 2002, whereas Iâm used to mainstream movies. It is low budgeted and makes a less than beautiful artistic act from these 11 short ï¬lms. From the technical info (the chosen directors), I feared it would have an anti- American basis, but ... itâs a kind of (11 times) the same. One of the weakest point comes from Y. Chahine : he does not manage to âswallow his prideâ and considers this event as a well-merited punishment ... It is not the weakest part of the movie, but this testiï¬es of a real freedom of speech for the whole piece.
Model We use the same BERT model setup and training data as Kaushik et al. (2020) which allows us to fairly compare the crowd and expert revisions.
Contrast Set Statistics We use 100 reviews from the validation set and 488 from the test set of Kaushik et al. (2020). Three annotators used approximately 70 hours to construct and validate the dataset.
# B.3 MATRES
MATRES has TimeBank, AQUAINT, and Platinum, with the Platinum section serving as the test set. We use 239 instances (30% of the dataset) from Platinum.
Perturbation Strategies The annotators perturb one or more of the following aspects: appearance order in text, tense of verb(s), and temporal con- junction words. Below are example revisions:
⢠Colonel Collins followed a normal progression once she was picked as a NASA astronaut. (original sentence: âfollowedâ is after âpickedâ)
⢠Once Colonel Collins was picked as a NASA astronaut, she followed a normal progression. (appearance order change in text; âfollowedâ is still after âpickedâ)
⢠Colonel Collins followed a normal progression before she was picked as a NASA astronaut. (changed the temporal conjunction word from âonceâ to âbeforeâ and âfollowedâ is now before âpickedâ)
⢠Volleyball is a popular sport in the area, and more than 200 people were watching the game, the chief said. (original sentence: âwatchingâ is before âsaidâ)
⢠Volleyball is a popular sport in the area, and more than 200 people would be watching the game, the chief said. (changed the verb tense: âwatchingâ is after âsaidâ)
Model We use CogCompTime 2.0 (Ning et al., 2019).
Contrast Set Statistics Two annotators created 401 perturbed instances that form 239 contrast sets. The annotators used approximately 25 hours to construct and validate the dataset.
Analysis We recorded the perturbation strategy used for each example. 49% of the perturbations changed the âappearance orderâ, 31% changed the âtenseâ, 24% changed the âtemporal conjunction wordsâ, and 10% had other changes. We double count the examples that have multiple perturbations. The model accuracy on the different perturbations is reported in the table below.
Perturbation Type Accuracy Overall 63.3% Appearance Order Tense Change Temporal Conjunction Other Changes 66.5% 61.8% 60.0% 61.8%
Table 4: Accuracy breakdown of the perturbation types for MATRES.
# B.4 Syntactic Parsing
Perturbation Strategies The annotators per- turbed noun phrases adjacent to prepositions (leav- ing the preposition unchanged). For example, The clerics demanded talks with local US commanders â The clerics demanded talks with great urgency. The different semantic content of the noun phrase changes the syntactic path from the preposition with to the parent word of the parent of the prepo- sition; in the initial example, the parent is comman- ders and the grandparent is the noun talks; in the perturbed version, the grandparent is now the verb demanded.
Model We use a biafï¬ne parser following the architecture of Dozat and Manning (2017) with ELMo embeddings (Peters et al., 2018), trained on the combination of the training sets for the tree- banks that we drew test examples from (GUM, EWT, LinES, and ParTUT).
Contrast Set Statistics One annotator created 150 perturbed examples that form 150 contrast sets. 75 of the contrast sets consist of a sentence in which a prepositional phrase attaches to a verb, paired with an altered version where it attaches to a noun instead. The other 75 sentences were altered in the opposite direction.
Analysis The process of creating a perturbation for a syntactic parse is highly time-consuming. Only a small fraction of sentences in the test set could be altered in the desired way, even after ï¬ltering to ï¬nd relevant syntactic structures and eliminate unambiguous prepositions (e.g. of al- ways attaches to a noun modifying a noun, mak- ing it impossible to change the attachment without changing the preposition). Further, once a poten- tially ambiguous sentence was identiï¬ed, annota- tors had to come up with an alternative noun phrase that sounded natural and did not require extensive changes to the structure of the sentence. They then had to re-annotate the relevant section of the sen- tence, which could include new POS tags, new UD word features, and new arc labels. On average, each perturbation took 10â15 minutes. Expand- ing the scope of this augmented dataset to cover other syntactic features, such as adjective scope, apposition versus conjunction, and other forms of clausal attachment, would allow for a signiï¬cantly larger dataset but would require a large amount of annotator time. The very poor contrast consistency on our dataset (17.3%) suggests that this would be a worthwhile investment to create a more rigorous parsing evaluation.
Notably, the modelâs accuracy for predicting the target prepositionsâ grandparents in the original, unaltered tree (64.7%) is signiï¬cantly lower than the modelâs accuracy for grandparents of all words (78.41%) and for grandparents of all prepositions (78.95%) in the original data. This indicates that these structures are already difï¬cult for the parser due to structural ambiguity.
# B.5 PERSPECTRUM
Perturbation Strategies The annotators per- turbed examples in multiple steps. First, they cre- ated non-trivial negations of the claim, e.g., Should we live in space? â Should we drop the ambition to live in space?. Next, they labeled the perturbed claim with respect to each perspective. For exam- ple:
Claim: Should we live in space? Perspective: Humanity in many ways deï¬nes itself through exploration and space is the next logical frontier. Label: True
Claim: Should we drop the ambition to live in space? Perspective: Humanity in many ways deï¬nes itself through exploration and space is the next logical frontier. Label: False
Model We use a ROBERTA model (Liu et al., 2019) ï¬netuned on PERSPECTRUM following the training process from (Chen et al., 2019).
Contrast Set Statistics The annotators created 217 perturbed instances that form 217 contrast sets. Each example took approximately three minutes to annotate: one minute for an annotator to negate each claim and one minute each for two separate annotators to adjudicate stance labels for each con- trastive claim-perspective pair.
# B.6 DROP
Perturbation Strategies See Section 3 in the main text for details about our perturbation strate- gies.
Model We use MTMSN (Hu et al., 2019), a DROP question answering model that is built on top of BERT Large (Devlin et al., 2019).
Contrast Set Statistics The total size of the aug- mented test set is 947 examples and contains a total of 623 contrast sets. Three annotators used approximately 16 hours to construct and validate the dataset.
Analysis We bucket 100 of the perturbed in- stances into the three categories of perturbations described in Section 3. For each subset, we evalu- ate MTMSNâs performance and show the results in the Table below.
Perturbation Type Frequency Accuracy Adding Compositional Steps Inversion of Semantics Re-ordering Events 38% 37% 25% 67.5 F1 53.2 F1 47.3 F1
Table 5: Accuracy breakdown of the perturbation types for DROP.
# B.7 QUOREF
Perturbation Strategies We use the following perturbation strategies for QUOREF:
⢠Perturb questions whose answers are entities to instead make the answers a property of those entities, e.g., Who hides their identity ... â What is the nationality of the person who hides their identity ....
⢠Perturb questions to add compositionality, e.g., What is the name of the person ... â What is the name of the father of the person ....
Add sentences between referring expressions and antecedents to the context paragraphs. ⢠Replace antecedents with less frequent named entities of the same type in the context para- graphs.
Model We use XLNet-QA, the best model from Dasigi et al. (2019), which is a span extraction model built on top of XLNet (Yang et al., 2019).
Contrast Set Statistics Four annotators created 700 instances that form 415 contrast sets. The mean contrast set size (including the original example) is 2.7(±1.2). The annotators used approximately 35 hours to construct and validate the dataset.
# B.8 ROPES
Perturbation Strategies We use the following perturbation strategies for ROPES:
⢠Perturbing the background to have the oppo- site causes and effects or qualitative relation, e.g., Gibberellins are hormones that cause the plant to grow â Gibberellins are hormones that cause the plant to stop growing.
⢠Perturbing the situation to associate different entities with different instantiations of a cer- tain cause or effect. For example, Grey tree frogs live in wooded areas and are difï¬cult to see when on tree trunks. Green tree frogs live in wetlands with lots of grass and tall plants. â Grey tree frogs live in wetlands areas and are difï¬cult to see when on stormy days in the plants. Green tree frogs live in wetlands with lots of leaves to hide on.
⢠Perturbing the situation to have more complex reasoning steps, e.g., Sue put 2 cubes of sugar into her tea. Ann decided to use granulated sugar and added the same amount of sugar to her tea. â Sue has 2 cubes of sugar but Ann has the same amount of granulated sugar. They exchange the sugar to each other and put the sugar to their ice tea.
⢠Perturbing the questions to have presupposi- tions that match the situation and background.
Model We use the best model from Lin et al. (2019), which is a span extraction model built on top of a RoBERTa model (Liu et al., 2019) that is ï¬rst ï¬netuned on RACE (Lai et al., 2017).
Contrast Set Statistics Two annotators created 974 perturbed instances which form 974 contrast sets. The annotators used approximately 65 hours to construct and validate the dataset.
# B.9 BoolQ
Perturbation Strategies We use a diverse set of perturbations, including adjective, entity, and event changes. We show three representative examples below:
Paragraph: The Fate of the Furious premiered in Berlin on April 4, 2017, and was theatrically released in the United States on April 14, 2017, playing in 3D, IMAX 3D and 4DX internationally. . . A spinoff ï¬lm starring Johnson and Stathamâs characters is scheduled for re- lease in August 2019, while the ninth and tenth ï¬lms are scheduled for releases on the years 2020 and 2021. Question: Is âFate and the Furiousâ the last movie? Answer: False New Question: Is âFate and the Furiousâ the ï¬rst of multiple movies? New Answer: True Perturbation Strategy: Adjective Change
Paragraph: Sanders played football primarily at cor- nerback, but also as a kick returner, punt returner, and occasionally wide receiver. . . An outï¬elder in baseball, he played professionally for the New York Yankees, the Atlanta Braves, the Cincinnati Reds and the San Fran- cisco Giants, and participated in the 1992 World Series with the Braves. Question: Did Deion Sanders ever win a world series? Answer: False New Question: Did Deion Sanders ever play in a world series? New Answer: True Perturbation strategy: Event Change
Paragraph: The White House is the ofï¬cial residence and workplace of the President of the United States. It is located at 1600 Pennsylvania Avenue NW in Wash- ington, D.C. and has been the residence of every U.S. President since John Adams in 1800. The term is often used as a metonym for the president and his advisers. Question: Does the president live in the White House? Answer: True New Question: Did George Washington live in the White House? New Answer: False Perturbation Strategy: Entity Change
Model We use ROBERTA base and follow the standard ï¬netuning process from Liu et al. (2019).
Contrast Set Statistics The annotators created 339 perturbed questions generated that form 70 contrast sets. One annotator created the dataset and a separate annotator veriï¬ed it. This entire process took approximately 16 hours.
# B.10 MC-TACO
Perturbation Strategies The main goal when perturbing MC-TACO questions is to retain a simi- lar question that requires the same temporal knowl- edge to answer, while there are additional con- straints with slightly different related context that changes the answers. We also modiï¬ed the answers accordingly to make sure the question has a combi- nation of plausible and implausible candidates.
Model We use the best baseline model from the original paper (Zhou et al., 2019) which is based on ROBERTAbase (Liu et al., 2019).
Contrast Set Statistics The annotators created 646 perturbed question-answer pairs that form 646 contrast sets. Two annotators used approximately 12 hours to construct and validate the dataset. | {
"id": "1909.10411"
} |
2004.03705 | Deep Learning Based Text Classification: A Comprehensive Review | Deep learning based models have surpassed classical machine learning based
approaches in various text classification tasks, including sentiment analysis,
news categorization, question answering, and natural language inference. In
this paper, we provide a comprehensive review of more than 150 deep learning
based models for text classification developed in recent years, and discuss
their technical contributions, similarities, and strengths. We also provide a
summary of more than 40 popular datasets widely used for text classification.
Finally, we provide a quantitative analysis of the performance of different
deep learning models on popular benchmarks, and discuss future research
directions. | http://arxiv.org/pdf/2004.03705 | Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, Jianfeng Gao | cs.CL, cs.LG, stat.ML | null | null | cs.CL | 20200406 | 20210104 | 1 2 0 2 n a J 4 ] L C . s c [
3 v 5 0 7 3 0 . 4 0 0 2 : v i X r a
# Deep Learning Based Text Classification: A Comprehensive Review
Shervin Minaee, Snapchat Inc Nal Kalchbrenner, Google Brain, Amsterdam Erik Cambria, Nanyang Technological University, Singapore Narjes Nikzad, University of Tabriz Meysam Chenaghlu, University of Tabriz Jianfeng Gao, Microsoft Research, Redmond
Abstract. Deep learning based models have surpassed classical machine learning based approaches in various text classification tasks, including sentiment analysis, news categorization, question answering, and natural language inference. In this paper, we provide a comprehensive review of more than 150 deep learning based models for text classification developed in recent years, and discuss their technical contributions, similarities, and strengths. We also provide a summary of more than 40 popular datasets widely used for text classification. Finally, we provide a quantitative analysis of the performance of different deep learning models on popular benchmarks, and discuss future research directions.
Additional Key Words and Phrases: Text Classification, Sentiment Analysis, Question Answering, News Categorization, Deep Learning, Natural Language Inference, Topic Classification.
ACM Reference Format: Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao. 2020. Deep Learning Based Text Classification: A Comprehensive Review. 1, 1 (February 2020), 43 pages. https://doi.org/10.1145/nnnnnnn.nnnnnnn
# 1 INTRODUCTION
Text classification, also known as text categorization, is a classical problem in natural language processing (NLP), which aims to assign labels or tags to textual units such as sentences, queries, paragraphs, and documents. It has a wide range of applications including question answering, spam detection, sentiment analysis, news categorization, user intent classification, content moderation, and so on. Text data can come from different sources, including web data, emails, chats, social media, tickets, insurance claims, user reviews, and questions and answers from customer services, to name a few. Text is an extremely rich source of information. But extracting insights from text can be challenging and time-consuming, due to its unstructured nature.
Text classification can be performed either through manual annotation or by automatic labeling. With the growing scale of text data in industrial applications, automatic text classification is becoming increasingly important. Approaches to automatic text classification can be grouped into two categories:
Rule-based methods ⢠Machine learning (data-driven) based methods
Authorsâ addresses: Shervin Minaee, [email protected],Snapchat Inc; Nal Kalchbrenner, [email protected],Google Brain, Amsterdam; Erik Cambria, [email protected],Nanyang Technological University, Singapore; Narjes Nikzad, [email protected],University of Tabriz; Meysam Chenaghlu, [email protected],University of Tabriz; Jianfeng Gao, [email protected],Microsoft Research, Redmond.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. © 2020 Copyright held by the owner/author(s). Publication rights licensed to ACM. XXXX-XXXX/2020/2-ART $15.00 https://doi.org/10.1145/nnnnnnn.nnnnnnn
, Vol. 1, No. 1, Article . Publication date: February 2020.
2
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
Rule-based methods classify text into different categories using a set of pre-defined rules, and require a deep domain knowledge. On the other hand, machine learning based approaches learn to classify text based on observations of data. Using pre-labeled examples as training data, a machine learning algorithm learns inherent associations between texts and their labels.
Machine learning models have drawn lots of attention in recent years. Most classical machine learning based models follow the two-step procedure. In the first step, some hand-crafted features are extracted from the documents (or any other textual unit). In the second step, those features are fed to a classifier to make a prediction. Popular hand-crafted features include bag of words (BoW) and their extensions. Popular choices of classification algorithms include Naïve Bayes, support vector machines (SVM), hidden Markov model (HMM), gradient boosting trees, and random forests. The two-step approach has several limitations. For example, reliance on the hand- crafted features requires tedious feature engineering and analysis to obtain good performance. In addition, the strong dependence on domain knowledge for designing features makes the method difficult to generalize to new tasks. Finally, these models cannot take full advantage of large amounts of training data because the features (or feature templates) are pre-defined.
Neural approaches have been explored to address the limitations due to the use of hand-craft features. The core component of these approaches is a machine-learned embedding model that maps text into a low-dimensional continuous feature vector, thus no hand-crafted features is needed. One of earliest embedding models is latent semantic analysis (LSA) developed by Dumais et al. [1] in 1989. LSA is a linear model with less than 1 million parameters, trained on 200K words. In 2001, Bengio et al. [2] propose the first neural language model based on a feed-forward neural network trained on 14 million words. However, these early embedding models underperform classical models using hand-crafted features, and thus are not widely adopted. A paradigm shift starts when much larger embedding models are developed using much larger amounts of training data. In 2013, Google develops a series of word2vec models [3] that are trained on 6 billion words and immediately become popular for many NLP tasks. In 2017, the teams from AI2 and University of Washington develops a contextual embedding model based on a 3-layer bidirectional LSTM with 93M parameters trained on 1B words. The model, called ELMo [4], works much better than word2vec because they capture contextual information. In 2018, OpenAI starts building embedding models using Transformer [5], a new NN architecture developed by Google. Transformer is solely based on attention which substantially improves the efficiency of large-scale model training on TPU. Their first model is called GPT [6], which is now widely used for text generation tasks. The same year, Google develops BERT [7] based on bidirectional transformer. BERT consists of 340M parameters, trained on 3.3 billion words, and is the current state of the art embedding model. The trend of using larger models and more training data continues. By the time this paper is published, OpenAIâs latest GPT-3 model [8] contains 170 billion parameters, and Googleâs GShard [9] contains 600 billion parameters.
Although these gigantic models show very impressive performance on various NLP tasks, some researchers argue that they do not really understand language and are not robust enough for many mission-critical domains [10â14]. Recently, there is an growing interest in exploring neuro-symbolic hybrid models (e.g., [15â18]) to address some of the fundamental limitations of neural models, such as lack of grounding, being unable to perform symbolic reasoning, not interpretable. These works, although important, are beyond the scope of this paper.
While there are many good reviews and text books on text classification methods and applications in general e.g., [19â21], this survey is unique in that it presents a comprehensive review on more than 150 deep learning (DL) models developed for various text classification tasks, including sentiment analysis, news categorization, topic classification, question answering (QA), and natural language inference (NLI), over the course of the past six years. In particular, we group these works into several categories based on their neural network architectures, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), attention, Transformers, Capsule Nets, and so on. The contributions of this paper can be summarized as follows:
, Vol. 1, No. 1, Article . Publication date: February 2020.
Deep Learning Based Text Classification: A Comprehensive Review ⢠3
We present a detailed overview of more than 150 DL models proposed for text classification. ⢠We review more than 40 popular text classification datasets. ⢠We provide a quantitative analysis of the performance of a selected set of DL models on 16 popular
benchmarks.
⢠We discuss remaining challenges and future directions.
1.1 Text Classification Tasks Text Classification (TC) is the process of categorizing texts (e.g., tweets, news articles, customer reviews) into organized groups. Typical TC tasks include sentiment analysis, news categorization and topic classification. Recently, researchers show that it is effective to cast many natural language understanding (NLU) tasks (e.g., extractive question answering, natural language inference) as TC by allowing DL-based text classifiers to take a pair of texts as input (e.g., [7, 22, 23]). This section introduces five TC tasks discussed in this paper, including three typical TC tasks and two NLU tasks that are commonly cast as TC in many recent DL studies.
Sentiment Analysis. This is the task of analyzing peopleâs opinions in textual data (e.g., product reviews, movie reviews, or tweets), and extracting their polarity and viewpoint. The task can be cast as either a binary or a multi-class problem. Binary sentiment analysis classifies texts into positive and negative classes, while multi-class sentiment analysis classifies texts into fine-grained labels or multi-level intensities.
News Categorization. News contents are among the most important information sources. A news classification system helps users obtain information of interest in real-time by e.g., identifying emerging news topics or recommending relevant news based on user interests.
Topic Analysis. The task, also known as topic classification, aims to identify the theme or topics of a text (e.g., whether a product review is about âcustomer supportâ or âease of useâ).
Question Answering (QA). There are two types of QA tasks: extractive and generative. Extractive QA is a TC task: Given a question and a set of candidate answers (e.g., text spans in a document in SQuAD [24]), a system classifies each candidate answer as correct or not. Generative QA is a text generation task since it requires generating answers on the fly. This paper only discusses extractive QA.
Natural language inference (NLI). NLI, also known as recognizing textual entailment (RTE), predicts whether the meaning of one text can be inferred from another. An NLI system needs to assign to a pair of text units a label such as entailment, contradiction, and neutral [25]. Paraphrasing is a generalized form of NLI, also known as text pair comparison, the task of measuring the semantic similarity of a sentence pair indicating how likely one sentence is a paraphrase of the other.
1.2 Paper Structure The rest of the paper is structured as follows: Section 2 presents a comprehensive review of more than 150 DL-based text classification models. Section 3 presents a recipe of building text classifiers using DL models. Section 4 reviews some of the most popular TC datasets. Section 5 presents a quantitative performance analysis of a selected set of DL models on 16 benchmarks. Section 6 discusses the main challenges and future directions for DL-based TC methods. Section 7 concludes the paper.
, Vol. 1, No. 1, Article . Publication date: February 2020.
4
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
2 DEEP LEARNING MODELS FOR TEXT CLASSIFICATION This section reviews more than 150 DL models proposed for various TC tasks. For clarify, we group these models into several categories based on their model architectures1:
Feed-forward networks view text as a bag of words (Section 2.1). ⢠RNN-based models view text as a sequence of words, and are intended to capture word dependencies and
text structures (Section 2.2).
CNN-based models are trained to recognize patterns in text, such as key phrases, for TC (Section 2.3). ⢠Capsule networks address the information loss problem suffered by the pooling operations of CNNs, and
recently have been applied to TC (Section 2.4).
⢠The attention mechanism is effective to identify correlated words in text, and has become a useful tool in developing DL models (Section 2.5).
⢠Memory-augmented networks combine neural networks with a form of external memory, which the models can read from and write to (Section 2.6).
⢠Graph neural networks are designed to capture internal graph structures of natural language, such as syntactic and semantic parse trees (Section 2.7).
Siamese Neural Networks are designed for text matching, a special case of TC (Section 2.8) . ⢠Hybrid models combine attention, RNNs, CNNs, etc. to capture local and global features of sentences and
documents (Section 2.9).
⢠Transformers allow for much more parallelization than RNNs, making it possible to efficiently (pre-)train very big language models using GPUs (Section 2.10).
⢠Finally, in Section 2.11, we review modeling technologies that are beyond supervised learning, including unsupervised learning using autoencoder and adversarial training, and reinforcement learning.
Readers are expected to be reasonably familiar with basic DL models to comprehend the content of this section. Readers are referred to the DL textbook by Goodfellow et al. [26] for more details.
2.1 Feed-Forward Neural Networks Feed-forward networks are among the simplest DL models for text representation. Yet, they have achieved high accuracy on many TC benchmarks. These models view text as a bag of words. For each word, they learn a vector representation using an embedding model such as word2vec [27] or Glove [28], take the vector sum or average of the embeddings as the representation of the text, pass it through one or more feed-forward layers, known as Multi-Layer Perceptrons (MLPs), and then perform classification on the final layerâs representation using a classifier such as logistic regression, Naïve Bayes, or SVM [29]. An example of these models is the Deep Average Network (DAN) [29], whose architecture is shown in Fig. 1. Despite its simplicity, DAN outperforms other more sophisticated models which are designed to explicitly learn the compositionality of texts. For example, DAN outperforms syntactic models on datasets with high syntactic variance. Joulin et al. [30] propose a simple and efficient text classifier called fastText. Like DAN, fastText views text as a bag of words. Unlike DAN, fastText uses a bag of n-grams as additional features to capture local word order information. This turns out to be very efficient in practice, achieving comparable results to the methods that explicitly use the word order [31].
Le and Mikolov [32] propose doc2vec, which uses an unsupervised algorithm to learn fixed-length feature representations of variable-length pieces of texts, such as sentences, paragraphs, and documents. As shown in Fig. 2, the architecture of doc2vec is similar to that of the Continuous Bag of Words (CBOW) model [3, 27]. The only difference is the additional paragraph token that is mapped to a paragraph vector via matrix ð·. In doc2vec,
1These categories are introduced mainly for a pedagogical purpose. They are by no means exclusive to each other. For example, the Transformer uses a composite structure consisting of feed-forward layers and the attention mechanism, and memory-augment networks also involve the attention mechanism.
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review + 5
softmax ha = f(We-hy +bo) CELT) fs = (1 - av + bi) 4 => 4 Fast Predator is a masterpiece cn oe 3 ca
# Fig. 1. The architecture of the Deep Average Network (DAN) [29].
the concatenation or average of this vector with a context of three words is used to predict the fourth word. The paragraph vector represents the missing information from the current context and can act as a memory of the topic of the paragraph. After being trained, the paragraph vector is used as features for the paragraph (e.g., in lieu of or in addition to BoW), and fed to a classifier for prediction. Doc2vec achieves new state of the art results on several TC tasks when it is published.
Classifier Average/Concatenate ooo rumen Paragraph the cat sat id
# Fig. 2. The doc2vec model [32].
2.2 RNN-Based Models RNN-based models view text as a sequence of words, and are intended to capture word dependencies and text structures for TC. However, vanilla RNN models do not perform well, and often underperform feed-forward neural networks. Among many variants to RNNs, Long Short-Term Memory (LSTM) is the most popular architecture, which is designed to better capture long term dependencies. LSTM addresses the gradient vanishing or exploding problems suffered by the vanilla RNNs by introducing a memory cell to remember values over arbitrary time intervals, and three gates (input gate, output gate, forget gate) to regulate the flow of information into and out of the cell. There have been works on improving RNNs and LSTM models for TC by capturing richer information, such as tree structures of natural language, long-span word relations in text, document topics, and so on.
Tai et al. [33] develop a Tree-LSTM model, a generalization of LSTM to tree-structured network typologies, to learn rich semantic representations. The authors argue that Tree-LSTM is a better model than the chain-structured LSTM for NLP tasks because natural language exhibits syntactic properties that would naturally combine words to phrases. They validate the effectiveness of Tree-LSTM on two tasks: sentiment classification and predicting the semantic relatedness of two sentences. The architectures of these models are shown in Fig. 3. Zhu et al. [34] also extend the chain-structured LSTM to tree structures, using a memory cell to store the history of multiple child
, Vol. 1, No. 1, Article . Publication date: February 2020.
5
6
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
cells or multiple descendant cells in a recursive process. They argue that the new model provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures.
4 uw ye ys yA yo EK r 4 4 a ar on ~ } >| >| tal rN ry ry ry ry ws zy 22 23 14 22 Av + a
Fig. 3. (Left) A chain-structured LSTM network and (right) a tree-structured LSTM network with arbitrary branching factor [33]. Here ð¥ð and ð¦ð denote the input and output of each cell.
To model long-span word relations for machine reading, Cheng et al. [35] augment the LSTM architecture with a memory network in place of a single memory cell. This enables adaptive memory usage during recurrence with neural attention, offering a way to weakly induce relations among tokens. This model achieves promising results on language modeling, sentiment analysis, and NLI.
The Multi-Timescale LSTM (MT-LSTM) neural network [36] is also designed to model long texts, such as sentences and documents, by capturing valuable information with different timescales. MT-LSTM partitions the hidden states of a standard LSTM model into several groups. Each group is activated and updated at different time periods. Thus, MT-LSTM can model very long documents. MT-LSTM is reported to outperform a set of baselines, including the models based on LSTM and RNN, on TC.
RNNs are good at capturing the local structure of a word sequence, but face difficulties remembering long-range dependencies. In contrast, latent topic models are able to capture the global semantic structure of a document but do not account for word ordering. Dieng et al. [37] propose a TopicRNN model to integrate the merits of RNNs and latent topic models. It captures local (syntactic) dependencies using RNNs and global (semantic) dependencies using latent topics. TopicRNN is reported to outperform RNN baselines for sentiment analysis.
There are other interesting RNN-based models. Liu et al. [38] use multi-task learning to train RNNs to leverage labeled training data from multiple related tasks. Johnson and Rie [39] explore a text region embedding method using LSTM. Zhou et al. [40] integrate a Bidirectional-LSTM (Bi-LSTM) model with two-dimensional max- pooling to capture text features. Wang et al. [41] propose a bilateral multi-perspective matching model under the âmatching-aggregationâ framework. Wan et al. [42] explore semantic matching using multiple positional sentence representations generated by a bi-directional LSMT model.
It is worth noting that RNNs belong to a broad category of DNNs, known as recursive neural networks. A recursive neural network applies the same set of weights recursively over a structure input to produce a structured prediction or a vector representation over variable-size input. Whereas RNNs are recursive neural networks with a linear chain structure input, there are recursive neural networks that operate on hierarchical structures, such as parse trees of natural language sentences [43], combining child representations into parent representations. RNNs are the most popular recursive neural networks for TC because they are effective and easy to use â they view text as a sequence of tokens without requiring additional structure labels such as parse trees.
2.3 CNN-Based Models RNNs are trained to recognize patterns across time, whereas CNNs learn to recognize patterns across space [44]. RNNs work well for the NLP tasks such as POS tagging or QA where the comprehension of long-range semantics is required, while CNNs work well where detecting local and position-invariant patterns is important. These
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review + 7
patterns could be key phrases that express a particular sentiment like âI likeâ or a topic like âendangered speciesâ. Thus, CNNs have become one of the most popular model architectures for TC.
One of the first CNN-based models for TC is proposed by Kalchbrenner et al. [45]. The model uses dynamic ð-max-pooling, and is called the Dynamic CNN (DCNN). As illustrated in Fig. 4, the first layer of DCNN constructs a sentence matrix using the embedding for each word in the sentence. Then a convolutional architecture that alternates wide convolutional layers with dynamic pooling layers given by dynamic ð-max-pooling is used to generate a feature map over the sentence that is capable of explicitly capturing short and long-range relations of words and phrases. The pooling parameter ð can be dynamically chosen depending on the sentence size and the level in the convolution hierarchy.
Dyareene| [Wiican Fong em] [lr metas) Poseiengaen roared [i at Se
Fig. 4. The architecture of DCNN model [45].
Later, Kim [46] proposes a much simpler CNN-based model than DCNN for TC. As shown in Fig. 5, Kimâs model uses only one layer of convolution on top of the word vectors obtained from an unsupervised neural language model i.e., word2vec. Kim also compares four different approaches to learning word embeddings: (1) CNN-rand, where all word embeddings are randomly initialized and then modified during training; (2) CNN-static, where the pre-trained word2vec embeddings are used and stay fixed during model training; (3) CNN-non-static, where the word2vec embeddings are fine-tuned during training for each task; and (4) CNN-multi-channel, where two sets of word embedding vectors are used, both are initialized using word2vec, with one updated during model training while the other fixed. These CNN-based models are reported to improve upon the state of the art on sentiment analysis and question classification.
There have been efforts of improving the architectures of CNN-based models of [45, 46]. Liu et al. [47] propose a new CNN-based model that makes two modifications to the architecture of Kim-CNN [46]. First, a dynamic max-pooling scheme is adopted to capture more fine-grained features from different regions of the document. Second, a hidden bottleneck layer is inserted between the pooling and output layers to learn compact document representations to reduce model size and boost model performance. In [48, 49], instead of using pre-trained low-dimensional word vectors as input to CNNs, the authors directly apply CNNs to high-dimensional text data to learn the embeddings of small text regions for classification.
Character-level CNNs have also been explored for TC [50, 51]. One of the first such models is proposed by Zhang et al. [50]. As illustrated in Fig. 6, the model takes as input the characters in a fixed-sized, encoded as one-hot vectors, passes them through a deep CNN model that consists of six convolutional layers with pooling operations and three fully connected layers. Prusa et al. [52] present an approach to encoding text using CNNs
, Vol. 1, No. 1, Article . Publication date: February 2020.
7
8
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
wait for the vise Ep tdadl | and a H oo. do | Chee eH n't . : rent a it l J l J L L J nx k representation of Convolutional layer with Max-overstime Fully connected layer sentence with static and muttple fiter widths and pooling with dropout and non-static channele feature maps softmax output
Fig. 5. The architecture of a sample CNN model for text classification. courtesy of Yoon Kim [46].
that greatly reduces memory consumption and training time required to learn character-level text representations. This approach scales well with alphabet size, allowing to preserve more information from the original text to enhance classification performance.
Length [Some Text | SM a IN a a w< \ im \eY/ Convolutions Max-pooling Conv. and Pool. layers _â_-Fully-connected
Fig. 6. The architecture of a character-level CNN model [50].
There are studies on investigating the impact of word embeddings and CNN architectures on model performance. Inspired by VGG [53] and ResNets [54], Conneau et al. [55] present a Very Deep CNN (VDCNN) model for text processing. It operates directly at the character level and uses only small convolutions and pooling operations. This study shows that the performance of VDCNN improves with the increase of the depth. Duque et al. [56] modify the structure of VDCNN to fit mobile platformsâ constraints without much performance degradation. They are able to compress the model size by 10x to 20x with an accuracy loss between 0.4% to 1.3%. Le et al. [57] show that deep models indeed outperform shallow models when the text input is represented as a sequence of characters. However, a simple shallow-and-wide network outperforms deep models such as DenseNet[58] with word inputs. Guo et al. [59] study the impact of word embedding and propose to use weighted word embeddings via a multi-channel CNN model. Zhang et al. [60] examine the impact of different word embedding methods and pooling mechanisms, and find that using non-static word2vec and GloVe outperforms one-hot vectors, and that max-pooling consistently outperforms other pooling methods.
There are other interesting CNN-based models. Mou et al. [61] present a tree-based CNN to capture sentence- level semantics. Pang et al. [62] cast text matching as the image recognition task, and use multi-layer CNNs to identify salient n-gram patterns. Wang et al. [63] propose a CNN-based model that combines explicit and implicit representations of short text for TC. There is also a growing interest in applying CNNs to biomedical text classification [64â67].
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
2.4 Capsule Neural Networks CNNs classify images or texts by using successive layers of convolutions and pooling. Although pooling operations identify salient features and reduce the computational complexity of convolution operations, they lose information regarding spatial relationships and are likely to mis-classify entities based on their orientation or proportion.
To address the problems of pooling, a new approach is proposed by Hinton et al., called capsule networks (CapsNets) [68, 69]. A capsule is a group of neurons whose activity vector represents different attributes of a specific type of entity such as an object or an object part. The vectorâs length represents the probability that the entity exists, and the orientation of the vector represents the attributes of the entity. Unlike max-pooling of CNNs, which selects some information and discards the rest, capsules ârouteâ each capsule in the lower layer to its best parent capsule in the upper layer, using all the information available in the network up to the final layer for classification. Routing can be implemented using different algorithms, such as dynamic routing-by-agreement [69] or the EM algorithm [70].
Recently, capsule networks have been applied to TC, where capsules are adapted to represent a sentence or document as a vector. [71â73] propose a TC model based on a variant of CapsNets. The model consists of four layers: (1) an n-gram convolutional layer, (2) a capsule layer, (3) a convolutional capsule layer, and (4) a fully connected capsule layer. The authors experiment three strategies to stabilize the dynamic routing process to alleviate the disturbance of the noise capsules that contain background information such as stopwords or the words that are unrelated to any document categories. They also explore two capsule architectures, Capsule-A and Capsule-B as in Fig. 7. Capsule-A is similar to the CapsNet in [69]. Capsule-B uses three parallel networks with filters with different window sizes in the n-gram convolutional layer to learn a more comprehensive text representation. CapsNet-B performs better in the experiments.
Class Probablity Class Probabity ry wxtwaxa Capsule Average Poairg FCCap F i i i W@xOxDxdxd Conan Â¥ K2xCxOxdxd ) | K2xCxDxdxd) (K2xCxDxdxd Soncon Sonvcap Canoes BxCxd T f PrimCap axcxe axxd axcxa Pinon Princo Pinoy 3x 300 Conv, 8 Lookup Tate, 300] Lop Tove 300 â Text Text Capsule-A Capsule-B
Fig. 7. CapsNet A and B for text classification [71].
The CapsNet-based model proposed by Kim et al. [74] uses a similar architecture. The model consists of (1) an input layer that takes a document as a sequence of word embeddings; (2) a convolutional layer that generates feature maps and uses a gated-linear unit to retain spatial information; (3) a convolutional capsule layer to form global features by aggregating local features detected by the convolutional layer; and (4) a text capsule layer to predict class labels. The authors observe that objects can be more freely assembled in texts than in images. For example, a documentâs semantics can remain the same even if the order of some sentences is changed, unlike the the positions of the eyes and nose on a human face. Thus, they use a static routing schema, which consistently outperforms dynamic routing [69] for TC. Aly et al. [75] propose to use CapsNets for Hierarchical Multilabel Classification (HMC), arguing that the CapsNetâs capability of encoding child-parent relations makes it a better
, Vol. 1, No. 1, Article . Publication date: February 2020.
9
10
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
10 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
solution than traditional methods to the HMC task where documents are assigned one or multiple class labels organized in a hierarchical structure. Their modelâs architecture is similar to the ones in [71, 72, 74].
Ren et al. [76] propose another variant of CapsNets using a compositional coding mechanism between capsules and a new routing algorithm based on ð-means clustering. First, the word embeddings are formed using all codeword vectors in codebooks. Then features captured by the lower-level capsules are aggregated in high-level capsules via ð-means routing.
2.5 Models with Attention Mechanism Attention is motivated by how we pay visual attention to different regions of an image or correlate words in one sentence. Attention becomes an increasingly popular concept and useful tool in developing DL models for NLP [77, 78]. In a nutshell, attention in language models can be interpreted as a vector of importance weights. In order to predict a word in a sentence, we estimate using the attention vector how strongly it is correlated with, or âattends toâ, other words and take the sum of their values weighted by the attention vector as the approximation of the target.
This section reviews some of the most prominent attention models which create new state of the arts on TC tasks, when they are published.
Yang et al. [79] propose a hierarchical attention network for text classification. This model has two distinctive characteristics: (1) a hierarchical structure that mirrors the hierarchical structure of documents, and (2) two levels of attention mechanisms applied at the word and sentence-level, enabling it to attend differentially to more and less important content when constructing the document representation. This model outperforms previous methods by a substantial margin on six TC tasks. Zhou et al. [80] extend the hierarchical attention model to cross-lingual sentiment classification. In each language, a LSTM network is used to model the documents. Then, classification is achieved by using a hierarchical attention mechanism, where the sentence-level attention model learns which sentences of a document are more important for determining the overall sentiment. while the word-level attention model learns which words in each sentence are decisive.
Shen et al. [81] present a directional self-attention network for RNN/CNN-free language understanding, where the attention between elements from input sequence(s) is directional and multi-dimensional. A light-weight neural net is used to learn sentence embedding, solely based on the proposed attention without any RNN/CNN structure. Liu et al. [82] present a LSTM model with inner-attention for NLI. This model uses a two-stage process to encode a sentence. Firstly, average pooling is used over word-level Bi-LSTMs to generate a first stage sentence representation. Secondly, attention mechanism is employed to replace average pooling on the same sentence for better representations. The sentenceâs first-stage representation is used to attend words appeared in itself.
Attention models are widely applied to pair-wise ranking or text matching tasks too. Santos et al. [83] propose a two-way attention mechanism, known as Attentive Pooling (AP), for pair-wise ranking. AP enables the pooling layer to be aware of the current input pair (e.g., a question-answer pair), in a way that information from the two input items can directly influence the computation of each otherâs representations. In addition to learning the representations of the input pair, AP jointly learns a similarity measure over projected segments of the pair, and subsequently derives the corresponding attention vector for each input to guide the pooling. AP is a general framework independent of the underlying representation learning, and can be applied to both CNNs and RNNs, as illustrated in Fig. 8 (a). Wang et al. [84] view TC as a label-word matching problem: each label is embedded in the same space with the word vector. The authors introduce an attention framework that measures the compatibility of embeddings between text sequences and labels via cosine similarity, as shown in Fig. 8 (b).
Kim et al. [85] propose a semantic sentence matching approach using a densely-connected recurrent and co- attentive network. Similar to DenseNet [58], each layer of this model uses concatenated information of attentive features as well as hidden features of all the preceding recurrent layers. It enables preserving the original and
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review +
(a) (b) & = Q A z|,3| |22 1, = A-softmax BS sr 82r] 32) ââ < a â~ a &| 4 3s rie fo fi fe E} Zal< 4 x=â>V | | | = y <= 5 2 (maayeD) fo i = v-+c\ ||-@ iii & 23/35 ° pert G Ooms er earn ge 3 | |e - 1 a
Fig. 8. (a) The architecture of attentive pooling networks [83]. (b) The architecture of label-text matching model [84].
the co-attentive feature information from the bottom-most word embedding layer to the uppermost recurrent layer. Yin et al. [86] present another attention-based CNN model for sentence pair matching. They examine three attention schemes for integrating mutual influence between sentences into CNNs, so that the representation of each sentence takes into consideration its paired sentence. These interdependent sentence pair representations are shown to be more powerful than isolated sentence representations, as validated on multiple classification tasks including answer selection, paraphrase identification, and textual entailment. Tan et al. [87] employ multiple attention functions to match sentence pairs under the matching-aggregation framework. Yang et al. [88] introduce an attention-based neural matching model for ranking short answer texts. They adopt value-shared weighting scheme instead of position-shared weighting scheme for combining different matching signals and incorporated question term importance learning using question attention network. This model achieves promising results on the TREC QA dataset.
There are other interesting attention models. Lin et al. [89] used self-attention to extract interpretable sentence embeddings. Wang et al. [90] proposed a densely connected CNN with multi-scale feature attention to produce variable n-gram features. Yamada and Shindo [91] used neural attentive bag-of-entities models to perform TC using entities in a knowledge base. Parikh et al. [92] used attention to decompose a problem into sub-problems that can be solved separately. Chen et al. [93] explored generalized pooling methods to enhance sentence embedding, and proposed a vector-based multi-head attention model. Basiri et al. [94] proposed an attention-based bidirectional CNN-RNN deep model for sentiment analysis.
2.6 Memory-Augmented Networks While the hidden vectors stored by an attention model during encoding can be viewed as entries of the modelâs internal memory, memory-augmented networks combine neural networks with a form of external memory, which the model can read from and write to.
Munkhdalai and Yu [95] present a memory-augmented neural network, called Neural Semantic Encoder (NSE), for TC and QA. NSE is equipped with a variable sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations, as shown in Fig. 9.
Weston et al. [96] design a memory network for a synthetic QA task, where a series of statements (memory entries) are provided to the model as supporting facts to the question. The model learns to retrieve one entry at a time from memory based on the question and previously retrieved memory. Sukhbaatar et al. [97] extend this work and propose end-to-end memory networks, where memory entries are retrieved in a soft manner with
, Vol. 1, No. 1, Article . Publication date: February 2020.
11
12
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
12 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
Input
# Fig. 9. The architecture of NSE [95].
attention mechanism, thus enabling end-to-end training. They show that with multiple rounds (hops), the model is able to retrieve and reason about several supporting facts to answer a specific question.
Kumar et al. [98] propose a Dynamic Memory Metwork (DMN), which processes input sequences and questions, forms episodic memories, and generates relevant answers. Questions trigger an iterative attention process, which allows the model to condition its attention on the inputs and the result of previous iterations. These results are then reasoned over in a hierarchical recurrent sequence model to generate answers. The DMN is trained end-to-end, and obtains state of the art results on QA and POS tagging. Xiong et al. [99] present a detailed analysis of the DMN, and improve its memory and input modules.
2.7 Graph Neural Networks Although natural language texts exhibit a sequential order, they also contain internal graph structures, such as syntactic and semantic parse trees, which define the syntactic and semantic relations among words in sentences. One of the earliest graph-based models developed for NLP is TextRank [100]. The authors propose to represent a natural language text as a graph ðº (ð , ð¸), where ð denotes a set of nodes and ð¸ a set of edges among the nodes. Depending on the applications at hand, nodes can represent text units of various types, e.g., words, collocations, entire sentences, etc. Similarly, edges can be used to represent different types of relations between any nodes, e.g., lexical or semantic relations, contextual overlap, etc.
Modern Graph Neural Networks (GNNs) are developed by extending DL approaches for graph data, such as the text graphs used by TextRank. Deep neural networks, such as CNNs, RNNs and autoencoders, have been generalized over the last few years to handle the complexity of graph data [101]. For example, a 2D convolution of CNNs for image processing is generalized to perform graph convolutions by taking the weighted average of a nodeâs neighborhood information. Among various types of GNNs, convolutional GNNs, such as Graph Convolutional Networks (GCNs) [102] and their variants, are the most popular ones because they are effective and convenient to compose with other neural networks, and have achieved state of the art results in many applications. GCNs are an efficient variant of CNNs on graphs. GCNs stack layers of learned first-order spectral filters followed by a nonlinear activation function to learn graph representations.
A typical application of GNNs in NLP is TC. GNNs utilize the inter-relations of documents or words to infer document labels [102â104]. In what follows, we review some variants of GCNs that are developed for TC.
Peng et al. [105] propose a graph-CNN based DL model to first convert text to graph-of-words, and then use graph convolution operations to convolve the word graph, as shown in Fig. 10. They show through experiments that the graph-of-words representation of texts has the advantage of capturing non-consecutive and long-distance semantics, and CNN models have the advantage of learning different level of semantics.
In [106], Peng et al. propose a TC model based on hierarchical taxonomy-aware and attentional graph capsule CNNs. One unique feature of the model is the use of the hierarchical relations among the class labels, which
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review + 13
Wordvector-Dchonnel 4s perme Max Pacing 42 hemel == on (© Representing by hgh dimension semantic ec tee tannery (esata wseipr Graph Generation âGraph Preprocessing âDeep Convolution Neural Networks Layers Full Connection Layers Sigmoid Layer
Fig. 10. The architecture of GNN used by Peng et al. [105].
in previous methods are considered independent. Specifically, to leverage such relations, the authors develop a hierarchical taxonomy embedding method to learn their representations, and define a novel weighted margin loss by incorporating the label representation similarity.
Yao et al. [107] use a similar Graph CNN (GCNN) model for TC. They build a single text graph for a corpus based on word co-occurrence and document word relations, then learn a Text Graph Convolutional Network (Text GCN) for the corpus, as shown in Fig. 11. The Text GCN is initialized with one-hot representation for word and document, and then jointly learns the embeddings for both words and documents, as supervised by the known class labels for documents.
# Fig. 11. The architecture of GCNN [107].
Building GNNs for a large-scale text corpus is costly. There have been works on reducing the modeling cost by either reducing the model complexity or changing the model training strategy. An example of the former is the Simple Graph Convolution (SGC) model proposed in [108], where a deep convolutional GNN is simplified by repeatedly removing the non-linearities between consecutive layers and collapsing the resulting functions (weight matrices) into a single linear transformation. An example of the latter is the text-level GNN [109]. Instead of building a graph for an entire text corpus, a text-level GNN produces one graph for each text chunk defined by a sliding window on the text corpus so as to reduce the memory consumption during training. Some of the other promising GNN based works include, GraphSage [103], and contextualized non-local neural networks [110].
2.8 Siamese Neural Networks Siamese neural networks (S2Nets) [111, 112] and their DNN variants, known as Deep Structured Semantic Models (DSSMs) [113, 114], are designed for text matching. The task is fundamental to many NLP applications, such as query-document ranking and answer selection in extractive QA. These tasks can be viewed as special cases of TC. For example, in question-document ranking, we want to classify a document as relevant or irrelevant to a given query.
, Vol. 1, No. 1, Article . Publication date: February 2020.
13
14
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
14 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
sim(x,y) t t fi L T x vy
Fig. 12. The architecture of a DSSM, illustrated in [115]
Fig. 12. The architecture of a DSSM, illustrated in [115]
As illustrated in Fig. 12, a DSSM (or a S2Net) consists of a pair of DNNs, ð1 and ð2, which map inputs ð¥ and ð¦ into corresponding vectors in a common low-dimensional semantic space [115]. Then the similarity of ð¥ and ð¦ is measured by the cosine distance of the two vectors. While S2Nets assume that ð1 and ð2 share the same architecture and even the same parameters, in DSSMs, ð1 and ð2 can be of different architectures depending on ð¥ and ð¦. For example, to compute the similarity of an image-text pair, ð1 can be a deep CNN and ð2 an RNN or MLP. These models can be applied to a wide range of NLP tasks depending on the definition of (ð¥, ð¦). For example, (ð¥, ð¦) could be a query-document pair for query-document ranking [114, 116], or a question-answer pair in QA [117, 118].
The model parameters ð are often optimized using a pair-wise rank loss. Take document ranking as an example. Consider a query ð¥ and two candidate documents ð¦+ and ð¦â, where ð¦+ is relevant to ð¥ and ð¦â is not. Let simð (ð¥, ð¦) be the cosine similarity of ð¥ and ð¦ in the semantic space parameterized by ð . The training objective is to minimize the margin-based loss as
L(6) = [y + simg(x, y7) â simg(x, y*)], , (1)
where [ð¥]+ := max(0, ð¥) and ð¾ is the margin hyperparameter.
Since texts exhibit a sequential order, it is natural to implement ð1 and ð2 using RNNs or LSTMs to measure the semantic similarity between texts. Fig. 13 shows the architecture of the siamese model proposed in [119], where the two networks use the same LSTM model. Neculoiu et al. [120] present a similar model that uses character-level Bi-LSTMs for ð1 and ð2, and the cosine function to calculate the similarity. Liu et al. [121] model the interaction of a sentence pair with two coupled-LSTMs. In addition to RNNs, BOW models and CNNs are also used in S2Nets to represent sentences. For example, He et al. [122] propose a S2Net that uses CNNs to model multi-perspective sentence similarity. Renter et al. [123] propose a Siamese CBOW model which forms a sentence vector representation by averaging the word embeddings of the sentence, and calculates the sentence similarity as cosine similarity between sentence vectors. As BERT becomes the new state of the art sentence embedding model, there have been attempts to building BERT-based S2Nets, such as SBERT [124] and TwinBERT [125].
S2Nets and DSSMs have been widely used for QA. Das et al. [117] propose a Siamese CNN for QA (SCQA) to measure the semantic similarity between a question and its (candidate) answers. To reduce the computational complexity, SCQA uses character-level representations of question-answer pairs. The parameters of SCQA is trained to maximize the semantic similarities between a question and its relevant answers, as Equation 1, where ð¥ is a question and ð¦ its candidate answer. Tan et al. [118] present a series of siamese neural networks for answer selection. As shown in Fig. 14, these are hybrid models that process text using convolutional, recurrent, and attention neural networks. Other siamese neural networks developed for QA include LSTM-based models for non-factoid answer selection [126], Hyperbolic representation learning [127], and QA using a deep similarity neural network [128].
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review +
PLSTM,. ¢ ELSTM, PHOH®. (HL Fi f LT: T T cae ame A truly wise man,
Fig. 13. The architecture of the Siamese model proposed by Mueller et al. [119].
poolin ee Bist bitsTM Question WEs Answer WEs ha(t) = ha()sag(0)
Cq 8a poolin Output yer Output er ee Bist bitsTM [ CNN CNN bitsTM tT T F bist | | ra * Question WEs Answer WEs [Question wes Answer WES ha(t) = ha()sag(0)
Cq 8a Output yer Output er [ CNN CNN bitsTM tT T F bist | | ra * [Question wes Answer WES
Fig. 14. The architectures of the Siamese models studied in [118].
Fig. 14. The architectures of the Siamese models studied in [118].
2.9 Hybrid Models Many Hybrid models have been developed to combine LSTM and CNN architectures to capture local and global features of sentences and documents. Zhu et al. [129] propose a Convolutional LSTM (C-LSTM) network. As illustrated in Fig. 15 (a), C-LSTM utilizes a CNN to extract a sequence of higher-level phrase (n-gram) representations, which are fed to a LSTM network to obtain the sentence representation. Similarly, Zhang et al. [130] propose a Dependency Sensitive CNN (DSCNN) for document modeling. As illustrated in Fig. 15 (b), the DSCNN is a hierarchical model, where LSTM learns the sentence vectors which are fed to the convolution and max-pooling layers to generate the document representation.
(a) (b) Convoltion Layer ra Ha cc + [iste ]}+ââ+[ism ] [Kewrage Posting | [Rewrage Pooing] [Average Footing | window feature sequence | J, ¢ 7 q
Fig. 15. (a) The architecture of C-LSTM [129]. (b) The architecture of DSCNN for document modeling [130].
, Vol. 1, No. 1, Article . Publication date: February 2020.
15
16
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
16 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
Chen et al. [131] perform multi-label TC through a CNN-RNN model that is able to capture both global and local textual semantics and, hence, to model high-order label correlations while having a tractable computational complexity. Tang et al. [132] use a CNN to learn sentence representations, and a gated RNN to learn a document representation that encodes the intrinsic relations between sentences. Xiao et al. [133] view a document as a sequence of characters, instead of words, and propose to use both character-based convolution and recurrent layers for document encoding. This model achieved comparable performances with much less parameters, compared with word-level models. The Recurrent CNN [134] applies a recurrent structure to capture long-range contextual dependence for learning word representations. To reduce the noise, max-pooling is employed to automatically select only the salient words that are crucial to the text classification task.
Chen et al. [135] propose a divide-and-conquer approach to sentiment analysis via sentence type classification, motivated by the observation that different types of sentences express sentiment in very different ways. The authors first apply a Bi-LSTM model to classify opinionated sentences into three types. Each group of sentences is then fed to a one-dimensional CNN separately for sentiment classification.
In [136], Kowsari et al. propose a Hierarchical Deep Learning approach for Text classification (HDLTex). HDLTex employs stacks of hybrid DL model architectures, including MLP, RNN and CNN, to provide specialized understanding at each level of the document hierarchy.
Liu [137] propose a robust Stochastic Answer Network (SAN) for multi-step reasoning in machine reading comprehension. SAN combines neural networks of different types, including memory networks, Transforms, Bi-LSTM, attention and CNN. The Bi-LSTM component obtains the context representations for questions and passages. Its attention mechanism derives a question-aware passage representation. Then, another LSTM is used to generate a working memory for the passage. Finally, a Gated Recurrent Unit based answer module outputs predictions.
Several studies have been focused on combining highway networks with RNNs and CNNs. In typical multi-layer neural networks, information flows layer by layer. Gradient-based training of a DNN becomes more difficult with increasing depth. Highway networks [138] are designed to ease training of very deep neural networks. They allow unimpeded information flow across several layers on information highways, similar to the shortcut connections in ResNet [139]. Kim et al. [140] employ a highway network with CNN and LSTM over characters for language modeling. As illustrated in Fig. 16, the first layer performs a lookup of character embeddings, then convolution and max-pooling operations are applied to obtain a fixed-dimensional representation of the word, which is given to the highway network. The highway networkâs output is used as the input to a multi-layer LSTM. Finally, an affine transformation followed by a softmax is applied to the hidden representation of the LSTM to obtain the distribution over the next word. Other highway-based hybrid models include recurrent highway networks [141], and RNN with highway [142].
2.10 Transformers and Pre-Trained Language Models One of the computational bottlenecks suffered by RNNs is the sequential processing of text. Although CNNs are less sequential than RNNs, the computational cost to capture relationships between words in a sentence also grows with the increasing length of the sentence, similar to RNNs. Transformers [5] overcome this limitation by applying self-attention to compute in parallel for every word in a sentence or document an âattention scoreâ to model the influence each word has on another 2. Due to this feature, Transformers allow for much more parallelization than CNNs and RNNs, which makes it possible to efficiently train very big models on large amounts of data on GPUs.
2Strictly speaking, Transformer is an instance of hybrid models (2.9), since each Transformer layer is a composite structure consisting of a feed-forward layer and a multi-head attention layer.
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review + 17
io} J 3 zg 8 _ z GH-O : » | ri S [ e i s) Oe low 4 2 / i , S. | g + 0 a B - & g i â i 3 & JL J JL L JL L J Coneatenation Convolution layer with Max- â Highway network Long short-term Softmax Cross entropy of character multiple filters of over- memory network output to loss between embeddings different widths time obtain the next word pooling distribution and prediction layer over next word
Fig. 16. The architecture of the highway network with CNN and LSTM [140].
Since 2018 we have seen the rise of a set of large-scale Transformer-based Pre-trained Language Models (PLMs). Compared to earlier contextualized embedding models based on CNNs [143] or LSTMs [4], Transformer-based PLMs use much deeper network architectures (e.g., 48-layer Transformers [144]), and are pre-trained on much larger amounts of text corpora to learn contextual text representations by predicting words conditioned on their context. These PLMs are fine-tuned using task-specific labels, and have created new state of the art in many downstream NLP tasks, including TC. Although pre-training is unsupervised (or self-supervised), fine-tuning is supervised learning. A recent survey of Qiu et al. [145] categories popular PLMs by their representation types, model architectures, pre-training tasks, and downstream tasks.
PLMs can be grouped into two categories, autoregressive and autoencoding PLMs. One of the earliest autore- gressive PLMs is OpenGPT [6, 144], a unidirectional model which predicts a text sequence word by word from left to right (or right to left), with each word prediction depending on previous predictions. Fig. 17 shows the architecture of OpenGPT. It consists of 12 layers of Transformer blocks, each consisting of a masked multi-head attention module, followed by a layer normalization and a position-wise feed forward layer. OpenGPT can be adapted to downstream tasks such as TC by adding task-specific linear classifiers and fine-tuning using task-specific labels.
One of the most widely used autoencoding PLMs is BERT [7]. Unlike OpenGPT which predicts words based on previous predictions, BERT is trained using the masked language modeling (MLM) task that randomly masks some tokens in a text sequence, and then independently recovers the masked tokens by conditioning on the encoding vectors obtained by a bidirectional Transformer. There have been numerous works on improving BERT. RoBERTa [146] is more robust than BERT, and is trained using much more training data. ALBERT [147] lowers the memory consumption and increases the training speed of BERT. DistillBERT [148] utilizes knowledge distillation during pre-training to reduce the size of BERT by 40% while retaining 99% of its original capabilities and making the inference 60% faster. SpanBERT [149] extends BERT to better represent and predict text spans. Electra [150] uses a more sample-efficient pre-training task than MLM, called replaced token detection. Instead of masking the input, it corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. ERNIE [151, 152] incorporates domain knowledge from external knowledge bases, such as named entities, for
, Vol. 1, No. 1, Article . Publication date: February 2020.
17
18
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
18 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
{ Layer Norm Stet | Text1 | delim | Text? | Exact Similarity stat | Text2 | datim | Text1 | extract Entailment [stat | Premise | beim | Hypothesis | Exract = Tans [tes] âTransformer |â + +) âTranstormer |â stat | Context | datim | Answer1 | exact | ->| Transformer ic Multiple Chovee [start [Context | Dalim | Answer2 | Exract | -+[Tansformer|}+{ Linear | [rearsion enbea] sen | Context | beim | AnswerN | Baract |/+[Transionmer
# Fig. 17. The architecture of OpenGPT-1 [144]
model pre-training. ALUM [14] introduces an adversarial loss for model pretraining that improves the modelâs generalization to new tasks and robustness to adversarial attacks. BERT and its variants have been fine-tuned for various NLP tasks, including QA [153], TC [154], and NLI [23, 155].
There have been attempts to combine the strengths of autoregressive and autoencoding PLMs. XLNet [156] integrates the idea of autoregressive models like OpenGPT and bi-directional context modeling of BERT. XLNet makes use of a permutation operation during pre-training that allows context to include tokens from both left and right, making it a generalized order-aware autoregressive language model. The permutation is achieved by using a special attention mask in Transformers. XLNet also introduces a two-stream self-attention schema to allow position-aware word prediction. This is motivated by the observation that word distributions vary greatly depending on word positions. For example, the beginning of a sentence has a considerably different distribution from other positions in the sentence. As show in Fig. 18, to predict the word token in position 1 in a permutation 3-2-4-1, a content stream is formed by including the positional embeddings and token embeddings of all previous words (3, 2, 4), then a query stream is formed by including the content stream and the positional embedding of the word to be predicted (word in position 1), and finally the model makes the prediction based on information from the query stream.
fe) ea @ 3 » Attontion Masks ED (2) (o) (2) ) i o) Saupe a factorization onder: Sonne ee
Fig. 18. The architecture of XLNet [156]: a) Content stream attention, b) Query stream attention, c) Overview of the permutation language modeling training with two- stream attention.
, Vol. 1, No. 1, Article . Publication date: February 2020.
Deep Learning Based Text Classification: A Comprehensive Review â¢
As mentioned earlier, OpenGPT uses a left-to-right Transformer to learn text representation for natural language generation, while BERT uses a bidirectional transformer for natural language understanding. The Unified language Model (UniLM) [157] is designed to tackle both natural language understanding and generation tasks. UniLM is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is achieved by employing a shared Transformer network and utilizing specific self-attention masks to control what context the prediction conditions on, as shown in Fig. 19. The second version of UniLM [158] is reported to achieve new state of the art on a wide range of natural language understanding and generation tasks, significantly outperforming previous PLMs, including OpenGPT-2, XLNet, BERT and its variants.
Ss, s Allow to attend 5ogoG EB Prevent from attending s, ;00 O0000 . /O8 aw 000 505] [5 J (eos) (=, ] [EOS GJ GJ GJ) GJ) or⢠âS,&S,: attend to all tokens S. 7 S â [Transformer Block L_ Ti Segment 1 Segment 2 ' oO irastomer ' t | Leftto-Rightim ; : Transformer Block 2 ' 00 â Transformer Block 1 o o \__Transformer_] ! f ! Se-atendtotetcontexr [808] [S:] [S-] [s+] [Eos] s, âSegment 1 O00 Transformer s j00 OOO s, /GU00B8 Cat Unified LM with oooo00 Transformer hared P: SS SS aoe) Shared Parameters Stool coace (BS) C8) Eos) Css] Self-attention Masks = Segment 1 Segment 2
Fig. 19. Overview of UniLM pre-training [157]. The model parameters are shared across the language modeling objectives i.e., bidirectional, unidirectional, and sequence-to-sequence language modeling. Different self-attention masks are used to control the access to context for each word token.
Raffel et al. [159] present a unified Transformer-based framework that converts many NLP problems into a text-to-text format. They also conduct a systematic study to compare pre-training objectives, architectures, unlabeled datasets, fine-tuning approaches, and other factors on dozens of language understanding tasks.
# 2.11 Beyond Supervised Learning
Unsupervised Learning using Autoencoders. Similar to word embeddings, distributed representations for sentences can also be learned in an unsupervised fashion. by optimizing some auxiliary objectives, such as the reconstruction loss of an autoencoder [160]. The result of such unsupervised learning are sentence encoders, which can map sentences with similar semantic and syntactic properties to similar fixed-size vector representations. The Transformer-based PLMs described in Section 2.10 are also unsupervised models that can be used as sentence encoders. This section discusses unsupervised models based on auto-encoders and their variants.
Kiros et al. [161] propose the Skip-Thought model for unsupervised learning of a generic, sentence encoder. An encoder-decoder model is trained to reconstruct the surrounding sentences of an encoded sentence. Dai and
, Vol. 1, No. 1, Article . Publication date: February 2020.
19
20
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
20 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
(b) gâLFeLP ferelel } pte) â alhlX) | Inference Network) Oo6lh pl @eee|h or) a p(X|h) | OOOOOO |X (c) RNNS work <EOS> u 3 RNNs work <E0S> RNNs work
(b) gâLFeLP ferelel } pte) â alhlX) | Inference Network) Oo6lh pl yl2â2 Ze) OOly @eee|h or) a p(X|h) | OOOOOO |X
(c) RNNS work <EOS> u 3 RNNs work <E0S> RNNs work
Fig. 20. (a) The neural variational document model for document modeling [166]. (b) The neural answer selection model for QA [166]. (c) The RNN-based variational autoencoder language model [167].
Le [162] investigate the use of a sequence autoencoder, which reads the input sequence into a vector and predicts the input again, for sentence encoding. They show that pre-training sentence encoders on a large unsupervised corpus yields better accuracy than only pre-training word embeddings. Zhang et al. [163] propose a mean-max attention autoencoder, which uses the multi-head self-attention mechanism to reconstruct the input sequence. A mean-max strategy is used in encoding, where both mean and max pooling operations over the hidden vectors are applied to capture diverse information of the input.
While autoencoders learn a compressed representation of input, Variational AutoEncoders (VAEs) [164, 165] learn a distribution representing the data, and can be viewed as a regularized version of the autoencoder [26]. Since a VAE learns to model the data, we can easily sample from the distribution to generate new samples (e.g., new sentences). Miao et al. [166] extend the VAE framework to text, and propose a Neural Variational Document Model (NVDM) for document modeling and a Neural Answer Selection Model (NASM) for QA. As shown in Fig. 20 (a), the NVDM uses an MLP encoder to map a document to a continuous semantic representation. As shown in Fig. 20 (b), the NASM uses LSTM and a latent stochastic attention mechanism to model the semantics of question-answer pairs and predicts their relatedness. The attention model focuses on the phrases of an answer that are strongly connected to the question semantics and is modeled by a latent distribution, allowing the model to deal with the ambiguity inherent in the task. Bowman et al. [167] propose an RNN-based VAE language model, as shown in Fig. 20 (c). This model incorporates distributed latent representations of entire sentences, allowing to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features. Gururangan et al. [168] pre-train a document model as a VAE on in-domain, unlabeled data and use its internal states as features for text classification. In general, data augmentation using VAE or other models [169, 170] is widely used for semi-supervised or weakly supervised TC.
Adversarial Training. Adversarial training [171] is a regularization method for improving the generalization of a classifier. It does so by improving modelâs robustness to adversarial examples, which are created by making small perturbations to the input. Adversarial training requires the use of labels, and is applied to supervised learning. Virtual adversarial training [172] extend adversarial training to semi-supervised learning. This is done by regularizing a model so that given an example, the model produces the same output distribution as it produces on an adversarial perturbation of that example. Miyato et al. [173] extend adversarial and virtual adversarial
, Vol. 1, No. 1, Article . Publication date: February 2020.
Deep Learning Based Text Classification: A Comprehensive Review â¢
training to supervised and semi-supervised TC tasks by applying perturbations to the word embeddings in an RNN rather than the original input itself. Sachel et al. [174] study LSTM models for semi-supervised TC. They find that using a mixed objective function that combines cross-entropy, adversarial, and virtual adversarial losses for both labeled and unlabeled data, leads to a significant improvement over supervised learning approaches. Liu et al. [175] extend adversarial training to the multi-task learning framework for TC [36], aiming to alleviate the task-independent (shared) and task-dependent (private) latent feature spaces from interfering with each other.
Reinforcement Learning. Reinforcement learning (RL) [176] is a method of training an agent to perform discrete actions according to a policy, which is trained to maximize a reward. Shen et al. [177] use a hard attention model to select a subset of critical word tokens of an input sequence for TC. The hard attention model can be viewed as an agent that takes actions of whether to select a token or not. After going through the entire text sequence, it receives a classification loss, which can be used as the reward to train the agent. Liu et al. [178] propose a neural agent that models TC as a sequential decision process. Inspired by the cognitive process of human text reading, the agent scans a piece of text sequentially and makes classification decisions at the time it wishes. Both the classification result and when to make the classification are part of the decision process, controlled by a policy trained with RL. Shen et al. [179] present a multi-step Reasoning Network (ReasoNet) for machine reading comprehension. ReasoNets tasks multiple steps to reason over the relation among queries, documents, and answers. Instead of using a fixed number of steps during inference, ReasoNets introduce a termination state to relax this constraint on the reasoning steps. With the use of RL, ReasoNets can dynamically determine whether to continue the comprehension process after digesting intermediate results, or to terminate reading when it concludes that existing information is adequate to produce an answer. Li et al. [180] combine RL, GANs, and RNNs to build a new model, termed Category Sentence Generative Adversarial Network (CS-GAN), which is able to generate category sentences that enlarge the original dataset and to improve its generalization capability during supervised training. Zhang et al. [181] propose a RL-based method of learning structured representations for text classification. They propose two LSTM-based models. The first one selects only important, task-relevant words in the input text. The other one discovers phrase structures of sentences. Structure discovery using these two models is formulated as a sequential decision process guided by a policy network, which decides at each step which model to use, as illustrated in Fig. 21. The policy network is optimized using policy gradient.
Policy Network(PNet) Structured Representation Model Classification Network(CNet) a a = A a| action action Ag AL, | representation ( x 4 sb â7 SON ~ ~ t n, 7 % w~ON ââ_ 4p / \ state (8 >{ Sp pes SL) LG: ($1 {82 )->s+-> 5, > NX Na? Y > = 7 if word input X; 0X2 Xe HE} ââ+|U t Delayed Reward: P(|X)
Fig. 21. The RL-based method of learning structured representations for text classification [181]. The policy network samples an action at each state. The structured representation model updates the state and outputs the final sentence representation to the classification network at the end of the episode. The text classification loss is used as a (negative) reward to train the policy.
As a summary of this section, Figure 22 illustrates the timeline of some of the most popular DL-based models for TC since 2013.
, Vol. 1, No. 1, Article . Publication date: February 2020.
21
22
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
22 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
2013 2014 2015 2016 2017 2018 2019 2020
Fig. 22. Some of the most prominent deep learning models for text embedding and classification published from 2013 to 2020.
3 HOW TO CHOOSE THE BEST NEURAL NETWORK MODEL FOR MY TASK The answer to âwhat the best neural network architecture is for TC?â varies greatly depending on the nature of the target task and domain, the availability of in-domain labels, the latency and capacity constraints of the applications, and so on. Although it has no doubt that developing a text classifier is a trial-and-error process, by analyzing recent results on public benchmarks (e.g., GLUE [22]), we propose the following recipe to make the process easier. The recipe consists of five steps:
(1) PLM Selection. As will be shown in Section 5, using PLMs leads to significant improvements across all popular text classification tasks, and autoencoding PLMs (e.g., BERT or RoBERTa) often work better than autoregressive PLMs (e.g., OpenAI GPT). Hugging Face3 maintains a rich repository of PLMs developed for various tasks and settings.
(2) Domain adaptation. Most PLMs are trained on general-domain text corpora (e.g., Web). If the target domain is dramatically different from general domain, we might consider adapting the PLM using in-domain data by continual pre-training the selected general-domain PLM. For domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch might also be a good choice [182]. (3) Task-specific model design. Given input text, the PLM produces a sequence of vectors in the contextual representation. Then, one or more task-specific layers are added on the top to generate the final output for the target task. The choice of the architecture of task-specific layers depends on the nature of the task, e.g., the linguistic structure of text needs to be captured. As described in Section 2, feed-forward neural networks view text as a bag of words, RNNs can capture word orders, CNNs are good at recognizing patterns such as key phrases, attention mechanisms are effective to identify correlated words in text, Siamese NNs are used for text matching tasks, and GNNs can be a good choice if graph structures of natural language (e.g., parse trees) are useful for the target task.
(4) Task-specific fine-tuning. Depending on the availability of in-domain labels, the task-specific layers can be either trained alone with the PLM fixed or trained together with the PLM. If multiple similar text classifiers need to be built (e.g., news classifiers for different domains), multi-task fine-tuning [23] is a good choice to leverage labeled data of similar domains.
3https://huggingface.co/
, Vol. 1, No. 1, Article . Publication date: February 2020.
Deep Learning Based Text Classification: A Comprehensive Review ⢠23
(5) Model compression. PLMs are expensive to serve. They often need to be compressed via e.g., knowledge distillation [183, 184] to meet the latency and capacity constraints in real-world applications.
4 TEXT CLASSIFICATION DATASETS This section describes the datasets that are widely used for TC research. We group these datasets, based on their main target applications, into such categories as sentiment analysis, news categorization, topic classification, QA, and NLI.
# 4.1 Sentiment Analysis Datasets
Yelp. Yelp [185] dataset contains the data for two sentiment classification tasks. One is to detect fine-grained sentiment labels and is called Yelp-5. The other predicts the negative and positive sentiments, and is known as Yelp Review Polarity or Yelp-2. Yelp-5 has 650,000 training samples and 50,000 test samples for each class, and Yelp-2 includes 560,000 training samples and 38,000 test samples for negative and positive classes.
IMDb. The IMDB dataset [186] is developed for the task of binary sentiment classification of movie reviews. IMDB consists of equal number of positive and negative reviews. It is evenly divided between training and test sets with 25,000 reviews for each.
Movie Review. The Movie Review (MR) dataset [187] is a collection of movie reviews developed for the task of detecting the sentiment associated with a particular review and determining whether it is negative or positive. It includes 10,662 sentences with even numbers of negative and positive samples. 10-fold cross validation with random split is usually used for testing on this dataset.
SST. The Stanford Sentiment Treebank (SST) dataset [43] is an extended version of MR. Two versions are available, one with fine-grained labels (five-class) and the other binary labels, referred to as SST-1 and SST- 2, respectively. SST-1 consists of 11,855 movie reviews which are divided into 8,544 training samples, 1,101 development samples, and 2,210 test samples. SST-2 is partitioned into three sets with the sizes of 6,920, 872 and 1,821 as training, development and test sets, respectively.
MPQA. The Multi-Perspective Question Answering (MPQA) dataset [188] is an opinion corpus with two class labels. MPQA consists of 10,606 sentences extracted from news articles related to a wide variety of news sources. This is an imbalanced dataset with 3,311 positive documents and 7,293 negative documents.
Amazon. This is a popular corpus of product reviews collected from the Amazon website [189]. It contains labels for both binary classification and multi-class (5-class) classification. The Amazon binary classification dataset consists of 3,600,000 and 400,000 reviews for training and test, respectively. The Amazon 5-class classification dataset (Amazon-5) consists of 3,000,000 and 650,000 reviews for training and test, respectively.
# 4.2 News Classification Datasets
AG News. The AG News dataset [50] is a collection of news articles collected from more than 2,000 news sources by ComeToMyHead, an academic news search engine. This dataset includes 120,000 training samples and 7,600 test samples. Each sample is a short text with a four-class label.
20 Newsgroups. The 20 Newsgroups dataset [190] is a collection of newsgroup documents posted on 20 different topics. Various versions of this dataset are used for text classification, text clustering and so one. One of the most popular versions contains 18,821 documents that are evenly classified across all topics.
, Vol. 1, No. 1, Article . Publication date: February 2020.
24
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
24 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
Sogou News. The Sogou News dataset [154] is a mixture of the SogouCA and SogouCS news corpora. The classification labels of the news are determined by their domain names in the URL. For example, the news with URL http://sports.sohu.com is categorized as a sport class.
Reuters news. The Reuters-21578 dataset [191] is one of the most widely used data collections for text categorization, and is collected from the Reuters financial newswire service in 1987. ApteMod is a multi-class version of Reuters-21578 with 10,788 documents. It has 90 classes, 7,769 training documents and 3,019 test documents. Other datasets derived from a subset of the Reuters dataset include R8, R52, RCV1, and RCV1-v2. Other datasets developed for news categorization includes: Bing news [192], BBC [193], Google news [194].
# 4.3 Topic Classification Datasets
DBpedia. The DBpedia dataset [195] is a large-scale, multilingual knowledge base that has been created from the most commonly used infoboxes within Wikipedia. DBpedia is published every month and some classes and properties are added or removed in each release. The most popular version of DBpedia contains 560,000 training samples and 70,000 test samples, each with a 14-class label.
Ohsumed. The Ohsumed collection [196] is a subset of the MEDLINE database. Ohsumed contains 7,400 docu- ments. Each document is a medical abstract that is labeled by one or more classes selected from 23 cardiovascular diseases categories.
EUR-Lex. The EUR-Lex dataset [197] includes different types of documents, which are indexed according to several orthogonal categorization schemes to allow for multiple search facilities. The most popular version of this dataset is based on different aspects of European Union law and has 19,314 documents and 3,956 categories.
WOS. The Web Of Science (WOS) dataset [136] is a collection of data and meta-data of published papers available from the Web of Science, which is the worldâs most trusted publisher-independent global citation database. WOS has been released in three versions: WOS-46985, WOS-11967 and WOS-5736. WOS-46985 is the full dataset. WOS-11967 and WOS-5736 are two subsets of WOS-46985.
PubMed. PubMed [198] is a search engine developed by the National Library of Medicine for medical and biological scientific papers, which contains a document collection. Each document has been labeled with the classes of the MeSH set which is a label set used in PubMed. Each sentence in an abstract is labeled with its role in the abstract using one of the following classes: background, objective, method, result, or conclusion.
Other datasets for topic classification includes PubMed 200k RCT [199], Irony (which is composed of anno- tated comments from the social news website reddit, Twitter dataset for topic classification of tweets, arXiv collection) [200], to name a few.
# 4.4 QA Datasets
SQuAD. Stanford Question Answering Dataset (SQuAD) [24] is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0, the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones [201].
MS MARCO. This dataset is released by Microsoft [202]. Unlike SQuAD where all questions are produced by edits; In MS MARCO, all questions are sampled from user queries and passages from real web documents
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review + 25
using the Bing search engine. Some of the answers in MS MARCO are generative. So, the dataset can be used to develop generative QA systems.
TREC-QA. TREC-QA [203] is one of the most popular and studied datasets for QA research. This dataset has two versions, known as TREC-6 and TREC-50. TREC-6 consists of questions in 6 categories while TREC-50 in fifty classes. For both versions, the training and test datasets contain 5,452 and 500 questions, respectively.
WikiQA. The WikiQA dataset [204] consists of a set of question-answer pairs, collected and annotated for open-domain QA research. The dataset also includes questions for which there is no correct answer, allowing researchers to evaluate answer triggering models.
Quora. The Quora dataset [205] is developed for paraphrase identification (to detect duplicate questions). For this purpose, the authors present a subset of Quora data that consists of over 400,000 question pairs. A binary value is assigned to each question pair indicating whether the two questions are the same or not.
Other datasets for QA includes Situations With Adversarial Generations (SWAG) [206], WikiQA [204], SelQA [207].
# 4.5 NLI Datasets
SNLI. The Stanford Natural Language Inference (SNLI) dataset [208] is widely used for NLI. This dataset consists of 550,152, 10,000 and 10,000 sentence pairs for training, development and test, respectively. Each pair is annotated with one of the three labels: neutral, entailment, contradiction.
Multi-NLI. The Multi-Genre Natural Language Inference (MNLI) dataset [209] is a collection of 433k sentence pairs annotated with textual entailment labels. The corpus is an extension of SNLI, covers a wider range of genres of spoken and written text, and supports a distinctive cross-genre generalization evaluation.
SICK. The Sentences Involving Compositional Knowledge (SICK) dataset [25] consists of about 10,000 English sentence pairs which are annotated with three labels: entailment, contradiction, and neutral.
MSRP. The Microsoft Research Paraphrase (MSRP) dataset [210] is commonly used for the text similarity task. MSRP consists of 4,076 samples for training and 1,725 samples for testing. Each sample is a sentence pair, annotated with a binary label indicating whether the two sentences are paraphrases or not.
Other NLI datasets includes Semantic Textual Similarity (STS) [211], RTE [212], SciTail [213], to name a few.
5 EXPERIMENTAL PERFORMANCE ANALYSIS In this section, we first describe a set of metrics commonly used for evaluating TC modelsâ performance, and then present a quantitative analysis of the performance of a set of DL-based TC models on popular benchmarks.
# 5.1 Popular Metrics for Text Classification
Accuracy and Error Rate. These are primary metrics to evaluate the quality of a classification model. Let TP, FP, TN, FN denote true positive, false positive, true negative, and false negative, respectively. The classification Accuracy and Error Rate are defined in Eq. 2
Accuracy = (TP + TN) ð , Error rate = (FP + FN) ð , (2)
where ð is the total number of samples. Obviously, we have Error Rate = 1 - Accuracy.
Precision / Recall / F1 score. These are also primary metrics, and are more often used than accuracy or error rate for imbalanced test sets, e.g., the majority of the test samples have one class label. Precision and recall for binary classification are defined as Eq. 3. The F1 score is the harmonic mean of the precision and recall, as in Eq. 3. An F1 score reaches its best value at 1 (perfect precision and recall) and worst at 0.
, Vol. 1, No. 1, Article . Publication date: February 2020.
25
26
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
26 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
Precision = TP TP + FP , Recall = TP TP + FN , F1-score = 2 Prec Rec Prec + Rec (3)
For multi-class classification problems, we can always compute precision and recall for each class label and analyze the individual performance on class labels or average the values to get the overall precision and recall.
Exact Match (EM). The exact match metric is a popular metric for question-answering systems, which measures the percentage of predictions that match any one of the ground truth answers exactly. EM is one of the main metrics used for SQuAD.
Mean Reciprocal Rank (MRR). MRR is often used to evaluate the performance of ranking algorithms in NLP tasks such as query-document ranking and QA. MRR is defined in Eq. 4, where ð is a set of all possible answers, and ððððð is the ranking position of the ground-truth answer.
MRR = 1 |ð | ð âï¸ ð=1 1 ððððð . (4)
Other widely used metrics include Mean Average Precision (MAP), Area Under Curve (AUC), False Discovery Rate, False Omission Rate, to name a few.
5.2 Quantitative Results We tabulate the performance of several of the previously discussed algorithms on popular TC benchmarks. In each table, in addition to the results of a set of representative DL models, we also present results using non-deep- learning models which are either previous state of the art or widely used as baselines before the DL era. We can see that across all these tasks, the use of DL models leads to significant improvements.
Table 1 summarizes the results of the models described in Section 2 on several sentiment analysis datasets, including Yelp, IMDB, SST, and Amazon. We can see that significant improvement in accuracy has been obtained since the introduction of the first DL-based sentiment analysis model, e.g., with around 78% relative reduction in classification error (on SST-2).
Table 2 reports the performance on three news categorization datasets (i.e., AG News, 20-NEWS, Sogou News) and two topic classification datasets (i.e., DBpedia and Ohsummed). A similar trend to that in sentiment analysis is observed.
Tables 3 and 4 present the performance of some DL models on SQuAD, and WikiQA, respectively. It is worth noting that on both datasets the significant performance lift is attributed to the use of BERT.
Table 5 presents the results on two NLI datasets (i.e., SNLI and MNLI). We observe a steady performance improvement on both datasets over the last 5 years.
6 CHALLENGES AND OPPORTUNITIES TC has seen a great progress over the last few years, with the help of DL models. Several novel ideas have been proposed (such as neural embedding, attention mechanism, self attention, Transformer, BERT, and XLNet), which lead to the fast progress over the past decade. Despite the progress, there are still challenges to be addressed. This section presents some of these challenges, and discusses research directions that could help advance the field.
New Datasets for More Challenging Tasks. Although a number of large-scale datasets have been collected for common TC tasks in recent years, there remains a need for new datasets for more challenging TC tasks such as QA with multi-step reasoning, text classification for multi-lingual documents, and TC for extremely long documents.
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review + 27
Table 1. Accuracy of deep learning based text classification models on sentiment analysis datasets (in terms of classification accuracy), evaluated on the IMDB, SST, Yelp, and Amazon datasets. Italic indicates the non-deep-learning models.
Method Naive Bayes [43] LDA [214] BoW+SVM [31] tf.Î idf [215] Char-level CNN [50] Deep Pyramid CNN [49] ULMFiT [216] BLSTM-2DCNN [40] Neural Semantic Encoder [95] BCN+Char+CoVe [217] GLUE ELMo baseline [22] BERT ELMo baseline [7] CCCapsNet [76] Virtual adversarial training [173] Block-sparse LSTM [218] BERT-base [7, 154] BERT-large [7, 154] ALBERT [147] Multi-Task DNN [23] Snorkel MeTaL [219] BERT Finetune + UDA [220] RoBERTa (+additional data) [146] XLNet-Large (ensemble) [156] IMDB SST-2 Amazon-2 Amazon-5 Yelp-2 Yelp-5 - 67.40 87.80 88.10 - - 95.40 - - 91.80 - - - 94.10 94.99 95.63 95.79 - 83.20 - 95.80 - 96.21 81.80 - - - - 84.46 - 89.50 89.70 90.30 90.40 90.40 - - 93.20 93.50 94.9 95.20 95.60 96.20 - - - - 94.49 96.68 - - - - - - 94.96 - - 96.04 96.07 - - - 96.50 - 97.60 - - - - 59.46 65.82 - - - - - - 60.95 - - 61.60 62.20 - - - 62.88 - 67.74 - - - - 95.12 97.36 97.84 - - - - - 96.48 - 96.73 98.08 98.19 - - - 97.95 - 98.45 96.40 96.80 - - - - 62.05 69.40 70.02 - - - - - 65.85 - 70.58 71.38 - - - 62.92 - 72.20
Modeling Commonsense Knowledge. Incorporating commonsense knowledge into DL models has a potential to significantly improve model performance, pretty much in the same way that humans leverage commonsense knowledge to perform different tasks. For example, a QA system equipped with a commonsense knowledge base could answer questions about the real world. Commonsense knowledge also helps to solve problems in the case of incomplete information. Using widely held beliefs about everyday objects or concepts, AI systems can reason based on âdefaultâ assumptions about the unknowns in a similar way people do. Although this idea has been investigated for sentiment classification [? ], much more research is required to explore to effectively model and use commonsense knowledge in DL models.
Interpretable DL Models. While DL models have achieved promising performance on challenging benchmarks, most of these models are not interpretable. For example, why does a model outperform another model on one dataset, but underperform on other datasets? What exactly have DL models learned? What is a minimal neural network architecture that can achieve a certain accuracy on a given dataset? Although the attention and self- attention mechanisms provide some insight toward answering these questions, a detailed study of the underlying behavior and dynamics of these models is still lacking. A better understanding of the theoretical aspects of these models can help develop better models curated toward various text analysis scenarios.
, Vol. 1, No. 1, Article . Publication date: February 2020.
27
28
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
28 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
Table 2. Accuracy of classification models on news categorization, and topic classification tasks. Italic indicates the non- deep-learning models.
Method Hierarchical Log-bilinear Model [221] Text GCN [107] Simplfied GCN [108] Char-level CNN [50] CCCapsNet [76] LEAM [84] fastText [30] CapsuleNet B [71] Deep Pyramid CNN [49] ULMFiT [216] L MIXED [174] BERT-large [220] XLNet [156] News Categorization Topic Classification Sogou News DBpedia Ohsumed - - - 98.45 98.72 99.02 98.60 - 99.12 99.20 99.30 99.32 99.38 AG News - 67.61 - 90.49 92.39 92.45 92.50 92.60 93.13 94.99 95.05 - 95.51 20NEWS - 86.34 88.50 - - 81.91 - - - - - - - - - - 95.12 97.25 - 96.80 - 98.16 - - - - 52 68.36 68.50 - - 58.58 55.70 - - - - - -
Table 3. Performance of classification models on SQuAD question answering datasets. Here, the F1 score measures the average overlap between the prediction and ground truth answer. Italic denotes the non-deep-learning models.
SQuAD1.1 SQuAD2.0 Method Sliding Window+Dist. [222] Hand-crafted Features+Logistic Regression [24] BiDAF + Self Attention + ELMo [4] SAN (single model) [137] FusionNet++ (ensemble) [223] SAN (ensemble) [137] BERT (single model) [7] BERT-large (ensemble) [7] BERT + Multiple-CNN [137] XL-Net [156] SpanBERT [149] RoBERTa [146] ALBERT (single model) [147] ALBERT (ensemble) [147] Retro-Reader on ALBERT ELECTRA+ALBERT+EntitySpanFocus EM F1-score 13.00 40.40 78.58 76.82 78.97 79.60 85.08 87.43 - 89.90 88.83 - - - - - 20.00 51.00 85.83 84.39 86.01 86.49 91.83 93.16 - 95.08 94.63 - - - - - EM F1-score - - 63.37 68.65 70.30 71.31 80.00 80.45 84.20 84.64 71.31 86.82 88.10 89.73 90.11 90.42 - - 66.25 71.43 72.48 73.70 83.06 83.51 86.76 88.00 73.70 89.79 90.90 92.21 92.58 92.79
Memory Efficient Models. Most modern neural language models require a significant amount of memory for training and inference. These models have to be compressed in order to meet the computation and storage constraints of edge applications. This can be done either by building student models using knowledge distillation,
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review + 29
# Table 4. Performance of classification models on the WikiQA datasets.
Method MAP MRR Paragraph vector [32] Neural Variational Inference [166] Attentive pooling networks [83] HyperQA [127] BERT (single model) [7] TANDA-RoBERTa [153] 0.511 0.655 0.688 0.712 0.813 0.920 0.516 0.674 0.695 0.727 0.828 0.933
Table 5. Performance of classification models on natural language inference datasets. For Multi-NLI, Matched and Mis- matched refer to the matched and mismatched test accuracies, respectively. Italic denotes the non-deep-learning models.
SNLI MultiNLI Method Unigrams Features [208] Lexicalized [208] LSTM encoders (100D) [208] Tree Based CNN [61] biLSTM Encoder [209] Neural Semantic Encoders (300D) [95] RNN Based Sentence Encoder [224] DiSAN (300D) [81] Decomposable Attention Model [92] Reinforced Self-Attention (300D) [177] Generalized Pooling (600D) [93] Bilateral multi-perspective matching [41] Multiway Attention Network [87] ESIM + ELMo [4] DMAN with Reinforcement Learning [225] BiLSTM + ELMo + Attn [22] Fine-Tuned LM-Pretrained Transformer [6] Multi-Task DNN [23] SemBERT [155] RoBERTa [146] XLNet [156] Accuracy Matched Mismatched 71.6 78.2 77.6 82.1 81.5 84.6 85.5 85.6 86.3 86.3 86.6 87.5 88.3 88.7 88.8 - 89.9 91.6 91.9 92.6 - - - - - 67.5 - 73.2 - - - 73.8 - 78.5 72.9 88.8 74.1 82.1 86.7 84.4 90.8 90.2 - - - - 67.1 - 73.6 - - - 74.0 - 77.7 73.4 78.9 74.5 81.4 86.0 84.0 90.2 89.8
or by using model compression techniques. Developing a task-agnostic model compression method is an active research topic [226].
Few-Shot and Zero-Shot Learning. Most DL models are supervised models that require large amounts of domain labels. In practice, it is expensive to collect such labels for each new domain. Fine-tuning a PLM (e.g., BERT and OpenGPT) to a specific task requires much fewer domain labels than training a model from scratch, thus opening opportunities of developing new zero-shot or few-shot learning methods based on PLMs.
, Vol. 1, No. 1, Article . Publication date: February 2020.
29
30
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
7 CONCLUSION In this paper, we survey more than 150 DL models, which are developed in the past six years and have significantly improved state of the art on various TC tasks. We also provide an overview of more than 40 popular TC datasets, and present a quantitative analysis of the performance of these models on several public benchmarks. Finally, we discuss some of the open challenges and future research directions.
# ACKNOWLEDGMENTS
The authors would like to thank Richard Socher, Kristina Toutanova, Brooke Cowan, and all the anonymous reviewers for reviewing this work and providing very insightful comments.
# REFERENCES
[1] S. Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman, âIndexing by latent semantic analysis,â Journal of the American society for information science, vol. 41, no. 6, pp. 391â407, 1990.
[2] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, âA neural probabilistic language model,â Journal of machine learning research, vol. 3, no. Feb, pp. 1137â1155, 2003.
[3] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, âDistributed representations of words and phrases and their compositionality,â in Advances in neural information processing systems, 2013, pp. 3111â3119.
[4] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, âDeep contextualized word representations,â arXiv preprint arXiv:1802.05365, 2018.
[5] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Å. Kaiser, and I. Polosukhin, âAttention is all you need,â in Advances in neural information processing systems, 2017, pp. 5998â6008.
[6] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, âImproving language understanding by generative pre-training,â URL https://s3-us-west-2. amazonaws. com/openai-assets/researchcovers/languageunsupervised/language understanding paper. pdf, 2018. [7] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, âBert: Pre-training of deep bidirectional transformers for language understanding,â
arXiv preprint arXiv:1810.04805, 2018.
[8] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., âLanguage models are few-shot learners,â arXiv preprint arXiv:2005.14165, 2020.
[9] D. Lepikhin, H. Lee, Y. Xu, D. Chen, O. Firat, Y. Huang, M. Krikun, N. Shazeer, and Z. Chen, âGshard: Scaling giant models with conditional computation and automatic sharding,â arXiv preprint arXiv:2006.16668, 2020.
[10] G. Marcus and E. Davis, Rebooting AI: Building artificial intelligence we can trust. Pantheon, 2019. [11] G. Marcus, âThe next decade in ai: four steps towards robust artificial intelligence,â arXiv preprint arXiv:2002.06177, 2020. [12] Y. Nie, A. Williams, E. Dinan, M. Bansal, J. Weston, and D. Kiela, âAdversarial nli: A new benchmark for natural language understanding,â
arXiv preprint arXiv:1910.14599, 2019.
[13] D. Jin, Z. Jin, J. T. Zhou, and P. Szolovits, âIs bert really robust? natural language attack on text classification and entailment,â arXiv preprint arXiv:1907.11932, vol. 2, 2019.
[14] X. Liu, H. Cheng, P. He, W. Chen, Y. Wang, H. Poon, and J. Gao, âAdversarial training for large neural language models,â arXiv preprint arXiv:2004.08994, 2020.
[15] J. Andreas, M. Rohrbach, T. Darrell, and D. Klein, âLearning to compose neural networks for question answering,â arXiv preprint arXiv:1601.01705, 2016.
[16] M. Iyyer, W.-t. Yih, and M.-W. Chang, âSearch-based neural structured learning for sequential question answering,â in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017, pp. 1821â1831.
[17] I. Schlag, P. Smolensky, R. Fernandez, N. Jojic, J. Schmidhuber, and J. Gao, âEnhancing the transformer with explicit relational encoding for math problem solving,â arXiv preprint arXiv:1910.06611, 2019.
[18] J. Gao, B. Peng, C. Li, J. Li, S. Shayandeh, L. Liden, and H.-Y. Shum, âRobust conversational ai with grounded text generation,â arXiv preprint arXiv:2009.03457, 2020.
[19] K. Kowsari, K. Jafari Meimandi, M. Heidarysafa, S. Mendu, L. Barnes, and D. Brown, âText classification algorithms: A survey,â Information, vol. 10, no. 4, p. 150, 2019.
[20] C. D. Manning, H. Schütze, and P. Raghavan, Introduction to information retrieval. Cambridge university press, 2008. [21] D. Jurasky and J. H. Martin, âSpeech and language processing: An introduction to natural language processing,â Computational
Linguistics and Speech Recognition. Prentice Hall, New Jersey, 2008.
[22] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman, âGlue: A multi-task benchmark and analysis platform for natural language understanding,â arXiv preprint arXiv:1804.07461, 2018.
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
# Deep Learning Based Text Classification: A Comprehensive Review + 31
[23] X. Liu, P. He, W. Chen, and J. Gao, âMulti-task deep neural networks for natural language understanding,â arXiv:1901.11504, 2019. [24] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, âSquad: 100,000+ questions for machine comprehension of text,â arXiv preprint arXiv:1606.05250, 2016.
[25] M. Marelli, L. Bentivogli, M. Baroni, R. Bernardi, S. Menini, and R. Zamparelli, âSemeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment,â in Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), 2014, pp. 1â8. [26] I. Goodfellow, Y. Bengio, and A. Courville, Deep learning. MIT press, 2016. [27] T. Mikolov, K. Chen, G. Corrado, and J. Dean, âEfficient estimation of word representations in vector space,â arXiv preprint arXiv:1301.3781,
2013.
[28] J. Pennington, R. Socher, and C. Manning, âGlove: Global vectors for word representation,â in Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), 2014, pp. 1532â1543.
[29] M. Iyyer, V. Manjunatha, J. Boyd-Graber, and H. Daumé III, âDeep unordered composition rivals syntactic methods for text classification,â in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2015, pp. 1681â1691.
[30] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, and T. Mikolov, âFasttext. zip: Compressing text classification models,â arXiv preprint arXiv:1612.03651, 2016.
[31] S. Wang and C. D. Manning, âBaselines and bigrams: Simple, good sentiment and topic classification,â in Proceedings of the 50th annual meeting of the association for computational linguistics: Short papers-volume 2. Association for Computational Linguistics, 2012, pp. 90â94.
[32] Q. Le and T. Mikolov, âDistributed representations of sentences and documents,â in International conference on machine learning, 2014, pp. 1188â1196.
[33] K. S. Tai, R. Socher, and C. D. Manning, âImproved semantic representations from tree-structured long short-term memory networks,â arXiv preprint arXiv:1503.00075, 2015.
[34] X. Zhu, P. Sobihani, and H. Guo, âLong short-term memory over recursive structures,â in International Conference on Machine Learning, 2015, pp. 1604â1612.
[35] J. Cheng, L. Dong, and M. Lapata, âLong short-term memory-networks for machine reading,â arXiv preprint arXiv:1601.06733, 2016. [36] P. Liu, X. Qiu, X. Chen, S. Wu, and X.-J. Huang, âMulti-timescale long short-term memory neural network for modelling sentences and
documents,â in Proceedings of the 2015 conference on empirical methods in natural language processing, 2015, pp. 2326â2335.
[37] A. B. Dieng, C. Wang, J. Gao, and J. Paisley, âTopicrnn: A recurrent neural network with long-range semantic dependency,â arXiv preprint arXiv:1611.01702, 2016.
[38] P. Liu, X. Qiu, and X. Huang, âRecurrent neural network for text classification with multi-task learning,â arXiv preprint arXiv:1605.05101, 2016.
[39] R. Johnson and T. Zhang, âSupervised and semi-supervised text categorization using lstm for region embeddings,â arXiv preprint arXiv:1602.02373, 2016.
[40] P. Zhou, Z. Qi, S. Zheng, J. Xu, H. Bao, and B. Xu, âText classification improved by integrating bidirectional lstm with two-dimensional max pooling,â arXiv preprint arXiv:1611.06639, 2016.
[41] Z. Wang, W. Hamza, and R. Florian, âBilateral multi-perspective matching for natural language sentences,â arXiv preprint arXiv:1702.03814, 2017.
[42] S. Wan, Y. Lan, J. Guo, J. Xu, L. Pang, and X. Cheng, âA deep architecture for semantic matching with multiple positional sentence representations,â in Thirtieth AAAI Conference on Artificial Intelligence, 2016.
[43] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts, âRecursive deep models for semantic compositionality over a sentiment treebank,â in Proceedings of the 2013 conference on empirical methods in natural language processing, 2013, pp. 1631â1642. [44] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, âGradient-based learning applied to document recognition,â Proceedings of the IEEE,
vol. 86, no. 11, pp. 2278â2324, 1998.
[45] N. Kalchbrenner, E. Grefenstette, and P. Blunsom, âA convolutional neural network for modelling sentences,â in 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014 - Proceedings of the Conference, 2014.
[46] Y. Kim, âConvolutional neural networks for sentence classification,â in EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 2014.
[47] J. Liu, W. C. Chang, Y. Wu, and Y. Yang, âDeep learning for extreme multi-label text classification,â in SIGIR 2017 - Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2017.
[48] R. Johnson and T. Zhang, âEffective use of word order for text categorization with convolutional neural networks,â in NAACL HLT 2015 - 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 2015.
[49] ââ, âDeep pyramid convolutional neural networks for text categorization,â in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017, pp. 562â570.
, Vol. 1, No. 1, Article . Publication date: February 2020.
31
32
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
[50] X. Zhang, J. Zhao, and Y. LeCun, âCharacter-level convolutional networks for text classification,â in Advances in neural information processing systems, 2015, pp. 649â657.
[51] Y. Kim, Y. Jernite, D. Sontag, and A. M. Rush, âCharacter-aware neural language models,â in Thirtieth AAAI Conference on Artificial Intelligence, 2016.
[52] J. D. Prusa and T. M. Khoshgoftaar, âDesigning a better data representation for deep neural networks and text classification,â in Proceedings - 2016 IEEE 17th International Conference on Information Reuse and Integration, IRI 2016, 2016.
[53] K. Simonyan and A. Zisserman, âVery deep convolutional networks for large-scale image recognition,â in 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 2015.
[54] K. He, X. Zhang, S. Ren, and J. Sun, âDeep residual learning for image recognition,â in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016.
[55] A. Conneau, H. Schwenk, L. Barrault, and Y. Lecun, âVery deep convolutional networks for text classification,â arXiv preprint arXiv:1606.01781, 2016.
[56] A. B. Duque, L. L. J. Santos, D. Macêdo, and C. Zanchettin, âSqueezed Very Deep Convolutional Neural Networks for Text Classification,â in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2019. [57] H. T. Le, C. Cerisara, and A. Denis, âDo convolutional networks need to be deep for text classification?â in Workshops at the Thirty-Second
AAAI Conference on Artificial Intelligence, 2018.
[58] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, âDensely connected convolutional networks,â in Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017.
[59] B. Guo, C. Zhang, J. Liu, and X. Ma, âImproving text classification with weighted word embeddings via a multi-channel TextCNN model,â Neurocomputing, 2019.
[60] Y. Zhang and B. Wallace, âA sensitivity analysis of (and practitionersâ guide to) convolutional neural networks for sentence classification,â arXiv preprint arXiv:1510.03820, 2015.
[61] L. Mou, R. Men, G. Li, Y. Xu, L. Zhang, R. Yan, and Z. Jin, âNatural language inference by tree-based convolution and heuristic matching,â arXiv preprint arXiv:1512.08422, 2015.
[62] L. Pang, Y. Lan, J. Guo, J. Xu, S. Wan, and X. Cheng, âText matching as image recognition,â in 30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016.
[63] J. Wang, Z. Wang, D. Zhang, and J. Yan, âCombining knowledge with deep convolutional neural networks for short text classification,â in IJCAI International Joint Conference on Artificial Intelligence, 2017.
[64] S. Karimi, X. Dai, H. Hassanzadeh, and A. Nguyen, âAutomatic Diagnosis Coding of Radiology Reports: A Comparison of Deep Learning and Conventional Classification Methods,â 2017.
[65] S. Peng, R. You, H. Wang, C. Zhai, H. Mamitsuka, and S. Zhu, âDeepMeSH: Deep semantic representation for improving large-scale MeSH indexing,â Bioinformatics, 2016.
[66] A. Rios and R. Kavuluru, âConvolutional neural networks for biomedical text classification: Application in indexing biomedical articles,â in BCB 2015 - 6th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics, 2015.
[67] M. Hughes, I. Li, S. Kotoulas, and T. Suzumura, âMedical Text Classification Using Convolutional Neural Networks,â Studies in Health Technology and Informatics, 2017.
[68] G. E. Hinton, A. Krizhevsky, and S. D. Wang, âTransforming auto-encoders,â in International conference on artificial neural networks. Springer, 2011, pp. 44â51.
[69] S. Sabour, N. Frosst, and G. E. Hinton, âDynamic routing between capsules,â in Advances in neural information processing systems, 2017, pp. 3856â3866.
[70] S. Sabour, N. Frosst, and G. Hinton, âMatrix capsules with em routing,â in 6th international conference on learning representations, ICLR, 2018, pp. 1â15.
[71] W. Zhao, J. Ye, M. Yang, Z. Lei, S. Zhang, and Z. Zhao, âInvestigating capsule networks with dynamic routing for text classification,â arXiv preprint arXiv:1804.00538, 2018.
[72] M. Yang, W. Zhao, L. Chen, Q. Qu, Z. Zhao, and Y. Shen, âInvestigating the transferring capability of capsule networks for text classification,â Neural Networks, vol. 118, pp. 247â261, 2019.
[73] W. Zhao, H. Peng, S. Eger, E. Cambria, and M. Yang, âTowards scalable and reliable capsule networks for challenging NLP applications,â in ACL, 2019, pp. 1549â1559.
[74] J. Kim, S. Jang, E. Park, and S. Choi, âText classification using capsules,â Neurocomputing, vol. 376, pp. 214â221, 2020. [75] R. Aly, S. Remus, and C. Biemann, âHierarchical multi-label classification of text with capsule networks,â in Proceedings of the 57th
Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, 2019, pp. 323â330.
[76] H. Ren and H. Lu, âCompositional coding capsule network with k-means routing for text classification,â arXiv preprint arXiv:1810.09177, 2018.
[77] D. Bahdanau, K. Cho, and Y. Bengio, âNeural machine translation by jointly learning to align and translate,â arXiv preprint arXiv:1409.0473, 2014.
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
# Deep Learning Based Text Classification: A Comprehensive Review + 33
[78] M.-T. Luong, H. Pham, and C. D. Manning, âEffective approaches to attention-based neural machine translation,â arXiv preprint
arXiv:1508.04025, 2015.
[79] Z. Yang, D. Yang, C. Dyer, X. He, A. Smola, and E. Hovy, âHierarchical attention networks for document classification,â in Proceedings of the 2016 conference of the North American chapter of the association for computational linguistics: human language technologies, 2016, pp. 1480â1489.
[80] X. Zhou, X. Wan, and J. Xiao, âAttention-based lstm network for cross-lingual sentiment classification,â in Proceedings of the 2016 conference on empirical methods in natural language processing, 2016, pp. 247â256.
[81] T. Shen, T. Zhou, G. Long, J. Jiang, S. Pan, and C. Zhang, âDisan: Directional self-attention network for rnn/cnn-free language understanding,â in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[82] Y. Liu, C. Sun, L. Lin, and X. Wang, âLearning natural language inference using bidirectional lstm model and inner-attention,â arXiv preprint arXiv:1605.09090, 2016.
[83] C. d. Santos, M. Tan, B. Xiang, and B. Zhou, âAttentive pooling networks,â arXiv preprint arXiv:1602.03609, 2016. [84] G. Wang, C. Li, W. Wang, Y. Zhang, D. Shen, X. Zhang, R. Henao, and L. Carin, âJoint embedding of words and labels for text
classification,â arXiv preprint arXiv:1805.04174, 2018.
[85] S. Kim, I. Kang, and N. Kwak, âSemantic sentence matching with densely-connected recurrent and co-attentive information,â in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 6586â6593.
[86] W. Yin, H. Schütze, B. Xiang, and B. Zhou, âAbcnn: Attention-based convolutional neural network for modeling sentence pairs,â Transactions of the Association for Computational Linguistics, vol. 4, pp. 259â272, 2016.
[87] C. Tan, F. Wei, W. Wang, W. Lv, and M. Zhou, âMultiway attention networks for modeling sentence pairs,â in IJCAI, 2018, pp. 4411â4417. [88] L. Yang, Q. Ai, J. Guo, and W. B. Croft, âanmm: Ranking short answer texts with attention-based neural matching model,â in Proceedings of the 25th ACM international on conference on information and knowledge management, 2016, pp. 287â296.
[89] Z. Lin, M. Feng, C. N. d. Santos, M. Yu, B. Xiang, B. Zhou, and Y. Bengio, âA structured self-attentive sentence embedding,â arXiv preprint arXiv:1703.03130, 2017.
[90] S. Wang, M. Huang, and Z. Deng, âDensely connected cnn with multi-scale feature attention for text classification.â in IJCAI, 2018, pp. 4468â4474.
[91] I. Yamada and H. Shindo, âNeural attentive bag-of-entities model for text classification,â arXiv preprint arXiv:1909.01259, 2019. [92] A. P. Parikh, O. Tackstrom, D. Das, and J. Uszkoreit, âA decomposable attention model for natural language inference,â arXiv preprint
arXiv:1606.01933, 2016.
[93] Q. Chen, Z.-H. Ling, and X. Zhu, âEnhancing sentence embedding with generalized pooling,â arXiv preprint arXiv:1806.09828, 2018. [94] M. E. Basiri, S. Nemati, M. Abdar, E. Cambria, and U. R. Acharya, âAbcdm: An attention-based bidirectional cnn-rnn deep model for
sentiment analysis,â Future Generation Computer Systems, vol. 115, pp. 279â294, 2020.
[95] T. Munkhdalai and H. Yu, âNeural semantic encoders,â in Proceedings of the conference. Association for Computational Linguistics. Meeting, vol. 1. NIH Public Access, 2017, p. 397.
[96] J. Weston, S. Chopra, and A. Bordes, âMemory networks,â in 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 2015.
[97] S. Sukhbaatar, J. Weston, R. Fergus et al., âEnd-to-end memory networks,â in Advances in neural information processing systems, 2015, pp. 2440â2448.
[98] A. Kumar, O. Irsoy, P. Ondruska, M. Iyyer, J. Bradbury, I. Gulrajani, V. Zhong, R. Paulus, and R. Socher, âAsk me anything: Dynamic memory networks for natural language processing,â in 33rd International Conference on Machine Learning, ICML 2016, 2016.
[99] C. Xiong, S. Merity, and R. Socher, âDynamic memory networks for visual and textual question answering,â in 33rd International Conference on Machine Learning, ICML 2016, 2016.
[100] R. Mihalcea and P. Tarau, âTextrank: Bringing order into text,â in Proceedings of the 2004 conference on empirical methods in natural language processing, 2004, pp. 404â411.
[101] Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu, âA comprehensive survey on graph neural networks,â arXiv preprint arXiv:1901.00596, 2019.
[102] T. N. Kipf and M. Welling, âSemi-supervised classification with graph convolutional networks,â arXiv preprint arXiv:1609.02907, 2016. [103] W. Hamilton, Z. Ying, and J. Leskovec, âInductive representation learning on large graphs,â in Advances in neural information processing
systems, 2017, pp. 1024â1034.
[104] P. VeliÄkoviÄ, G. Cucurull, A. Casanova, A. Romero, P. Lio, and Y. Bengio, âGraph attention networks,â arXiv preprint arXiv:1710.10903, 2017.
[105] H. Peng, J. Li, Y. He, Y. Liu, M. Bao, L. Wang, Y. Song, and Q. Yang, âLarge-scale hierarchical text classification with recursively International World Wide Web Conferences regularized deep graph-cnn,â in Proceedings of the 2018 World Wide Web Conference. Steering Committee, 2018, pp. 1063â1072.
[106] H. Peng, J. Li, Q. Gong, S. Wang, L. He, B. Li, L. Wang, and P. S. Yu, âHierarchical taxonomy-aware and attentional graph capsule rcnns for large-scale multi-label text classification,â arXiv preprint arXiv:1906.04898, 2019.
, Vol. 1, No. 1, Article . Publication date: February 2020.
33
34
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
34 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
[107] L. Yao, C. Mao, and Y. Luo, âGraph convolutional networks for text classification,â in Proceedings of the AAAI Conference on Artificial
Intelligence, vol. 33, 2019, pp. 7370â7377.
[108] F. Wu, T. Zhang, A. H. d. Souza Jr, C. Fifty, T. Yu, and K. Q. Weinberger, âSimplifying graph convolutional networks,â arXiv preprint arXiv:1902.07153, 2019.
[109] L. Huang, D. Ma, S. Li, X. Zhang, and H. WANG, âText level graph neural network for text classification,â arXiv:1910.02356, 2019. [110] P. Liu, S. Chang, X. Huang, J. Tang, and J. C. K. Cheung, âContextualized non-local neural networks for sequence learning,â in Proceedings
of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 6762â6769.
[111] J. BROMLEY, J. W. BENTZ, L. BOTTOU, I. GUYON, Y. LECUN, C. MOORE, E. SÃCKINGER, and R. SHAH, âSignature verification using a Siamese time delay neural network,â International Journal of Pattern Recognition and Artificial Intelligence, 1993.
[112] W. tau Yih, K. Toutanova, J. C. Platt, and C. Meek, âLearning discriminative projections for text similarity measures,â in CoNLL 2011 - Fifteenth Conference on Computational Natural Language Learning, Proceedings of the Conference, 2011.
[113] P.-S. Huang, X. He, J. Gao, L. Deng, A. Acero, and L. Heck, âLearning deep structured semantic models for web search using clickthrough data,â in Proceedings of the 22nd ACM international conference on Information & Knowledge Management, 2013, pp. 2333â2338. [114] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil, âA latent semantic model with convolutional-pooling structure for information retrieval,â
in ACM International Conference on Conference on Information and Knowledge Management. ACM, 2014, pp. 101â110.
[115] J. Gao, M. Galley, and L. Li, âNeural approaches to conversational ai,â Foundations and Trends® in Information Retrieval, vol. 13, no. 2-3, pp. 127â298, 2019.
[116] A. Severyn and A. Moschittiy, âLearning to rank short text pairs with convolutional deep neural networks,â in SIGIR 2015 - Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2015.
[117] A. Das, H. Yenala, M. Chinnakotla, and M. Shrivastava, âTogether we stand: Siamese networks for similar question retrieval,â in 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers, 2016.
[118] M. Tan, C. D. Santos, B. Xiang, and B. Zhou, âImproved representation learning for question answer matching,â in 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers, 2016.
[119] J. Mueller and A. Thyagarajan, âSiamese recurrent architectures for learning sentence similarity,â in 30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016.
[120] P. Neculoiu, M. Versteegh, and M. Rotaru, âLearning Text Similarity with Siamese Recurrent Networks,â 2016. [121] P. Liu, X. Qiu, and X. Huang, âModelling interaction of sentence pair with coupled-lstms,â arXiv preprint arXiv:1605.05573, 2016. [122] H. He, K. Gimpel, and J. Lin, âMulti-perspective sentence similarity modeling with convolutional neural networks,â in Conference
Proceedings - EMNLP 2015: Conference on Empirical Methods in Natural Language Processing, 2015.
[123] T. Renter, A. Borisov, and M. De Rijke, âSiamese CBOW: Optimizing word embeddings for sentence representations,â in 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016 - Long Papers, 2016.
[124] N. Reimers and I. Gurevych, âSentence-BERT: Sentence Embeddings using Siamese BERT-Networks,â 2019. [125] W. Lu, J. Jiao, and R. Zhang, âTwinbert: Distilling knowledge to twin-structured bert models for efficient retrieval,â arXiv preprint
arXiv:2002.06275, 2020.
[126] M. Tan, C. d. Santos, B. Xiang, and B. Zhou, âLstm-based deep learning models for non-factoid answer selection,â arXiv preprint arXiv:1511.04108, 2015.
[127] Y. Tay, L. A. Tuan, and S. C. Hui, âHyperbolic representation learning for fast and efficient neural question answering,â in Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, 2018, pp. 583â591.
[128] S. Minaee and Z. Liu, âAutomatic question-answering using a deep similarity neural network,â in 2017 IEEE Global Conference on Signal and Information Processing (GlobalSIP). IEEE, 2017, pp. 923â927.
[129] C. Zhou, C. Sun, Z. Liu, and F. Lau, âA c-lstm neural network for text classification,â arXiv preprint arXiv:1511.08630, 2015. [130] R. Zhang, H. Lee, and D. Radev, âDependency sensitive convolutional neural networks for modeling sentences and documents,â in 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT 2016 - Proceedings of the Conference, 2016.
[131] G. Chen, D. Ye, E. Cambria, J. Chen, and Z. Xing, âEnsemble application of convolutional and recurrent neural networks for multi-label text categorization,â in IJCNN, 2017, pp. 2377â2383.
[132] D. Tang, B. Qin, and T. Liu, âDocument modeling with gated recurrent neural network for sentiment classification,â in Proceedings of the 2015 conference on empirical methods in natural language processing, 2015, pp. 1422â1432.
[133] Y. Xiao and K. Cho, âEfficient character-level document classification by combining convolution and recurrent layers,â arXiv preprint arXiv:1602.00367, 2016.
[134] S. Lai, L. Xu, K. Liu, and J. Zhao, âRecurrent convolutional neural networks for text classification,â in Twenty-ninth AAAI conference on artificial intelligence, 2015.
[135] T. Chen, R. Xu, Y. He, and X. Wang, âImproving sentiment analysis via sentence type classification using bilstm-crf and cnn,â Expert Systems with Applications, vol. 72, pp. 221 â 230, 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/ S0957417416305929
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
# Deep Learning Based Text Classification: A Comprehensive Review + 35
[136] K. Kowsari, D. E. Brown, M. Heidarysafa, K. J. Meimandi, M. S. Gerber, and L. E. Barnes, âHdltex: Hierarchical deep learning for text classification,â in 2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2017, pp. 364â371.
[137] X. Liu, Y. Shen, K. Duh, and J. Gao, âStochastic answer networks for machine reading comprehension,â arXiv:1712.03556, 2017. [138] R. Srivastava, K. Greff, and J. Schmidhuber, âTraining very deep networks,â in Advances in Neural Information Processing Systems, 2015. [139] K. He, X. Zhang, S. Ren, and J. Sun, âDeep residual learning for image recognition,â in Proceedings of the IEEE conference on computer
vision and pattern recognition, 2016, pp. 770â778.
[140] Y. Kim, Y. Jernite, D. Sontag, and A. M. Rush, âCharacter-Aware neural language models,â in 30th AAAI Conference on Artificial Intelligence, AAAI 2016, 2016.
[141] J. G. Zilly, R. K. Srivastava, J. Koutnik, and J. Schmidhuber, âRecurrent highway networks,â in 34th International Conference on Machine Learning, ICML 2017, 2017.
[142] Y. Wen, W. Zhang, R. Luo, and J. Wang, âLearning text representation using recurrent convolutional neural network with highway layers,â arXiv preprint arXiv:1606.06905, 2016.
[143] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa, âNatural language processing (almost) from scratch,â Journal of machine learning research, vol. 12, no. Aug, pp. 2493â2537, 2011.
[144] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, âLanguage models are unsupervised multitask learners,â OpenAI Blog, vol. 1, no. 8, p. 9, 2019.
[145] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang, âPre-trained models for natural language processing: A survey,â arXiv preprint arXiv:2003.08271, 2020.
[146] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, âRoberta: A robustly optimized bert pretraining approach,â arXiv preprint arXiv:1907.11692, 2019.
[147] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, âAlbert: A lite bert for self-supervised learning of language representations,â arXiv preprint arXiv:1909.11942, 2019.
[148] V. Sanh, L. Debut, J. Chaumond, and T. Wolf, âDistilbert, a distilled version of bert: smaller, faster, cheaper and lighter,â arXiv preprint arXiv:1910.01108, 2019.
[149] M. Joshi, D. Chen, Y. Liu, D. S. Weld, L. Zettlemoyer, and O. Levy, âSpanbert: Improving pre-training by representing and predicting spans,â arXiv preprint arXiv:1907.10529, 2019.
[150] K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning, âElectra: Pre-training text encoders as discriminators rather than generators,â arXiv preprint arXiv:2003.10555, 2020.
[151] Y. Sun, S. Wang, Y. Li, S. Feng, X. Chen, H. Zhang, X. Tian, D. Zhu, H. Tian, and H. Wu, âErnie: Enhanced representation through knowledge integration,â arXiv preprint arXiv:1904.09223, 2019.
[152] Y. Sun, S. Wang, Y.-K. Li, S. Feng, H. Tian, H. Wu, and H. Wang, âErnie 2.0: A continual pre-training framework for language understanding.â in AAAI, 2020, pp. 8968â8975.
[153] S. Garg, T. Vu, and A. Moschitti, âTanda: Transfer and adapt pre-trained transformer models for answer sentence selection,â arXiv preprint arXiv:1911.04118, 2019.
[154] C. Sun, X. Qiu, Y. Xu, and X. Huang, âHow to fine-tune bert for text classification?â in China National Conference on Chinese Computational Linguistics. Springer, 2019, pp. 194â206.
[155] Z. Zhang, Y. Wu, H. Zhao, Z. Li, S. Zhang, X. Zhou, and X. Zhou, âSemantics-aware bert for language understanding,â arXiv preprint arXiv:1909.02209, 2019.
[156] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le, âXlnet: Generalized autoregressive pretraining for language understanding,â in Advances in neural information processing systems, 2019, pp. 5754â5764.
[157] L. Dong, N. Yang, W. Wang, F. Wei, X. Liu, Y. Wang, J. Gao, M. Zhou, and H.-W. Hon, âUnified language model pre-training for natural language understanding and generation,â in Advances in Neural Information Processing Systems, 2019, pp. 13 042â13 054.
[158] H. Bao, L. Dong, F. Wei, W. Wang, N. Yang, X. Liu, Y. Wang, S. Piao, J. Gao, M. Zhou et al., âUnilmv2: Pseudo-masked language models for unified language model pre-training,â arXiv preprint arXiv:2002.12804, 2020.
[159] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, âExploring the limits of transfer learning with a unified text-to-text transformer,â arXiv preprint arXiv:1910.10683, 2019.
[160] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, âLearning internal representations by error propagation,â California Univ San Diego La Jolla Inst for Cognitive Science, Tech. Rep., 1985.
[161] R. Kiros, Y. Zhu, R. R. Salakhutdinov, R. Zemel, R. Urtasun, A. Torralba, and S. Fidler, âSkip-thought vectors,â in Advances in neural information processing systems, 2015, pp. 3294â3302.
[162] A. M. Dai and Q. V. Le, âSemi-supervised sequence learning,â in Advances in Neural Information Processing Systems, 2015. [163] M. Zhang, Y. Wu, W. Li, and W. Li, âLearning Universal Sentence Representations with Mean-Max Attention Autoencoder,â 2019. [164] D. P. Kingma and M. Welling, âAuto-encoding variational bayes,â in 2nd International Conference on Learning Representations, ICLR 2014
- Conference Track Proceedings, 2014.
, Vol. 1, No. 1, Article . Publication date: February 2020.
35
36
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
+ Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
[165] D. J. Rezende, S. Mohamed, and D. Wierstra, âStochastic backpropagation and approximate inference in deep generative models,â ICML, 2014.
[166] Y. Miao, L. Yu, and P. Blunsom, âNeural variational inference for text processing,â in International conference on machine learning, 2016. [167] S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio, âGenerating sentences from a continuous space,â in CoNLL 2016 - 20th SIGNLL Conference on Computational Natural Language Learning, Proceedings, 2016.
[168] S. Gururangan, T. Dang, D. Card, and N. A. Smith, âVariational pretraining for semi-supervised text classification,â arXiv preprint arXiv:1906.02242, 2019.
[169] Y. Meng, J. Shen, C. Zhang, and J. Han, âWeakly-supervised neural text classification,â in CIKM, 2018. [170] J. Chen, Z. Yang, and D. Yang, âMixtext: Linguistically-informed interpolation of hidden space for semi-supervised text classification,â
in ACL, 2020.
[171] I. J. Goodfellow, J. Shlens, and C. Szegedy, âExplaining and harnessing adversarial examples,â arXiv preprint arXiv:1412.6572, 2014. [172] T. Miyato, S.-i. Maeda, M. Koyama, K. Nakae, and S. Ishii, âDistributional smoothing with virtual adversarial training,â in ICLR, 2016. [173] T. Miyato, A. M. Dai, and I. Goodfellow, âAdversarial training methods for semi-supervised text classification,â arXiv preprint
arXiv:1605.07725, 2016.
[174] D. S. Sachan, M. Zaheer, and R. Salakhutdinov, âRevisiting lstm networks for semi-supervised text classification via mixed objective function,â in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 6940â6948.
[175] P. Liu, X. Qiu, and X. Huang, âAdversarial multi-task learning for text classification,â arXiv preprint arXiv:1704.05742, 2017. [176] R. S. Sutton and A. G. Barto, Reinforcement learning: An introduction. MIT press, 2018. [177] T. Shen, T. Zhou, G. Long, J. Jiang, S. Wang, and C. Zhang, âReinforced self-attention network: a hybrid of hard and soft attention for
sequence modeling,â arXiv preprint arXiv:1801.10296, 2018.
[178] X. Liu, L. Mou, H. Cui, Z. Lu, and S. Song, âFinding decision jumps in text classification,â Neurocomputing, vol. 371, pp. 177â187, 2020. [179] Y. Shen, P.-S. Huang, J. Gao, and W. Chen, âReasonet: Learning to stop reading in machine comprehension,â in Proceedings of the 23rd
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 1047â1055.
[180] Y. Li, Q. Pan, S. Wang, T. Yang, and E. Cambria, âA generative model for category text generation,â Information Sciences, vol. 450, pp. 301â315, 2018.
[181] T. Zhang, M. Huang, and L. Zhao, âLearning structured representation for text classification via reinforcement learning,â in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[182] Y. Gu, R. Tinn, H. Cheng, M. Lucas, N. Usuyama, X. Liu, T. Naumann, J. Gao, and H. Poon, âDomain-specific language model pretraining for biomedical natural language processing,â arXiv preprint arXiv:2007.15779, 2020.
[183] S. Mukherjee and A. H. Awadallah, âXtremedistil: Multi-stage distillation for massive multilingual models,â in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 2221â2234.
[184] R. Tang, Y. Lu, L. Liu, L. Mou, O. Vechtomova, and J. Lin, âDistilling task-specific knowledge from bert into simple neural networks,â arXiv preprint arXiv:1903.12136, 2019.
[185] https://www.kaggle.com/yelp-dataset/yelp-dataset. [186] https://www.kaggle.com/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews. [187] B. Pang, L. Lee, and S. Vaithyanathan, âThumbs up?: sentiment classification using machine learning techniques,â in Proceedings of the
ACL conference on Empirical methods in natural language processing, 2002, pp. 79â86.
[188] L. Deng and J. Wiebe, âMpqa 3.0: An entity/event-level sentiment corpus,â in Proceedings of the 2015 conference of the North American chapter of the association for computational linguistics: human language technologies, 2015, pp. 1323â1328.
[189] https://www.kaggle.com/datafiniti/consumer-reviews-of-amazon-products. [190] http://qwone.com/~jason/20Newsgroups/. [191] https://martin-thoma.com/nlp-reuters. [192] F. Wang, Z. Wang, Z. Li, and J.-R. Wen, âConcept-based short text classification and ranking,â in Proceedings of the 23rd ACM International
Conference on Conference on Information and Knowledge Management. ACM, 2014, pp. 1069â1078.
[193] D. Greene and P. Cunningham, âPractical solutions to the problem of diagonal dominance in kernel document clustering,â in Proc. 23rd International Conference on Machine learning (ICMLâ06). ACM Press, 2006, pp. 377â384.
[194] A. S. Das, M. Datar, A. Garg, and S. Rajaram, âGoogle news personalization: scalable online collaborative filtering,â in Proceedings of the 16th international conference on World Wide Web. ACM, 2007, pp. 271â280.
[195] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. Van Kleef, S. Auer et al., âDbpediaâa large-scale, multilingual knowledge base extracted from wikipedia,â Semantic Web, vol. 6, no. 2, pp. 167â195, 2015.
[196] http://davis.wpi.edu/xmdv/datasets/ohsumed.html. [197] E. L. Mencia and J. Fürnkranz, âEfficient pairwise multilabel classification for large-scale problems in the legal domain,â in Joint
European Conference on Machine Learning and Knowledge Discovery in Databases. Springer, 2008, pp. 50â65.
[198] Z. Lu, âPubmed and beyond: a survey of web tools for searching biomedical literature,â Database, vol. 2011, 2011.
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
# Deep Learning Based Text Classification: A Comprehensive Review + 37
[199] F. Dernoncourt and J. Y. Lee, âPubmed 200k rct: a dataset for sequential sentence classification in medical abstracts,â arXiv preprint
arXiv:1710.06071, 2017.
[200] B. C. Wallace, L. Kertz, E. Charniak et al., âHumans require context to infer ironic intent (so computers probably do, too),â in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2014, pp. 512â516.
[201] P. Rajpurkar, R. Jia, and P. Liang, âKnow what you donât know: Unanswerable questions for squad,â arXiv preprint:1806.03822, 2018. [202] T. Nguyen, M. Rosenberg, X. Song, J. Gao, S. Tiwary, R. Majumder, and L. Deng, âMs marco: a human-generated machine reading
comprehension dataset,â 2016.
[203] https://cogcomp.seas.upenn.edu/Data/QA/QC/. [204] Y. Yang, W.-t. Yih, and C. Meek, âWikiqa: A challenge dataset for open-domain question answering,â in Proceedings of the 2015 Conference
on Empirical Methods in Natural Language Processing, 2015, pp. 2013â2018.
[205] https://data.quora.com/First-Quora-Dataset-Release-QuestionPairs. [206] R. Zellers, Y. Bisk, R. Schwartz, and Y. Choi, âSwag: A large-scale adversarial dataset for grounded commonsense inference,â arXiv
preprint arXiv:1808.05326, 2018.
[207] T. Jurczyk, M. Zhai, and J. D. Choi, âSelqa: A new benchmark for selection-based question answering,â in 2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI). IEEE, 2016, pp. 820â827.
[208] S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning, âA large annotated corpus for learning natural language inference,â arXiv preprint arXiv:1508.05326, 2015.
[209] A. Williams, N. Nangia, and S. R. Bowman, âA broad-coverage challenge corpus for sentence understanding through inference,â arXiv preprint arXiv:1704.05426, 2017.
[210] B. Dolan, C. Quirk, and C. Brockett, âUnsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources,â in Proceedings of the 20th international conference on Computational Linguistics. ACL, 2004, p. 350.
[211] D. Cer, M. Diab, E. Agirre, I. Lopez-Gazpio, and L. Specia, âSemeval-2017 task 1: Semantic textual similarity-multilingual and cross-lingual focused evaluation,â arXiv preprint arXiv:1708.00055, 2017.
[212] I. Dagan, O. Glickman, and B. Magnini, âThe PASCAL Recognising Textual Entailment Challenge,â in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2006.
[213] T. Khot, A. Sabharwal, and P. Clark, âScitail: A textual entailment dataset from science question answering,â in 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, 2018.
[214] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, âLearning word vectors for sentiment analysis,â in Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1, 2011, pp. 142â150. [215] J. C. Martineau and T. Finin, âDelta tfidf: An improved feature space for sentiment analysis,â in Third international AAAI conference on
weblogs and social media, 2009.
[216] J. Howard and S. Ruder, âUniversal language model fine-tuning for text classification,â arXiv preprint arXiv:1801.06146, 2018. [217] B. McCann, J. Bradbury, C. Xiong, and R. Socher, âLearned in translation: Contextualized word vectors,â in Advances in Neural Information Processing Systems, 2017, pp. 6294â6305.
[218] S. Gray, A. Radford, and D. P. Kingma, âGpu kernels for block-sparse weights,â arXiv preprint arXiv:1711.09224, vol. 3, 2017. [219] A. Ratner, B. Hancock, J. Dunnmon, F. Sala, S. Pandey, and C. Ré, âTraining complex models with multi-task weak supervision,â in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 4763â4771.
[220] Q. Xie, Z. Dai, E. Hovy, M.-T. Luong, and Q. V. Le, âUnsupervised data augmentation,â arXiv preprint arXiv:1904.12848, 2019. [221] M. Kusner, Y. Sun, N. Kolkin, and K. Weinberger, âFrom word embeddings to document distances,â in International conference on
machine learning, 2015, pp. 957â966.
[222] M. Richardson, C. J. Burges, and E. Renshaw, âMctest: A challenge dataset for the open-domain machine comprehension of text,â in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013, pp. 193â203.
[223] H.-Y. Huang, C. Zhu, Y. Shen, and W. Chen, âFusionnet: Fusing via fully-aware attention with application to machine comprehension,â arXiv preprint arXiv:1711.07341, 2017.
[224] Q. Chen, X. Zhu, Z.-H. Ling, S. Wei, H. Jiang, and D. Inkpen, âRecurrent neural network-based sentence encoder with gated attention for natural language inference,â arXiv preprint arXiv:1708.01353, 2017.
[225] B. Pan, Y. Yang, Z. Zhao, Y. Zhuang, D. Cai, and X. He, âDiscourse marker augmented network with reinforcement learning for natural language inference,â arXiv preprint arXiv:1907.09692, 2019.
[226] W. Wang, F. Wei, L. Dong, H. Bao, N. Yang, and M. Zhou, âMinilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers,â arXiv preprint arXiv:2002.10957, 2020.
[227] D. Jurafsky and J. H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, 1st ed. USA: Prentice Hall PTR, 2000.
[228] R. Sennrich, B. Haddow, and A. Birch, âNeural machine translation of rare words with subword units,â arXiv preprint:1508.07909, 2015. [229] M. Schuster and K. Nakajima, âJapanese and korean voice search,â in 2012 IEEE International Conference on Acoustics, Speech and Signal
Processing (ICASSP). IEEE, 2012, pp. 5149â5152.
, Vol. 1, No. 1, Article . Publication date: February 2020.
37
38
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
[230] T. Kudo, âSubword regularization: Improving neural network translation models with multiple subword candidates,â in ACL 2018 - 56th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference (Long Papers), 2018.
[231] D. E. Rumelhart, G. E. Hinton, R. J. Williams et al., âLearning representations by back-propagating errors,â Cognitive modeling, vol. 5, no. 3, p. 1, 1988.
[232] http://colah.github.io/posts/2015-08-Understanding-LSTMs/. [233] K. Fukushima, âNeocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in
position,â Biological cybernetics, vol. 36, no. 4, pp. 193â202, 1980.
[234] A. Krizhevsky, I. Sutskever, and G. E. Hinton, âImagenet classification with deep convolutional neural networks,â in Advances in neural information processing systems, 2012, pp. 1097â1105.
[235] S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, and D. Terzopoulos, âImage segmentation using deep learning: A survey,â arXiv preprint arXiv:2001.05566, 2020.
[236] O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn, and D. Yu, âConvolutional neural networks for speech recognition,â IEEE/ACM Transactions on audio, speech, and language processing, vol. 22, no. 10, pp. 1533â1545, 2014.
[237] S. Minaee, A. Abdolrashidi, H. Su, M. Bennamoun, and D. Zhang, âBiometric recognition using deep learning: A survey,â arXiv preprint arXiv:1912.00271, 2019.
[238] J. Gehring, M. Auli, D. Grangier, D. Yarats, and Y. N. Dauphin, âConvolutional sequence to sequence learning,â in Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017, pp. 1243â1252.
, Vol. 1, No. 1, Article . Publication date: February 2020.
Deep Learning Based Text Classification: A Comprehensive Review â¢
A DEEP NEURAL NETWORK OVERVIEW This appendix introduces some of the commonly used deep learning models for NLP, including MLPs, CNNs, RNNs, LSTMs, encoder-decoders, and Transformers. Interested readers are referred to [26] for a comprehensive discussion.
A.1 Neural Language Models and Word Embedding Language modeling adopts data-driven approaches to capture salient statistical properties of text sequences in natural language, which can later be used to predict future words in a sequence, or to perform slot-filling in related tasks. N-gram models are the simplest statistical language models, which capture the relation between successive tokens. However, these models cannot capture long-distance dependence of tokens which often encodes semantic relations [227]. Therefore, there have been a lot of efforts of developing richer language models, among which one of the most successful is the neural language model [2].
Neural language models learn to represent textual-tokens (such as words) as dense vectors, referred as to word embeddings, in a self-supervised fashion. These learned representations can then be used for various NLP applications. One popular neural language model is word2vec [27], which learns to map the words that come in similar contexts to similar vector representations. The learned word2vec representations also allow for some simple algebraic operations on word embeddings in vector space, as shown in Eq. 5.
âððððâ²â² â âðððâ²â² + âð¤ððððâ²â² = âðð¢ðððâ²â² (5)
Despite its popularity and semantic richness, word2vec suffers from some problems such as out of vocabulary (OOV) extension, inability to capture word morphology and word context. There have been many works trying to improve word2vec model, and depending on the textual units they deal with and whether being context dependent or not, they can be grouped into the following categories:
Word-Level Embedding ⢠Subword Embedding ⢠Contextual Embedding
Word-Level Embedding. Two main categories of word-level embedding models are prediction-based and count-based models. The models in the former category are trained to recover the missing tokens in a token sequence. Word2vec is an early example of this category, which proposed two architectures for word embedding, Continuous Bag of Words (CBOW) and Skip-Gram [3, 27], as shown in Fig. 23. A Skip-Gram model predicts
(@) i ) Wr t Wee) | Wer | Wer Wie [Wie Ws | Wet) |Wue) : ae
Fig. 23. Two word2vec models [27] (a) CBOW (b) Skip-Gram
each context word from the central word, while a CBOW model predicts the central word based on its context words. The training objectives of these model are to maximize the prediction probability of the correct words.
, Vol. 1, No. 1, Article . Publication date: February 2020.
39
40
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
40 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
For example, the training objectives of CBOW and Skip-Gram are shown in Eq. 6 and Eq. 7, respectively.
| C |âð¶ âï¸
Lð¶ðµðð = â 1 |C| â ð¶ ð=ð¶+1 log ð (ð¤ð |ð¤ðâð¶, . . . , ð¤ðâ1, ð¤ð+1, . . . , ð¤ð+ð ) (6)
Lððððâðºððð = â[log ð (ð£ â² ð¤ â¤ð£ð¤ð¼ ) + ð âï¸ log ð (âð£ â² Ëð¤ð â¤ð£ð¤ð¼ )] ð=1 Ëð¤ð â¼ð (7)
GloVe [28] is one of the most widely used count-based embedding models. It performs matrix factorization on the co-occurrence matrix of words to learn the embeddings.
Subword and Character Embedding. Word-level embedding models suffer from problems such as OOV. One remedy is to segment words into subwords or characters for embeddings. Character-based embedding models not only can handle the OOV words [50, 51], but also can reduce the embedding model size. Subword methods find the most frequent character segments (subwords), and then learn the embeddings of these segments. FastText [30] is a popular subword embedding model, which represents each word as a bag of character n-grams. This is similar to the letter tri-grams used in DSSMs. Other popular subword tokenizers include byte pair encoding [228], WordPiece [229], SentencePiece [230], and so on.
Contextual Embedding. The meaning of a word depends on its context. For example, the word âplayâ in the sentence âkid is playingâ has a different meaning from when it is in âthis play was written by Mozartâ. Therefore, word embedding is desirable to be context sensitive. Neither Word2vec nor Glove is context sensitive. They simply map a word into the same vector regardless of its context. Contextualized word embedding models, on the other hand, can map a word to different embedding vectors depending on its context. ELMo [4] is the first large-scale context-sensitive embedding model which uses two LSTMs in forward and backward directions to encode word context.
A.2 Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) RNNs [231] are widely used for processing sequential data, such as text, speech, video. The architecture of a vanilla RNN model is shown in Fig. 24 (left). The model gets the input from the current time ðð and the hidden state from the previous step âðâ1 and generates a hidden state and optionally an output. The hidden state from the last time-stamp (or a weighted average of all hidden states) can be used as the representation of the input sequence for downstream tasks.
Fig. 24. (Left) The architecture of a RNN. (Right) The architecture of a standard LSTM module [232].
RNNs cannot capture long-term dependencies of very long sequences, which appear in many real applications, due to the gradient vanishing and explosion issue. LSTM is a variation of RNNs designed to better capture
, Vol. 1, No. 1, Article . Publication date: February 2020.
# Deep Learning Based Text Classification: A Comprehensive Review â¢
Deep Learning Based Text Classification: A Comprehensive Review + 41
long-term dependencies. As shown in Fig. 24 (right) and Eq. 8, the LSTM layer consists of a memory cell, which remembers values over arbitrary time intervals, and three gates (input gate, output gate, forget gate) that regulate the flow of information in and out the cell. The relationship between input, hidden states, and different gates of LSTM is shown in Equation 8:
ðð¡ = ð (W(ð )ð¥ð¡ + U(ð )âð¡ â1 + ð (ð ) ), ðð¡ = ð (W(ð)ð¥ð¡ + U(ð)âð¡ â1 + ð (ð) ), ðð¡ = ð (W(ð)ð¥ð¡ + U(ð)âð¡ â1 + ð (ð) ), ðð¡ = ðð¡ â ðð¡ â1 + ðð¡ â tanh(W(ð)ð¥ð¡ + U(ð)âð¡ â1 + ð (ð) ), âð¡ = ðð¡ â tanh(ðð¡ ) (8)
where ð¥ð¡ â ð
ð is a k-D word embedding input at time-step ð¡, ð is the element-wise sigmoid function, â is the element-wise product, W, U and ð are model parameters, ðð¡ is the memory cell, the forget gate ðð¡ determines whether to reset the memory cell, and the input gate ðð¡ and output gate ðð¡ control the input and output of the memory cell, respectively.
A.3 Convolutional Neural Networks (CNNs) CNNs are originally developed for computer vision tasks, but later on made their way in various NLP applications. CNNs were initially proposed by Fukushima in his seminal paper "Neocognitron" [233], based on the model of the human visual system proposed by Hubel and Wiesel. Yann LeCun and his colleagues popularized CNNs by developing an efficient method of training CNNs based on back-propagation [44]. The architecture of the CNN model developed by LeCun et al. is shown in Fig. 25.
Output Input Convolution Pooling Convolution Pooling Fully-connected
Fig. 25. Architecture of a CNN model, courtesy of Yann LeCun [44].
CNNs consist of three types of layers: (1) the convolutional layers, where a sliding kernel is applied to a region of an image (or a text segment) to extract local features; (2) the nonlinear layers, where a non-linear activation function is applied to (local) feature values; and (3) the pooling layers, where local features are aggregated (via the max-pooling or mean-pooling operation) to form global features. One advantage of CNNs is the weight sharing mechanism due to the use of the kernels, which results in a significantly smaller number of parameters than a similar fully-connected neural network, making CNNs much easier to train. CNNs have been widely used in computer vision, NLP, and speech recognition problems [45, 139, 234â238].
A.4 Encoder-Decoder Models Encoder-Decoder models learn to map input to output via a two-stage process: (1) the encoding stage, where an encoder ð (.) compresses input ð¥ into a latent-space vector representation ð§ as ð§ = ð (ð¥); and (2) the decoding stage, where a decoder ð(.) reconstructs or predicts output ð¦ from ð§ as ð¦ = ð(ð§). The latent representation ð§ is
, Vol. 1, No. 1, Article . Publication date: February 2020.
41
42
Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
42 + Shervin Minaee, Nal Kalchbrenner, Erik Cambria, Narjes Nikzad, Meysam Chenaghlu, and Jianfeng Gao
expected to capture the underlying semantics of the input. These models are widely used in sequence-to-sequence tasks such as machine translation, as illustrated in Fig. 26.
Er liebte zu essen . Softmax Er liebte zu essen He loved to eat
Fig. 26. A simple encoder-decoder model for machine translation. The input is a sequence of words in English, and the output is its translated version in German.
Autoencoders are special cases of the encoder-decoder models in which the input and output are the same. Autoencoders can be trained in an unsupervised fashion by minimizing the reconstruction loss.
A.5 Attention Mechanism Attention is motivated by how we pay visual attention to different regions of an image or correlate words in one sentence. Attention becomes an increasingly popular concept and useful tool in developing deep learning models for NLP [77, 78]. In a nutshell, attention in language models can be interpreted as a vector of importance weights. In order to predict a word in a sentence, using the attention vector, we estimate how strongly it is correlated with, or âattends toâ, other words and take the sum of their values weighted by the attention vector as the approximation of the target.
Bahdanau et al. [77] conjectured that the use of a fixed-length state vector in CNNs is the bottleneck in improving the performance of the encoder-decoder model, and proposed to allow the decoder to search for parts in a source sentence that are relevant to predicting the target word, without having to compress the source sentence into the state vector. As shown in Fig. 27 (left), a linear combination of hidden vectors of input words â, weighted by attention scores ð¼, is used to generate the output ð¦. As we can see from Fig. 27 (right), different words in the source sentence are attended with different weights when generating a word in the target sentence.
agreement European Economic Area = on the ft accord| tal économique! européenne ate signé| aoat 1992 cend>|
agreement European Economic Area = on the ft accord| tal économique! européenne ate signé| pg Od oe 7 aoat 1992 Xx % % x cend>|
pg Od oe 7 Xx % % x
Fig. 27. (Left) The proposed attention mechanism in [77]. (Right) An example of attention mechanism in French to English machine translation, which shows the impact of each word in French in translating to English, Brighter cells have more impact.
, Vol. 1, No. 1, Article . Publication date: February 2020.
Deep Learning Based Text Classification: A Comprehensive Review ⢠43
Self-attention is a special attention mechanism, which allows to learn the correlation among the words in the same sentence [35]. This is very useful in NLP tasks such as machine reading, abstractive summarization, and image captioning. Transformers, which will be described later, also use self-attention.
A.6 Transformer One of the computational bottlenecks suffered by RNNs is the sequential processing of text. Although CNNs are less sequential than RNNs, the computational cost to capture meaningful relationships between words in a sentence also grows with increasing length of the sentence, similar to RNNs. Transformers [5] overcome this limitation by computing in parallel for every word in a sentence or document an âattention scoreâ to model the influence each word has on another. Due to this feature, Transformers allow for much more parallelization than CNNs and RNNs, and make it possible to efficiently train very big models on large amounts of data on GPU clusters.
(a) (b) (c) Output Probabilties Scaled Dot-Product Attention Multi-Head Attention Nx Excoarg QP Encoding exerting) [esting T Inputs Outputs (shifted right)
Fig. 28. (a) The Transformer model architecture. (b) Scaled Dot-Product Attention. (3) Multi-Head Attention consists of several attention layers running in parallel. [5]
.
As shown in Fig. 28 (a), the Transformer model consists of stacked layers in both encoder and decoder components. Each layer has two sub-layers comprising a multi-head attention layer (Fig. 28 (c)) followed by a position-wise feed forward network. For each set of queries ð, keys ð¾ and values ð , the multi-head attention module performs attention â times using the scaled dot-product attention as in Fig. 28 (b), where Mask (option) is the attention mask that is applied to prevent the target word information to be predicted from leaking to the decoder (during training) before prediction. Experiments show that multi-head attention is more effective than single-head attention. The attention of multiple heads can be interpreted as each head processing a different subspace at a different position. Visualization of the self-attention of multiple heads reveal that each head processes syntax and semantic structures [5].
, Vol. 1, No. 1, Article . Publication date: February 2020. | {
"id": "1810.04805"
} |
2004.13845 | DARE: Data Augmented Relation Extraction with GPT-2 | Real-world Relation Extraction (RE) tasks are challenging to deal with,
either due to limited training data or class imbalance issues. In this work, we
present Data Augmented Relation Extraction(DARE), a simple method to augment
training data by properly fine-tuning GPT-2 to generate examples for specific
relation types. The generated training data is then used in combination with
the gold dataset to train a BERT-based RE classifier. In a series of
experiments we show the advantages of our method, which leads in improvements
of up to 11 F1 score points against a strong base-line. Also, DARE achieves new
state of the art in three widely used biomedical RE datasets surpassing the
previous best results by 4.7 F1 points on average. | http://arxiv.org/pdf/2004.13845 | Yannis Papanikolaou, Andrea Pierleoni | cs.CL, cs.LG, stat.ML | null | null | cs.CL | 20200406 | 20200406 | 0 2 0 2
2020:
r p A 6 ] L C . s c [ 1 v 5 4 8 3 1 . 4 0 0 2 : v i X r a
# DARE: Data Augmented Relation Extraction with GPT-2
# Yannis Papanikolaou and Andrea Pierleoni Healx, Cambridge, UK {yannis.papanikolaou, andrea.pierleoni}@healx.io
# Abstract
Real-world Relation Extraction (RE) tasks are challenging to deal with, either due to limited In training data or class imbalance issues. this work, we present Data Augmented Rela- tion Extraction (DARE), a simple method to augment training data by properly ï¬ne-tuning GPT-2 to generate examples for speciï¬c rela- tion types. The generated training data is then used in combination with the gold dataset to train a BERT-based RE classiï¬er. In a series of experiments we show the advantages of our method, which leads in improvements of up to 11 F1 score points against a strong base- line. Also, DARE achieves new state of the art in three widely used biomedical RE datasets surpassing the previous best results by 4.7 F1 points on average.
# 1 Introduction
Relation Extraction (RE) is the task of identify- ing semantic relations from text, for given entity mentions within it. This task, along with Named Entity Recognition, has recently become increas- ingly important due to the advent of knowledge graphs and their applications. In this work, we fo- cus on supervised RE (Zeng et al., 2014; Lin et al., 2016; Wu et al., 2017; Verga et al., 2018), where relation types come from a set of predeï¬ned cat- egories, as opposed to Open Information Extrac- tion approaches that represent relations among en- tities using their surface forms (Banko et al., 2007; Fader et al., 2011).
RE is inherently linked to Natural Language Understanding in the sense that a successful RE model should manage to adequately capture So, almost language structure and meaning. inevitably, in language the latest advances architec- modelling with Transformer-based tures (Radford et al., 2018a; Devlin et al., 2018; Radford et al., 2018b) have been quickly em-
ployed to also deal with RE tasks (Soares et al., 2019; Lin et al., 2019; 2019; Papanikolaou et al., 2019).
These recent works have mainly leveraged the discriminative power of BERT-based models to improve upon the state of the art (SOTA). In this work we take a step further and try to assess whether the text generating capabilities of another language model, GPT-2 (Radford et al., 2018b), can be applied to augment training data and suc- cessfully deal with class imbalance and small- sized training sets.
Speciï¬cally, given a RE task we ï¬ne-tune one pretrained GPT-2 model per relation type and then use the resulting ï¬ne-tuned models to generate new training samples. We then combine the gener- ated data with the gold dataset and ï¬ne-tune a pre- trained BERT model (Devlin et al., 2018) on the resulting dataset to perform RE.
We conduct extensive experiments, studying dif- ferent conï¬gurations for our approach and com- pare DARE against two strong baselines and the SOTA on three well established biomedical RE benchmark datasets. The results show that our ap- proach yields signiï¬cant improvements against the rest of the approaches. To the best of our knowl- edge, this is the ï¬rst work augmenting training data with GPT-2 for RE. In Table 1 we show some generated examples with GPT-2 models ï¬ne-tuned on the datasets that are used in the experiments (re- fer to Section 4).
In the following, we provide a brief overview of related works in Section 2, we then describe our approach in Section 3, followed by our experimen- tal results (Section 4) and the conclusions (Section 5).
Dataset(relation type) CDR(Induce) DDI2013(Effect) DDI2013(Advise) DDI2013(Mechanism) ChemProt(Activate) ChemProt(Inhibit) ChemProt(Product)
Generated sentences DISEASE was the most common adverse reaction ( 21 % ) reported for DRUG, and occurred in approximately 50 % of patients . DRUGA may enhance the effects of alcohol, barbiturates, DRUGB, and other cns depressants. caution should be observed when DRUGA and DRUGB are coadministered. co-administration of DRUGA decreased the oral bioavailability (48%) of DRUGB, a substrate for cyp2d6. DRUG enhances PROTEIN sensitivity via activation of the pi3k / akt signaling pathway. DRUG, a novel orally bioavailable xanthine PROTEIN inhibitor, the enzyme PROTEIN catalyzes the two-electron reduction of DRUG to produce acetyl groups.
Table 1: Examples of generated sentences with ï¬ne-tuned GPT-2 models. Each model is ï¬ne-tuned on examples from the speciï¬c relation type.
# 2 Related Work
Relation Extraction is usually modelled as a text classiï¬cation task. Therefore most methods to deal with class imbalance or limited data in RE follow the respective methods from text classiï¬ca- tion. In the following, we describe the different approaches that have been followed in the litera- ture.
One approach is to deal with imbalance at the classiï¬er level, by penalizing misclassiï¬cation er- rors differently for each class, depending on the class frequency (Lewis et al., 2004; Zhou and Liu, 2005) or by explicitly adjusting prior class proba- bilities (Lawrence et al., 1998).
Another popular approach relies on either under- sampling the majority class(es) or oversampling the minority one(s), transforming the training data with the aim of balancing it. One of the simplest approaches, random majority undersampling, sim- ply removes a random portion of examples from majority classes so that per class training examples are roughly equal (Japkowicz and Stephen, 2002). An improved version of the previous method, bal- anced bagging (Hido et al., 2009), employs an en- semble of classiï¬ers that have been trained with random majority undersampling.
phrases with their synonyms (Zhang et al., 2015). Chen et al. (2011) employed topic models to gen- erate additional training examples by sampling from the topic-word and document-topic distribu- tions. Ratner et al. (2016) proposed a data aug- mentation framework that employs transforma- tion operations provided by domain experts, such as a word swap, to learn a sequence generation model. Kaï¬e et al. (2017) used both a template- based method and an LSTM-based approach to generate new samples for visual question answer- ing.
A similar method to our approach was proposed by Sun et al. (2019a) who presented a framework to successfully deal with catastrophic forgetting in language lifelong learning (LLL). Speciï¬cally and given a set of tasks in the framework of LLL, they ï¬ne-tune GPT-2 to simultaneously learn to solve a task while generating training samples for it. When dealing with a new task, the model is trained on the generated training samples from previous tasks alongside the data of the new task, therefore avoiding catastrophic forgetting.
Oversampling approaches for textual data have been somehow limited as opposed to those for image data (Wong et al., 2016; Fawzi et al., 2016; Wang and Perez, 2017; Frid-Adar et al., 2018), since text semantics depend inherently on the ex- act order or structure of word tokens.
A simple approach is to replace words or
Our work falls into the oversampling techniques for text, but our focus is RE. Importantly, we do not need any domain expertise, templates, syn- onym thesaurus or to train a model from scratch, which makes our approach easily adaptable to any domain, with relatively low requirements in re- sources.
# 3 Methods
In this section we present brieï¬y the GPT-2 model and before giving a detailed introduction to our ap- proach.
# 3.1 GPT-2
GPT-2 (Radford et al., 2018b) is a successor of the GPT language model (Radford et al., 2018a). Both models are deep neural network architectures using the Transformer (Vaswani et al., 2017), pre- trained on vast amounts of textual data. Both mod- els are pre-trained with a standard language mod- elling objective, which is to predict the next word token given k previously seen word tokens. This is achieved by maximizing the following likelihood:
L(U ) = logP (ui|uiâ1, ..., uiâk; Î) (1)
i X
where Î are the neural network parameters. The authors have gradually provided publicly four different ï¬avours of GPT-2, with 124M, 355M, 774M and 1558M parameters respectively. In our experiments we use the second largest model (774M), since it seems to represent a good com- promise between accuracy and hardware require- ments1.
# 3.2 Data Augmented Relation Extraction
Let D = [s0, ...sd] be a RE dataset containing d sequences. Furthermore, we assume that each sequence s = [w0, ...wn] will be a sequence of n word tokens and that e1 = [we1i, ...we1j ] and e2 = [we2k, ...we2l] will represent a pair of entity mentions in s. Furthermore, let L = [l1, ..., lc] be a set of c relation types. Then, RE is the task of learning a function that maps each triple (si, e1, e2) to L, i.e.,
h = fÎ(si, e1, e2), h â L (2)
where Î are the parameters of the model.
In this work we employ a RE classiï¬er based on a pretrained BERT language model. This classiï¬er follows the same principle followed by Devlin et al. (2018), using a special token (CLS) for classiï¬cation. The only modiï¬cation is that we mask entity mentions with generic entity types, i.e., $ENTITY A$ or $ENTITY B$. It should be noted that the method that we introduce here is not classiï¬er speciï¬c, so any other classiï¬er can be used instead.
1https://openai.com/blog/gpt-2-1-5b-release/
To generate new training data, we split the D dataset into c subsets where each Dc subset con- tains only examples from relation type c. Subse- quently, we ï¬ne-tune GPT-2 on each Dc for ï¬ve epochs and then prompt each resulting ï¬ne-tuned model to generate new sentences, ï¬ltering out sen- tences that do not contain the special entity masks or that are too small (less than 8 tokens). The generated sequences are combined for all relation types into a dataset Dsynth.
Subsequently, we build an ensemble of RE clas- siï¬ers, each of them being ï¬ne-tuned on a subset of Dsynth and the whole D, such that the per- relation type generated instances are equal to the number of gold instances for that relation, multi- plied by ratio, i.e., |Dsynthâ² c| = |Dc| â r. In our experiments we have set r = 1.0 (refer to Section 4.6 for a short study of its inï¬uence). Algorithm 1 illustrates our method.
# ALGORITHM 1: DARE
# Input: D, L
for each relation type c â L do
Dc = {s | s â D, rel type(s) = c}; ï¬ne-tune GPT-2 on Dc; generate Dsynthc with GPT-2;
end Dsynth = Dsynth1 ⪠... ⪠Dsynthc; for each classiï¬er in ensemble do Dsynthâ² â¼ Dsynth s.t. |Dsynthâ² Dsynthâ² = Dsynthâ² train RE classiï¬er on D ⪠Dsynthâ²;
# end predict on Dtest with majority voting over the ensemble;
We would like to note that in early experiments, we also experimented with ï¬ne-tuning over the whole D, by adding a special token to the begin- ning of each sentence that encoded the relation type, e.g., <0>: or <1>:. Then during genera- tion, we would prompt the model with the differ- ent special tokens and let it generate a training in- stance from the respective relation type. However, this approach did not prove effective leading to worse results than just using gold data, primarily because frequent classes âinï¬uencedâ more GPT- 2 and the model was generating many incorrectly labeled samples.
# 4 Experimental Evaluation
In this section we present the empirical evalua- tion of our method. We ï¬rst describe the ex- perimental setup, the base- lines against which we evaluate DARE and sub- sequently present the experiments and report the relevant results.
# 4.1 Setup
In all experiments we used the second-largest GPT-2 model (774M parameters). All experi- ments were carried out on a machine equipped with a GPU V100-16GB. For the implementation, we have used HuggingFaceâs Transformers library (Wolf et al., 2019).
To ï¬ne-tune GPT-2 we employed Adam as the optimizer, a sequence length of 128, a batch size of 4 with gradient accumulation over 2 batches (be- ing equivalent to a batch size of 8) and a learning rate of 3e â 5. In all datasets and for all relation types we ï¬ne-tuned for 5 epochs. For generation we used a temperature of 1.0, ï¬xed the top-k pa- rameter to 5 and generated sequences of up to 100 word tokens. An extensive search for the above op- timal hyper-parameter values is left to future work. Since all of our datasets are from the biomedi- cal domain, we found out empirically (see Section 4.4 for the relevant experiment) that it was ben- eï¬cial to ï¬rst ï¬ne-tune a GPT-2 model on 500k PubMed abstracts, followed by a second round of ï¬ne-tuning per dataset, per relation type.
In all cases, we used a pre-trained BERT model (the largest uncased model) as a RE classiï¬er, which we ï¬ne-tuned on either the gold or the gold+generated datasets. We used the AdamW op- timizer (Loshchilov and Hutter, 2017), a sequence length of 128, a batch size of 32 and a learning rate of 2e â 5, We ï¬ne-tuned for 5 epochs, keep- ing the best model with respect to the validation set loss. Also, we used a softmax layer to output predictions and we assigned a relation type to each instance si as follows:
if max(pc) ⥠t if max(pc) < t (3) where c â L and 0 < t < 1 is a threshold that maximizes the micro-F score on the validation set. For DARE, in all experiments we train an en- semble of twenty classiï¬ers, where each classiï¬er has been trained on the full gold set and a sub-
sample of the generated data. In this way, we man- age to alleviate the effect of potential noisy gener- ated instances.
# 4.2 Datasets
To evaluate DARE, we employ three RE datasets from the biomedical domain, their statistics being provided in Table 2.
The BioCreative V CDR corpus (Li et al., 2016) contains chemical-disease relations. The dataset is a binary classiï¬cation task with one relation type, chemical induces disease, and annotations are at the document level, having already been split into train, development and test splits. For simplic- ity, we followed the work of Papanikolaou et al. (2019) and considered only intra-sentence rela- tions. We have included the dataset in our GitHub repository to ease replication. In the following, we dub this dataset as CDR.
corpus (Segura Bedmar et al., 2013) contains MedLine abstracts and DrugBank documents describing The dataset has four drug-drug interactions. relation types and annotations are at the sentence level. The dataset is provided with a train and test split for both MedLine and DrugBank instances. Following previous works, we concatenated the two training sets into one. Also, we randomly sampled 10% as a development set. In the following this dataset will be referred to as DDI2013. The
corpus (Krallinger et al., 2017) covers chemical-protein the interactions, containing ï¬ve relation types, vast majority of them being at the sentence level. The dataset comes with a train-development-test In the following we will refer to it as split. ChemProt.
# 4.3 Baselines
The above datasets suffer both from class imbal- ance and a limited number of positives. For exam- ple the rarest relation type in DDI2013 has only 153 instances in the training set, while the respec- tive one in ChemProt has only 173 data points. Therefore, we consider two suitable baselines for such scenarios, the balanced bagging approach and the class weighting method, both described in Section 2. Both baselines use the base classiï¬er described in Section 4.1. Also, in both cases we
Dataset CDR DDI2013 ChemProt |L| 1 4 5 Training 3,597(1,453) 22,501(153 658 1,083 1,353) 14,266(173 229 726 754 2,221) Development 3,876 4,401 8,937 Test 3,806 5,689 12,132
Table 2: Statistics for the datasets used in the experiments. For the training data we provide in parentheses the number of positives across each class. We do not include in |L| the null class which signiï¬es a non-existing relation.
GPT-2 Vanilla ï¬ne-tuned Precision Recall 0.69 0.75 F1 0.70 0.73 0.71 0.68
Table 3: DARE results on CDR when using a vanilla GPT-2 model or a model that has ï¬rst been ï¬ne-tuned on 500k abstracts from PubMed. In either case the re- sulting model is then ï¬ne-tuned per relation type to gen- erate new examples.
consider an ensemble of ten models2. Finally, for the class weighting approach we set each classâs weight as
weightc = f reqmin f reqc (4)
with min being the rarest class.
# 4.4 Fine-tuning GPT-2 on In-domain Data
Since all our datasets come from the biomedical domain, we hypothesized that a ï¬rst round of ï¬ne- tuning GPT-2 on in-domain data could be beneï¬- cial instead of directly employing the vanilla GPT- 2 model. We designed a short experiment using the CDR dataset to test this hypothesis. To clarify, any of the two models (i.e, the vanilla and the one ï¬netuned in in-domain data) would then be ï¬ne- tuned per relation type to come up with the ï¬nal GPT-2 models that would generate the new train- ing examples.
0.8 0.7 0.6 0.5 0.4 Balanced Bagging DARE 1,000 500 # positive samples BB DARE 0.47 0.38 0.53 0.42 0.62 0.55 0.68 0.63 0.73 0.70 0 Dataset 50 250 500 1000 all(1453) 1,500
# F1
Table 3 illustrates the results of this experiment. As we expect, this ï¬rst round of ï¬ne-tuning proves signiï¬cantly favourable. We note that when in- specting the generated examples from the vanilla GPT-2 model, generated sentences often contained a peculiar mix of news stories with the compound- disease relations.
Figure 1: DARE vs balanced bagging(BB) for different sizes of positive samples on CDR dataset. Both meth- ods employ ensembles of BERT RE classiï¬ers.
# 4.5 DARE on Imbalanced Datasets
In this experiment, we wanted to evaluate the ef- fect of our method when dealing with great im-
2We considered up to 20 models in initial experiments, but there is hardly any improvement after even ï¬ve models, since the data are repeated.
0.68 DARE Balanced Bagging 0.66 0.64 0.62 1 2 ratio 3 4
# F1
Figure 2: DARE performance for different generated dataset sizes in each base classiï¬er. For each relation type we add ratioâ|Dc| examples.
balance, i.e., datasets with very few positive sam- ples. To that end, we considered the CDR dataset and sampled different numbers of positive exam- ples from the dataset (50, 250, 500, 1000 and all positives) and combined them with all the nega- tive instances. The resulting ï¬ve datasets were used to train either a balanced bagging ensemble or DARE.
In Figure 1, we show the results, averaging across ï¬ve different runs. In all cases, our ap- proach has a steady, signiï¬cant advantage over the balanced bagging baseline, their difference reach- ing up to 11 F1 score points when only few pos- itives (⤠250) are available. As we add more samples, the differences start to smooth out as ex- pected. These results clearly illustrate that DARE can boost the predictive power of a classiï¬er when dealing with few positive samples, by cheaply gen- erating training data of arbitrary sizes.
# 4.6 Effect of Generated Data Size
Our next experiment focuses in studying the effect of different sizes of generated data on DAREâs per- formance.
As explained, our method relies on ï¬ne-tuning GPT-2 to generate examples for each relation type that will, ideally, come from the same distribution as the ones from the gold training data. Neverthe- less, we should expect that this procedure will not be perfect, generating also noisy samples. As men- tioned previously, we try to alleviate this effect by training an ensemble of classiï¬ers, each trained on the whole gold and a part of the generated dataset. An important question that arises therefore, is how to determine the optimal ratio of generated ex-
amples to include in each classiï¬er. If too few, the improvements will be insigniï¬cant, if too many we risk to have the model being inï¬uenced by the noise.
In order to gain empirical insight into the above question we design a short experiment using the CDR dataset, for different sizes of generated data. As gold set, we consider a random subset of 1,000 positive examples and all negatives, to make more prominent the effect of class imbalance.
In Figure 2 we show the results for ï¬ve dif- ferent generated data sizes. Interestingly, adding more data does not necessarily boost classiï¬er per- formance, since the noisy patterns in the gener- ated data seem to inï¬uence more the classiï¬er than those in the gold data. In the following, we choose a ratio = 1, adding for each relation type a num- ber of generated instances equal to the number of gold instances. It should be noted that we are not limited in the total generated data that we will use since we can ï¬ne-tune an arbitrary number of clas- siï¬ers on combinations of the gold data and sub- sets of the generated data.
# 4.7 DARE against the SOTA and Baselines
Taking into account the previous observations, we proceed to compare DARE against the SOTA and the two previously described baselines. Table 4 describes the results. For the multi-class datasets we report the micro-F score in order to make our results comparable with previous works. Also, in the Supplementary Material we report the per class results for DARE against the SOTA and the class weighting baseline, for the two multi-class datasets in order to ease comparison with past or future works.
Comparing DARE against the SOTA, we ob- serve a steady advantage of our method across all datasets, ranging from 3 to 8 F1 points. These results are somehow expected, since we employ BERT-large as our base classiï¬er which has proven substantially better than Convolu- tional (CNN) or Recurrent neural networks (RNN) across a variety of tasks (Devlin et al., 2018). In CDR, Papanikolaou et al. (2019) have used BioBERT(Lee et al., 2019) which is a BERT base (cased) model pre-trained on PubMed, while we use BERT large (uncased), in ChemProt, Peng et al. (2018) use ensembles of SVM, CNN and RNN models while in DDI2013 Sun et al. (2019b) have used hybrid CNN-RNN models.
Dataset CDR Conï¬guration SOTA (Papanikolaou et al., 2019) BERT+class weighting BERT+balanced bagging DARE SOTA (Peng et al., 2018) BERT+class weighting BERT+balanced bagging BERT+DARE SOTA (Sun et al., 2019b) BERT+class weighting BERT+balanced bagging BERT+DARE Precision Recall 0.80 0.74 0.79 0.75 0.58 0.67 0.71 0.68 0.74 0.71 0.72 0.74 0.61 0.66 0.61 0.68 0.72 0.75 0.69 0.79 0.77 0.81 0.74 0.82 ChemProt DDI2013 F1 0.70 0.69 0.70 0.73 0.65 0.70 0.70 0.73 0.75 0.76 0.73 0.78
Table 4: Comparison of DARE vs the previous SOTA and two baselines suited for imbalanced datasets. Only statistically signiï¬cant results to the second best model are marked in bold. Statistical signiï¬cance is determined with a McNemar p-test at 0.05 signiï¬cance level.
When observing results for the baselines, we no- tice that they perform roughly on par. DARE is bet- ter from 2 to 5 F1 points against the baselines, an improvement that is smaller than that against the SOTA, but still statistically signiï¬cant in all cases. Overall, and in accordance with the results from the experiment in Section 4.5, we observe that DARE manages to leverage the GPT-2 automati- cally generated data, to steadily improve upon the SOTA and two competitive baselines.
# References
Michele Banko, Michael J Cafarella, Stephen Soder- land, Matthew Broadhead, and Oren Etzioni. 2007. In IJ- Open information extraction from the web. CAI, pages 2670â2676.
Enhong Chen, Yanggang Lin, Hui Xiong, Qiming Luo, and Haiping Ma. 2011. Exploiting probabilistic topic models to improve text categorization under class imbalance. Information Processing & Manage- ment, 47(2):202â214.
# 5 Conclusions
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
We have presented DARE, a novel method to aug- ment training data in Relation Extraction. Given a gold RE dataset, our approach proceeds by ï¬ne- tuning a pre-trained GPT-2 model per relation type and then uses the ï¬ne-tuned models to generate new training data. We sample subsets of the syn- thetic data with the gold dataset to ï¬ne-tune an en- semble of RE classiï¬ers that are based on BERT. Through a series of experiments we show empir- ically that our method is particularly suited to deal with class imbalance or limited data settings, recording improvements up to 11 F1 score points over two strong baselines. We also report new SOTA performance on three biomedical RE bench- marks.
Our work can be extended with minor improve- ments on other Natural Language Understanding tasks, a direction that we would like to address in future work.
Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information ex- traction. In Proceedings of the conference on empir- ical methods in natural language processing, pages 1535â1545. Association for Computational Linguis- tics.
Alhussein Fawzi, Horst Samulowitz, Deepak Turaga, and Pascal Frossard. 2016. Adaptive data augmen- In 2016 IEEE In- tation for image classiï¬cation. ternational Conference on Image Processing (ICIP), pages 3688â3692. Ieee.
Maayan Frid-Adar, Idit Diamant, Eyal Klang, Michal Amitai, Jacob Goldberger, and Hayit Greenspan. 2018. Gan-based synthetic medical image augmen- tation for increased cnn performance in liver lesion classiï¬cation. Neurocomputing, 321:321â331.
Shohei Hido, Hisashi Kashima, and Yutaka Takahashi. 2009. Roughly balanced bagging for imbalanced data. Statistical Analysis and Data Mining: The ASA Data Science Journal, 2(5-6):412â426.
Nathalie Japkowicz and Shaju Stephen. 2002. The In- class imbalance problem: A systematic study. telligent data analysis, 6(5):429â449.
Kushal Kaï¬e, Mohammed Yousefhussien, and Christo- pher Kanan. 2017. Data augmentation for visual question answering. In Proceedings of the 10th In- ternational Conference on Natural Language Gener- ation, pages 198â202.
Martin Krallinger, Obdulia Rabal, Saber A Akhondi, et al. 2017. Overview of the biocreative vi chemical- protein interaction track. In Proceedings of the sixth BioCreative challenge evaluation workshop.
Steve Lawrence, Ian Burns, Andrew Back, Ah Chung Tsoi, and C Lee Giles. 1998. Neural network classi- ï¬cation and prior class probabilities. In Neural net- works: tricks of the trade, pages 299â313. Springer.
Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. Biobert: pre-trained biomed- ical language representation model for biomedical text mining. arXiv preprint arXiv:1901.08746.
David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for Journal of machine text categorization research. learning research, 5(Apr):361â397.
Jiao Li, Yueping Sun, Robin J Johnson, Daniela Sci- aky, Chih-Hsuan Wei, Robert Leaman, Allan Peter Davis, Carolyn J Mattingly, Thomas C Wiegers, and Zhiyong Lu. 2016. Biocreative v cdr task corpus: a resource for chemical disease relation extraction. Database, 2016.
Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, and Guergana Savova. 2019. A bert-based universal model for both within-and cross-sentence clinical temporal relation extraction. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 65â71.
Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. 2016. Neural relation extraction with selective attention over instances. In Proceed- ings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), volume 1, pages 2124â2133.
Fixing weight decay regularization in adam. arXiv preprint arXiv:1711.05101.
Yannis Papanikolaou, Ian Roberts, and Andrea Pier- leoni. 2019. Deep bidirectional transformers for relation extraction without supervision. EMNLP- IJCNLP 2019, page 67.
Yifan Peng, Anthony Rios, Ramakanth Kavuluru, and Zhiyong Lu. 2018. Extracting chemicalâprotein re- lations with ensembles of svm and deep learning models. Database, 2018.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018a. standing by generative pre-training. Technical re- port, Technical report, OpenAi.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018b. Lan- guage models are unsupervised multitask learners. Technical report, Technical report, OpenAi.
Alexander J Ratner, Christopher M De Sa, Sen Wu, Daniel Selsam, and Christopher R´e. 2016. Data pro- gramming: Creating large training sets, quickly. In Advances in neural information processing systems, pages 3567â3575.
Isabel Segura Bedmar, Paloma Mart´ınez, and Mar´ıa Herrero Zazo. 2013. Semeval-2013 task 9: Ex- traction of drug-drug interactions from biomedical texts (ddiextraction 2013). Association for Compu- tational Linguistics.
Peng Shi and Jimmy Lin. 2019. Simple bert models for relation extraction and semantic role labeling. arXiv preprint arXiv:1904.05255.
Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learn- ing. arXiv preprint arXiv:1906.03158.
Fan-Keng Sun, Cheng-Hao Ho, and Hung-Yi Lee. 2019a. Lamal: Language modeling is all you need for lifelong language learning. arXiv preprint arXiv:1909.03329.
Xia Sun, Ke Dong, Long Ma, Richard Sutcliffe, Fei- juan He, Sushing Chen, and Jun Feng. 2019b. Drug- drug interaction extraction via recurrent hybrid con- volutional neural networks with an improved focal loss. Entropy, 21(1):37.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all In Advances in neural information pro- you need. cessing systems, pages 5998â6008.
Patrick Verga, Emma Strubell, and Andrew McCallum. 2018. Simultaneously self-attending to all mentions for full-abstract biological relation extraction. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), volume 1, pages 872â884.
Jason Wang and Luis Perez. 2017. The effectiveness of data augmentation in image classiï¬cation using deep learning. Convolutional Neural Networks Vis. Recognit.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Râemi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingfaceâs trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.
Sebastien C Wong, Adam Gatt, Victor Stamatescu, and Mark D McDonnell. 2016. Understanding data aug- mentation for classiï¬cation: when to warp? In 2016 international conference on digital image comput- ing: techniques and applications (DICTA), pages 1â 6. IEEE.
Yi Wu, David Bamman, and Stuart Russell. 2017. Ad- versarial training for relation extraction. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1778â1783.
Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classiï¬cation via con- volutional deep neural network. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2335â2344.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- In Advances in neural information pro- siï¬cation. cessing systems, pages 649â657.
Zhi-Hua Zhou and Xu-Ying Liu. 2005. Training cost- sensitive neural networks with methods addressing the class imbalance problem. IEEE Transactions on knowledge and data engineering, 18(1):63â77.
# A Supplemental Material
In this section we present additionally the results per class for ChemProt and DDI2013, for DARE against the class weighting baseline and the SOTA.
relation type SOTA Class Weighting DARE 0.70 0.79 0.81 0.73 0.59
Table 5: ChemProt results per relation type for DARE vs SOTA and best baseline in terms of F1.
relation type SOTA Class Weight DARE 0.80 0.78 0.58 0.80
Table 6: DDI2013 results per relation type for DARE vs state-of-the-art and best baseline in terms of F1. | {
"id": "1901.08746"
} |
2004.02178 | FastBERT: a Self-distilling BERT with Adaptive Inference Time | Pre-trained language models like BERT have proven to be highly performant.
However, they are often computationally expensive in many practical scenarios,
for such heavy models can hardly be readily implemented with limited resources.
To improve their efficiency with an assured model performance, we propose a
novel speed-tunable FastBERT with adaptive inference time. The speed at
inference can be flexibly adjusted under varying demands, while redundant
calculation of samples is avoided. Moreover, this model adopts a unique
self-distillation mechanism at fine-tuning, further enabling a greater
computational efficacy with minimal loss in performance. Our model achieves
promising results in twelve English and Chinese datasets. It is able to speed
up by a wide range from 1 to 12 times than BERT if given different speedup
thresholds to make a speed-performance tradeoff. | http://arxiv.org/pdf/2004.02178 | Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Haotang Deng, Qi Ju | cs.CL | This manuscript has been accepted to appear at ACL 2020 | null | cs.CL | 20200405 | 20200429 | 0 2 0 2
r p A 9 2 ] L C . s c [
2 v 8 7 1 2 0 . 4 0 0 2 : v i X r a
# FastBERT: a Self-distilling BERT with Adaptive Inference Time
Weijie Liu1,2, Peng Zhou2, Zhe Zhao2, Zhiruo Wang3, Haotang Deng2 and Qi Ju2,â 1Peking University, Beijing, China 2Tencent Research, Beijing, China 3Beijing Normal University, Beijing, China [email protected], {rickzhou, nlpzhezhao, haotangdeng, damonju}@tencent.com, [email protected]
# Abstract
Pre-trained language models like BERT have proven to be highly performant. However, they are often computationally expensive in many practical scenarios, for such heavy models can hardly be readily implemented with limited re- sources. To improve their efï¬ciency with an as- sured model performance, we propose a novel speed-tunable FastBERT with adaptive infer- ence time. The speed at inference can be ï¬ex- ibly adjusted under varying demands, while redundant calculation of samples is avoided. this model adopts a unique self- Moreover, distillation mechanism at ï¬ne-tuning, further enabling a greater computational efï¬cacy with minimal loss in performance. Our model achieves promising results in twelve English and Chinese datasets. It is able to speed up by a wide range from 1 to 12 times than BERT if given different speedup thresholds to make a speed-performance tradeoff.
number of requests during the holidays is ï¬ve to ten times more than that of the workdays. A large number of servers need to be deployed to enable BERT in industrial settings, and many spare servers need to be reserved to cope with the peak period of requests, demanding huge costs.
To improve their usability, many attempts in model acceleration have been made, such as quan- tinization (Gong et al., 2014), weights pruning (Han et al., 2015), and knowledge distillation (KD) (Romero et al., 2014). As one of the most popular methods, KD requires additional smaller student models that depend entirely on the bigger teacher model and trade task accuracy for ease in computa- tion (Hinton et al., 2015). Reducing model sizes to achieve acceptable speed-accuracy balances, how- ever, can only solve the problem halfway, for the model is still set as ï¬xated, rendering them unable to cope with drastic changes in request amount.
# Introduction
Last two years have witnessed signiï¬cant improve- ments brought by language pre-training, such as BERT (Devlin et al., 2019), GPT (Radford et al., 2018), and XLNet (Yang et al., 2019). By pre- training on unlabeled corpus and ï¬ne-tuning on la- beled ones, BERT-like models achieved huge gains on many Natural Language Processing tasks.
By inspecting many NLP datasets (Wang et al., 2018), we discerned that the samples have differ- ent levels of difï¬culty. Heavy models may over- calculate the simple inputs, while lighter ones are prone to fail in complex samples. As recent studies (Kovaleva et al., 2019) have shown redundancy in pre-training models, it is useful to design a one- size-ï¬ts-all model that caters to samples with vary- ing complexity and gains computational efï¬cacy with the least loss of accuracy.
Despite this gain in accuracy, these models have greater costs in computation and slower speed at in- ference, which severely impairs their practicalities. Actual settings, especially with limited time and resources in the industry, can hardly enable such models into operation. For example, in tasks like sentence matching and text classiï¬cation, one often requires to process billions of requests per second. Whatâs more, the number of requests varies with time. In the case of an online shopping site, the
Based on this appeal, we propose FastBERT, a pre-trained model with a sample-wise adaptive mechanism. It can adjust the number of executed layers dynamically to reduce computational steps. This model also has a unique self-distillation pro- cess that requires minimal changes to the structure, achieving faster yet as accurate outcomes within a single framework. Our model not only reaches a comparable speedup (by 2 to 11 times) to the BERT model, but also attains competitive accuracy in comparison to heavier pre-training models.
ââCorresponding author: Qi Ju ([email protected])
Experimental results on six Chinese and six En- glish NLP tasks have demonstrated that FastBERT achieves a huge retrench in computation with very little loss in accuracy. The main contributions of this paper can be summarized as follows:
⢠This paper proposes a practical speed-tunable BERT model, namely FastBERT, that bal- ances the speed and accuracy in the response of varying request amounts;
⢠The sample-wise adaptive mechanism and the self-distillation mechanism are combined to improve the inference time of NLP model for the ï¬rst time. Their efï¬cacy is veriï¬ed on twelve NLP datasets;
⢠The code is publicly available at https:// github.com/autoliuweijie/FastBERT.
# 2 Related work
BERT (Devlin et al., 2019) can learn universal knowledge from mass unlabeled data and produce more performant outcomes. Many works have fol- lowed: RoBERTa (Liu et al., 2019) that uses larger corpus and longer training steps. T5 (Raffel et al., 2019) that scales up the model size even more. UER (Zhao et al., 2019) pre-trains BERT in differ- ent Chinese corpora. K-BERT (Liu et al., 2020) injects knowledge graph into BERT model. These models achieve increased accuracy with heavier settings and even more data.
However, such unwieldy sizes are often ham- pered under stringent conditions. To be more spe- ciï¬c, BERT-base contains 110 million parameters by stacking twelve Transformer blocks (Vaswani et al., 2017), while BERT-large expands its size to even 24 layers. ALBERT (Lan et al., 2019) shares the parameters of each layer to reduce the model size. Obviously, the inference speed for these mod- els would be much slower than classic architec- tures (e.g., CNN (Kim, 2014), RNN (Wang, 2018), etc). We think a large proportion of computation is caused by redundant calculation.
Knowledge distillation: Many attempts have been made to distill heavy models (teachers) into their lighter counterparts (students). PKD-BERT (Sun et al., 2019a) adopts an incremental extrac- tion process that learns generalizations from inter- mediate layers of the teacher model. TinyBERT (Jiao et al., 2019) performs a two-stage learning in- volving both general-domain pre-training and task- speciï¬c ï¬ne-tuning. DistilBERT (Sanh et al., 2019)
Prediction P, Loss(P,, P,) Softmax Prediction Al Big Model Softmax (Teacher) Small Model (Student) EL tnpatx ââf
Figure 1: Classic knowledge distillation approach: Dis- till a small model using a separate big model.
further leveraged the inductive bias within large models by introducing a triple loss. As shown in Figure 1, student model often require a separated structure, whose effect however, depends mainly on the gains of the teacher. They are as indiscrimi- nate to individual cases as their teachers, and only get faster in the cost of degraded performance.
Adaptive inference: Conventional approaches in adaptive computations are performed token-wise or patch-wise, who either adds recurrent steps to individual tokens (Graves, 2016) or dynamically ad- justs the number of executed layers inside discrete regions of images (Teerapittayanon et al., 2016; Figurnov et al., 2017). To the best of our knowl- edge, there has been no work in applying adaptive mechanisms to NLP pre-training language models for efï¬ciency improvements so far.
# 3 Methodology
Distinct to the above efforts, our approach fusions the adaptation and distillation into a novel speed-up approach, shown in Figure 2, achieving competitive results in both accuracy and efï¬ciency.
# 3.1 Model architecture
As shown in Figure 2, FastBERT consists of backbone and branches. The backbone is built upon 12-layers Transformer encoder with an ad- ditional teacher-classiï¬er, while the branches in- clude student-classiï¬ers which are appended to each Transformer output to enable early outputs.
# 3.1.1 Backbone
The backbone consists of three parts: the em- bedding layer, the encoder containing stacks of Transformer blocks (Vaswani et al., 2017), and the teacher classiï¬er. The structure of the embedding layer and the encoder conform with those of BERT
pos. Not too bad itis worth reading pos. Excellent! but a bit difficult to ur a Predicted labels One batch of input sentences Branch
Figure 2: The inference process of FastBERT, where the number of executed layers with each sample varies based on its complexity. This illustrates a sample-wise adaptive mechanism. Taking a batch of inputs (batch size = 4) as an example, the Transformer0 and Student-classiï¬er0 inferred their labels as probability distributions and calculate the individual uncertainty. Cases with low uncertainty are immediately removed from the batch, while those with higher uncertainty are sent to the next layer for further inference.
(Devlin et al., 2019). Given the sentence length n, an input sentence s = [w0, w1, ...wn] will be transformed by the embedding layers to a sequence of vector representations e like (1),
same architecture with the teacher are added to the output of each Transformer block to enable early outputs, especially in those simple cases. The stu- dent classiï¬ers can be described as (4),
e = Embedding(s), (1)
psi = Student Classif ier i(hi). (4)
where e is the summation of word, position, and segment embeddings. Next, the transformer blocks in the encoder performs a layer-by-layer feature extraction as (2),
hi = T ransf ormer i(hiâ1), (2)
The student classiï¬er is designed carefully to bal- ance model accuracy and inference speed, for sim- ple networks may impair the performance, while a heavy attention module severely slows down the inference speed. Our classiï¬er has proven to be lighter with ensured competitive accuracy, detailed veriï¬cations are showcased in Section 4.1.
where hi (i = â1, 0, 1, ..., L â 1) is the output features at the ith layer, and hâ1 = e. L is the number of Transformer layers.
# 3.2 Model training
Following the ï¬nal encoding output is a teacher classiï¬er that extracts in-domain features for down- stream inferences. It has a fully-connected layer narrowing the dimension from 768 to 128, a self- attention joining a fully-connected layer without changes in vector size, and a fully-connected layer with a sof tmax function projecting vectors to an N -class indicator pt as in (3), where N is the task- speciï¬c number of classes.
FastBERT requires respective training steps for the backbone and the student classiï¬ers. The parame- ters in one module is always frozen while the other module is being trained. The model is trained in preparation for downstream inference with three steps: the major backbone pre-training, entire back- bone ï¬ne-tuning, and self-distillation for student classiï¬ers.
# 3.2.1 Pre-training
pt = T eacher Classif ier(hLâ1).
3.1.2 Branches To provide FastBERT with more adaptability, mul- tiple branches, i.e. the student classiï¬ers, in the
The pre-training of backbone resembles that of BERT in the same way that our backbone re- sembles BERT. Any pre-training method used for BERT-like models (e.g., BERT-WWM (Cui et al., 2019), RoBERTa (Liu et al., 2019), and ERNIE
(Sun et al., 2019b)) can be directly applied. Note that the teacher classiï¬er, as it is only used for inference, stays unaffected at this time. Also conve- niently, FastBERT does not even need to perform pre-training by itself, for it can load high-quality pre-trained models freely.
3.2.2 Fine-tuning for backbone For each downstream task, we plug in the task- speciï¬c data into the model, ï¬ne-tuning both the major backbone and the teacher classiï¬er. The structure of the teacher classiï¬er is as previously described. At this stage, all student classiï¬ers are not enabled.
3.2.3 Self-distillation for branch With the backbone well-trained for knowledge ex- traction, its output, as a high-quality soft-label con- taining both the original embedding and the gener- alized knowledge, is distilled for training student classiï¬ers. As student are mutually independent, their predictions ps are compared with the teacher soft-label pt respectively, with the differences mea- sured by KL-Divergence in (5),
N . Drxlps.r) = Yo pali) tog. (5) rf ped)
i=1 As there are L â 1 student classiï¬ers in the Fast- BERT, the sum of their KL-Divergences is used as the total loss for self-distillation, which is formu- lated in (6),
L-2 Loss(Dsp; +s Psp-2s Pt) = >, Dx (Psi+Pt)s i=0 (6)
(6) where psi refers to the probability distribution of the output from student-classiï¬er i.
Since this process only requires the teachers out- put, we are free to use an unlimited number of unla- beled data, instead of being restricted to the labeled ones. This provides us with sufï¬cient resources for self-distillation, which means we can always improve the student performance as long as the teacher allows. Moreover, our method differs from the previous distillation method, for the teacher and student outputs lie within the same model. This learning process does not require additional pre- training structures, making the distillation entirely a learning process by self.
# 3.3 Adaptive inference
With the above steps, FastBERT is well-prepared to perform inference in an adaptive manner, which
means we can adjust the number of executed en- coding layers within the model according to the sample complexity.
At each Transformer layer, we measure for each sample on whether the current inference is credible enough to be terminated.
Given an input sequence, the uncertainty of a student classiï¬erâs output ps is computed with a normalized entropy in (7),
at Ds(t) log ps(i) Uncertainty = log T > @%
where ps is the distribution of output probability, and N is the number of labeled classes.
With the deï¬nition of the uncertainty, we make an important hypothesis.
Hypothesis 1. LUHA: the Lower the Uncertainty, the Higher the Accuracy.
Deï¬nition 1. Speed: The threshold to distinguish high and low uncertainty.
LUHA is veriï¬ed in Section 4.4. Both Uncer- tainty and Speed range between 0 and 1. The adap- tive inference mechanism can be described as: At each layer of FastBERT, the corresponding student classiï¬er will predict the label of each sample with measured Uncertainty. Samples with Uncertainty below the Speed will be sifted to early outputs, while samples with Uncertainty above the Speed will move on to the next layer.
Intuitively, with a higher Speed, fewer samples will be sent to higher layers, and overall inference speed will be faster, and vice versa. Therefore, Speed can be used as a halt value for weighing the inference accuracy and efï¬ciency.
Table 1: FLOPs of each operation within the FastBERT (M = Million, N = the number of labels).
Operation Sub-operation FLOPs Total FLOPs Transformer Self-attention (768 â 768) Feedforward (768 â 3072 â 768) 603.0M 1207.9M 1809.9M Classiï¬er Fully-connect (768 â 128) Self-attention (128 â 128) Fully-connect (128 â 128) Fully-connect (128 â N ) 25.1M 16.8M 4.2M - 46.1M
Table 2: Comparison of accuracy (Acc.) and FLOPs (speedup) between FastBERT and Baselines in six Chinese datasets and six English datasets.
Dataset/ Model ChnSentiCorp FLOPs (speedup) Acc. Book review Acc. FLOPs (speedup) Shopping review FLOPs (speedup) Acc. Acc. LCQMC FLOPs (speedup) Acc. Weibo FLOPs (speedup) THUCNews Acc. FLOPs (speedup) BERT 95.25 21785M (1.00x) 86.88 21785M (1.00x) 96.84 21785M (1.00x) 86.68 21785M (1.00x) 97.69 21785M (1.00x) 96.71 21785M (1.00x) DistilBERT (6 layers) DistilBERT (3 layers) DistilBERT (1 layers) 88.58 87.33 81.33 10918M (2.00x) 5428M (4.01x) 1858M (11.72x) 83.31 81.17 77.40 10918M (2.00x) 5428M (4.01x) 1858M (11.72x) 95.40 94.84 91.35 10918M (2.00x) 5428M (4.01x) 1858M (11.72x) 84.12 84.07 71.34 10918M (2.00x) 5428M (4.01x) 1858M (11.72x) 97.69 97.58 96.90 10918M (2.00x) 5428M (4.01x) 1858M (11.72x) 95.54 95.14 91.13 10918M (2.00x) 5428M (4.01x) 1858M (11.72x) FastBERT (speed=0.1) FastBERT (speed=0.5) FastBERT (speed=0.8) 95.25 92.00 89.75 10741M (2.02x) 3191M (6.82x) 2315M (9.40x) 86.88 86.64 85.14 13613M (1.60x) 5170M (4.21x) 3012M (7.23x) 96.79 96.42 95.72 4885M (4.45x) 2517M (8.65x) 2087M (10.04x) 86.59 84.05 77.45 12930M (1.68x) 6352M (3.42x) 3310M (6.57x) 97.71 97.72 97.69 3691M (5.90x) 3341M (6.51x) 1982M (10.09x) 96.71 95.64 94.97 3595M (6.05x) 1979M (11.00x) 1854M (11.74x) Dataset/ Model Acc. Ag.news FLOPs (speedup) Acc. Amz.F FLOPs (speedup) Acc. Dbpedia FLOPs (speedup) Acc. Yahoo FLOPs (speedup) Acc. Yelp.F FLOPs (speedup) Acc. Yelp.P FLOPs (speedup) BERT 94.47 21785M (1.00x) 65.50 21785M (1.00x) 99.31 21785M (1.00x) 77.36 21785M (1.00x) 65.93 21785M (1.00x) 96.04 21785M (1.00x) DistilBERT (6 layers) DistilBERT (3 layers) DistilBERT (1 layers) 94.64 93.98 92.88 10872M (2.00x) 5436M (4.00x) 1816M (12.00x) 64.05 63.84 59.48 10872M (2.00x) 5436M (4.00x) 1816M (12.00x) 99.10 99.05 98.95 10872M (2.00x) 5436M (4.00x) 1816M (12.00x) 76.73 76.56 74.93 10872M (2.00x) 5436M (4.00x) 1816M (12.00x) 64.25 63.50 58.59 10872M (2.00x) 5436M (4.00x) 1816M (12.00x) 95.31 93.23 91.59 10872M (2.00x) 5436M (4.00x) 1816M (12.00x) FastBERT (speed=0.1) FastBERT (speed=0.5) FastBERT (speed=0.8) 94.38 93.14 92.53 6013M (3.62x) 2108M (10.33x) 1858M (11.72x) 65.50 64.64 61.70 21005M (1.03x) 10047M (2.16x) 2356M (9.24x) 99.28 99.05 99.04 2060M (10.57x) 1854M (11.74x) 1853M (11.75x) 77.37 76.57 75.05 16172M (1.30x) 4852M (4.48x) 1965M (11.08x) 65.93 64.73 60.66 20659M (1.05x) 9827M (2.21x) 2602M (8.37x) 95.99 95.32 94.31 6668M (3.26x) 3456M (6.30x) 2460M (8.85x)
# 4 Experimental results
# 4.2 Baseline and dataset
In this section, we will verify the effectiveness of FastBERT on twelve NLP datasets (six in English and six in Chinese) with detailed explanations.
# 4.2.1 Baseline
In this section, we compare FastBERT against two baselines:
# 4.1 FLOPs analysis
Floating-point operations (FLOPs) is a measure of the computational complexity of models, which indicates the number of ï¬oating-point operations that the model performs for a single process. The FLOPs has nothing to do with the modelâs oper- ating environment (CPU, GPU or TPU) and only reveals the computational complexity. Generally speaking, the bigger the modelâs FLOPs is, the longer the inference time will be. With the same ac- curacy, models with low FLOPs are more efï¬cient and more suitable for industrial uses.
We list the measured FLOPs of both structures in Table 1, from which we can infer that, the cal- culation load (FLOPs) of the Classiï¬er is much lighter than that of the Transformer. This is the basis of the speed-up of FastBERT, for although it adds additional classiï¬ers, it achieves acceleration by reducing more computation in Transformers.
⢠BERT1 The 12-layer BERT-base model was pre-trained on Wiki corpus and released by Google (Devlin et al., 2019).
⢠DistilBERT2 The most famous distillation method of BERT with 6 layers was released by Huggingface (Sanh et al., 2019). In addition, we use the same method to distill the Distil- BERT with 3 and 1 layer(s), respectively.
# 4.2.2 Dataset
To verify the effectiveness of FastBERT, especially in industrial scenarios, six Chinese and six En- glish datasets pressing closer to actual applica- tions are used. The six Chinese datasets include
1https://github.com/google-research/ bert
2https://github.com/huggingface/ transformers/tree/master/examples/ distillation
ââ ChnsentiCorp = ââ Book review âsâ Shopping review ââ LCQMC ââ Weibo 1.0 ââ THUCNews 12x. 10x. Ace. ° 1 Speedup (times) PF g 0.94 = os] a o7- 075 od a A a a a 01 02 03 04 05 06 07 08 09 1.0 01 02 03 04 05 06 07 08 09 1.0 Ox 1x 2x 3x 4x 5x 6x 7x 8x 9x 10x 11x 12x Speed Speed Speedup (times) (a) (3) () ââ AgNews ââ amr âsâ Dbpedia ââ Yahoo ââ Yelp. ââ Yelp.P 1.0 = ââââ eS 1.07â~ = a | a | Speedup (times) 0.94 TT 0.1 02 03 04 05 06 07 08 09 1.0 01 02 03 Speed (d) Tt 04 05 06 0.7 08 09 1.0 Ox TOI 1x 2x 3x 4x 5x 6x 7x Bx 9X 10x 11x 12x Speed Speedup (times) (e) ()
Figure 3: The trade-offs of FastBERT on twelve datasets (six in Chinese and six in English): (a) and (d) are Speed- Accuracy relations, showing changes of Speed (the threshold of Uncertainty) in dependence of the accuracy; (b) and (e) are Speed-Speedup relations, indicating that the Speed manages the adaptibility of FastBERT; (c) and (f) are the Speedup-Accuracy relations, i.e. the trade-off between efï¬ciency and accuracy.
the sentence classiï¬cation tasks (ChnSentiCorp, Book review(Qiu et al., 2018), Shopping review, Weibo and THUCNews) and a sentences-matching task (LCQMC(Liu et al., 2018)). All the Chinese datasets are available at the FastBERT project. The six English datasets (Ag.News, Amz.F, DBpedia, Yahoo, Yelp.F, and Yelp.P) are sentence classiï¬ca- tion tasks and were released in (Zhang et al., 2015).
# 4.3 Performance comparison
To perform a fair comparison, BERT / DistilBERT / FastBERT all adopt the same conï¬guration as follows. In this paper, L = 12. The number of self-attention heads, the hidden dimension of em- bedding vectors, and the max length of the input sentence are set to 12, 768 and 128 respectively. Both FastBERT and BERT use pre-trained parame- ters provided by Google, while DistilBERT is pre- trained with (Sanh et al., 2019). We ï¬ne-tune these models using the AdamW (Loshchilov and Hut- ter) algorithm, a 2 à 10â5 learning rate, and a 0.1 warmup. Then, we select the model with the best accuracy in 3 epochs. For the self-distillation of FastBERT, we increase the learning rate to 2Ã10â4 and distill it for 5 epochs.
tained by using BERT as the benchmark. It can be observed that with the setting of Speed = 0.1, FastBERT can speed up 2 to 5 times without los- ing accuracy for most datasets. If a little loss of accuracy is tolerated, FastBERT can be 7 to 11 times faster than BERT. Comparing to DistilBERT, FastBERT trades less accuracy to catch higher ef- ï¬ciency. Figure 3 illustrates FastBERTâs tradeoff in accuracy and efï¬ciency. The speedup ratio of FastBERT are free to be adjusted between 1 and 12, while the loss of accuracy remains small, which is a very attractive feature in the industry.
5S | 100%-} :95:75 Bes 4p fass5 S 8 8 80% FAB 7485 60:39 6088 «42 asd oe LO 60%] IF s0.08: 12 100% âJ. 96.20 eo. fo} = 90:09 85S 80% Po 7905 79.51 74:18 22 4 62.45 63.78 62.63 38~ cox 57.18: 2 100% 23 0" B82 98:85) 93.99 90.16 5 76g) 52s con 77.60 76.05 3 8 < 4 66.17: 5S~ 60%] zee ° T Uncertainty
We evaluate the text inference capabilities of these models on the twelve datasets and report their accuracy (Acc.) and sample-averaged FLOPs under different Speed values. The result of comparisons are shown in Table 2, where the Speedup is ob-
Figure 4: The relation of classiï¬er accuracy and aver- age case uncertainty: Three classiï¬ers at the bottom, in the middle, and on top of the FastBERT were analyzed, and their accuracy within various uncertainty intervals were calculated with the Book Review dataset.
80% Set speed = 0.8 aL Average = 0.92, median = 0 40% 5 20% ow laa. 60% + Set speed = 0.5 40% 4 GL Average = 2.3, median = 1 20% âlh... 60% al Set speed = 0.3 40% aL Average = 3.2, median = 2 20% 0% Lhd, 0 123 4 5 6 7 8 9 10 11 Distribution of layers
Figure 5: The distribution of executed layers on aver- age in the Book review dataset, with experiments at three different speeds (0.3, 0.5 and 0.8).
# 4.4 LUHA hypothesis veriï¬cation
As is described in the Section 3.3, the adaptive in- ference of FastBERT is based on the LUHA hypoth- esis, i.e., âthe Lower the Uncertainty, the Higher the Accuracyâ. Here, we prove this hypothesis us- ing the book review dataset. We intercept the clas- siï¬cation results of Student-Classiï¬er0, Student- Classiï¬er5, and Teacher-Classiï¬er in FastBERT, then count their accuracy in each uncertainty inter- val statistically. As shown in Figure 4, the statisti- cal indexes conï¬rm that the classiï¬er follows the LUHA hypothesis, no matter it sits at the bottom, in the middle or on top of the model.
From Figure 4, it is easy to mistakenly conclude that Students has better performance than Teacher due to the fact that the accuracy of Student in each uncertainty range is higher than that of Teacher. This conclusion can be denied by analysis with Figure 6(a) together. For the Teacher, more sam- ples are located in areas with lower uncertainty, while the Studentsâ samples are nearly uniformly distributed. Therefore the overall accuracy of the Teacher is still higher than that of Students.
# 4.5 In-depth study
In this section, we conduct a set of in-depth analysis of FastBERT from three aspects: the distribution of exit layer, the distribution of sample uncertainty, and the convergence during self-distillation.
Speed = 0.0 Speed = 0.3 | 4 104 104 94 94 84 84 77 ~ 74 4s 574 55 4 4 34 34. 275 27 17 14 0 Aaa 0 0.5 1.0 ie} 0.3 0.5 1.0 Uncertainty Uncertainty @ (b) Speed = 0.5 Speed = 0.8 aie 14 " 10 10 94 4 94 Al e ef Lan abatl 7 7-4 tae 5 J 67 ge 867 4 54 5 et ool 4 4 35 35 275 27 14 14 Det 0} Aaa 0 * ie} 0.5 1.0 ie} 0.5 0.8 1.0 Uncertainty Uncertainty (c) (d)
~
# 4s
# 5 ge 4
Figure 6: The distribution of Uncertainty at different layers of FastBERT in the Book review dataset: (a) The speed is set to 0.0, which means that all samples will pass through all the twelve layers; (b) â¼ (d): The Speed is set to 0.3, 0.5, and 0.8 respectively, iand only the samples with Uncertainty higher than Speed will be sent to the next layer.
# 4.5.1 Layer distribution
In FastBERT, each sample walks through a dif- ferent number of Transformer layers due to varied complexity. For a certain condition, fewer executed layers often requires less computing resources. As illustrated in Figure 5, we investigate the distri- bution of exit layers under different constraint of Speeds (0.3, 0.5 and 0.8) in the book review dataset. Take Speed = 0.8 as an example, at the ï¬rst layer Transformer0, 61% of the samples is able to com- plete the inference. This signiï¬cantly eliminates unnecessary calculations in the next eleven layers.
# 4.5.2 Uncertainty distribution
The distribution of sample uncertainty predicted by different student classiï¬ers varies, as is illustrated in Figure 6. Observing these distributions help us to
88% Speed = 0.5 fo | 20,000M 86% I i Ace. [-15,000M _, Sear i ââ FLOPs o < | a | tL 40,000M 82% ' | 5,000M 80% +14 0 1 2 3 4 5 6 7 8 Fine-tuning Self-distillation (0~3 epochs) (3~8 epochs) Epoch
Figure 7: The change in accuracy and FLOPs of Fast- BERT during ï¬ne-tuning and self-distillation with the Book review dataset. The accuracy ï¬rstly increases at the ï¬ne-tuning stage, while the self-distillation reduces the FLOPs by six times with almost no loss in accuracy.
further understand FastBERT. From Figure 6(a), it can be concluded that the higher the layer is posited, the lower the uncertainty with given Speed will be, indicating that the high-layer classiï¬ers more de- cisive than the lower ones. It is worth noting that at higher layers, there are samples with uncertainty below the threshold of Uncertainty (i.e., the Speed), for these high-layer classiï¬ers may reverse the pre- vious judgments made by the low-layer classiï¬ers.
4.5.3 Convergence of self-distillation Self-distillation is a crucial step to enable Fast- BERT. This process grants student classiï¬ers with the abilities to infer, thereby ofï¬oading work from the teacher classiï¬er. Taking the Book Review dataset as an example, we ï¬ne-tune the FastBERT with three epochs then self-distill it for ï¬ve more epochs. Figure 7 illustrates its convergence in accuracy and FLOPs during ï¬ne-tune and self- distillation. It could be observed that the accuracy increases with ï¬ne-tuning, while the FLOPs de- crease during the self-distillation stage.
# 4.6 Ablation study
Adaptation and self-distillation are two crucial mechanisms in FastBERT. We have preformed ab- lation studies to investigate the effects brought by these two mechanisms using the Book Re- view dataset and the Yelp.P dataset. The results are presented in Table 3, in which âwithout self- distillationâ implies that all classiï¬ers, including both the teacher and the students, are trained in the ï¬ne-tuning; while âwithout adaptive inferenceâ means that the number of executed layers of each sample is ï¬xated to two or six.
Table 3: Results of ablation studies on the Book review and Yelp.P datasets.
Conï¬g. Book review Acc. FLOPs (speedup) Acc. Yelp.P FLOPs (speedup) FastBERT speed=0.2 speed=0.7 86.98 85.69 9725M (2.23x) 3621M (6.01x) 95.90 94.67 52783M (4.12x) 2757M (7.90x) FastBERT without self-distillation speed=0.2 speed=0.7 86.22 85.02 9921M (2.19x) 4282M (5.08x) 95.55 94.54 4173M (5.22x) 2371M (9.18x) layer=6 layer=2 FastBERT without adaptive inference 11123M (1.95x) 3707M (5.87x) 86.42 95.18 82.88 93.11 11123M (1.95x) 3707M (5.87x)
From Table 3, we have observed that: (1) At almost the same level of speedup, FastBERT with- out self-distillation or adaption performs poorer; (2) When the model is accelerated more than ï¬ve times, downstream accuracy degrades dramati- cally without adaption. It is safe to conclude that both the adaptation and self-distillation play a key role in FastBERT, which achieves both signiï¬cant speedups and favorable low losses of accuracy.
# 5 Conclusion
In this paper, we propose a fast version of BERT, namely FastBERT. Speciï¬cally, FastBERT adopts a self-distillation mechanism during the training phase and an adaptive mechanism in the inference phase, achieving the goal of gaining more efï¬- ciency with less accuracy loss. Self-distillation and adaptive inference are ï¬rst introduced to NLP model in this paper. In addition, FastBERT has a very practical feature in industrial scenarios, i.e., its inference speed is tunable.
Our experiments demonstrate promising results on twelve NLP datasets. Empirical results have shown that FastBERT can be 2 to 3 times faster than BERT without performance degradation. If we slack the tolerated loss in accuracy, the model is free to tune its speedup between 1 and 12 times. Be- sides, FastBERT remains compatible to the parame- ter settings of other BERT-like models (e.g., BERT- WWM, ERNIE, and RoBERTa), which means these public available models can be readily loaded
for FastBERT initialization.
# 6 Future work
These promising results point to future works in (1) linearizing the Speed-Speedup curve; (2) extend- ing this approach to other pre-training architectures such as XLNet (Yang et al., 2019) and ELMo (Pe- ters et al., 2018); (3) applying FastBERT on a wider range of NLP tasks, such as named entity recogni- tion and machine translation.
# Acknowledgments
This work is funded by 2019 Tencent Rhino-Bird Elite Training Program. Work done while this au- thor was an intern at Tencent.
# References
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for chinese BERT. arXiv preprint arXiv:1906.08101.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of ACL, pages 4171â4186.
Michael Figurnov, Maxwell D Collins, Yukun Zhu, Li Zhang, Jonathan Huang, Dmitry Vetrov, and Rus- lan Salakhutdinov. 2017. Spatially adaptive compu- tation time for residual networks. In Proceedings of CVPR, pages 1790â1799.
Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. 2014. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115.
Alex Graves. 2016. Adaptive computation time arXiv preprint for recurrent neural networks. arXiv:1603.08983.
Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for In Advances in NeurIPS, efï¬cient neural network. pages 1135â1143.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. Com- puter Science, 14(7):38â39.
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun TinyBERT: Distilling BERT for Liu. 2019. arXiv preprint natural arXiv:1909.10351.
Yoon Kim. 2014. Convolutional neural networks for sentence classiï¬cation. In Proceedings of EMNLP, pages 1746â1751.
Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark se- crets of BERT. In Proceedings of EMNLP-IJCNLP, pages 4356â4365.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, and Ping Wang. 2020. K-BERT: Enabling language representation with knowledge graph. In Proceedings of AAAI.
Xin Liu, Qingcai Chen, Chong Deng, Huajun Zeng, Jing Chen, Dongfang Li, and Buzhou Tang. 2018. Lcqmc: A large-scale chinese question matching In Proceedings of the ICCL, pages 1952â corpus. 1962.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Ilya Loshchilov and Frank Hutter. decay regularization in adam. arXiv:1711.05101. Fixing weight arXiv preprint
Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of NAACL-HLT, pages 2227â2237.
Yuanyuan Qiu, Hongzheng Li, Shen Li, Yingdi Jiang, Renfen Hu, and Lijiao Yang. 2018. Revisiting cor- relations between intrinsic and extrinsic evaluations of word embeddings. In Proceedings of CCL, pages 209â221. Springer.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training. Technical re- port, OpenAI.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a uniï¬ed text-to-text trans- former. arXiv preprint arXiv:1910.10683.
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Ka- hou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. 2014. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled ver- sion of bert: smaller, faster, cheaper and lighter. In NeurIPS EMC2 Workshop.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019a. Patient knowledge distillation for bert model com- pression. In Proceedings of EMNLP-IJCNLP, pages 4314â4323.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019b. ERNIE: Enhanced rep- resentation through knowledge integration. arXiv preprint arXiv:1904.09223.
Surat Teerapittayanon, Bradley McDanel, and Hsiang- Tsung Kung. 2016. Branchynet: Fast inference via early exiting from deep neural networks. In Proceed- ings of ICPR, pages 2464â2469.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all In Advances in NeurIPS, pages 5998â you need. 6008.
Alex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- In Pro- form for natural language understanding. ceedings of EMNLP, pages 353â355.
Baoxin Wang. 2018. Disconnected recurrent neural networks for text categorization. In Proceedings of ACL, pages 2311â2320.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretrain- arXiv preprint ing for language understanding. arXiv:1906.08237.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- siï¬cation. In Advances in NeurIPS, pages 649â657.
Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, and Xiaoyong Du. 2019. UER: An open-source toolkit for pre-training models. In Proceedings of EMNLP- IJCNLP 2019, page 241. | {
"id": "1711.05101"
} |
2004.01970 | BAE: BERT-based Adversarial Examples for Text Classification | Modern text classification models are susceptible to adversarial examples,
perturbed versions of the original text indiscernible by humans which get
misclassified by the model. Recent works in NLP use rule-based synonym
replacement strategies to generate adversarial examples. These strategies can
lead to out-of-context and unnaturally complex token replacements, which are
easily identifiable by humans. We present BAE, a black box attack for
generating adversarial examples using contextual perturbations from a BERT
masked language model. BAE replaces and inserts tokens in the original text by
masking a portion of the text and leveraging the BERT-MLM to generate
alternatives for the masked tokens. Through automatic and human evaluations, we
show that BAE performs a stronger attack, in addition to generating adversarial
examples with improved grammaticality and semantic coherence as compared to
prior work. | http://arxiv.org/pdf/2004.01970 | Siddhant Garg, Goutham Ramakrishnan | cs.CL | Accepted at EMNLP 2020 Main Conference | null | cs.CL | 20200404 | 20201008 | 0 2 0 2
t c O 8 ] L C . s c [
3 v 0 7 9 1 0 . 4 0 0 2 : v i X r a
# BAE: BERT-based Adversarial Examples for Text Classiï¬cation
# Siddhant Gargâ â Amazon Alexa AI Search Manhattan Beach, CA, USA [email protected]
Goutham Ramakrishnanâ â Health at Scale Corporation San Jose, CA, USA [email protected]
# Abstract
Modern text classiï¬cation models are suscep- tible to adversarial examples, perturbed ver- sions of the original text indiscernible by hu- mans which get misclassiï¬ed by the model. Recent works in NLP use rule-based synonym replacement strategies to generate adversarial examples. These strategies can lead to out- of-context and unnaturally complex token re- placements, which are easily identiï¬able by humans. We present BAE, a black box attack for generating adversarial examples using con- textual perturbations from a BERT masked lan- guage model. BAE replaces and inserts to- kens in the original text by masking a por- tion of the text and leveraging the BERT-MLM to generate alternatives for the masked tokens. Through automatic and human evaluations, we show that BAE performs a stronger attack, in addition to generating adversarial examples with improved grammaticality and semantic coherence as compared to prior work.
# Introduction
Recent studies have exposed the vulnerability of ML models to adversarial attacks, small in- put perturbations which lead to misclassiï¬cation by the model. Adversarial example generation in NLP (Zhang et al., 2019) is more challeng- ing than in commonly studied computer vision tasks (Szegedy et al., 2014; Kurakin et al., 2017; Papernot et al., 2017) because of (i) the discrete nature of the input space and (ii) the need to ensure semantic coherence with the original text. A major bottleneck in applying gradient based (Goodfellow et al., 2015) or generator model (Zhao et al., 2018) based approaches to generate adversarial examples in NLP is the backward propagation of the pertur- bations from the continuous embedding space to the discrete token space.
The government made a quick decision BAE-R 4 2 Bar | MASK } made a quick decision judge , doctor , captain fea -1 So] The [MASK } government made a quick decision state , british , federal The government | MASK} made a quick decision Officials , then , immediately
Figure 1: We use BERT-MLM to predict masked to- kens in the text for generating adversarial examples. The MASK token replaces a word (BAE-R attack) or is inserted to the left/right of the word (BAE-I).
Initial works for attacking text models relied on introducing errors at the character level (Ebrahimi et al., 2018; Gao et al., 2018) or adding and deleting words (Li et al., 2016; Liang et al., 2017; Feng et al., 2018) for creating adversarial examples. These techniques often result in unnatural looking adver- sarial examples which lack grammatical correct- ness, thereby being easily identiï¬able by humans. Rule-based synonym replacement strategies (Alzantot et al., 2018; Ren et al., 2019) have re- cently lead to more natural looking adversarial ex- amples. Jin et al. (2019) combine both these works by proposing TextFooler, a strong black-box attack baseline for text classiï¬cation models. However, the adversarial examples generated by TextFooler solely account for the token level similarity via word embeddings, and not the overall sentence se- mantics. This can lead to out-of-context and unnat- urally complex replacements (see Table 3), which are easily human-identiï¬able. Consider a simple example: âThe restaurant service was poorâ. To- ken level synonym replacement of âpoorâ may lead to an inappropriate choice such as âbrokeâ, while a context-aware choice such as âterribleâ leads to better retention of semantics and grammaticality.
â Equal contribution by authors â Work completed as a graduate student at UW-Madison
Therefore, a token replacement strategy contin- gent on retaining sentence semantics using a pow-
erful language model (Devlin et al., 2018; Radford et al., 2019) can alleviate the errors made by ex- isting techniques for homonyms (tokens having multiple meanings). In this paper, we present BAE (BERT-based Adversarial Examples), a novel tech- nique using the BERT masked language model (MLM) for word replacements to better ï¬t the over- all context of the English language. In addition to replacing words, we also propose inserting new to- kens in the sentence to improve the attack strength of BAE. These perturbations in the input sentence are achieved by masking a part of the input and using a LM to ï¬ll in the mask (See Figure 1).
Our BAE attack beats the previous baselines by a large margin on empirical evaluation over multiple datasets and models. We show that, surprisingly, just a few replace/insert operations can reduce the accuracy of even a powerful BERT classiï¬er by over 80% on some datasets. Moreover, our human evaluation reveals the improved grammaticality of the adversarial examples generated by BAE over the baseline TextFooler, which can be attributed to the BERT-MLM. To the best of our knowledge, we are the ï¬rst to use a LM for generating adversarial examples. We summarize our contributions as:
⢠We propose BAE, an adversarial example gen- eration technique using the BERT-MLM.
⢠We introduce 4 BAE attack modes by replac- ing and inserting tokens, all of which are al- most always stronger than previous baselines on 7 text classiï¬cation datasets.
⢠Through human evaluation, we show that BAE yields adversarial examples with improved grammaticality and semantic coherence.
2 Methodology Problem Definition. We are given a dataset (S,Y) = {(Si,y1),---(Sm,ym)} and a trained classification model C : S + Y. We assume the soft-label black-box setting where the attacker can only query the classifier for output probabilities on a given input, and does not have access to the model parameters, gradients or training data. For an in- put pair (S=[t),...,t»],y), we want to generate an adversarial example Saay such that C(Sady) ZY. Additionally we would like Sada, to be grammati- cally correct and semantically similar to S. BAE. For generating an adversarial example Sady, we introduce 2 types of token-level perturbations: (i) Replace a token t ⬠S with another and (ii) In- sert a new token ¢â in S. Some tokens in the input
# Algorithm 1: BAE-R Pseudocode
Algorithm 1: BAE-R Pseudocode Input: Sentence S = [t1,..., t»], ground truth label y, Classifier model C Output: Adversarial Example Sadav Initialization: S,a, <_S Compute token importance J; V ti; ⬠S for i in descending order of I; do Sm © Saav{1:iâ1][M]Sadvfi+1-n] Predict top-K tokens T for mask M ⬠Sar T < FILTER(T) L={}// python-style dict for t ⬠Tdo | Lit] = Saavpreâa[t]Saavfi+ain) end if Jt ¢T st C(Lit]) 4y then Return: Saay < L{tâ] where C(L{tâ]) 4 y, L{¢â] has maximum similarity with S else Sadv + L{tâ] where L{tâ] causes maximum reduction in probability of y in C(L[tâ]) end if end Return: Sua. <â
contribute more towards the ï¬nal prediction by C than others. Replacing these tokens or inserting a new token adjacent to them can thus have a stronger effect on altering the classiï¬er prediction. This intu- ition stems from the fact that the replaced/inserted tokens changes the local context around the origi- nal token. We estimate token importance Ii of each ti â S, by deleting ti from S and computing the de- crease in probability of predicting the correct label y, similar to Jin et al. (2019); Ren et al. (2019).
The Replace (R) and Insert (I) operations are performed on a token t by masking it and insert- ing a mask token adjacent to it respectively. The pre-trained BERT-MLM is used to predict the mask tokens (See Figure 1) in line with recent work (Shi and Huang, 2020) which uses this to analyse robust- ness of paraphrase identiï¬cation models to mod- ifying shared words. BERT-MLM is a powerful LM trained on a large training corpus (â¼ 2 billion words), and hence the predicted mask tokens ï¬t well into the grammar and context of the text.
The BERT-MLM, however, does not guarantee semantic coherence to the original text as demon- strated by the following simple example. Consider the sentence: âthe food was goodâ. For replacing the token âgoodâ, BERT-MLM may predict the to- ken âbadâ, which ï¬ts well into the grammar and con- text of the sentence, but changes the original senti- ment of the sentence. To achieve a high semantic similarity with the original text on introducing per- turbations, we ï¬lter the set of top K tokens (K is a pre-deï¬ned constant) predicted by BERT-MLM for the masked token, using a Universal Sentence En-
Model Adversarial Attack Amazon Yelp Datasets IMDB MR wordLSTM Original TextFooler BAE-R BAE-I BAE-R/I BAE-R+I 88.0 31.0 (0.747) 21.0 (0.827) 17.0 (0.924) 16.0 (0.902) 4.0 (0.848) 85.0 28.0 (0.829) 20.0 (0.885) 22.0 (0.928) 19.0 (0.924) 9.0 (0.902) 82.0 20.0 (0.828) 22.0 (0.852) 23.0 (0.933) 8.0 (0.896) 5.0 (0.871) 81.16 25.49 (0.906) 24.17 (0.914) 19.11 (0.966) 15.08 (0.949) 7.50 (0.935) wordCNN Original TextFooler BAE-R BAE-I BAE-R/I BAE-R+I 82.0 42.0 (0.776) 16.0 (0.821) 18.0 (0.934) 13.0 (0.904) 2.0 (0.859) 85.0 36.0 (0.827) 23.0 (0.846) 26.0 (0.941) 17.0 (0.916) 9.0 (0.891) 81.0 31.0 (0.854) 23.0 (0.856) 29.0 (0.924) 20.0 (0.892) 14.0 (0.861) 76.66 21.18 (0.910) 20.81 (0.920) 19.49 (0.971) 15.56 (0.956) 7.87 (0.938) BERT Original TextFooler BAE-R BAE-I BAE-R/I BAE-R+I 96.0 30.0 (0.787) 36.0 (0.772) 20.0 (0.922) 11.0 (0.899) 14.0 (0.830) 95.0 27.0 (0.833) 31.0 (0.856) 25.0 (0.936) 16.0 (0.916) 12.0 (0.871) 85.0 32.0 (0.877) 46.0 (0.835) 31.0 (0.929) 22.0 (0.909) 16.0 (0.856) 85.28 30.74 (0.902) 44.05 (0.871) 32.05 (0.958) 20.34 (0.941) 19.21 (0.917)
Table 1: Automatic evaluation of adversarial attacks on 4 Sentiment Classiï¬cation tasks. We report the test set accuracy. The average semantic similarity, between the original and adversarial examples, obtained from USE are reported in parentheses. Best performance, in terms of maximum drop in test accuracy, is highlighted in boldface.
coder (USE) based sentence similarity scorer (Cer et al., 2018). For the R operation, we additionally ï¬lter out predicted tokens that do not form the same part of speech (POS) as the original token.
If multiple tokens can cause C' to misclassify S when they replace the mask, we choose the token which makes Sad, most similar to the original S based on the USE score. If no token causes misclas- sification, then we choose the one that decreases the prediction probability P(C(Saay)=y) the most. We apply these token perturbations iteratively in decreasing order of token importance, until either C(Sadv)#y (successful attack) or all the tokens of S have been perturbed (failed attack).
We present 4 attack modes for BAE based on the
R and I operations, where for each token t in S: ⢠BAE-R: Replace token t (See Algorithm 1) ⢠BAE-I: Insert a token to the left or right of t ⢠BAE-R/I: Either replace token t or insert a
token to the left or right of t
⢠BAE-R+I: First replace token t, then insert a token to the left or right of t
IMDB are sentiment classiï¬cation datasets used in recent works (Sarma et al., 2018) and MR (Pang and Lee, 2005) contains movie reviews based on sentiment polarity. MPQA (Wiebe and Wilson, 2005) is a dataset for opinion polarity detection, Subj (Pang and Lee, 2004) for classifying a sen- tence as subjective or objective and TREC (Li and Roth, 2002) for question type classiï¬cation.
We use 3 popular text classiï¬cation mod- els: word-LSTM (Hochreiter and Schmidhuber, 1997), word-CNN (Kim, 2014) and a ï¬ne-tuned BERT (Devlin et al., 2018) base-uncased classiï¬er. We train models on the training data and perform the adversarial attack on the test data. For complete model details, refer to Appendix A.
As a baseline, we consider TextFooler (Jin et al., 2019) which performs synonym replacement using a ï¬xed word embedding space (MrkËsi´c et al., 2016). We only consider the top K=50 synonyms from the BERT-MLM predictions and set a threshold of 0.8 for the cosine similarity between USE based embeddings of the adversarial and input text.
Generating adversarial examples through masked language models has also been recently explored by Li et al. (2020) since our original submission.
# 3 Experiments
We evaluate BAE on Datasets and Models. different text classiï¬cation tasks. Amazon, Yelp,
Automatic Evaluation Results. We perform the 4 BAE attacks and summarize the results in Tables 1 and 2. Across datasets and models, our BAE attacks are almost always more effective than the baseline attack, achieving signiï¬cant drops of 40-80% in test accuracies, with higher average se- mantic similarities as shown in parentheses.
Model wordLSTM Adversarial Attack Original TextFooler BAE-R BAE-I BAE-R/I BAE-R+I MPQA 89.43 48.49 (0.745) 45.66 (0.748) 40.94 (0.871) 31.60 (0.820) 25.57 (0.766) Datasets Subj 91.9 58.5 (0.882) 50.2 (0.899) 49.8 (0.958) 43.1 (0.946) 29.0 (0.929) TREC 90.2 42.4 (0.834) 32.4 (0.870) 18.0 (0.964) 20.4 (0.954) 11.8 (0.874) wordCNN BERT Original TextFooler BAE-R BAE-I BAE-R/I BAE-R+I Original TextFooler BAE-R BAE-I BAE-R/I BAE-R+I 89.06 48.77 (0.733) 44.43 (0.735) 44.43 (0.876) 32.17 (0.818) 27.83 (0.764) 90.66 36.23 (0.761) 43.87 (0.764) 33.49 (0.862) 24.53 (0.826) 24.34 (0.766) 91.3 58.9 (0.889) 51.0 (0.899) 49.8 (0.958) 41.5 (0.940) 31.1 (0.922) 97.0 69.5 (0.858) 77.2 (0.828) 74.6 (0.918) 64.0 (0.903) 58.5 (0.875) 93.2 47.6 (0.812) 29.6 (0.843) 15.4 (0.953) 13.0 (0.936) 8.4 (0.858) 97.6 42.8 (0.866) 37.2 (0.824) 32.2 (0.931) 23.6 (0.908) 20.2 (0.825)
° ee â BAEI B70 â BAER 5 60 BAERS <° 3 "3 . âââ 8 Narevum horcent verturtetion
# (a) Word-LSTM
100 90 oe >* tae g7 BAER! 8 60 5 50 © a0 30 20 ° Maximum Percent perturbation
# (b) BERT
Table 2: Automatic evaluation of adversarial attacks on MPQA, Subj and TREC datasets. Other details follow those from Table 1. All 4 modes of BAE attacks almost always outperform TextFooler.
Figure 2: Graphs comparing attack effec- tiveness on the TREC dataset, as a function of maximum % perturbation to the input.
With just one exception, BAE-R+I is the strongest attack since it allows both replacement and insertion at the same token position. We the BAE-R and observe a general trend that BAE-I attacks often perform comparably, while the BAE-R/I and BAE-R+I attacks are much stronger. We observe that the BERT classiï¬er is more robust to BAE and TextFooler attacks than the word-LSTM and word-CNN possibly due to its large size and pre-training on a large corpus.
The TextFooler attack is sometimes stronger than the BAE-R attack for the BERT classiï¬er. We at- tribute this to the shared parameter space between the BERT-MLM and the BERT classiï¬er before ï¬ne-tuning. The predicted tokens from BERT- MLM may not be able to drastically change the internal representations learned by the BERT clas- siï¬er, hindering their ability to adversarially affect the classiï¬er prediction.
Additionally, we make some interesting observa- tions pertaining to the average semantic similarity of the adversarial examples with the original sen- tences (computed using USE). From Tables 1, 2 we observe that across different models and datasets, all BAE attacks have higher average semantic simi- larity than TextFooler. Notably, the BAE-I attack achieves the highest semantic similarity among all the 4 modes. This can be explained by the fact that all tokens of the original sentence are retained, in
the original order, in the adversarial example gener- ated by BAE-I. Interestingly, we observe that the average semantic similarity of the BAE-R+I at- tack is always higher than the BAE-R attack. This lends support to the importance of the âInsertâ op- eration in ameliorating the effect of the âReplaceâ operation. We further investigate this through an ablation study discussed later. Effectiveness. We study the effectiveness of BAE on limiting the number of R/I operations permitted on the original text. We plot the attack performance as a function of maximum % perturbation (ratio of number of word replacements and insertions to the length of the original text) for the TREC dataset. From Figure 2, we clearly observe that the BAE attacks are consistently stronger than TextFooler. The classiï¬er models are relatively robust to pertur- bations up to 20%, while the effectiveness saturates at 40-50%. Surprisingly, a 50% perturbation for the TREC dataset translates to replacing or inserting just 3-4 words, due to the short text lengths. Qualitative Examples. We present adversarial examples generated by the attacks on sentences from the IMDB and Yelp datasets in Table 3. All attack strategies successfully changed the classiï¬ca- tion to negative, however the BAE attacks produce more natural looking examples than TextFooler. The tokens predicted by the BERT-MLM ï¬t well in the sentence context, while TextFooler tends to re-
Original [Positive Sentiment]: This ï¬lm offers many delights and surprises. TextFooler: This ï¬ick citations disparate revel and surprises. BAE-R: This movie offers enough delights and surprises BAE-I: This lovely ï¬lm platform offers many pleasant delights and surprises BAE-R/I: This lovely ï¬lm serves several pleasure and surprises . BAE-R+I: This beautiful movie offers many pleasant delights and surprises . Dataset Amazon IMDB MR Sentiment Accuracy (%) Original TF R R+I 95.7 90.3 93.3 79.1 83.1 82.0 85.2 84.3 84.6 83.8 79.3 82.4 Original [Positive Sentiment]: Our server was great and we had perfect service. TextFooler: Our server was tremendous and we assumed faultless services. BAE-R: Our server was decent and we had outstanding service. BAE-I: Our server was great enough and we had perfect service but. BAE-R/I: Our server was great enough and we needed perfect service but. BAE-R+I: Our server was decent company and we had adequate service. Dataset Amazon IMDB MR Naturalness (1-5) Original TF R 4.26 4.35 4.19 3.17 3.41 3.35 3.91 3.89 3.84 R+I 3.71 3.76 3.74
Table 3: Qualitative examples of each attack on the BERT classiï¬er (Replacements: Red, Inserts: Blue)
Table 4: Human evaluation results (TF: TextFooler and R(R+I): BAE-R(R+I)).
place words with complex synonyms, which can be easily detected. Moreover, BAEâs additional degree of freedom to insert tokens allows for a successful attack with fewer perturbations.
Human Evaluation. We perform human eval- uation of our BAE attacks on the BERT classiï¬er. For 3 datasets, we consider 100 samples from each test set shufï¬ed randomly with their successful ad- versarial examples from BAE-R, BAE-R+I and TextFooler. We calculate the sentiment accuracy by asking 3 annotators to predict the sentiment for each sentence in this shufï¬ed set. To evaluate the naturalness of the adversarial examples, we ï¬rst present the annotators with 50 other original data samples to get a sense of the data distribution. We then ask them to score each sentence (on a Likert scale of 1-5) in the shufï¬ed set on its grammar and likelihood of being from the original data. We average the 3 scores and present them in Table 4.
importance of human-centered evaluation. The gap between the scores on the original data and the ad- versarial examples speaks for the limitations of the attacks, however BAE represents an important step forward towards improved adversarial examples. Replace vs. Insert. Our BAE attacks allow inser- tion operations in addition to replace. We analyze the beneï¬ts of this ï¬exibility of R/I operations in Table 5. From Table 5, the splits A and B are the % of test points which compulsorily need I and R operations respectively for a successful attack. We can observe that the split A is larger than B thereby indicating the importance of the I operation over R. Test points in split C require both R and I opera- tions for a successful attack. Interestingly, split C is largest for Subj, which is the most robust to at- tack (Table 2) and hence needs both R/I operations. Thus, this study gives positive insights towards the importance of having the ï¬exibility to both replace and insert words.
Both BAE-R and BAE-R+I attacks almost always outperform TextFooler in both metrics. BAE-R outperforms BAE-R+I since the latter in- serts tokens to strengthen the attack, at the expense of naturalness and sentiment accuracy. Interest- ingly, the BAE-R+I attacks achieve higher aver- age semantic similarity scores than BAE-R, as dis- cussed in Section 3. This exposes the shortcomings of using USE for evaluating the retention of se- mantics of adversarial examples, and reiterates the
Dataset Word-LSTM A B C Word-CNN A B C A BERT B MR 15.1 Subj 14.4 TREC 16.6 10.1 12.3 1.6 3.1 5.1 0.2 12.4 16.2 20.0 9.6 13.8 5.0 2.8 7.4 1.4 24.3 13.9 14.0 12.9 11.4 8.6 C 5.7 7.5 2.4
Table 5: Analyzing relative importance of âReplaceâ and âInsertâ perturbations for BAE. A denotes % of test instances which are successfully attacked by BAE-R/I, but not BAE-R, i.e. A : (R/I) â© R. Simi- larly, B : (R/I) â© I and C : (R/I) â© R â© I.
We present complete effectiveness graphs and details of human evaluation in Appendix B and C. BAE is implemented1 in TextAttack (Morris et al., 2020), a popular suite of NLP adversarial attacks.
# 4 Conclusion
In this paper, we have presented a new tech- nique for generating adversarial examples (BAE) through contextual perturbations based on the BERT Masked Language Model. We propose in- serting and/or replacing tokens from a sentence, in their order of importance for the text classiï¬- cation task, using a BERT-MLM. Automatic and human evaluation on several datasets demonstrates the strength and effectiveness of our attack.
1https://github.com/QData/TextAttack/ blob/master/textattack/attack_recipes/ bae_garg_2019.py
# Acknowledgments
The authors thank Arka Sadhu, Kalpesh Krishna, Aws Albarghouthi, Yingyu Liang and Justin Hsu for providing in-depth feedback for this research. The authors thank Jack Morris and Jin Yong Yoo for integrating BAE in the TextAttack framework. This work is supported, in part, by the National Science Foundation CCF under award 1652140.
# Broader Ethical Impact
Our work addresses the important problem of adver- sarial vulnerabilities of modern text classiï¬cation models. While we acknowledge the possibility of its misuse to maliciously attack publicly available text classiï¬ers, we believe our work represents an important step forward in analyzing the robustness of NLP models. We hope our work inspires im- proved defenses against adversarial attacks on text classiï¬cation models.
# References
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial ex- amples. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial exam- In Proceedings of the ples for text classiï¬cation. 56th Annual Meeting of the Association for Compu- tational Linguistics, pages 31â36, Melbourne, Aus- tralia. Association for Computational Linguistics.
Shi Feng, Eric Wallace, Mohit Iyyer, Pedro Rodriguez, Alvin Grissom II, and Jordan L. Boyd-Graber. 2018. Right answer for the wrong reason: Discovery and mitigation. CoRR, abs/1804.07781.
Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classiï¬ers. CoRR, abs/1801.04354.
Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversar- ial examples. In International Conference on Learn- ing Representations.
Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Com., 9(8):1735â1780.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2019. Is bert really robust? natural lan- guage attack on text classiï¬cation and entailment. arXiv preprint arXiv:1907.11932.
Yoon Kim. 2014. Convolutional neural networks for sentence classiï¬cation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1746â1751.
Alexey Kurakin, Ian Goodfellow, and Samy Bengio. 2017. Adversarial examples in the physical world. ICLR Workshop.
Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2020. Contextualized perturbation for textual adversarial attack.
Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Un- derstanding neural networks through representation erasure. CoRR, abs/1612.08220.
Xin Li and Dan Roth. 2002. Learning question clas- In Proceedings of the 19th International siï¬ers. Conference on Computational Linguistics - Volume 1, COLING â02, pages 1â7, Stroudsburg, PA, USA. Association for Computational Linguistics.
Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017. Deep text clas- siï¬cation can be fooled. CoRR, abs/1704.08006.
John X. Morris, Eli Liï¬and, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. Textattack: A framework for adversarial attacks, data augmenta- tion, and adversarial training in nlp.
Nikola MrkËsi´c, Ivan Vuli´c, Diarmuid ´O S´eaghdha, Ira Leviant, Roi Reichart, Milica GaËsi´c, Anna Korho- nen, and Steve Young. 2017. Semantic specializa- tion of distributional word vector spaces using mono- lingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics.
Nikola MrkËsi´c, Diarmuid ´O S´eaghdha, Blaise Thom- son, Milica GaËsi´c, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-ï¬tting word vectors to lin- guistic constraints. In Proceedings of HLT-NAACL.
Bo Pang and Lillian Lee. 2004. A sentimental educa- tion: Sentiment analysis using subjectivity. In Pro- ceedings of ACL, pages 271â278.
Bo Pang and Lillian Lee. 2005. Seeing stars: Exploit- ing class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL.
Ian Goodfel- low, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia Conference on Computer and Communica- tions Security, ASIA CCS â17, pages 506â519, New York, NY, USA. ACM.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial ex- amples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Associa- tion for Computational Linguistics.
Prathusha K Sarma, Yingyu Liang, and Bill Sethares. 2018. Domain adapted word embeddings for im- In Proceedings of proved sentiment classiï¬cation. the 56th Annual Meeting of the Association for Com- putational Linguistics, pages 37â42.
Zhouxing Shi and Minlie Huang. 2020. Robustness to modiï¬cation with shared words in paraphrase identi- ï¬cation.
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.
Janyce Wiebe and Theresa Wilson. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation.
Wei Emma Zhang, Quan Z. Sheng, and Ahoud Abdul- rahmn F. Alhazmi. 2019. Generating textual adver- sarial examples for deep learning models: A survey. CoRR, abs/1901.06796.
Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2018. Generating natural adversarial examples. In Interna- tional Conference on Learning Representations.
# Appendix
# A Experimental Reproducibility
Dataset and Models The dataset statistics are reported in Table 5 and we give a brief overview of the dataset and the task for which it is used along with public links to download the datasets.
⢠Amazon: Amazon product reviews dataset 2.
⢠Yelp: A restaurant reviews dataset from Yelp2.
IMDB: IMDB movie reviews dataset2. 2https://archive.ics.uci.edu/ml/ datasets/Sentiment+Labelled+Sentences
Dataset Amazon Yelp IMDB MR MPQA Subj TREC 2 2 2 2 2 2 6 900 900 900 9595 9543 9000 5951 100 100 100 1067 1060 1000 500 10.29 11.66 17.56 20.04 3.24 23.46 7.57
Table 5: Summary statistics for the datasets
⢠MR: A movie reviews dataset based on sub- jective rating and sentiment polarity 3.
⢠MPQA: An unbalanced dataset for polarity detection of opinions 4.
⢠TREC: A dataset for classifying types of ques- tions with 6 classes 5.
⢠SUBJ: A dataset for classifying a sentence as objective or subjective. 2
Training Details On the sentence classiï¬cation task, we target three models: word-based convo- lutional neural network (WordCNN), word-based LSTM, and the state-of-the-art BERT. We use 100 ï¬lters of sizes 3,4,5 for the WordCNN model with a dropout of 0.3. Similar to (Jin et al., 2019) we use a 1-layer bi-directional LSTM with 150 hidden units and a dropout of 0.3. For both models, we use the 300 dimensional pre-trained counter ï¬tted word embeddings (MrkËsi´c et al., 2017).
For the BERT classiï¬er, we used the BERT base uncased model which has 12-layers, 12 attention heads and 768 hidden dimension size. Across all models and datasets, we use the standard BERT uncased vocabulary of size 30522. We ï¬rst train all three models on the training data split and use early stopping on the test dataset. For BERT ï¬ne-tuning, we use the standard setting of an Adam classiï¬er having a learning rate of 2Ã10â5 and 2 ï¬ne-tuning epochs.
For our BAE attacks, we use a pre-trained BERT Base-uncased MLM to predict the masked tokens. We only consider the top K=50 synonyms from the BERT-MLM predictions and set a threshold of 0.8 for the cosine similarity between USE based embeddings of the adversarial and input text.
3https://www.cs.cornell.edu/people/ pabo/movie-review-data/
# 4http://mpqa.cs.pitt.edu/ 5http://cogcomp.org/Data/QA/QC/
For R operations, we ï¬lter out predicted tokens which form a different POS than the original token in the sentence. For both R and I operations, we ï¬l- ter out stop words using NLTK from the set of pre- dicted tokens. Additionally we ï¬lter out antonyms using synonym embeddings (MrkËsi´c et al., 2016) for sentiment analysis tasks.
B Results Figures 3 - 8 are the complete set of graphs showing the attack effectiveness for all seven datasets.
C Human Evaluation We ask the human evaluators to judge the natural- ness of texts presented to them, i.e. whether they think they are adversarial examples or not. They were instructed to do so on the basis of grammar and how likely they think it is from the original dataset, and rate each example on the following Likert scale of 1-5:
1) Sure adversarial sample 2) Likely an adversarial example 3) Neutral 4) Likely an original sample 5) Sure original sample. From the results of Table 3, it is clear that BAE-R always beats the sentiment accuracy and naturalness score of TextFooler. The latter is due to unnaturally long and complex synonym replace- ments on using TextFooler.
Test accuracy SSS
7 S|
*
(a) Word-LSTM (b) Word-CNN Figure 3: Amazon (c) BERT
(a) Word-LSTM (b) Word-CNN Figure 4: Yelp (c) BERT
Test accuracy âhs 0 ââ
- 10 Oe
'
Test accuracy
# (a) Word-LSTM
# (b) Word-CNN Figure 5: IMDB
# (c) BERT
(a) Word-LSTM (b) Word-CNN Figure 6: MR (c) BERT
Test accuracy | S
; S|:
Test accuracy [ oe Maximum Percent Perturbation
i Se _
WE Saas Maximum Percent Perturbation
# (a) Word-LSTM
# (b) Word-CNN Figure 7: MPQA
# (c) BERT
âTest Accuracy ge 3 8 8 s S Maximum Percent Perturbation
HS
= Maximum Percent Perturbation
(a) Word-LSTM (b) Word-CNN Figure 8: Subj (c) BERT | {
"id": "1907.11932"
} |
2004.01909 | Conversational Question Reformulation via Sequence-to-Sequence Architectures and Pretrained Language Models | This paper presents an empirical study of conversational question
reformulation (CQR) with sequence-to-sequence architectures and pretrained
language models (PLMs). We leverage PLMs to address the strong token-to-token
independence assumption made in the common objective, maximum likelihood
estimation, for the CQR task. In CQR benchmarks of task-oriented dialogue
systems, we evaluate fine-tuned PLMs on the recently-introduced CANARD dataset
as an in-domain task and validate the models using data from the TREC 2019 CAsT
Track as an out-domain task. Examining a variety of architectures with
different numbers of parameters, we demonstrate that the recent text-to-text
transfer transformer (T5) achieves the best results both on CANARD and CAsT
with fewer parameters, compared to similar transformer architectures. | http://arxiv.org/pdf/2004.01909 | Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20200404 | 20200404 | 0 2 0 2
r p A 4 ] L C . s c [
1 v 9 0 9 1 0 . 4 0 0 2 : v i X r a
# Conversational Question Reformulation via Sequence-to-Sequence Architectures and Pretrained Language Models
Sheng-Chieh Linâ1, Jheng-Hong Yangâ1, Rodrigo Nogueira2, Ming-Feng Tsai1, Chuan-Ju Wang1 and Jimmy Lin2
1Research Center for Information Technology Innovation, Academia Sinica 2David R. Cheriton School of Computer Science, University of Waterloo
# Abstract
This paper presents an empirical study of conversational question reformulation (CQR) with sequence-to-sequence architectures and pretrained language models (PLMs). We lever- age PLMs to address the strong token-to-token independence assumption made in the com- mon objective, maximum likelihood estima- tion, for the CQR task. In CQR benchmarks of task-oriented dialogue systems, we evaluate ï¬ne-tuned PLMs on the recently-introduced CANARD dataset as an in-domain task and validate the models using data from the TREC 2019 CAsT Track as an out-domain task. Ex- amining a variety of architectures with dif- ferent numbers of parameters, we demon- strate that the recent text-to-text transfer trans- former (T5) achieves the best results both on CANARD and CAsT with fewer parameters, compared to similar transformer architectures.
# Introduction
Natural-language dialogue capabilities play an es- sential role as an enabling technology in intelligent personal assistants to understand and connect peo- ple (Gao et al., 2018). Effective dialogue systems require many components, including natural lan- guage understanding, dialogue state tracking, and natural language generation (Zhao and Eskenazi, 2016). Of late, practitioners in industry (Ren et al., 2018) and researchers in academia (Elgohary et al., 2019) have made substantial progress in a variety of methods to improve end-to-end task-oriented dialogue systems.
Original question Context Rewritten question
(b) Pretrained language model (dashed lines) to augment maximum likelihood estimation
Figure 1: Conversational question reformulation.
present an example from Elgohary et al. (2019) in Figure 1 to illustrate the task of conversational question reformulation (CQR).
However, as we can observe from Figure 1(a), applying maximum likelihood estimation (MLE) purely based on human-rewritten sentences intro- duces a strong independence assumption that does not consider conversation dependencies or linguis- tic structure. Thanks to great progress made by language models pretrained on large corpora using self-supervised learning objectives (Devlin et al., 2018; Radford et al., 2018; Dong et al., 2019; Raffel et al., 2019), there are now many mod- els equipped with knowledge of various language structures extracted from human-generated texts. We can leverage these models to relax the indepen- dence assumption in a pure MLE objective, shown in Figure 1(b).
We list the contributions of this work as follows:
Due to the complex and nuanced nature of hu- man communication, conversations often contain utterances that include coreference, ellipsis, and other phenomena; thus, a good dialogue system should be able to resolve these ambiguities to ac- curately reconstruct the userâs original intent. We
âContributed equally.
⢠We conduct, to our knowledge, the ï¬rst empiri- cal study leveraging pretrained language models to relax the independence assumption made in using an MLE objective in a CQR task.
⢠We achieve the state of the art in terms of BLEU on two CQR benchmarks of task-oriented dia- logue systems: (a) conversational open-domain
question answering with CANARD and (b) con- versational search with TREC CAsT.
In summary, this work demonstrates a simple yet effective way to resolve coreference and ellipsis in a CQR task by leveraging pretrained language models. Furthermore, among representative mod- els, we ï¬nd that a well-tuned text-to-text transfer transformer (T5) reaches performance that is on par with humans on the in-domain CANARD dataset and achieves the best performance on the out-of- domain CAsT dataset.
# 2 Related Work
Conversational search (Radlinski and Craswell, 2017) covers a broad range of techniques that facil- itate an IR task in a conversational context: nat- ural language interactions, cumulative clariï¬ca- tion (Aliannejadi et al., 2019), feedback collection, and information needs proï¬ling during conversa- tions. CQR is an important component of conver- sational search systems. In order to resolve usersâ information needs to retrieve relevant answers, a CQR module that leverages pretrained models is a promising approach, compared to alternatives that track dialogue states based on âcheapâ but noisy implicit feedback from users (Ahmad et al., 2018, 2019) or âexpensiveâ but sparse judgments (Jeffrey et al., 2019).
Open-domain question answering (QA) sys- tems return answers in response to user questions, both in natural language, from a broad range of domains (Sun et al., 2018). With great progress coming from contributions by the NLP and IR com- munities, high quality datasets for single-turn (Ra- jpurkar et al., 2018; KoËcisk´y et al., 2018; Dhingra et al., 2017) and multi-turn (conversational) (Reddy et al., 2019; Choi et al., 2018) open-domain QA are available today. These datasets have led to many successful supervised techniques for various tasks (Chen et al., 2017; Seo et al., 2017; Huang et al., 2019).
Recently, to improve dialogue understanding, re- searchers have proposed collecting annotations on resolving multi-turn dialogues in the context of question answering tasks (Ren et al., 2018; Elgo- hary et al., 2019). Building on this line of thought, our work addresses the problem of modeling ques- tion rewrites in multi-turn dialogues, especially in the context of open-domain conversational QA.
# 3 Conversational Question Reformulation
# 3.1 Problem Formulation
We ï¬rst formally deï¬ne the conversational ques- tion reformulation (CQR) task, which is also called question de-contextualization (Elgohary et al., 2019) or conversational question (query) under- standing in the context of task-oriented dialogue systems (Ren et al., 2018). Consider a topic t from a set of topics T , given a topic-oriented ut- terance sequence (i.e., the conversation history): H t = {u1, · · · , ui, ui+1, · · · uN } of N utterances, each of which could be a question qi or an answer ai at the i-th turn. The task is to reformulate the question qi into ¯qi that incorporates the context H t j=1. In other words, we wish to au- tomatically reformulate the input question qi by infusing information that exists in the context H t but is missing from the question itself.
Following the deï¬nitions of Ren et al. (2018) and Elgohary et al. (2019), we further reï¬ne the task scope of reformulating ¯qi. Given a question qi with its historical context H t <i and a human- rewritten ground truth ¯qi, our objective is to induce a function F ({qi, H t <i}) = ¯qi, where ¯qi is com- prised of tokens {yk}m k=1 of length m from the con- text comprising the dialogue sequence {qi, H t <i} (current and historical utterances), modeled as a se- quence of tokens {xk}n k=1 of length n. The tokens ykâs can either be drawn from the context H t <i or the current input qi. In the reconstruction of the ground truth, human annotators are asked to main- tain the sentence structure of qi by copying phrases from the original utterances and performing as few edits as possible.
Finally, given probability P conditioned on a parameterized function ËF and the context (current and historical utterances), the overall objective of the task is then deï¬ned in terms of ï¬nding the pa- rameters θ by maximum likelihood estimation:
T N 0 = arg mpx TT] II Pa (@IP({a. Ed)
# 3.2 Sequence-to-Sequence Architectures and Pretrained Language Models
As both the input qi and the output ¯qi are posed in natural language, a reasonable choice for the para- metric function is a sequence-to-sequence (S2S) model (Sutskever et al., 2014; Vaswani et al., 2017). With this design, we can incorporate context-
dependent sentence-level structures when gener- ating output tokens, since the model can consider both the previously-generated sequences as well as the context.
To extract information from the conversation flow, a simple approach, proposed by Xiong et al. (2018) and Elgohary et al. (2019), is to concatenate the historical utterances Hâ, with the current input q as the context [H!, || q;], and then use a S2S model to infer the output sequence q; based on it. To optimize parameters in the S2S model, we can adopt a supervised learning approach to train the S2S model to generate the g; tokens, given the qj tokens as ground truth output.
However, Eq (1) makes an important assumption: here, we consider each conversation topic t and each i-th turn independently. Since a topic-oriented conversation is often coherent and smoothly spans several utterances, an approximation of the param- eterized function ËF (·, θ) purely based on Eq (1) could be sub-optimal. To relax this assumption, we introduce pretrained language models (Devlin et al., 2018; Radford et al., 2018; Raffel et al., 2019) to leverage language structures extracted from large corpora. Speciï¬cally, we adopt these models and ï¬ne-tune their pretrained weights, as in previous work (Radford et al., 2018; Raffel et al., 2019).
# 4 Experiments
# 4.1 Dataset
To evaluate the capability of various models in re- formulating conversational questions, we conduct experiments on the CANARD dataset (Elgohary et al., 2019), an existing large open-domain dataset for CQR (containing over 30k training samples). Each sample in the CANARD dataset includes an original query from the QuAC dataset (Choi et al., 2018), its context (historical utterances and their answers), and the corresponding rewritten question by human annotators.
In addition, we also evaluate model performance on the dataset provided by the TREC 2019 Conver- sational Assistant Track (CAsT).1 CAsT organizers manually rewrote each conversational query in the evaluation set according to its contextual informa- tion and previous utterances in the same session. Statistics of the CANARD and CAsT datasets are presented in Table 1.
# 1https://github.com/daltonj/
treccastweb
Table 1: Statistics of the datasets used in this work.
CANARD CAsT Train Dev Test 31,538 3,418 5,571 479
# 4.2 Setup
To train and evaluate our sequence-to-sequence (S2S) models, we construct model input largely fol- lowing Elgohary et al. (2019). Speciï¬cally, we con- catenate each original question and its context by adding special separator tokens between them. Sep- arator tokens are also added to contextual informa- tion to separate historical utterances. The human- rewritten questions serve as the ground truth target sequences. For encoder- or decoder-only models (e.g., GPT-2, BERT, and UniLM), each training in- put sequence (as described above) is concatenated with its target sequence, and the models are trained to recover the target sequence using standard mask- ing tricks.
We train each model on the CANARD training set and select the checkpoint with the best perfor- mance on development set. In addition to compar- ing model performance on the CANARD test set, we directly use the model trained on CANARD to perform CQR on the CAsT dataset.2 Model per- formance is computed by the BLEU score between model output and the human-rewritten ground truth. Table 2 shows the settings of the neural models.
Additional model-speciï¬c training details are as follows. (a) LSTM: Following the script provided by Elgohary et al. (2019), we train a bidirectional LSTM S2S model with attention; the word embed- dings are initialized with GloVE.3 (b) GPT-2 (Rad- ford et al., 2018), which can be characterized as a pretrained decoder-only transformer: To focus on rewriting questions, we ï¬ne-tune the model (GPT-2 medium) by masking the cross entropy loss at the positions of the contextual tokens. (c) BERT (De- vlin et al., 2018), which can be characterized as a pretrained encoder-only transformer: Following the S2S ï¬ne-tuning procedure proposed in Dong et al. (2019), we ï¬ne-tune BERT-large (cased) by randomly masking the tokens with 70% probability in targeted sequences.4 (d) UniLM (Dong et al., 2019), where the model architecture is the same as BERT large and pretrained using three types
2Note that for CAsT, only historical questions are included as contextual information.
3https://github.com/aagohary/canard 4https://github.com/microsoft/unilm
Table 2: Model settings.
# parameters Learning rate Batch size LSTM GPT-2-medium BERT-large UniLM-large T5-base 46M 345M 340M 340M 220M 0.15 10â4 10â5 10â5 10â4 16 32 32 32 256
of language-modeling tasks: The method for ï¬ne- tuning is the same as BERT. (e) T5 (Raffel et al., 2019), an encoderâdecoder transformer that maps natural language understanding tasks to text-to- text transformation tasks: We ï¬ne-tune the T5-base model with the same settings used in Nogueira and Lin (2019).5
In addition, we list human performance of CQR (denoted as Human), as measured by Elgohary et al. (2019), and the baseline performance using questions without any reformulation (denoted as Raw) for comparison.
# 4.3 Results
Our main results in terms of BLEU on CANARD and CAsT are shown in Table 3, using greedy search decoding for inference. In general, all neu- ral S2S models perform better than the original questions (Raw), except for LSTM on CAsT. This indicates that the PLMs (GPT2, BERT, UniLM, and T5) have obtained at least some generalization capability on the CQR task.
Among all neural S2S models, T5 demonstrates a better ability to learn CQR from human-rewritten questions with fewer model parameters. Speciï¬- cally, in the CANARD test set, T5 beats the other neural S2S models with 58.08 BLEU, which is close to human performance, 59.92. Furthermore, on CAsT, T5 achieves the highest BLEU score (75.07), four points better than the second-best model (71.21). These results demonstrate T5âs superior generalization ability.
In addition, we also perform S2S model infer- ence using beam search and top-k random sam- pling decoding.6 Figure 2 (left side) shows that beam search with larger beam widths further im- proves BLEU scores in both datasets. T5 with a beam width of 10 achieves a BLEU score that is on par with human performance on the CANARD test set and reaches 76.22 on CAsT.7 Figure 2 (right
5https://github.com/castorini/ docTTTTTquery
6Note that beam search (top-k random sampling) is equal to greedy search when the beam width (top-k) is set to 1
7We did not perform GPT-2 inference with beam search
Table 3: BLEU score comparison. For simplicity, we compare neural S2S models using greedy search.
CANARD CAsT Dev Test Human Raw 59.92 33.84 36.25 - 60.41 LSTM 43.68 GPT-2-medium 52.63 55.34 BERT-large 57.39 UniLM-large 59.13 T5-base 39.15 50.07 54.34 55.92 58.08 42.24 68.07 69.53 71.21 75.07
@ LST
@ S
# Mode
# GPT2
# BERT
# Uni
# mt
CANARD (Test) CANARD (Test) per bon 69.02 5 Chunk size (random sampling) CAST 5 Chunk size (beam search) CAsT 5 Chunk size (beam search) Chunk size (random sampling)
Figure 2: Decoding sensitivity analysis
side) illustrates that random sampling with larger top-k leads to poor BLEU scores.8 Under this de- coding strategy, T5 still maintains better BLEU scores compared to the other S2S models.
# 5 Conclusion
In this paper, we conduct experiments on conver- sational question reformulation (CQR) via neural sequence-to-sequence (S2S) models and demon- strate that our ï¬ne-tuned T5-base model achieves the state of the art, in one case achieving perfor- mance on par with humans (at least measured by BLEU). In addition, experiments on the CAsT dataset show that our ï¬ne-tuned T5-base model can be directly used in a transfer setting and beats other neural S2S models by quite a large margin.
since the original implementation does not support beam search.
8For random sampling, we perform model inference with 10 repetitions and average over them.
# 6 Acknowledgements
This research was supported in part by the Canada First Research Excellence Fund and the Natural Sci- ences and Engineering Research Council (NSERC) of Canada. Additionally, we would like to thank Google for computational resources in the form of Google Cloud credits.
# References
Wasi Uddin Ahmad, Kai-Wei Chang, and Hongning Wang. 2018. Multi-task learning for document rank- ing and query suggestion. In Proc. ICLR.
Wasi Uddin Ahmad, Kai-Wei Chang, and Hongning Wang. 2019. Context attentive document ranking and query suggestion. In Proc. SIGIR, page 385394.
Mohammad Aliannejadi, Hamed Zamani, Fabio Crestani, and W. Bruce Croft. 2019. Asking clarify- ing questions in open-domain information-seeking conversations. In Proc. SIGIR, page 475484.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proc. ACL, pages 1870â1879.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen- tau Yih, Yejin Choi, Percy Liang, and Luke Zettle- moyer. 2018. QuAC: Question answering in context. arXiv:1808.07036.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv:1810.04805.
Bhuwan Dhingra, Kathryn Mazaitis, and William W. Cohen. 2017. Quasar: Datasets for question answer- ing by search and reading. arXiv:1707.03904.
Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xi- aodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Uniï¬ed language model pre-training for natural language understand- ing and generation. In Proc. NIPS.
Ahmed Elgohary, Denis Peskov, and Jordan Boyd- Graber. 2019. Can you unpack that? Learning In Proc. EMNLP, to rewrite questions-in-context. pages 5917â5923.
and Lihong Li. 2018. Neural approaches to conversational AI. arXiv:1809.08267.
Hsin-Yuan Huang, Eunsol Choi, and Wen tau Yih. 2019. FlowQA: Grasping ï¬ow in history for con- versational machine comprehension. In Proc. ICLR.
Dalton Jeffrey, Chenyan Xiong, and Jamie Callan. 2019. CAsT 2019: The conversational assistance track overview. In Proc. TREC.
Tom´aËs KoËcisk´y, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, G´abor Melis, and Edward Grefenstette. 2018. The NarrativeQA read- ing comprehension challenge. Trans.of ACL, 6:317â 328.
Rodrigo Nogueira and Jimmy Lin. 2019. From doc2query to docTTTTTquery.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2018. Language models are unsupervised multitask learners.
Filip Radlinski and Nick Craswell. 2017. A theoret- In Proc. ical framework for conversational search. CHIIR, page 117126.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a uniï¬ed text-to-text trans- former. arXiv:1910.10683.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you donât know: Unanswerable ques- tions for SQuAD. In Proc. ACL, pages 784â789.
Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Trans. of ACL, 7:249â266.
Gary Ren, Xiaochuan Ni, Manish Malik, and Qifa Ke. 2018. Conversational query understanding using se- quence to sequence modeling. In Proc. WWW, page 17151724.
Min Joon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Bidirectional attention ï¬ow for machine comprehension. In Proc. ICLR.
Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Co- hen. 2018. Open domain question answering using early fusion of knowledge bases and text. In Proc. EMNLP, pages 4231â4242.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proc. NIPS, page 31043112.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. At- In Proc. NIPS, pages tention is all you need. 5998â6008.
Wayne Xiong, Lingfeng Wu, Jun Zhang, and Andreas Stolcke. 2018. Session-level language modeling for conversational speech. In Proc. EMNLP, pages 2764â2768.
Tiancheng Zhao and Maxine Eskenazi. 2016. Towards end-to-end learning for dialog state tracking and management using deep reinforcement learning. In Proc. of SIGDIAL, pages 1â10. | {
"id": "1808.07036"
} |
2004.00584 | Deep Entity Matching with Pre-Trained Language Models | We present Ditto, a novel entity matching system based on pre-trained
Transformer-based language models. We fine-tune and cast EM as a sequence-pair
classification problem to leverage such models with a simple architecture. Our
experiments show that a straightforward application of language models such as
BERT, DistilBERT, or RoBERTa pre-trained on large text corpora already
significantly improves the matching quality and outperforms previous
state-of-the-art (SOTA), by up to 29% of F1 score on benchmark datasets. We
also developed three optimization techniques to further improve Ditto's
matching capability. Ditto allows domain knowledge to be injected by
highlighting important pieces of input information that may be of interest when
making matching decisions. Ditto also summarizes strings that are too long so
that only the essential information is retained and used for EM. Finally, Ditto
adapts a SOTA technique on data augmentation for text to EM to augment the
training data with (difficult) examples. This way, Ditto is forced to learn
"harder" to improve the model's matching capability. The optimizations we
developed further boost the performance of Ditto by up to 9.8%. Perhaps more
surprisingly, we establish that Ditto can achieve the previous SOTA results
with at most half the number of labeled data. Finally, we demonstrate Ditto's
effectiveness on a real-world large-scale EM task. On matching two company
datasets consisting of 789K and 412K records, Ditto achieves a high F1 score of
96.5%. | http://arxiv.org/pdf/2004.00584 | Yuliang Li, Jinfeng Li, Yoshihiko Suhara, AnHai Doan, Wang-Chiew Tan | cs.DB, cs.CL | To appear in VLDB 2021 | null | cs.DB | 20200401 | 20200902 | 0 2 0 2
p e S 2 ] B D . s c [
3 v 4 8 5 0 0 . 4 0 0 2 : v i X r a
# Deep Entity Matching with Pre-Trained Language Models
Yuliang Li, Jinfeng Li, Yoshihiko Suhara Megagon Labs {yuliang,jinfeng,yoshi}@megagon.ai
AnHai Doan University of Wisconsin Madison [email protected]
Wang-Chiew Tan Megagon Labs [email protected]
ABSTRACT We present Ditto, a novel entity matching system based on pre- trained Transformer-based language models. We fine-tune and cast EM as a sequence-pair classification problem to leverage such mod- els with a simple architecture. Our experiments show that a straight- forward application of language models such as BERT, DistilBERT, or RoBERTa pre-trained on large text corpora already significantly improves the matching quality and outperforms previous state-of- the-art (SOTA), by up to 29% of F1 score on benchmark datasets. We also developed three optimization techniques to further improve Dittoâs matching capability. Ditto allows domain knowledge to be injected by highlighting important pieces of input information that may be of interest when making matching decisions. Ditto also summarizes strings that are too long so that only the essential information is retained and used for EM. Finally, Ditto adapts a SOTA technique on data augmentation for text to EM to aug- ment the training data with (difficult) examples. This way, Ditto is forced to learn âharderâ to improve the modelâs matching capability. The optimizations we developed further boost the performance of Ditto by up to 9.8%. Perhaps more surprisingly, we establish that Ditto can achieve the previous SOTA results with at most half the number of labeled data. Finally, we demonstrate Dittoâs effectiveness on a real-world large-scale EM task. On matching two company datasets consisting of 789K and 412K records, Ditto achieves a high F1 score of 96.5%.
to consider. As we will illustrate, correctly matching the candidate pairs requires substantial language understanding and domain- specific knowledge. Hence, entity matching remains a challenging task even for the most advanced EM solutions.
We present Ditto, a novel EM solution based on pre-trained Transformer-based language models (or pre-trained language mod- els in short). We cast EM as a sequence-pair classification problem to leverage such models, which have been shown to generate highly contextualized embeddings that capture better language under- standing compared to traditional word embeddings. Ditto further improves its matching capability through three optimizations: (1) It allows domain knowledge to be added by highlighting important pieces of the input that may be useful for matching decisions. (2) It summarizes long strings so that only the most essential informa- tion is retained and used for EM. (3) It augments training data with (difficult) examples, which challenges Ditto to learn âharderâ and also reduces the amount of training data required. Figure 2 depicts Ditto in the overall architecture of a complete EM workflow.
There are 9 candidate pairs of entries to consider for matching in total in Figure 1. The blocking heuristic that matching entries must have one word in common in the title will reduce the number of pairs to only 3: the first entry on the left with the first entry on the right and so on. Perhaps more surprisingly, even though the 3 pairs are highly similar and look like matches, only the first and last pair of entries are true matches. Our system, Ditto, is able to discern the nuances in the 3 pairs to make the correct conclusion for every pair while some state-of-the-art systems are unable to do so.
PVLDB Reference Format: Yuliang Li, Jinfeng Li, Yoshihiko Suhara, AnHai Doan, and Wang-Chiew Tan. Deep Entity Matching with Pre-Trained Language Models. PVLDB, 14(1): XXX-XXX, 2021. doi:10.14778/3421424.3421431
1 INTRODUCTION Entity Matching (EM) refers to the problem of determining whether two data entries refer to the same real-world entity. Consider the two datasets about products in Figure 1. The goal is to determine the set of pairs of data entries, one entry from each table so that each pair of entries refer to the same product.
If the datasets are large, it can be expensive to determine the pairs of matching entries. For this reason, EM is typically accompanied by a pre-processing step, called blocking, to prune pairs of entries that are unlikely matches to reduce the number of candidate pairs
The example illustrates the power of language understanding given by Dittoâs pre-trained language model. It understands that instant immersion spanish deluxe 2.0 is the same as instant immers spanish dlux 2 in the context of software products even though they are syntactically different. Furthermore, one can explicitly emphasize that certain parts of a value are more useful for deciding matching decisions. For books, the domain knowledge that the grade level or edition is important for matching books can be made explicit to Ditto, simply by placing tags around the grade/edition values. Hence, for the second candidate pair, even though the titles are highly similar (i.e., they overlap in many words), Ditto is able to focus on the grade/edition information when making the matching decision. The third candidate pair shows the power of language understanding for the opposite situation. Even though the entries look dissimilar Ditto is able to attend to the right parts of a value (i.e., the manf./modelno under different attributes) and also understand the semantics of the model number to make the right decision.
This work is licensed under the Creative Commons BY-NC-ND 4.0 International License. Visit https://creativecommons.org/licenses/by-nc-nd/4.0/ to view a copy of this license. For any use beyond those covered by this license, obtain permission by emailing [email protected]. Copyright is held by the owner/author(s). Publication rights licensed to the VLDB Endowment. Proceedings of the VLDB Endowment, Vol. 14, No. 1 ISSN 2150-8097. doi:10.14778/3421424.3421431
Contributions In summary, the following are our contributions:
⢠We present Ditto, a novel EM solution based on pre-trained language models (LMs) such as BERT. We fine-tune and cast EM
title manf./modelno price v. title manf./modelno | price instant immersion spanish deluxe topics 49.99 ---~77 instant immers spanish dlux 2 36.11 2.0 entertainment _ encore inc adventure workshop 4th-6th 174 adventure workshop th-6th grade encore software 19.99 =-- ~~ grade 8th edition : | new-sharp shr-el1192bI two-color 56.0 sharp printing calculator sharp el1192bl 37.63 ---" printing calculator 12-digit Icd black red
Figure 1: Entity Matching: determine the matching entries from two datasets.
Table A: Candidate Ditto Matched Pairs Pairs Matcher | OK H Table B: ' s , Train ' amp! ca] : | & Label A 1 1 1
Figure 2: An EM system architecture with Ditto as the matcher. In addition to the training data, the user of Ditto can specify (1) a method for injecting domain knowledge (DK), (2) a summarization module for keeping the essential information, and (3) a data aug- mentation (DA) operator to strengthen the training set.
Outline Section 2 overviews Ditto and pre-trained LMs. Section 3 describes how we optimize Ditto with domain knowledge, sum- marization, and data augmentation. Our experimental results are described in Section 4 and the case study is presented in Section 5. We discuss related work in Section 6 and conclude in Section 7.
2 BACKGROUND AND ARCHITECTURE We present the main concepts behind EM and provide some back- ground on pre-trained LMs before we describe how we fine-tune the LMs on EM datasets to train EM models. We also present a simple method for reducing EM to a sequence-pair classification problem so that pre-trained LMs can be used for solving the EM problem.
as a sequence-pair classification problem to leverage such models with a simple architecture. To our knowledge, Ditto is one of the first EM solutions that leverage pre-trained Transformer-based LMs1 to provide deeper language understanding for EM.
⢠We also developed three optimization techniques to further im- prove Dittoâs matching capability through injecting domain knowledge, summarizing long strings, and augmenting train- ing data with (difficult) examples. The first two techniques help Ditto focus on the right information for making matching deci- sions. The last technique, data augmentation, is adapted from [31] for EM to help Ditto learn âharderâ to understand the data in- variance properties that may exist but are beyond the provided labeled examples and also, reduce the amount of training data required.
⢠We evaluated the effectiveness of Ditto on three benchmark datasets: the Entity Resolution benchmark [26], the Magellan dataset [25], and the WDC product matching dataset [39] of vari- ous sizes and domains. Our experimental results show that Ditto consistently outperforms the previous SOTA EM solutions in all datasets and by up to 31% in F1 scores. Furthermore, Ditto con- sistently performs better on dirty data and is more label efficient: it achieves the same or higher previous SOTA accuracy using less than half the labeled data.
Notations Dittoâs EM pipeline takes as input two collections D and D â² of data entries (e.g., rows of relational tables, XML docu- ments, JSON files, text paragraphs) and outputs a set M â D à D â² of pairs where each pair (e, e â²) â M is thought to represent the same real-world entity (e.g., person, company, laptop, etc.). A data entry e is a set of key-value pairs e = {(attri , vali )}1â¤i â¤k where attri is the attribute name and vali is the attributeâs value represented as text. Note that our definition of data entries is general enough to capture both structured and semi-structured data such as JSON files.
As described earlier, an end-to-end EM system consists of a blocker and a matcher. The goal of the blocking phase is to quickly identify a small subset of D Ã D â² of candidate pairs of high recall (i.e., a high proportion of actual matching pairs are that subset). The goal of a matcher (i.e., Ditto) is to accurately predict, given a pair of entries, whether they refer to the same real-world entity.
2.1 Pre-trained language models Unlike prior learning-based EM solutions that rely on word em- beddings and customized RNN architectures to train the matching model (See Section 6 for a detailed summary), Ditto trains the matching models by fine-tuning pre-trained LMs in a simpler ar- chitecture.
⢠We applied Ditto to a real-world large-scale matching task on two company datasets, containing 789K and 412K entries re- spectively. To deploy an end-to-to EM pipeline efficiently, we developed an advanced blocking technique to help reduce the number of pairs to consider for Ditto. Ditto obtains high ac- curacy, 96.5% F1 on a holdout dataset. The blocking phase also helped speed up the end-to-end EM deployment significantly, by up to 3.8 times, compared to naive blocking techniques.
⢠Finally, we open-source Ditto at https://github.com/megagonlabs/ ditto.
Pre-trained LMs such as BERT [13] and GPT-2 [41] have demon- strated good performance on a wide range of NLP tasks. They are typically deep neural networks with multiple Transformer lay- ers [51], typically 12 or 24 layers, pre-trained on large text corpora such as Wikipedia articles in an unsupervised manner. During pre- training, the model is self-trained to perform auxiliary tasks such as missing token and next-sentence prediction. Studies [9, 50] have shown that the shallow layers capture lexical meaning while the deeper layers capture syntactic and semantic meanings of the input sequence after pre-training.
1There is a concurrent work [6] which applies a similar idea.
A specific strength of pre-trained LMs is that it learns the seman- tics of words better than conventional word embedding techniques such as word2vec, GloVe, or FastText. This is largely because the
Transformer architecture calculates token embeddings from all the tokens in the input sequence and thus, the embeddings it generates are highly-contextualized and captures the semantic and contex- tual understanding of the words. Consequently, such embeddings can capture polysemy, i.e., discern that the same word may have different meanings in different phrases. For example, the word Sharp has different meanings in âSharp resolutionâ versus âSharp TVâ. Pre-trained LMs will embed âSharpâ differently depending on the context while traditional word embedding techniques such as FastText always produce the same vector independent of the context. Such models can also understand the opposite, i.e., that dif- ferent words may have the same meaning. For example, the words immersion and immers (respectively, (deluxe, dlux) and (2.0, 2)) are likely the same given their respective contexts. Thus, such language understanding capability of pre-trained LMs can improve the EM performance.
2.2 Fine-tuning pre-trained language models A pre-trained LM can be fine-tuned with task-specific training data so that it becomes better at performing that task. Here, we fine-tune a pre-trained LM for the EM task with a labeled training dataset consisting of positive and negative pairs of matching and non-matching entries as follows: (1) Add task-specific layers after the final layer of the LM. For EM, we add a simple fully connected layer and a softmax output layer for binary classification.
(2) Initialize the modified network with parameters from the pre- trained LM.
(3) Train the modified network on the training set until it con-
verges. The result is a model fine-tuned for the EM task. See Appendix A for the model architecture. In Ditto, we fine-tune the popular 12-layer BERT model [13], RoBERTa [29], and a 6-layer smaller but faster variant DistilBERT [45]. However, our proposed techniques are independent of the choice of pre-trained LMs and Ditto can potentially perform even better with larger pre-trained LMs. The pair of data entries is serialized (see next section) as input to the LM and the output is a match or no-match decision. Dittoâs archi- tecture is much simpler when compared to many state-of-the-art EM solutions today [14, 34]. Even though the bulk of the âworkâ is simply off-loaded to pre-trained LMs, we show that this simple scheme works surprisingly well in our experiments.
2.3 Serializing the data entries for Ditto Since LMs take token sequences (i.e., text) as input, a key challenge is to convert the candidate pairs into token sequences so that they can be meaningfully ingested by Ditto.
Ditto serializes data entries as follows: for each data entry
e = {(attri , vali )}1â¤i â¤k , we let serialize(e) ::= [COL] attr1 [VAL] val1 . . . [COL] attrk [VAL] valk , where [COL] and [VAL] are special tokens for indicating the start of attribute names and values respectively. For example,the first entry of the second table is serialized as:
[COL] title [VAL] instant immers spanish dlux 2 [COL] manf./modelno [VAL] NULL [COL] price [VAL] 36.11
To serialize a candidate pair (e, e â²), we let serialize(e, e â²) ::= [CLS] serialize(e) [SEP] serialize(e â²) [SEP],
where [SEP] is the special token separating the two sequences and [CLS] is the special token necessary for BERT to encode the sequence pair into a 768-dimensional vector which will be fed into the fully connected layers for classification.
Other serialization schemes There are different ways to seri- alize data entries so that LMs can treat the input as a sequence classification problem. For example, one can also omit the special tokens â[COL]â and/or â[VAL]â, or exclude attribute names attri during serialization. We found that including the special tokens to retain the structure of the input does not hurt the performance in general and excluding the attribute names tend to help only when the attribute names do not contain useful information (e.g., names such as attr1, attr2, ...) or when the entries contain only one column. A more rigorous study on this matter is left for future work.
Heterogeneous schemas As shown, the serialization method of Ditto does not require data entries to adhere to the same schema. It also does not require that the attributes of data entries to be matched prior to executing the matcher, which is a sharp contrast to other EM systems such as DeepER [14] or DeepMatcher2 [34]. Furthermore, Ditto can also ingest and match hierarchically structured data entries by serializing nested attribute-value pairs with special start and end tokens (much like Lisp or XML-style parentheses structure).
3 OPTIMIZATIONS IN DITTO As we will describe in Section 4, the basic version of Ditto, which leverages only the pre-trained LM, is already outperforming the SOTA on average. Here, we describe three further optimization techniques that will facilitate and challenge Ditto to learn âharderâ, and consequently make better matching decisions.
3.1 Leveraging Domain Knowledge Our first optimization allows domain knowledge to be injected into Ditto through pre-processing the input sequences (i.e., seri- alized data entries) to emphasize what pieces of information are potentially important. This follows the intuition that when hu- man workers make a matching/non-matching decision on two data entries, they typically look for spans of text that contain key infor- mation before making the final decision. Even though we can also train deep learning EM solutions to learn such knowledge, we will require a significant amount of training data to do so. As we will describe, this pre-processing step on the input sequences is light- weight and yet can yield significant improvements. Our experiment results show that with less than 5% of additional training time, we can improve the modelâs performance by up to 8%.
There are two main types of domain knowledge that we can provide Ditto.
Span Typing The type of a span of tokens is one kind of domain knowledge that can be provided to Ditto. Product id, street number, publisher are examples of span types. Span types help Ditto avoid
2In DeepMatcher, the requirement that both entries have the same schema can be removed by treating the values in all columns as one value under one attribute.
mismatches. With span types, for example, Ditto is likelier to avoid matching a street number with a year or a product id.
Table 1 summarizes the main span types that human workers would focus on when matching three types of entities in our bench- mark datasets.
Table 1: Main span types for matching entities in our benchmark datasets.
Entity Type Types of Important Spans Publications, Movies, Music Persons (e.g., Authors), Year, Publisher Organizations, Employers Products Last 4-digit of phone, Street number Product ID, Brand, Configurations (num.)
The developer specifies a recognizer to type spans of tokens from attribute values. The recognizer takes a text string v as input and returns a list recognizer(v) = {(si , ti , typei )}i â¥1 of start/end positions of the span in v and the corresponding type of the span. Dittoâs current implementation leverages an open-source Named- Entity Recognition (NER) model [48] to identify known types such as persons, dates, or organizations and use regular expressions to identify specific types such as product IDs, last 4 digits of phone numbers, etc.
After the types are recognized, the original text v is replaced by a new text where special tokens are inserted to reflect the types of the spans. For example, a phone number â(866) 246-6453â may be replaced with â( 866 ) 246 - [LAST] 6453 [/LAST]â where [LAST]/[/LAST] indicates the start/end of the last 4 digits and addi- tional spaces are also added because of tokenization. In our imple- mentation, when we are sure that the span type has only one token or the NER model is inaccurate in determining the end position, we drop the end indicator and keep only the start indicator token.
Intuitively, these newly added special tokens are additional sig- nals to the self-attention mechanism that already exists in pre- trained LMs, such as BERT. If two spans have the same type, then Ditto picks up the signal that they are likelier to be the same and hence, they are aligned together for matching. In the above example,
â..246- [LAST] 6453 [/LAST] .. [SEP] .. [LAST] 0000 [/LAST]..â
when the model sees two encoded sequences with the [LAST] special tokens, it is likely to take the hint to align â6453â with â0000â without relying on other patterns elsewhere in the sequence that may be harder to learn.
Span Normalization The second kind of domain knowledge that can be passed to Ditto rewrites syntactically different but equiva- lent spans into the same string. This way, they will have identical embeddings and it becomes easier for Ditto to detect that the two spans are identical. For example, we can enforce that âVLDB jour- nalâ and âVLDBJâ are the same by writing them as VLDBJ. Similarly, we can enforce the general knowledge that â5 %â vs. â5.00 %â are equal by writing them as â5.0%â.
The developer specifies a set of rewriting rules to rewrite spans. The specification consists of a function that first identifies the spans of interest before it replaces them with the rewritten spans. Ditto contains a number of rewriting rules for numbers, including rules that round all floating point numbers to 2 decimal places and dropping all commas from integers (e.g., â2,020â â â2020â). For
abbreviations, we allow the developers to specify a dictionary of synonym pairs to normalize all synonym spans to be the same.
3.2 Summarizing long entries When the value is an extremely long string, it becomes harder for the LM to understand what to pay attention to when matching. In addition, one limiting factor of Transformer-based pre-trained LMs is that there is a limit on the sequence length of the input. For example, the input to BERT can have at most 512 sub-word tokens. It is thus important to summarize the serialized entries down to the maximum allowed length while retaining the key information. A common practice is to truncate the sequences so that they fit within the maximum length. However, the truncation strategy does not work well for EM in general because the important information for matching is usually not at the beginning of the sequences.
There are many ways to perform summarization [32, 42, 44]. In Dittoâs current implementation, we use a TF-IDF-based summa- rization technique that retains non-stopword tokens with the high TF-IDF scores. We ignore the start and end tags generated by span typing in this process and use the list of stop words from scikit-learn library [37]. By doing so, Ditto feeds only the most informative tokens to the LM. We found that this technique works well in prac- tice. Our experiment results show that it improves the F1 score of Ditto on a text-heavy dataset from 41% to over 93% and we plan to add more summarization techniques to Dittoâs library in the future.
3.3 Augmenting training data We describe how we apply data augmentation to augment the training data for entity matching.
Data augmentation (DA) is a commonly used technique in com- puter vision for generating additional training data from existing examples by simple transformation such as cropping, flipping, ro- tation, padding, etc. The DA operators not only add more training data, but the augmented data also allows to model to learn to make predictions invariant of these transformations.
Similarly, DA can add training data that will help EM models learn âharderâ. Although labeled examples for EM are arguably not hard to obtain, invariance properties are very important to help make the solution more robust to dirty data, such as missing values (NULLs), values that are placed under the wrong attributes or missing some tokens.
Next, we introduce a set of DA operators for EM that will help train more robust models.
Augmentation operators for EM The proposed DA operators are summarized in Table 2. If s is a serialized pair of data entries with a match or no-match label l, then an augmented example is a pair (s â², l), where s â² is obtained by applying an operator o on s and s â² has the same label l as before.
The operators are divided into 3 categories. The first category consists of span-level operators, such as span_del and span_shuffle. These two operators are used in NLP tasks [31, 57] and shown to be effective for text classification. For span_del, we randomly delete from s a span of tokens of length at most 4 without special tokens (e.g., [SEP], [COL], [VAL]). For span_shuffle, we sample a span of length at most 4 and randomly shuffle the order of its tokens.
Table 2: Data augmentation operators in Ditto. The operators are 3 different levels: span-level, attribute-level, and entry-level. All sam- plings are done uniformly at random.
Operator Explanation span_del Delete a randomly sampled span of tokens span_shuffle Randomly sample a span and shuffle the tokensâ order attr_del attr_shuffle entry_swap Delete a randomly chosen attribute and its value Randomly shuffle the orders of all attributes Swap the order of the two data entries e and e â²
These two operators are motivated by the observation that mak- ing a match/no-match decision can sometimes be âtoo easyâ when the candidate pair of data entries contain multiple spans of text sup- porting the decision. For example, suppose our negative examples for matching company data in the existing training data is similar to what is shown below.
[CLS] . . . [VAL] Google LLC . . . [VAL] (866) 246-6453 [SEP] . . . [VAL] Alphabet inc . . . [VAL] (650) 253-0000 [SEP] The model may learn to predict âno-matchâ based on the phone number alone, which is insufficient in general. On the other hand, by corrupting parts of the input sequence (e.g., dropping phone numbers), DA forces the model to learn beyond that, by leveraging the remaining signals, such as the company name, to predict âno- matchâ.
The second category of operators is attribute-level operators: attr_del and attr_shuffle. The operator attr_del randomly deletes an attribute (both name and value) and attr_shuffle randomly shuffles the order of the attributes of both data entries. The motivation for attr_del is similar to span_del and span_shuffle but it gets rid of an attribute entirely. The attr_shuffle operator allows the model to learn the property that the matching decision should be indepen- dent of the ordering of attributes in the sequence.
The last operator, entry_swap, swaps the order of the pair (e, e â²) with probability 1/2. This teaches the model to make symmetric decisions (i.e., F (e, e â²) = F (e â², e)) and helps double the size of the training set if both input tables are from the same data source.
MixDA: interpolating the augmented data Unlike DA opera- tors for images which almost always preserve the image labels, the operators for EM can distort the input sequence so much that the label becomes incorrect. For example, the attr_del operator may drop the company name entirely and the remaining attributes may contain no useful signals to distinguish the two entries.
To address this issue, Ditto applies MixDA, a recently proposed data augmentation technique for NLP tasks [31] illustrated in Fig- ure 3. Instead of using the augmented example directly, MixDA computes a convex interpolation of the original example with the augmented examples. Hence, the interpolated example is some- where in between, i.e., it is a âpartialâ augmentation of the original example and this interpolated example is expected to be less dis- torted than the augmented one.
The idea of interpolating two examples is originally proposed for computer vision tasks [63]. For EM or text data, since we cannot directly interpolate sequences, MixDA interpolates their represen- tations by the language model instead. In practice, augmentation with MixDA slows the training time because the LM is called twice. However, the prediction time is not affected since the DA operators
are only applied to training data. Formally, given an operator o (e.g., span deletion) and an original example s, to apply o on s with MixDA (as Figure 3 illustrates), (1) Randomly sample λ from a Beta distribution λ â¼ Beta(α, α) with a hyper-parameter α â [0, 1] (e.g., 0.8 in our experiments);
(2) Denote by LM(s) the LM representation of a sequence s. Let LM(s â²â²) = λ · LM(s) + (1 â λ) · LM(augment(s, o)).
Namely, LM(s â²â²) is the convex interpolation between the LM outputs of s and the augmented s â² = augment(s, o);
(3) Train the model by feeding LM(s â²â²) to the rest of the network and back-propagate. Back-propagation updates both the LM and linear layerâs parameters.
Input Sequence Sequences Representations Original || ;. eo NN Linear anguage i u Interpolate (wixup) Softmax (BERT) W Augmented = 4-A Loss Back propagate
Figure 3: Data augmentation with MixDA.
4 EXPERIMENTS We present the experiment results on benchmark datasets for EM: the ER Benchmark datasets [26], the Magellan datasets [25] and the WDC product data corpus [39]. Ditto achieves new SOTA results on all these datasets and outperforms the previous best results by up to 31% in F1 score. The results show that Ditto is more robust to dirty data and performs well when the training set is small. Ditto is also more label-efficient as it achieves the previous SOTA results using only 1/2 or less of the training data across multiple subsets of the WDC corpus. Our ablation analysis shows that (1) using pre- trained LMs contributes to over 50% of Dittoâs performance gain and (2) all 3 optimizations, domain knowledge (DK), summarization (SU) and data augmentation (DA), are effective. For example, SU improves the performance on a text-heavy dataset by 52%, DK leads to 1.2% average improvement on the ER-Magellan datasets and DA improves on the WDC datasets by 2.53% on average. In addition, we show in Appendix B that although Ditto leverages deeper neural nets, its training and prediction time is comparable to the SOTA EM systems.
4.1 Benchmark datasets We experimented with all the 13 publicly available datasets used for evaluating DeepMatcher [34]. These datasets are from the ER Benchmark datasets [26] and the Magellan data repository [12]. We summarize the datasets in Table 3 and refer to them as ER-Magellan. These datasets are for training and evaluating matching models for various domains including products, publications, and businesses. Each dataset consists of candidate pairs from two structured tables of entity records of the same schema. The pairs are sampled from the results of blocking and manually labeled. The positive rate (i.e., the ratio of matched pairs) ranges from 9.4% (Walmart-Amazon) to 25% (Company). The number of attributes ranges from 1 to 8.
Among the datasets, the Abt-Buy and Company datasets are text-heavy meaning that at least one attributes contain long text. Also, following [34], we use the dirty version of the DBLP-ACM,
Table 3: The 13 datasets divided into 4 categories of domains. The datasets marked with â are text-heavy (Textual). Each dataset with â has an additional dirty version to test the modelsâ robustness against noisy data.
Datasets Amazon-Google, Walmart-Amazonâ Abt-Buyâ , Beer DBLP-ACM*, DBLP-Scholar*, iTunes-Amazon* Companyâ , Fodors-Zagats Domains software / electronics product citation / music company / restaurant
DBLP-Scholar, iTunes-Amazon, and Walmart-Amazon datasets to measure the robustness of the models against noise. These datasets are generated from the clean version by randomly emptying at- tributes and appending their values to another randomly selected attribute.
Each dataset is split into the training, validation, and test sets using the ratio of 3:1:1. The same split of the datasets is also used in the evaluation of other EM solutions [17, 23, 34]. We list the size of each dataset in Table 5.
The WDC product data corpus [39] contains 26 million product offers and descriptions collected from e-commerce websites [56]. The goal is to find product offer pairs that refer to the same product. To evaluate the accuracy of product matchers, the dataset provides 4,400 manually created golden labels of offer pairs from 4 categories: computers, cameras, watches, and shoes. Each category has a fixed number of 300 positive and 800 negative pairs. For training, the dataset provides for each category pairs that share the same product ID such as GTINs or MPNs mined from the productâs webpage. The negative examples are created by selecting pairs that have high textual similarity but different IDs. These labels are further reduced to different sizes to test the modelsâ label efficiency. We summarize the different subsets in Table 4. We refer to these subsets as the WDC datasets.
Table 4: Different subsets of the WDC product data corpus. Each subset (except Test) is split into a training set and a validation set with a ratio of 4:1 according to the dataset provider [39]. The last col- umn shows the positive rate (%POS) of each category in the xLarge set. The positive rate on the test set is 27.27% for all the categories.
Categories Test Small Medium Large xLarge %POS Computers Cameras Watches Shoes All 1,100 1,100 1,100 1,100 4,400 2,834 1,886 2,255 2,063 9,038 8,094 5,255 6,413 5,805 25,567 33,359 20,036 27,027 22,989 103,411 68,461 42,277 61,569 42,429 214,736 14.15% 16.98% 15.05% 9.76% 14.10%
Each entry in this dataset has 4 attributes: title, description, brand, and specTable. Following the setting in [39] for DeepMatcher, we allow Ditto to use any subsets of attributes to determine the best combination. We found in our experiments that Ditto achieves the best performance when it uses only the title attribute. We provide further justification of this choice in Appendix F.
4.2 Implementation and experimental setup We implemented Ditto in PyTorch [36] and the Transformers library [58]. We currently support 4 pre-trained models: Distil- BERT [45], BERT [13], RoBERTa [29], and XLNet [61]. We use the
base uncased variant of each model in all our experiments. We fur- ther apply the half-precision floating-point (fp16) optimization to accelerate the training and prediction speed. In all the experiments, we fix the max sequence length to be 256 and the learning rate to be 3e-5 with a linearly decreasing learning rate schedule. The batch size is 32 if MixDA is used and 64 otherwise. The training process runs a fixed number of epochs (10, 15, or 40 depending on the dataset size) and returns the checkpoint with the highest F1 score on the validation set. We conducted all experiments on a p3.8xlarge AWS EC2 machine with 4 V100 GPUs (1 GPU per run).
Compared methods. We compare Ditto with the SOTA EM solution DeepMatcher. We also consider other baseline methods including Magellan [25], DeepER [14], and follow-up works of DeepMatcher [17, 23]. We also compare with variants of Ditto without the data augmentation (DA) and/or domain knowledge (DK) optimization to evaluate the effectiveness of each component. We summarize these methods below. We report the average F1 of 5 repeated runs in all the settings. ⢠DeepMatcher: DeepMatcher [34] is the SOTA matching solu- tion. Compared to Ditto, DeepMatcher customizes the RNN ar- chitecture to aggregate the attribute values, then compares/aligns the aggregated representations of the attributes. DeepMatcher leverages FastText [5] to train the word embeddings. When re- porting DeepMatcherâs F1 scores, we use the numbers in [34] for the ER-Magellan datasets and numbers in [39] for the WDC datasets. We also reproduced those results using the open-sourced implementation.
⢠DeepMatcher+: Follow-up work [23] slightly outperforms Deep- Matcher in the DBLP-ACM dataset and [17] achieves better F1 in the Walmart-Amazon and Amazon-Google datasets. According to [34], the Magellan system ([25], based on classical ML mod- els) outperforms DeepMatcher in the Beer and iTunes-Amazon datasets. We also implemented and ran DeepER [14], which is another RNN-based EM solution. We denote by DeepMatcher+ (or simply DM+) the best F1 scores among DeepMatcher and these works aforementioned. We summarize in Appendix C the implementation details and performance of each method.
⢠Ditto: This is the full version of our system with all 3 optimiza- tions, domain knowledge (DK), TF-IDF summarization (SU), and data augmentation (DA) turned on. See the details below.
⢠Ditto(DA): This version only turns on the DA (with MixDA) and SU but does not have the DK optimization. We apply one of the span-level or attribute-level DA operators listed in Table 2 with the entry_swap operator. We compare the different combinations and report the best one. Following [31], we apply MixDA with the interpolation parameter λ sampled from a Beta distribution Beta(0.8, 0.8).
⢠Ditto(DK): With only the DK and SU optimizations on, this version of Ditto is expected to have lower F1 scores but train much faster. We apply the span-typing to datasets of each domain according to Table 1 and apply the span-normalization on the number spans.
⢠Baseline: This base form of Ditto corresponds simply to fine- tuning a pre-trained LM on the EM task. We did not apply any optimizations on the baseline. For each ER-Magellan dataset, we tune the LM for the baseline and found that RoBERTa generally
achieves the best performance. Thus, we use RoBERTa in the other 3 Ditto variants (Ditto, Ditto(DA), and Ditto(DK)) by default across all datasets. The Company dataset is the only exception, where we found that the BERT model performs the best. For the WDC benchmark, since the training sets are large, we use DistilBERT across all settings for faster training.
There is a concurrent work [6], which also applies pre-trained LM to the entity matching problem. The proposed method is similar to the baseline method above, but due to the difference in the evaluation methods ([6] reports the best epoch on the test set, instead of the validation set), the reported results in [6] is not directly comparable. We summarize in Appendix E the difference between Ditto and [6] and explain why the reported results are different.
4.3 Main results Table 5 shows the results of the ER-Magellan datasets. Overall, Ditto (with optimizations) achieves significantly higher F1 scores than the SOTA results (DM+). Ditto without optimizations (i.e., the baseline) achieves comparable results with DM+. Ditto out- performs DM+ in all 13 cases and by up to 31% (Dirty, Walmart- Amazon) while the baseline outperforms DM+ in 12/13 cases except for the Company dataset with long text.
In addition, we found that Ditto is better at datasets with small training sets. Particularly, the average improvement on the 7 small- est datasets is 15.6% vs. 1.48% on average on the rest of datasets. Ditto is also more robust against data noise than DM+. In the 4 dirty datasets, the performance degradation of Ditto is only 0.57 on average while the performance of DM+ degrades by 8.21. These two properties make Ditto more attractive in practical EM settings. Moreover, in Appendix D, we show an evaluation of Dittoâs label efficiency on 5 of the ER-Magellan medium-size datasets. In 4/5 cases, when trained on less than 20% of the original training data, Ditto is able to achieve close or even better performance than DM+ when the full training sets are in use.
Ditto also achieves promising results on the WDC datasets (Table 6). Ditto achieves the highest F1 score of 94.08 when using all the 215k training data, outperforming the previous best result by 3.92. Similar to what we found in the ER-Magellan datasets, the improvements are higher on settings with fewer training examples (to the right of Table 6). The results also show that Ditto is more label efficient than DeepMatcher. For example, when using only 1/2 of the data (Large), Ditto already outperforms DeepMatcher with all the training data (xLarge) by 2.89 in All. When using only 1/8 of the data (Medium), the performance is within 1% close to DeepMatcherâs F1 when 1/2 of the data (Large) is in use. The only exception is the shoes category. This may be caused by the large gap of the positive label ratios between the training set and the test set (9.76% vs. 27.27% according to Table 4).
4.4 Ablation study Next, we analyze the effectiveness of each component (i.e., LM, SU, DK, and DA) by comparing Ditto with its variants without these optimizations. The results are shown in Table 5 and Figure 4.
The use of a pre-trained LM contributes to a large portion of the performance gain. In the ER-Magellan datasets (excluding Com- pany), the average improvement of the baseline compared to Deep- Matcher+ is 7.75, which accounts for 78.5% of the improvement of
Table 5: F1 scores on the ER-Magellan EM datasets. The numbers of DeepMatcher+ (DM+) are the highest available found in [17, 23, 34] or re-produced by us.
Datasets DM+ Ditto Ditto (DA) Ditto (DK) Baseline Size Structured 70.7 75.58 (+4.88) Amazon-Google 78.8 94.37 (+15.57) Beer 98.45 98.99 (+0.54) DBLP-ACM 95.6 (+0.9) 94.7 DBLP-Google 100.00 (+0.0) 100 Fodors-Zagats iTunes-Amazon 97.06 (+5.86) 91.2 Walmart-Amazon 73.6 86.76 (+13.16) 75.08 87.21 99.17 95.73 100.00 97.40 85.50 74.67 90.46 99.10 95.80 100.00 97.80 83.73 74.10 84.59 98.96 95.84 98.14 92.28 85.81 11,460 450 12,363 28,707 946 539 10,242 Dirty 99.03 (+0.93) 98.1 DBLP-ACM 93.8 95.75 (+1.95) DBLP-Google iTunes-Amazon 79.4 95.65 (+16.25) Walmart-Amazon 53.8 85.69 (+31.89) 98.94 95.47 95.29 85.49 99.08 95.57 94.48 80.67 98.92 95.44 92.92 82.56 12,363 28,707 539 10,242 Textual Abt-Buy Company 62.8 89.33 (+26.53) 93.85 (+1.15) 92.7 89.79 93.69 81.69 93.15 88.85 41.00 9,575 112,632
Table 6: F1 scores on the WDC product matching datasets. The numbers for DeepMatcher (DM) are taken from [39].
Size Methods xLarge (1/1) Large (1/2) Medium (1/8) Small (1/20) DM Ditto DM Ditto DM Ditto DM Ditto Computers 90.80 95.45 89.55 91.70 77.82 +4.65 +2.15 88.62 +10.80 70.55 80.76 +10.21 Cameras 89.21 93.78 87.19 91.23 76.53 +4.57 +4.04 88.09 +11.56 68.59 80.89 +12.30 Watches 93.45 96.53 91.28 95.69 79.31 +3.08 +4.41 91.12 +11.81 66.32 85.12 +18.80 Shoes 92.61 90.11 90.39 88.07 79.48 -2.50 -2.32 82.66 +3.18 73.86 75.89 +2.03 All 90.16 94.08 89.24 93.05 79.94 +3.92 +3.81 88.61 +8.67 76.34 84.36 +8.02
the full Ditto (9.87). While DeepMatcher+ and the baseline Ditto (essentially fine-tuning DistilBERT) are comparable on the Struc- tured datasets, the baseline performs much better on all the Dirty datasets and the Abt-Buy dataset. This confirms our intuition that the language understanding capability is a key advantage of Ditto over existing EM solutions. The Company dataset is a special case because the length of the company articles (3,123 words on average) is much greater than the max sequence length of 256. The SU opti- mization increases the F1 score of this dataset from 41% to over 93%. In the WDC datasets, across the 20 settings, LM contributes to 3.41 F1 improvement on average, which explains 55.3% of improvement of the full Ditto (6.16).
Compared to the baseline, the improvement of Ditto(DK) is 1.08 on average and is up to 5.88 on the Beer dataset while the improve- ment is only 0.22 on average on the WDC datasets. We inspected the span-typing output and found that only 66.2% of entry pairs have spans of the same type. This is caused by the current NER module not extracting product-related spans with the correct types.
95 all computers cameras watches shoes 95 95 = ==âa1 ol e-â © om 2 90 â | 90 Ag=---@ 90 a â| af a} 90 & Ditto % 85 â a 80 aâ 80 â sol) Ditto (DA) 80 o 807 5 oe id â, Ditto (DK) 70 70 18 70 joe. Baseline 10k 100k 200k 10k 35k 70k 5k 20k 40k 10k 30k 60k 5k 20k 40k Train+Valid Size
Figure 4: F1 scores on the WDC datasets of different versions of Ditto. DM: DeepMatcher.
We expect DK to be more effective if we use an NER model trained on the product domain.
DA is effective on both datasets and more significantly on the WDC datasets. The average F1 score of the full Ditto improves upon Ditto(DK) (without DA) by 1.39 and 2.53 respectively in the two datasets. In the WDC datasets, we found that the span_del operator always performs the best while the best operators are diverse in the ER-Magellan datasets. We list the best operator for each dataset in Table 7. We note that there is a large space of tuning these operators (e.g., the MixDA interpolation parameter, maximal span length, etc.) and new operators to further improve the performance.
Table 7: Datasets that each DA operator achieves the best perfor- mance. The suffix (S)/(D) and (Both) denote the clean/dirty version of the dataset or both of them. All operators are applied with the entry_swap operator.
Operator Datasets span_shuffle DBLP-ACM (Both), DBLP-Google (Both), Abt-Buy span_del attr_del attr_shuffle Walmart-Amazon(D), Company, all of WDC Beer, iTunes-Amazon(S), Walmart-Amazon(S) Fodors-Zagats, iTunes-Amazon(D)
Table 8: Sizes of the two employer datasets to be matched.
#Candidates original deduplicated original deduplicated Basic blocking TableA TableB Size 789,409 788,094 412,418 62,511 10,652,249
the labeled pairs is 39%. We split the labeled pairs into training, validation, and test sets by the ratio of 3:1:1.
Applying Ditto. The user of Ditto does not need to extensively tune the hyperparameters but only needs to specify the domain knowledge and choose a data augmentation operator. We observe that the street number and the phone number are both useful signals for matching. Thus, we implemented a simple recognizer that tags the first number string in the addr attribute and the last 4 digits of the phone attribute. Since we would like the trained model to be robust against the large number of missing values, we choose the attr_del operator for data augmentation.
We plot the modelâs performance in Figure 5. Ditto achieves the highest F1 score of 96.53 when using all the training data. Ditto outperforms DeepMatcher (DM) in F1 and trains faster (even when using MixDA) than DeepMatcher across different training set sizes.
5 CASE STUDY: EMPLOYER MATCHING We present a case of applying Ditto to a real-world EM task. An online recruiting platform would like to join its internal employer records with newly collected public records to enable downstream aggregation tasks. Given two tables A and B (internal and public) of employer records, the goal is to find, for each record in table B, a record in table A that represents the same employer. Both tables have 6 attributes: name, addr, city, state, zipcode, and phone. Our goal is to find matches with both high precision and recall.
© DM Ditto Ditto (DA) Ditto (Dk) Baseline F1 scores vs. size training time (s) vs. size 97 2000 so] 1500 95 1000 93 500 91 2k3k 6k 12k 2k3k 6k 12k
Figure 5: F1 and training time for the employer matching models.
Basic blocking. Our first challenge is size of the datasets. Table 8 shows that both tables are of nontrivial sizes even after dedupli- cation. The first blocking method we designed is to only match companies with the same zipcode. However, since 60% of records in Table A do not have the zipcode attribute and some large em- ployers have multiple sites, we use a second blocking method that returns for each record in Table B the top-20 most similar records in A ranked by the TF-IDF cosine similarity of name and addr at- tributes. We use the union of these two methods as our blocker, which produces 10 million candidate pairs.
Data labeling. We labeled 10,000 pairs sampled from the results of each blocking method (20,000 labels in total). We sampled pairs of high similarity with higher probability to increase the difficulty of the dataset to train more robust models. The positive rate of all
Advanced blocking. Optionally, before applying the trained model to all the candidate pairs, we can use the labeled data to improve the basic blocking method. We leverage Sentence-BERT [43], a variant of the BERT model that trains sentence embeddings for sentence similarity search. The trained model generates a high-dimensional (e.g., 768 for BERT) vector for each record. Although this model has a relatively low F1 (only 92%) thus cannot replace Ditto, we can use it with vector similarity search to quickly find record pairs that are likely to match. We can greatly reduce the matching time by only testing those pairs of high cosine similarity. We list the running time for each module in Table 9. With this technique, the overall EM process is accelerated by 3.8x (1.69 hours vs. 6.49 hours with/without advanced blocking).
Table 9: Running time for blocking and matching with Ditto. Ad- vanced blocking consists of two steps: computing the representa- tion of each record with Sentence-BERT [43] (Encoding) and sim- ilarity search by blocked matrix multiplication [1] (Search). With advanced blocking, we only match each record with the top-10 most similar records according to the model.
Basic Blocking Encoding (GPU) Search (CPU) Matching (top-10) (ALL) Time (s) 537.26 2,229.26 1,981.97 1,339.36 22,823.43
6 RELATED WORK AND DISCUSSION EM solutions have tackled the blocking problem [2, 8, 16, 35, 54] and the matching problem with rules [11, 15, 47, 53], crowdsourcing [18, 22, 52], or machine learning [4, 10, 18, 25, 46].
Recently, EM solutions used deep learning and achieved promis- ing results [14, 17, 23, 34, 64]. DeepER [14] trains EM models based on the LSTM [21] neural network architecture with word embed- dings such as GloVe [38]. DeepER also proposed a blocking tech- nique to represent each entry by the LSTMâs output. Our advanced blocking technique based on Sentence-BERT [43], described in Section 5, is inspired by this. Auto-EM [64] improves deep learning- based EM models by pre-training the EM model on an auxiliary task of entity type detection. Ditto also leverages transfer learning by fine-tuning pre-trained LMs, which are more powerful in lan- guage understanding. We did not compare Ditto with Auto-EM in experiments because the entity types required by Auto-EM are not available in our benchmarks. However, we expect that pre-training Ditto with EM-specific data/tasks can improve the performance of Ditto further and is part of our future work. DeepMatcher intro- duced a design space for applying deep learning to EM. Following their template architecture, one can think of Ditto as replacing both the attribute embedding and similarity representation com- ponents in the architecture with a single pre-trained LM such as BERT, thus providing a much simpler overall architecture.
All systems, Auto-EM, DeepER, DeepMatcher, and Ditto for- mulate matching as a binary classification problem. The first three take a pair of data entries of the same arity as input and aligns the attributes before passing them to the system for matching. On the other hand, Ditto serializes both data entries as one input with structural tags intact. This way, data entries of different schemas can be uniformly ingested, including hierarchically formatted data such as those in JSON. Our serialization scheme is not only appli- cable to Ditto, but also to other systems such as DeepMatcher. In fact, we serialized data entries to DeepMatcher under one attribute using our scheme and observed that DeepMatcher improved by as much as 5.2% on some datasets.
A concurrent work [6] also applies pre-trained LMs to the en- tity matching problem and achieves good performance. While the proposed method in [6] is similar to the baseline version of Ditto, Ditto can be further optimized using domain knowledge, data augmentation, and summarization. We also present a comprehen- sive experiment analysis on more EM benchmarks using a more standard evaluation method. We provide a detailed comparison between Ditto and [6] in Appendix E.
External knowledge is known to be effective in improving neu- ral network models in NLP tasks [7, 49, 55, 60]. To incorporate
domain knowledge, Ditto modularizes the way domain knowl- edge is incorporated by allowing users to specify and customize rules for preprocessing input entries. Data augmentation (DA) has been extensively studied in computer vision and has recently re- ceived more attention in NLP [31, 57, 59]. We designed a set of DA operators suitable for EM and apply them with MixDA [31], a recently proposed DA strategy based on convex interpolation. To our knowledge, this is the first time data augmentation has been applied to EM.
Active learning is a recent trend in EM to train high-quality matching models with limited labeling resources [19, 23, 30, 40]. Under the active learning framework, the developer interactively labels a small set of examples to improve the model while the up- dated model is used to sample new examples for the next labeling step. Although active learningâs goal of improving label efficiency aligns with data augmentation in Ditto, they are different solu- tions, which can be used together; active learning requires human interaction in each iteration, whereas data augmentation does not. According to [30], one needs to adjust the model size and/or the training process such that the response time becomes acceptable for user interaction in active learning. Thus, we consider apply- ing it to Ditto is not straightforward because of the relatively long fine-tuning time of the Ditto. We leave this aspect to future development of Ditto.
Discussion. Like other deep learning-based EM solutions, Ditto requires a non-trivial amount of labeled training examples (e.g., the case study requires 6k examples to achieve 95% F1) and Dittoâs DA and DK optimizations help reduce the labeling requirement to some extent. Currently, the LMs that we have tested in Ditto are pre-trained on general English text corpora and thus might not capture well EM tasks with a lot of numeric data and/or specific domains such as the scientific domain. For domain-specific tasks, a potential solution is to leverage specialized LMs such as SciB- ERT [3] or BioBERT [27] trained on scientific and biology corpus respectively. For numeric data, a good candidate solution would be a hybrid neural network similar to [20, 62] that combines the numeric features with the textual features.
7 CONCLUSION We present Ditto, an EM system based on fine-tuning pre-trained Transformer-based language models. Ditto uses a simple archi- tecture to leverage pre-trained LMs and is further optimized by injecting domain knowledge, text summarization, and data augmen- tation. Our results show that it outperforms existing EM solutions on all three benchmark datasets with significantly less training data. Dittoâs good performance can be attributed to the improved lan- guage understanding capability mainly through pre-trained LMs, the more accurate text alignment guided by the injected knowledge, and the data invariance properties learned from the augmented data. We plan to further explore our design choices for injecting domain knowledge, text summarization, and data augmentation. In addition, we plan to extend Ditto to other data integration tasks beyond EM, such as entity type detection and schema matching with the ultimate goal of building a BERT-like model for tables.
REFERENCES [1] Firas Abuzaid, Geet Sethi, Peter Bailis, and Matei Zaharia. 2019. To Index or Not to Index: Optimizing Exact Maximum Inner Product Search. In Proc. ICDE â19. IEEE, 1250â1261.
[2] Linkage Rohan Baxter, Rohan Baxter, Peter Christen, et al. 2003. A comparison of fast blocking methods for record. (2003).
[3] Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. arXiv preprint arXiv:1903.10676 (2019).
[4] Mikhail Bilenko and Raymond J Mooney. 2003. Adaptive duplicate detection using learnable string similarity measures. In Proc. KDD â03. 39â48.
[5] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL 5 (2017), 135â146. [6] Ursin Brunner and Kurt Stockinger. 2020. Entity matching with transformer
architectures-a step forward in data integration. In EDBT.
[7] Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. 2018. Neural natural language inference models enhanced with external knowledge. In Proc. ACL â18. 2406â2417.
[8] Peter Christen. 2011. A survey of indexing techniques for scalable record linkage and deduplication. TKDE 24, 9 (2011), 1537â1555.
[9] Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What Does BERT Look at? An Analysis of BERTâs Attention. In Proc. BlackBoxNLP â19. 276â286.
[10] William W Cohen and Jacob Richman. 2002. Learning to match and cluster large high-dimensional data sets for data integration. In Proc. KDD â02. 475â480. [11] Nilesh Dalvi, Vibhor Rastogi, Anirban Dasgupta, Anish Das Sarma, and Tamas Sarlos. 2013. Optimal hashing schemes for entity matching. In Proc. WWW â13. 295â306.
[12] Sanjib Das, AnHai Doan, Paul Suganthan G. C., Chaitanya Gokhale, Pradap Konda, Yash Govind, and Derek Paulsen. [n.d.]. The Magellan Data Repository. https://sites.google.com/site/anhaidgroup/projects/data.
[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proc. NAACL-HLT â19. 4171â4186.
[14] Muhammad Ebraheem, Saravanan Thirumuruganathan, Shafiq Joty, Mourad Ouzzani, and Nan Tang. 2018. Distributed representations of tuples for entity resolution. PVLDB 11, 11 (2018), 1454â1467.
[15] Ahmed Elmagarmid, Ihab F Ilyas, Mourad Ouzzani, Jorge-Arnulfo Quiané-Ruiz, Nan Tang, and Si Yin. 2014. NADEEF/ER: generic and interactive entity resolution. In Proc. SIGMOD â14. 1071â1074.
[16] Jeffrey Fisher, Peter Christen, Qing Wang, and Erhard Rahm. 2015. A clustering- based framework to control block sizes for entity resolution. In Proc. KDD â15. 279â288.
[17] Cheng Fu, Xianpei Han, Le Sun, Bo Chen, Wei Zhang, Suhui Wu, and Hao Kong. 2019. End-to-end multi-perspective matching for entity resolution. In Proc. IJCAI â19. AAAI Press, 4961â4967.
[18] Chaitanya Gokhale, Sanjib Das, AnHai Doan, Jeffrey F Naughton, Narasimhan Rampalli, Jude Shavlik, and Xiaojin Zhu. 2014. Corleone: Hands-off crowdsourc- ing for entity matching. In Proc. SIGMOD â14. 601â612.
[19] Sairam Gurajada, Lucian Popa, Kun Qian, and Prithviraj Sen. 2019. Learning- Based Methods with Human-in-the-Loop for Entity Resolution. In CIKM. 2969â 2970.
[20] Jonathan Herzig, PaweŠKrzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Martin Eisenschlos. 2020. TAPAS: Weakly Supervised Table Parsing via Pre-training. arXiv preprint arXiv:2004.02349 (2020).
[21] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation 9, 8 (1997), 1735â1780.
[22] Adam Marcus Eugene Wu David Karger and Samuel Madden Robert Miller. 2011. Human-powered Sorts and Joins. PVLDB 5, 1 (2011).
[23] Jungo Kasai, Kun Qian, Sairam Gurajada, Yunyao Li, and Lucian Popa. 2019. Low-resource Deep Entity Resolution with Transfer and Active Learning. In Proc. ACL â19. 5851â5861.
[24] Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic opti- mization. arXiv preprint arXiv:1412.6980 (2014).
[25] Pradap Konda, Sanjib Das, Paul Suganthan G. C., AnHai Doan, Adel Ardalan, Jeffrey R. Ballard, Han Li, Fatemah Panahi, Haojun Zhang, Jeffrey F. Naughton, Shishir Prasad, Ganesh Krishnan, Rohit Deep, and Vijay Raghavendra. 2016. Magellan: Toward Building Entity Matching Management Systems. PVLDB 9, 12 (2016), 1197â1208.
[26] Hanna Köpcke, Andreas Thor, and Erhard Rahm. 2010. Evaluation of entity resolution approaches on real-world match problems. PVLDB 3, 1-2 (2010), 484â 493.
[27] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36, 4 (2020), 1234â1240.
[28] Yuliang Li, Jinfeng Li, Yoshihiko Suhara, AnHai Doan, and Wang-Chiew Tan. 2020. Deep Entity Matching with Pre-Trained Language Models. arXiv preprint
arXiv:2004.00584 (2020).
[29] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019).
[30] Venkata Vamsikrishna Meduri, Lucian Popa, Prithviraj Sen, and Mohamed Sarwat. 2020. A Comprehensive Benchmark Framework for Active Learning Methods in Entity Matching. In SIGMOD. 1133â1147.
[31] Zhengjie Miao, Yuliang Li, Xiaolan Wang, and Wang-Chiew Tan. 2020. Snippext: Semi-supervised Opinion Mining with Augmented Data. In Proc. WWW â20. [32] Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proc.
EMNLP â04. 404â411.
[33] Tom M Mitchell et al. 1997. Machine learning. Burr Ridge, IL: McGraw Hill 45, 37 (1997), 870â877.
[34] Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, and Vijay Raghavendra. 2018. Deep learning for entity matching: A design space exploration. In Proc. SIGMOD â18. 19â34.
[35] George Papadakis, Dimitrios Skoutas, Emmanouil Thanos, and Themis Palpanas. 2019. Blocking and Filtering Techniques for Entity Resolution: A Survey. arXiv preprint arXiv:1905.06167 (2019).
[36] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. In Proc. NeurIPS â19. 8024â8035.
[37] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. the Journal of machine Learning research 12 (2011), 2825â2830.
[38] Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proc. EMNLP â14. 1532â1543.
[39] Anna Primpeli, Ralph Peeters, and Christian Bizer. 2019. The WDC training dataset and gold standard for large-scale product matching. In Companion Proc. WWW â19. 381â386.
[40] Kun Qian, Lucian Popa, and Prithviraj Sen. 2017. Active learning for large-scale entity resolution. In CIKM. 1379â1388.
[41] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. (2019). [42] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9.
[43] Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proc. EMNLP-IJCNLP â19. 3982â3992.
[44] Alexander M Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proc. EMNLP â15.
[45] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Dis- tilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In Proc. EMC2 â19.
[46] Sunita Sarawagi and Anuradha Bhamidipaty. 2002. Interactive deduplication using active learning. In Proc. KDD â02. 269â278.
[47] Rohit Singh, Venkata Vamsikrishna Meduri, Ahmed Elmagarmid, Samuel Madden, Paolo Papotti, Jorge-Arnulfo Quiané-Ruiz, Armando Solar-Lezama, and Nan Tang. 2017. Synthesizing entity matching rules by examples. PVLDB 11, 2 (2017), 189â 202.
[48] Spacy. [n.d.]. https://spacy.io/api/entityrecognizer. [49] Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. 2019. ERNIE: Enhanced representa- tion through knowledge integration. arXiv preprint arXiv:1904.09223 (2019). [50] Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT Rediscovers the Classical
NLP Pipeline. In Proc. ACL â19. 4593â4601.
[51] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NIPS â17. 5998â6008.
[52] Jiannan Wang, Tim Kraska, Michael J Franklin, and Jianhua Feng. 2012. CrowdER: crowdsourcing entity resolution. PVLDB 5, 11 (2012), 1483â1494.
[53] Jiannan Wang, Guoliang Li, Jeffrey Xu Yu, and Jianhua Feng. 2011. Entity match- ing: How similar is similar. PVLDB 4, 10 (2011), 622â633.
[54] Qing Wang, Mingyuan Cui, and Huizhi Liang. 2015. Semantic-aware blocking for entity resolution. TKDE 28, 1 (2015), 166â180.
[55] Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. 2019. KGAT: Knowledge Graph Attention Network for Recommendation. In Proc. KDD â19. 950âÄÅ958.
[56] WDC Product Data Corpus. [n.d.]. http://webdatacommons.org/largescaleproductcorpus/v2.
[57] Jason Wei and Kai Zou. 2019. EDA: Easy Data Augmentation Techniques for Boosting Performance on Text Classification Tasks. In Proc. EMNLP-IJCNLP â19. 6382â6388.
[58] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. HuggingFaceâs Transformers: State-of-the-art Natural Language Processing. arXiv preprint arXiv:1910.03771 (2019).
[59] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. 2019. Unsupervised data augmentation. arXiv preprint arXiv:1904.12848 (2019). [60] Bishan Yang and Tom Mitchell. 2017. Leveraging Knowledge Bases in LSTMs for
Improving Machine Reading. In Proc. ACL â17. 1436â1446.
[61] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Proc. NeurIPS â19. 5754â5764.
[62] Pengcheng Yin, Graham Neubig, Wen-tau Yih, and Sebastian Riedel. 2020. TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data. arXiv preprint arXiv:2005.08314 (2020).
[63] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. 2018. mixup: Beyond empirical risk minimization. In Proc. ICLR â18.
[64] Chen Zhao and Yeye He. 2019. Auto-EM: End-to-end Fuzzy Entity-Matching using Pre-trained Deep Models and Transfer Learning. In Proc. WWW â19. 2413â2424.
# A ARCHITECTURE OF THE PRE-TRAINED LANGUAGE MODELS
Figure 6 shows the model architecture of Dittoâs language models such as BERT [13], DistilBERT [45], and RoBERTa [29]. Ditto se- rializes the two input entries entries as one sequence and feeds it to the model as input. The model consists of (1) token embeddings and Transformer layers [58] from a pre-trained language model (e.g., BERT) and (2) task-specific layers (linear followed by softmax). Conceptually, the [CLS] token âsummarizesâ all the contextual information needed for matching as a contextualized embedding vector E â² [CLS] which the task-specific layers take as input for classi- fication.
0o/1 Task-specific -L---------~-~---~----~- = Contextualized 1 E: E' E'm || E'se Embeddings} g = TIES) sa 153 1 Transformer Layer 135 i = ge 1 Transformer Layer ou a a a pt ta® Embeddings, [Es] [ E: | [2 Een Em |[E]} 3 isa Ge Ea) , ise fea gg ee wv 35 attr 1 |{val1] --. {attr | {val1| ... 5x ar First entity e Second entity eâ
Figure 6: Dittoâs model architecture.
# B TRAINING TIME AND PREDICTION TIME EXPERIMENTS
We plot the training time required by DeepMatcher and Ditto in Figure 7. The running time for Ditto ranges from 119 seconds (450 examples) to 1.7 hours (113k examples). Ditto has a similar training time to DeepMatcher although the Transformer-based models used by Dittoare deeper and more complex. The speed-up is due to the fp16 optimization which is not used by DeepMatcher. Ditto with MixDA is about 2-3x slower than Ditto(DK) without MixDA. This is because MixDA requires additional time for generating the augmented pairs and computing with the LM twice. However, this overhead only affects offline training and does not affect online prediction.
Table 11 shows Dittoâs average prediction time per entry pair in each benchmark. The results show that DeepMatcher and Ditto have comparable prediction time. Also, the DK optimization only adds a small overhead to the prediction time (less than 2%). The prediction time between the two benchmarks are different because of the difference in their sequence length distributions.
Table 10: Baseline results from different sources.
DeepER (reproduced) DM (reproduced) DM (reported in [34]) DM (using Dittoâs input) Magellan (reported in [34]) ACL â19 [23] IJCAI â19 [17] Structured Amazon-Google Beer DBLP-ACM DBLP-Scholar Fodors-Zagats iTunes-Amazon Walmart-Amazon 56.08 50 97.63 90.82 97.67 72.46 50.62 67.53 69.23 98.42 94.32 - 86.79 63.33 69.3 72.7 98.4 94.7 100 88 66.9 65.78 - 98.86 94.56 - 88 61.67 49.1 78.8 98.4 92.3 100 91.2 71.9 - - 98.45 92.94 - - - 70.7 - - - - - 73.6 Dirty DBLP-ACM DBLP-Scholar iTunes-Amazon Walmart-Amazon 89.62 86.07 67.80 36.44 97.53 92.8 73.08 47.81 98.1 93.8 79.4 53.8 96.03 93.75 70.83 48.45 91.9 82.5 46.8 37.4 - - - - - - - - Textual Abt-Buy Company 42.99 62.17 66.05 - 62.8 92.7 67.99 90.70 43.6 79.8 - - - -
pM Ditto Ditto(DA) Ditto(DK) Baseline = 10¢ 104 w) ri 2 * £ 10° 103 Pe 2 se at 3 10730" 102 {a i am s 1k 3k 10k 30k 100k 2k 10k 50k 200k Training set size Training set size
Figure 7: Training time vs. dataset size for the ER-Megallan datasets (left) and the WDC datasets (right). Each point corresponds to the training time needed for a dataset using different methods. Ditto(DK) and Baseline do not use MixDA thus is faster than the full Ditto. The DK optimization only adds a small overhead (5%) to the training time. DeepMatcher (DM) ran out of memory on the Company dataset so the data point is not reported.
Table 11: The average prediction time (ms) per data entry pair of Ditto.
Ditto-DistilBERT Ditto-RoBERTa w. DK w/o. DK w. DK w/o. DK DM ER-Magellan WDC 8.01 1.82 7.87 1.80 6.82 2.11 6.78 2.11 6.62 2.30
# C BREAKDOWN OF THE DM+ RESULTS AND EXPERIMENTS
We summarize these baseline results in Table 10 on the ER-
Magellan benchmarks and explain each method next. DeepER: The original paper [14] proposes a DL-based framework for EM. Similar to DeepMatcher, DeepER first aggregates both data entries into their vector representations and uses a feedforward neural network to perform the binary classification based on the similarity of the two vectors. Each vector representation is obtained either by a simple averaging over the GloVe [38] embeddings per attribute or a RNN module over the serialized data entry. DeepER computes the similarity as the cosine similarity of the two vectors. Although [14] reported results on the Walmart-Amazon, Amazon- Google, DBLP-ACM, DBLP-Scholar, and the Fodors-Zagat datasets, the numbers are not directly comparable to the presented results of Ditto because their evaluation and data preparation methods are different (e.g., they used k-fold cross-validation while we use the train/valid/test splits according to [34]). In our experiments, we implemented DeepER with LSTM as the RNN module and GloVe for the tokens embeddings as described in [14] and with the same hyper- parameters (a learning rate of 0.01 and the Adam optimizer [24]). We then evaluate DeepER in our evaluation settings. For each dataset, we report the best results obtained by the simple aggregation and the RNN-based method. DeepMatcher (DM): We have summarized DM in Section 4.2. In addition to simply taking the numbers from the original paper [34], we also ran their open-source version (DM (reproduced)) with the default settings (the Hybrid model with a batch size of 32 and 15 epochs). The reproduced results are in general lower than the original reported numbers in [34] (the 3rd column) because we did not try the other model variants and hyperparameters as in the original experiments. The code failed in the Fodors-Zagat and the Company datasets because of out-of-memory errors.
In this section, we provide a detailed summary of how we obtain the DeepMatcher+ (DM+) baseline results. Recall from Section 4.2 that DM+ is obtained by taking the best performance (highest F1 scores) of multiple baseline methods including DeepER [14], Magellan [25], DeepMatcher [34], and DeepMatcherâs follow-up work [17] and [23].
In addition, one key difference between DM and Ditto is that Ditto serializes the data entries while DM does not. One might wonder if DM can obtain better results by simply replacing its input with the serialized entries produced by Ditto. We found that the
Amazon-Google DBLP-ACM DBLP-Scholar Walmart-Amazon Abt-Buy 80 99 © DM+ (full) 2 Ss |0--O--O--O-r 9) 80 Ditto 2 79 j9--O-- | [O-- Oe gg -- @<4-<-@--@ 380 8 98 qoâ i dien| Ditto (DA) 60 97 92 60 70 Ditto (DK) 50 --@---@- Baseline 50 96 90 0.5k 1k 1.5k 2k full 0.5k 1k 1.5k 2k full 0.5k 1k 1.5k 2k full Train Size 60 0.5k 1k 1.5k 2k full 0.5k 1k 1.5k 2k full
Figure 8: F1 scores on 5 ER-Magellan datasets using different variants of Ditto. We also plot the score of DeepMatcher+ on the full datasets (denoted as DM+(full)) as reference. Recall that full = {11460, 12363, 28707, 10242, 9575} for the 5 datasets respectively.
results do not significantly improved overall, but it is up to 5.2% in the Abt-Buy dataset.
Others: We obtained the results for Magellan by taking the re- ported results from [34] and the two follow-up works [17, 23] of DeepMatcher (denoted as ACL â19 and IJCAI â19 in Table 10). We did not repeat the experiments since they have the same evaluation settings as ours.
scores on the test set among all the training epochs, while we report the test F1 score of the epoch with the best F1 on the validation set. Our evaluation method is more standard since it prevents overfitting the test set (See Chapter 4.6.5 of [33]) and is also used by DeepMatcher and Magellan [34]. It is not difficult to see that over the same set of model snapshots, the F1 score computed by the [6]âs evaluation method would be greater or equal to the F1 score computed using our method, which explains the differences in the reported values between us and [6].
# D LABEL EFFICIENCY EXPERIMENTS ON THE ER-MAGELLAN BENCHMARK
We also evaluate the label efficiency of Ditto on the ER-Magellan benchmark. We conducted the experiments on 5 representative datasets (Amazon-Google, DBLP-ACM, DBLP-Scholar, Walmart- Amazon, and Abt-Buy) of size â¼10k to â¼30k. For each dataset, we vary the training set size from 500 to 2,000 and uniformly sample from the original training set. We then follow the same setting as in Section 4 to evaluate the 4 variants of Ditto: baseline, Ditto(DA), Ditto(DK), and Ditto. We summarize the results in Figure 8. We also plot the result of DM+ trained on the full datasets (denoted as DM+ (full)) as a reference. As shown in Figure 8, Ditto is able to reach similar or better performance to DM+ on 3 of the datasets (Amazon-Google, DBLP-ACM, and Walmart-Amazon) with 2,000 train examples (so ⤠20%). With only 500 examples, Ditto is able to outperform DM+ trained on the full data in the Abt-Buy dataset. These results confirm that Ditto is more label efficient than existing EM solutions.
Table 12 summarizes the detailed comparison of the baseline Ditto, the proposed method in [6], and the full Ditto. Recall that we construct the baseline by taking the best performing pre-trained model among DistilBERT [45], BERT [13], XLNet [61], and RoBERTa [29] following [6]. Although the baseline Ditto does not outperform [6] because of the different evaluation method, the optimized Ditto is able to outperform [6] in 4/5 of the evaluated datasets.
Table 12: The F1 scores of the baseline method with different pre- trained LMs. The first 4 columns are performance of the baseline Ditto using the 4 different LMs. We highlight the LM of the best performance on each dataset, which form the baseline column in Table 5. We turned on the summarization (SU) optimization for the Company dataset to get F1 scores closer to the full Ditto.
# E THE DIFFERENCE BETWEEN DITTO AND A CONCURRENT WORK
There is a concurrent work [6] which also applies pre-trained LMs to entity matching and obtained good results. The method proposed in [6] is essentially identical to the baseline version of Ditto which only serializes the data entries into text sequences and fine-tunes the LM on the binary sequence-pair classification task. On top of that, Ditto also applies 3 optimizations of injecting domain knowl- edge, data augmentation, and summarization to further improve the modelâs performance. We also evaluate Ditto more compre- hensively as we tested Ditto on all the 13 ER-Magellan datasets, the WDC product benchmark, and a company matching dataset while [6] experimented in 5/13 of the ER-Magellan datasets.
DistilBERT XLNet RoBERTa BERT Reported in [6] Structured Amazon-Google Beer DBLP-ACM DBLP-Scholar Fodors-Zagats iTunes-Amazon Walmart-Amazon Dirty DBLP-ACM DBLP-Scholar iTunes-Amazon Walmart-Amazon 71.38 82.48 98.49 94.92 97.27 91.49 79.81 98.60 94.76 90.12 77.91 74.10 48.91 98.85 95.84 95.30 74.81 77.98 98.92 95.26 92.70 61.73 65.92 74.23 98.87 95.46 98.14 92.05 85.81 98.79 95.44 92.92 82.56 71.66 84.59 98.96 94.93 95.98 92.28 81.27 98.81 94.72 92.25 81.55 - - - - - - - 98.90 95.60 94.20 85.50 Textual Abt-Buy Company 82.47 93.16 53.55 71.93 88.85 85.89 84.21 93.61 90.90 - Ditto 75.58 94.37 98.99 95.6 100.0 97.06 86.76 99.03 95.75 95.65 85.69 89.33 93.85
On these 5 evaluated datasets, one might notice that the reported F1 scores in [6] are slightly higher compared to the baselineâs F1 scores shown in Table 5. The reason is that according to [6], for each run on each dataset, the F1 score is computed as the modelâs best F1
Table 13: The 4 attributes of the WDC benchmarks used in training Ditto and DM according to [39].
Attributes Examples %Available Title Corsair Vengeance Red LED 16GB 2x 8GB DDR4 PC4 21300 2666Mhz dual-channel Kit - CMU16GX4M2A2666C16R Novatech 100% Description DDR4 2666MHz C116, 1.2V, XMP 2.0 red-led, Life- 54% time Warranty Brand AMD 19% SpecTable Memory Type DDR4 (PC4-21300) Capacity 16GB (2 x 8GB) Tested Speed 2666MHz Tested Latency 16-18-18-35 Tested Voltage 1.20V Registered / Un- buffered Unbuffered Error Checking Non-ECC Memory Features - red-led XMP 2.0 7%
# F EXPERIMENTS ON DIFFERENT WDC PRODUCT ATTRIBUTES
Following the settings in [39] for the evaluated models, we evaluate Ditto on 4 different subsets of the product attributes as input so
that Ditto and DeepMatcher are evaluated under the same setting. We list the 4 attributes in Table 13. Note that except for title, the attributes can be missing the the data entries. For example, the SpecTable attribute only appears in 7% of the entries in the full training set.
We summarize the results in Table 14. Among all the tested combinations (the same as the ones tested for DeepMatcher in [39]), the combination consisting of only the title attribute works significantly better than the others. The difference ranges from 3.2% (computer, xlarge) to over 30% (watches, small). According to this result, we only report Dittoâs results on the title attribute while allowing DeepMatcher to access all the 4 attributes to ensure its best performance.
The performance of Ditto drops when more attributes are added is because of the sequence length. For example, for the combination title+description, we found that the average sequence length grows from 75.5 (title only) to 342.7 which is beyond our default max length of 256 tokens. As a results, some useful information from the title attributes is removed by the summarization operator.
Table 14: F1 scores of Ditto on the WDC datasets with different subsets of the product attributes
title title_description title_description_brand title_description_brand_specTable small medium large xlarge small medium large xlarge small medium large xlarge small medium large xlarge 88.61 88.09 88.62 82.66 91.12 93.05 94.08 69.51 91.23 93.78 61.64 91.70 95.45 66.56 88.07 90.10 59.57 95.69 96.53 58.16 75.91 73.41 75.60 69.25 70.14 81.56 87.62 68.34 79.51 83.61 59.97 87.39 92.26 65.15 76.33 76.27 57.43 81.03 84.55 59.66 75.43 73.16 73.55 71.57 73.06 84.80 85.19 67.08 78.60 82.61 55.04 86.05 90.36 60.82 77.07 77.39 56.57 81.92 84.46 52.49 75.55 68.81 66.90 71.02 68.67 83.08 76.53 84.25 76.58 79.58 84.44 80.09 88.45 75.63 82.48
84.36 all cameras 80.89 computers 80.76 75.89 85.12 | {
"id": "2005.08314"
} |
2003.13191 | Incremental Learning In Online Scenario | Modern deep learning approaches have achieved great success in many vision
applications by training a model using all available task-specific data.
However, there are two major obstacles making it challenging to implement for
real life applications: (1) Learning new classes makes the trained model
quickly forget old classes knowledge, which is referred to as catastrophic
forgetting. (2) As new observations of old classes come sequentially over time,
the distribution may change in unforeseen way, making the performance degrade
dramatically on future data, which is referred to as concept drift. Current
state-of-the-art incremental learning methods require a long time to train the
model whenever new classes are added and none of them takes into consideration
the new observations of old classes. In this paper, we propose an incremental
learning framework that can work in the challenging online learning scenario
and handle both new classes data and new observations of old classes. We
address problem (1) in online mode by introducing a modified cross-distillation
loss together with a two-step learning technique. Our method outperforms the
results obtained from current state-of-the-art offline incremental learning
methods on the CIFAR-100 and ImageNet-1000 (ILSVRC 2012) datasets under the
same experiment protocol but in online scenario. We also provide a simple yet
effective method to mitigate problem (2) by updating exemplar set using the
feature of each new observation of old classes and demonstrate a real life
application of online food image classification based on our complete framework
using the Food-101 dataset. | http://arxiv.org/pdf/2003.13191 | Jiangpeng He, Runyu Mao, Zeman Shao, Fengqing Zhu | cs.CV | Accepted paper at CVPR 2020 | null | cs.CV | 20200330 | 20210419 | 1 2 0 2
r p A 9 1 ] V C . s c [ 2 v 1 9 1 3 1 . 3 0 0 2 : v i X r a
# Incremental Learning In Online Scenario
# Jiangpeng He [email protected]
# Runyu Mao [email protected]
# Zeman Shao [email protected]
# Fengqing Zhu [email protected]
School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana USA
# Abstract
Modern deep learning approaches have achieved great success in many vision applications by training a model using all available task-speciï¬c data. However, there are two major obstacles making it challenging to implement for real life applications: (1) Learning new classes makes the trained model quickly forget old classes knowledge, which is referred to as catastrophic forgetting. (2) As new ob- servations of old classes come sequentially over time, the distribution may change in unforeseen way, making the per- formance degrade dramatically on future data, which is re- ferred to as concept drift. Current state-of-the-art incre- mental learning methods require a long time to train the model whenever new classes are added and none of them takes into consideration the new observations of old classes. In this paper, we propose an incremental learning frame- work that can work in the challenging online learning sce- nario and handle both new classes data and new obser- vations of old classes. We address problem (1) in online mode by introducing a modiï¬ed cross-distillation loss to- gether with a two-step learning technique. Our method out- performs the results obtained from current state-of-the-art ofï¬ine incremental learning methods on the CIFAR-100 and ImageNet-1000 (ILSVRC 2012) datasets under the same ex- periment protocol but in online scenario. We also provide a simple yet effective method to mitigate problem (2) by up- dating exemplar set using the feature of each new observa- tion of old classes and demonstrate a real life application of online food image classiï¬cation based on our complete framework using the Food-101 dataset.
# 1. Introduction
One of the major challenges of current deep learning based methods when applied to real life applications is learning new classes incrementally, where new classes are continuously added overtime. Furthermore, in most real life scenarios, new data comes in sequentially, which may con- tain both the data from new classes or new observations
of old classes. Therefore, a practical vision system is ex- pected to handle the data streams containing both new and old classes, and to process data sequentially in an online learning mode [15], which has similar constrains as in real life applications. For example, a food image recognition system designed to automate dietary assessment should be able to update using each new food image continually with- out forgetting the food categories already learned.
Most deep learning approaches trained on static datasets suffer from the following issues. First is catastrophic for- getting [16], a phenomenon where the performance on the old classes degrades dramatically as new classes are added due to the unavailability of the complete previous data. This problem become more severe in online scenario due to lim- ited run-time and data allowed to update the model. The second issue arises in real life application where the data distribution of already learned classes may change in un- foreseen ways [23], which is related to concept drift [5]. In this work, we aim to develop an incremental learning frame- work that can be deployed in a variety of image classiï¬ca- tion problems and work in the challenging online learning scenario.
A practical deep learning method for classiï¬cation is characterized by (1) its ability to be trained using data streams including both new classes data and new observa- tions of old classes, (2) good performance for both new and old classes on future data streams, (3) short run-time to up- date with constrained resources, and (4) capable of lifelong learning to handle multiple classes in an incremental fash- ion. Although progress has been made towards reaching these goals [14, 21, 2, 31], none of the existing approaches for incremental learning satisfy all the above conditions. They assume the distribution of old classes data remain un- changed overtime and consider only new classes data for incoming data streams. As we mentioned earlier, data dis- tribution are likely to change in real life[23]. When concept drift happens, regardless the effort put into retaining the old classes knowledge, degradation in performance is in- evitable. In addition, although these existing methods have achieved state-of-the-art results, none of them work in the challenging online scenario. They require ofï¬ine training
using all available new data for many epochs, making it im- practical for real life applications.
The main contributions of this paper is summarized as follows.
⢠We introduce a modiï¬ed cross-distillation loss to- gether with a two-step learning technique to make in- cremental learning feasible in online scenario. We show comparable results to the current state-of-the- art [21, 2, 31] on CIFAR-100 [12] and ImageNet-1000 (ILVSC2012) [25]. We follow the same experiment benchmark protocol [21] where all new data belong to new class, but in the challenging online learning scenario where the condition is more constrained for both run-time and number of data allowed to update the model.
⢠We propose an incremental learning framework that is capable of lifelong learning and can be applied to a va- riety of real life online image classiï¬cation problems. In this case, we consider new data belong to both new class and existing class. We provide a simple yet effec- tive method to mitigate concept drift by updating the exemplar set using the feature of each new observation of old classes. Finally, we demonstrate how our com- plete framework can be implemented for food image classiï¬cation using the Food-101 [1] dataset.
# 2. Related Work
In this section, we review methods that are closely re- lated to our work. Incremental learning remains one of the long-standing challenges for machine learning, yet it is very important to brain-like intelligence capable of continuously learning and knowledge accumulation through its lifetime. Traditional methods. Prior to deep learning, SVM classiï¬er [4] is commonly used. One representative work is [24], which learns the new decision boundary by using support vectors that are learned from old data together with new data. An alternative method is proposed in [3] by re- taining the Karush-Kuhn-Tucker conditions instead of sup- port vectors on old data and then update the solution using new data. Other techniques [19, 17, 13] use ensemble of weak classiï¬ers and nearest neighbor classiï¬er.
Deep learning based methods. These methods pro- vide a joint learning of task-speciï¬c features and classiï¬ers. Approaches such as [10, 11] are based on constraining or freezing the weights in order to retain the old tasks perfor- In [10], the last fully connected layer is freezed mance. which discourages change of shared parameters in the fea- ture extraction layers. Inn [11] old tasks knowledge is re- tained by constraining the weights that are related to these tasks. However, constraining or freezing parameters also limits its adaptability to learn from new data. A combina- tion of knowledge distillation loss [9] with standard cross-
entropy loss is proposed to retain the old classes knowledge in [14], where old and new classes are separated in multi- class learning and distillation is used to retain old classes performance. However, performance is far from satisfac- tory when new classes are continuously added, particularly in the case when the new and old classes are closely related. Based on [14], auto encoder is used to retain the knowledge for old classes instead of using distillation loss in [20]. For all these methods, only new data is considered.
In [26] and [28], synthetic data is used to retain the knowledge for old classes by applying a deep generative model [6]. However, the performance of these methods are highly dependent on the reliability of the generative model, which struggles in more complex scenarios.
Rebufï¬ et al proposed iCaRL[21], an approach using a small number of exemplars from each old class to retain knowledge. An end-to-end incremental learning framework is proposed in [2] using exemplar set as well, along with data augmentation and balanced ï¬ne-tuning to alleviate the imbalance between the old and new classes. Incremental learning for large datasets was proposed in [31] in which a linear model is used to correct bias towards new classes in the fully connected layer. However, it is difï¬cult to ap- ply these methods to real life applications since they all re- quire a long ofï¬ine training time with many epochs at each incremental step to achieve a good performance. In addi- tion, they assume the distribution of old classes remain un- changed and only update the classiï¬ers using new classes data. All in all, a modiï¬ed cross-distillation loss along with a two-step learning technique is introduced to make incre- mental learning feasible in the challenging online learning scenario. Furthermore, our complete framework is capable of lifelong learning from scratch in online mode, which is illustrated in Section 4.
# 3. Online Incremental Learning
Online incremental learning [15] is a subarea of incre- mental learning that are additionally bounded by run-time and capability of lifelong learning with limited data com- pared to ofï¬ine learning. However, these constraints are very much related to real life applications where new data comes in sequentially and is in conï¬ict with the traditional assumption that complete data is available. A sequence of model h1, h2, ..., ht is generated on the given stream of data blocks s1, s2, ..., st as shown in Figure 1. In this case, si is a block of new data with block size p, deï¬ned as the number of data used to update the model, which is simi- lar to batch size as in ofï¬ine learning mode. However, each new data is used only once to update the model instead of training the model using the new data with multiple epochs as in ofï¬ine mode. st = {(x(1) )} â Rn Ã{1, ..., M } where n is the data dimension and M is the total number of classes. The model ht : Rn â {1, ..., M }
Predict
Figure 1: Online Scenario. A sequence of model h1, h2, ..., ht is generated using each block of new data with block size p, where (xi t) indicate the i-th new data for the t-th block.
depends solely on the model htâ1 and the most recent block of new data st consisting of p examples with p being strictly if we set p = 16 then we will predict for limited, e.g. each new data and use a block of 16 new data to update the model.
Catastrophic forgetting is the main challenge faced by all incremental learning algorithms. Suppose a model hbase is initially trained on n classes and we update it with m new added classes to form the model hnew. Ideally, we hope hnew can predict all n + m classes well, but in practice the performance on the n old classes drop dramatically due to the lack of old classes data when training the new classes. In this work, we propose a modiï¬ed cross-distillation loss and a two-step learning technique to address this problem in online scenario.
Concept drift is another problem that happens in most real life applications. Concept in classification prob- lems is defined as the joint distribution P(X,Y) where X is the input data and Y represents target variable. Suppose a model is trained on data streams by time ¢ with joint dis- tribution P(X;, Y;), and let P(X;,, Y;,) represent the joint distribution of old classes in future data streams. Concept drift happens when P(X;, Y;) A P(Xn, Yn). In this work, we do not measure concept drift quantitatively, but we pro- vide a simple yet effective method to mitigate the problem by updating the exemplar set using the features of each new data in old classes, which is illustrated in Section|4.3]
# 4. Incremental Learning Framework
In this work, we propose an incremental learning frame- work as shown in Figure 2 that can be applied to any online scenario where data is available sequentially and the net- work is capable of lifelong learning. There are three parts in our framework: learn from scratch, ofï¬ine retraining and learn from a trained model. Incremental learning in online scenario is implemented in 4.3 and lifelong learning can be achieved by alternating the last two parts after initial learn- ing.
[ FO) { n® hn n@) wel nO a âOnline Learning Offline Learning Online Learning (retrain) Real Life Image Data Learn from scratch i=0 n© |} â({ ae { Offine retrain i=2m+1.m20 Learn froma trained model = i= 2m,m21.
Figure 2: Proposed incremental learning framework. h(i) indicates the evolving model at i-th step.
# 4.1. Learn from Scratch
This part serves as the starting point to learn new classes. In this case, we assume the network does not have any pre- vious knowledge of incoming classes, which means there is no previous knowledge to be retained. Our goal is to build a model that can adapt to new classes fast with limited data, e.g. block size of 8 or 16.
Baseline. Suppose we have data streams with block size p belong to M classes: {s1, ..., st} â Rn à {1, ..., M }. The baseline for the model to learn from sequential data can be thought as generating a sequence of model {h1, ..., ht} us- ing standard cross-entropy where ht is updated from htâ1 by using block of new data st. Thus ht is evolving from h0 for a total of t updates by using the given data streams. Compared to traditional ofï¬ine learning, the complete data is not available and we need to update the model for each block of new data to make it dynamically ï¬t to the data dis- tribution used so far. So in the beginning, the performance on incoming data is poor due to data scarcity.
Online representation learning. A practical solution is to utilize representation learning when data is scarce at the beginning of the learning process. Nearest class Mean (NCM) classiï¬er [22, 21] is a good choice where the test image is classiï¬ed as the class with the closest class data mean. We use a pre-trained deep network to extract features by adding a representation layer before the last fully con- nected layer for each input data xi denoted as Ï(xi). Thus the classiï¬er can be expressed as
yâ = arg min yâ{1,...,M } d(Ï(x), ÂµÏ y ). (1)
The class mean ÂµÏ i:yi=i Ï(xi) and Ny denote the number of data in classes y. We assume that the highly non- linear nature of deep representations eliminates the need of a linear metric and allows to use Euclidean distance here
xy = (Ï(x) â ÂµÏ dÏ y )T (Ï(x) â ÂµÏ y ) (2)
Our method: combining baseline with NCM classi- ï¬er. NCM classiï¬er behaves well when number of avail- able data is limited since the class representation is based solely on the mean representation of the images belonging to that class. We apply NCM in the beginning and update using an online estimate of the class mean [7] for each new
observation.
ÂµÏ y â nyi nyi + 1 ÂµÏ y + 1 nyi + 1 Ï(xi) (3)
We use a simple strategy to switch from NCM to baseline classiï¬er when accuracy for baseline surpass representation learning for s consecutive blocks of new data. Based on our empirical results, we set s = 5 in this work.
# 4.2. Ofï¬ine Retraining
In order to achieve lifelong learning, we include an of- ï¬ine retraining part after each online incremental learning phase. By adding new classes or new data of existing class, both catastrophic forgetting and concept drift [5] become more severe. The simplest solution is to include a periodic ofï¬ine retraining by using all available data up to this time instance.
Construct exemplar set. We use herding selection [30] to generate a sorted list of samples of one class based on the distance to the mean of that class. We then construct the exemplar set by using the ï¬rst q samples in each class {E(y) q }, y â [1, ..., n] where q is manually speci- ï¬ed. The exemplar set is commonly used to help retain the old classesâ knowledge in incremental learning methods.
FC: n classes ood Base Model = rr @: Distillation @ = i x B: accommodation ratio 8 l new class data rear 7 in bod Online Model copy FC : n+m classes 1 - at: Modified Cross-Entropy
Figure 3: Modiï¬ed Cross-Distillation Loss. It contains two losses: the distilling loss on old classes and the modi- ï¬ed cross-entropy loss on all old and new classes.
# 4.3. Learn from a Trained Model
This is the last component of our proposed incremental learning framework. The goal here is to continue to learn from new data streams starting from a trained model. Dif- ferent from existing incremental learning, we deï¬ne new data containing both new classes data and new observations of old classes and we use each new data only once for train- ing in online scenario. In additional to addressing the catas- trophic forgetting problem, we also need to consider con- cept drift for already learned classes due to the fact that data distribution in real life application may change over time in unforeseen ways [23].
Cross- distillation loss function is commonly used in state-of-the- art incremental learning methods to retain the previous
In this case, we consider only new learned knowledge. classes data for incoming data streams. Suppose the model is already trained on n classes, and there are m new classes added. Let {(xi, yi), yi â [n + 1, ...n + m]} denote new classes data. The output logits of the new classiï¬er is de- noted as p(n+m)(x) = (o(1), ..., o(n), o(n+1), ...o(n+m)), the recorded old classes classiï¬er output logits is Ëp(n)(x) = (Ëo(1), ..., Ëo(n)). The knowledge distillation loss [9] can be formulated as in Equation 4, where Ëp(i) T are the i-th distilled output logit as deï¬ned in Equation 5
n Ly(x) = >> -p? (e)loglp?? (#)] (4) i=l
pi exp (60 /T) pf exp (0 /T) TS exp (O/T) PP SI exp (0/7) (5) T is the temperature scalar. When T = 1, the class with the highest score has the most influence. When T > 1, the re- maining classes have a stronger influence, which forces the network to learn more fine grained knowledge from them. The cross entropy loss to learn new classes can be expressed as Lo(x) = 32" âglog[p (x)]| where @ is the one- hot label for input data x. The overall cross-distillation loss function is formed as in Equation[6] by using a hyper- parameter a to tune the influence between two components.
LCD(x) = αLD(x) + (1 â α)LC(x) (6)
Modified cross-distillation with accommodation ra- tio. Although cross-distillation loss forces the network to learn latent information from the distilled output log- its, its ability to retain previous knowledge still remains limited. An intuitive way to make the network retain previous knowledge is to keep the output from the old classesâ classifier as a part of the final classifier. Let out- put logits of the new classifier be denoted as p+") (x) = (0), ...,08â¢, o"*), ..0("+â¢)), the recorded old classesâ classifier output logits is 6) (a) = (6, ...,6(â¢). We use an accommodation ratio 0 < @ < 1 to combine the two classifier output as Bp + (1 B)p O<i<n
Ëp(i) = βp(i) + (1 â β)Ëp(i) p(i) 0 < i ⤠n n < i ⤠n + m (7)
When 3 = 1, the final output is the same as the new clas- sifier and when 8 = 0, we replace the first n output units with the old classes classifier output. This can be thought as using the accommodation ratio ( to tune the output units for old classes. As shown in Figure [3] the modified cross- distillation loss can be expressed by replacing the original cross-entropy loss part L¢(2) with the new modified cross- entropy loss Le (x) = PYâ âGMlog|p («)] after ap- plying the accommodation ratio as in Equation|§] Lep(2) =aLp(x)+(1â a)Lo(x) (8)
Algorithm 1 Update Exemplar Set Input: New observation for old classes (xi, yi) Require: Old classes feature extractor Î Require: Current exemplar set {E(yi) 1: M (yi) â nyi 2: for m = 1,...,q do , ...E(yi) q 1 nyi +1 M (yi) + 1 nyi +1 Î(xi) } d(m) = (Î(E(yi) 3: 4: dmin â min{d(1), ..., d(m)} 5: Imin â Index{dmin} 6: d(q+1) = (Î(xi) â M (yi))T (Î(xi) â M (yi)) 7: if d(q+1) ⤠dmin then Remove E(yi) 8: Imin Add xi to {E(yi) m ) â M (yi))T (Î(E(yi) m ) â M (yi)) from {E(yi) 1 , ...E(yi) qâ1} , ...E(yi) q } 9: 10: else 11: 12: return {E(yi) 1 No need to update current exemplars , ...E(yi) q } 1
We empirically set β = 0.5, T = 2 and α = n n+m in this work where n and m are the number of old and new classes. The modiï¬ed cross-distillation loss push the net- work to learn more from old classesâ output units since we add it directly as part of the ï¬nal output.
Update exemplar set. As described in Section 1, we consider the new data streams containing both new classes data and new observations of old classes with unknown dis- tribution. In this case, retaining previous knowledge is not sufï¬cient since concept drift can happen to old classes and the model will still undergo performance degradation. One solution is to keep updating the network using the exem- plars for old classes. The class mean of each old class {M (1), ..., M (n), M (i) â Rn} is calculated and recorded as described in Section 4.2 by constructing the exemplar set {(E(y) ), y â [1, ..., n]} using previous data streams. Let {(xi, yi), yi â [1, ..., n]} denote the new ob- servation of old classes. We follow the same online class mean update as described in Equation 3 to keep updating the class mean with all data seen so far. So when concept drift happens, e.g., the class mean changes toward the new data, we update the exemplar set to make new data more likely to be selected to update the model during two-step learning as described in next part. The complete process of updating exemplar set is shown in Algorithm 1.
Two-step learning. Unlike other incremental learn- ing algorithms that mix new classes data with old classes exemplars, we ï¬rst let the model learn from a block of new classes data and then a balanced learning step is fol- lowed. This two-step learning technique is deigned for on- line learning scenarios, where both update time and number of available data are limited. As shown in Figure 5, we use the modiï¬ed cross-distillation loss in the ï¬rst step to over-
come catastrophic forgetting since all data in this block be- longs to new classes. In the second step, we pair same num- ber of old classes exemplars from the exemplar set with the new classes data. As we have balanced new and old classes, cross entropy loss is used to achieve balanced learning.
Exemplar Set @: New Task Data @ : Old Task Data Pair New Task Data âXN Modified Cross-Distillation Loss Balanced Learning Cross-Entropy Loss
Figure 5: Two-Step Learning. Black dots correspond to old classes samples stored in exemplar set. Red dots corre- spond to samples from new classes.
# 5. Experimental Results
Our experimental results consists of two main parts. In part one, we compare our modiï¬ed cross-distillation loss and the two-step learning technique as introduced in Sec- tion 4.3 with current state-of-the-art incremental learning methods [2, 14, 31, 21]. We follow the iCaRL experiment benchmark protocol [21] to arrange classes and select exem- plars, but in the more challenging online learning scenario as illustrated in Section 5.3. Our method is implemented on two public datasets: CIFAR-100 [12] and ImageNet-1000 (ILSVRC 2012) [25]. Part two is designed to test the per- formance of our complete framework. Since our goal is to set up an incremental learning framework that can be ap- plied to online learning scenario, we use Food-101 [1] food image dataset to evaluate our methods. For each part of our proposed framework, we compare our results to base- line methods as described in Section 4.
# 5.1. Datasets
We used three public datasets. Two common datasets: CIFAR-100 and ImageNet-1000 (ILSVRC 2012) and one food image dataset: Food-101.
Food-101 is the largest real-world food recognition dataset consisting of 1k images per food classes collected from foodspotting.com, comprising of 101 food classes. We divided 80% for training and 20% for testing for each class. CIFAR-100 consists of 60k 32Ã32 RGB images for 100 common objects. The dataset is originally divided into 50K as training and 10k as testing.
ImageNet-1000 (ILSVRC 2012) ImageNet Large-Scale Visual Recognition Challenge 2012 (ILSVRC12) is an an- nual competition which uses a subset of ImageNet. This subset contains 1000 classes with more than 1k images per
(a) (b) (c) (d)
Accuracy oe oe econ a 02) Fa ee urnerea © wpe 0.0 Fr rn eC) Number of classes
Accuracy 0 2% 4 = O00 Number of classes
06 Accuracy or + Gn ee 02) pw ecw © verso 0.0 0 2 40 60 80 100 Number of classes
+ 06 Accuracy * + oat + rs 00 0 40800 Number of classes
Figure 4: Incremental learning results on CIFAR-100 with split of (a) 5 classes, (b) 10 classes, (c) 20 classes and (d) 50 classes. The Upper Bound in last step is obtained by ofï¬ine training a model using all training samples from all classes. (Best viewed in color)
class. In total, there are about 1.2 million training data, 50k validation images, and 150k testing images.
Data pre-processing For Food-101, we performed im- age resize and center crop. As for CIFAR-100, random cropping and horizontal ï¬ip was applied following the orig- inal implementation [8]. For ImageNet, we follow the steps in VGG pre-processing [27], including random cropping, horizontal ï¬ip, image resize and mean subtraction.
of-the-art methods [21, 2, 31]. We consider the online set- ting that new classes data comes sequentially and we predict each new data at ï¬rst and then use a block of new data to update the model. For each incremental step, we compare our accuracy obtained in online scenario with state-of-the- art results in ofï¬ine mode. We constructed the exemplar set for both CIFAR and ImageNet with the same number of samples as in [21, 2, 31] for fair comparison.
# 5.2. Implementation Detail
Our implementation is based on Pytorch [18]. For ex- periment part one, we follow the same experiment setting as current state-of-the-art incremental learning methods, a standard 18-layer ResNet for ImageNet-1000 and a 32-layer ResNet for CIFAR-100. For experiment part two, we ap- plied a 18-layer ResNet to Food-101. The ResNet imple- mentation follows the setting suggested in [8]. We use stochastic gradient descent with learning rate of 0.1, weight decay of 0.0001 and momentum of 0.9.
ImageNet-100 1.0 bad a Accuracy mm lw te iCaRL i EEIL â* BIC â® Our Method © = UpperBound 10 20 30 40 50 60 70 Number of classes S B 80 90 100
Selection of block size p in online learning scenario. Different from ofï¬ine learning scenario, where we select a batch size to maximize overall performance after many epochs. In online learning scenario, we need to select block size p based on real life applications. More speciï¬cally, a large block size causes slow update since we have to wait until enough data arrives to update the model. On the other hand, using a very small block size, e.g., update with each new observation, although very fast, is not suitable for deep neural network due to strong bias towards new data. There- fore, we design a pretest using the ï¬rst 128 new data for each experiment to repeatedly update the model by vary- ing block size p â {1, 2, 4, 8, 16, 32, 64}. Thus the optimal block size is chosen which gives the highest accuracy on these 128 new data. We do not consider p > 64 as such a large block size is not practical for real life applications.
# 5.3. Evaluation of Modiï¬ed Cross-Distillation Loss and Two-Step Learning
In this part, we compared our modiï¬ed cross-distillation loss and two-step learning technique with the current state-
Figure 7: Incremental learning results on ImageNet-100 with split of 10 classes. The Upper Bound in last step is obtained by ofï¬ine training a model using all training sam- ples from all classes. (Best viewed in color)
CIFAR-100. We divided 100 classes into splits of 5, 10, 20, and 50 in random order. Therefore, we have incremental training steps for 20, 10, 5, and 2, respectively. The optimal block size is set as p = 8. We ran the experiment for four trials and each time with a random order for the 100 classes. The average accuracy is shown in Figure 4. Our method shows the best accuracy for all incremental learning steps even in the challenging online learning scenario.
ImageNet-1000. As 1000-class is too large and im- practical for online scenario, so we randomly selected 100 classes from the 1000 classes to construct a subset of the original dataset, which is referred to as ImageNet-100. We then divided the 100 classes into 10 classes split so we have an incremental step of 10. The optimal block size is set as p = 16. We ran this for four trials as before and we recorded the average accuracy in each step as shown in Fig- ure 7. Although the performance of EEIL [2] surpass our
(a) (b) (c) (d)
F = Baseline & Da Representation earing ° i, the i-th set of 512 new data »
Boa â= Baseline Sa a Representation Learning ° i the ith set of 51D new data *
8, â= Baseline § . | fepresetaton Learning i, the ith set of 512 new data .
Boe â= Baseline Sor = one tation Learning . â. the ith set of 512 new data .
Figure 6: Starting from scratch on Food-101 with number of new classes (a) 20 classes (b) 30 classes (c) 40 classes and (d) 50 classes. Baseline and our method are illustrated in Section 4.1 (Best viewed in color)
Method Baseline Representation Learning Ours 20 30 40 50 62.81% 56.53% 54.35% 51.39% 60.21% 55.32% 53.68% 51.26% 70.90% 64.32% 62.31% 57.83% 20 30 40 50
84.17% 80.95% 77.82% 74.46% (b)
Table 1: Online learning from scratch on Food-101 with (a) Online accuracy and (b) Testing accuracy. The Upper Bound is obtained by ofï¬ine training a model using all training samples from all given classes. (Best result marked in bold)
method in the second step, we attain the best performance as more classes are added.
# 5.5. Results on Food-101
# 5.4. Evaluation of Our Complete Framework
We used a food image dataset Food-101 [1] to evaluate performance of our proposed incremental learning frame- work.
Benchmark protocol of online incremental learning. Until now, there is no benchmark protocol on how to eval- uate an online incremental learning method. In addition to address catastrophic forgetting [16] as in ofï¬ine incremen- tal learning, we also need to consider concept drift [5] in online scenario. We propose the following evaluation pro- cedure: for a given multi-class classiï¬cation dataset, the classes should be randomly arranged. For each class, the training data should be further split into new training data and old training data. The former is used when a class is introduced to the model for the ï¬rst time. The later is con- sidered when the model has seen the class before, which is used to simulate real life applications and test the ability of the method to handle new observations of old classes. After each online learning phase, the updated model is evaluated on test data containing all classes already been trained so far. There is no over-ï¬tting since the test data is never used to update the model. In addition to the overall test accuracy, we should separately examine the accuracy for new classes and accuracy for old classes data. We also suggest to use online accuracy, which is the accuracy for data in training set before they are used to update the model, to represent the classiï¬cation performance of future data stream. In general, online accuracy shows the modelâs ability to adapt to future data stream and online accuracy for old classes indicates the modelâs ability to handle new observations of old classes.
Although there are three separate components of the pro- posed incremental learning framework as described in Sec- tion 4, we only test the component described in 4.1 once and then alternate between the two components described in 4.2 and 4.3. In addition, the ofï¬ine retraining part in 4.2 is in- applicable with online incremental learning. So in this ex- periment, we test for one cycle of our proposed framework starting from scratch then learning from a trained model provided by ofï¬ine retraining. We use half training data per class as new classes data and the other half as new ob- servations of old classes. We divided the Food-101 dataset into split of 20, 30, 40, 50 classes randomly and performed the one incremental step learning with step size of 20, 30, 40, and 50, respectively. In addition, we construct exem- plar set with only 10 samples per class to simulate real life applications instead of including more samples per class.
In this part, we evaluate our method that combines baseline and representation learning as described in Section 4.1. Optimal block size is set as p = 16. Result of online accuracy compared to baseline and representation learning is shown in Table 1a. Our method achieved the best online accuracy in all incremental learning steps. Similarly, test accuracy compared to upper bound is shown in Table 1b. We also calculated the accuracy of each 512 incoming new data as shown in Figure 6. We observed that the representation learning works well at the beginning when data is scarce and the baseline achieved higher accu- racy as the number of new data increases. Thus by combin- ing the two methods and automatically switch from one to the other, we attain a higher overall online accuracy.
Learn from a trained model. In this part, we perform
Online Accuracy Test Accuracy Incremental Step 20 30 40 50 new old new old 54.35% â 64.78% 22.83% â 61.01% 70.97% â 64.00% 41.77% â 70.32% (84.17%) 52.62% â 62.25% 22.41% â 60.00% 71.56% â 61.87% 42.25% â 69.90% (80.95%) 46.30% â 61.53% 20.53% â 53.43% 66.62% â 56.31% 40.82% â 65.65% (77.82%) 43.49% â 56.76% 19.47% â 51.71% 63.32% â 54.20% 36.81% â 63.92% (74.46%)
Table 2: Online learning from a trained model on Food-101 with baseline method using original cross-distillation loss in the left of â and our proposed method in the right (best result marked in bold), (â¢) shows the Upper Bound results.
0.85 ° cs a & Overall Test Accuracy so £ & © Incremental Step
(a) (b) (c)
0.65 rorrs Overall Online Accuracy 5 0.25 30 40 50 Incremental Step
Ss h updating exemplar set Online Accuracy for old classes 20 30 40 50 Incremental Step
Figure 8: Ablation study on Food-101 dataset (a) overall online accuracy (b) overall test accuracy (c) online accuracy for old classes. (Best viewed in color)
a one incremental step experiment following our proposed benchmark protocol described in Section 5.4 and the result is shown in Table 2. Compared to the baseline, our method improved the online learning accuracy for both new and old classes, which shows that our model can adapt quickly to future data stream including both new classes data or new observations of old classes. In addition, we signiï¬cantly im- proved the test accuracy compared to the baseline method. However, the trade off is slightly lower accuracy for the new classes test accuracy compared to the baseline due to the use of the accommodation ratio in our method. Since it is difï¬cult for the model to perform well on new classes without losing knowledge from the old classes, the accom- modation ratio can be manually tuned to balance between the new classes and the old classes depending on the ap- plication scenario. A higher accommodation ratio leads to higher accuracy on new classes by trading off accuracy on old classes. For this experiment, we simply use β = 0.5.
Ablation study. We analyzed different components of our method to demonstrate their impacts. We ï¬rst show the inï¬uence of different loss functions including cross-entropy, cross-distillation, and our modiï¬ed cross- distillation. We then analyzed the impact of updating the exemplar set to mitigate concept drift. As shown in Fig- ure 8a and 8b, even without updating exemplar set, our modiï¬ed cross-distillation loss outperformed the other two (black and blue lines) for all incremental steps. By updating the exemplar set, we were able to achieve a higher overall online and test accuracy. Furthermore, Figure 8c illustrates improvement of online accuracy for old classes by updating
the exemplar set. Since we do not deliberately select any new data from old classes to update the model during the incremental learning step, as the data distribution changes, our method was able to automatically update the exemplar set by using the current class mean calculated by all data in old classes seen so far. Thus through the proposed two-step learning which pairs each new data with an exemplar, we can achieve a higher online accuracy for future data streams.
# 6. Conclusion
learning framework including a modiï¬ed cross-distillation loss to- gether with a two-step learning technique to address catas- trophic forgetting in the challenging online learning sce- nario, and a simple yet effective method to update the ex- emplar set using the feature of each new observation of old classes data to mitigate concept drift. Our method has the following properties: (1) can be trained using data streams including both new classes data and new observations of old classes in online scenario, (2) has good performance for both new and old classes on future data streams, (3) requires short run-time to update with limited data, (4) has poten- tial to be used in lifelong learning that can handle unknown number of classes incrementally. Our method outperforms current state-of-the-art on CIFAR-100 and ImageNet-1000 (ILSVRC 2012) in the challenging online learning scenario. Finally, we showed our proposed framework can be applied to real life image classiï¬cation problem by using Food-101 dataset as an example and observed signiï¬cant improve- ment compared to baseline methods.
# References
[1] Lukas Bossard, Matthieu Guillaumin, and Luc Van Gool. Food-101 â mining discriminative components with random forests. Proceedings of the European Conference on Com- puter Vision, 2014.
[2] Francisco M. Castro, Manuel J. Marin-Jimenez, Nicolas Guil, Cordelia Schmid, and Karteek Alahari. End-to-end in- cremental learning. Proceedings of the European Conference on Computer Vision, September 2018. [3] Gert Cauwenberghs and Tomaso Poggio.
Incremental and decremental support vector machine learning. Proceedings of the Advances in Neural Information Processing Systems, pages 409â415, 2001.
[4] Corinna Cortes and Vladimir Vapnik. Support-vector net- works. Machine learning, 20(3):273â297, 1995.
[5] JoËao Gama, IndrËe ËZliobaitËe, Albert Bifet, Mykola Pech- enizkiy, and Abdelhamid Bouchachia. A survey on con- cept drift adaptation. ACM Computing Surveys, 46(4):44:1â 44:37, Mar. 2014.
[6] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Proceedings of the Advances in Neural Information Processing Systems, pages 2672â2680, 2014.
[7] Samantha Guerriero, Barbara Caputo, and Thomas Mensink. Proceedings of Deep nearest class mean classiï¬ers. the International Conference on Learning Representations, Worskhop Track, 2018.
[8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770â778, 2016.
[9] Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. Distill- ing the knowledge in a neural network. Proceedings of the NIPS Deep Learning and Representation Learning Work- shop, 2015.
[10] Heechul Jung, Jeongwoo Ju, Minju Jung, and Junmo Kim. arXiv Less-forgetting learning in deep neural networks. preprint arXiv:1607.00122, 2016.
[11] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska- Barwinska, et al. Overcoming catastrophic forgetting in The National Academy of Sciences, neural networks. 114(13):3521â3526, 2017.
[12] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[13] Ilja Kuzborskij, Francesco Orabona, and Barbara Caputo. From n to n+ 1: Multiclass transfer incremental learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3358â3365, 2013.
[14] Zhizhong Li and Derek Hoiem. Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelli- gence, 40(12):2935â2947, 2017.
[15] Viktor Losing, Barbara Hammer, and Heiko Wersing. Incre- mental on-line learning: A review and comparison of state-
of-the-art algorithms. Neurocomputing, 275:1261â1274, 2018.
[16] Michael McCloskey and Neal J Cohen. Catastrophic inter- ference in connectionist networks: The sequential learning In Psychology of Learning and Motivation, vol- problem. ume 24, pages 109â165. Elsevier, 1989.
[17] Thomas Mensink, Jakob Verbeek, Florent Perronnin, and Gabriela Csurka. Distance-based image classiï¬cation: Gen- eralizing to new classes at near-zero cost. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2624â 2637, 2013.
[18] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic dif- ferentiation in PyTorch. Proceedings of the Advances Neural Information Processing Systems Workshop, 2017.
[19] Robi Polikar, Lalita Upda, Satish S Upda, and Vasant Honavar. Learn++: An incremental learning algorithm for supervised neural networks. IEEE Transactions on Systems, Man, and Cybernetics, 31(4):497â508, 2001.
[20] Amal Rannen, Rahaf Aljundi, Matthew B Blaschko, and Tinne Tuytelaars. Encoder based lifelong learning. Pro- ceedings of the IEEE International Conference on Computer Vision, pages 1320â1328, 2017.
[21] Sylvestre-Alvise Rebufï¬, Alexander Kolesnikov, Georg Sperl, and Christoph H. Lampert. icarl: Incremental clas- siï¬er and representation learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, July 2017.
[22] Marko Ristin, Matthieu Guillaumin, Juergen Gall, and Luc Van Gool. Incremental learning of ncm forests for large-scale image classiï¬cation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3654â 3661, 2014.
[23] Amelie Royer and Christoph H Lampert. Classiï¬er adapta- tion at prediction time. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1401â 1409, 2015.
[24] Stefan Ruping. Incremental learning with support vector ma- chines. Proceedings of the IEEE International Conference on Data Mining, pages 641â642, 2001.
[25] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San- jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpa- thy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recogni- tion Challenge. International Journal of Computer Vision, 115(3):211â252, 2015.
[26] Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. Proceedings of the Advances in Neural Information Processing Systems, pages 2990â2999, 2017.
[27] Karen Simonyan and Andrew Zisserman. Very deep convo- lutional networks for large scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[28] Ragav Venkatesan, Hemanth Venkateswara, Sethuraman A strategy for an arXiv preprint Panchanathan, uncompromising incremental arXiv:1705.00744, 2017. and Baoxin Li. learner.
[29] Geoffrey I Webb, Roy Hyde, Hong Cao, Hai Long Nguyen, and Francois Petitjean. Characterizing concept drift. Data Mining and Knowledge Discovery, 30(4):964â994, 2016. [30] Max Welling. Herding dynamical weights to learn. Proceed- ings of the International Conference on Machine Learning, pages 1121â1128, 2009.
[31] Yue Wu, Yinpeng Chen, Lijuan Wang, Yuancheng Ye, Zicheng Liu, Yandong Guo, and Yun Fu. Large scale in- cremental learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2019. | {
"id": "1705.00744"
} |
2003.12298 | Information-Theoretic Probing with Minimum Description Length | To measure how well pretrained representations encode some linguistic
property, it is common to use accuracy of a probe, i.e. a classifier trained to
predict the property from the representations. Despite widespread adoption of
probes, differences in their accuracy fail to adequately reflect differences in
representations. For example, they do not substantially favour pretrained
representations over randomly initialized ones. Analogously, their accuracy can
be similar when probing for genuine linguistic labels and probing for random
synthetic tasks. To see reasonable differences in accuracy with respect to
these random baselines, previous work had to constrain either the amount of
probe training data or its model size. Instead, we propose an alternative to
the standard probes, information-theoretic probing with minimum description
length (MDL). With MDL probing, training a probe to predict labels is recast as
teaching it to effectively transmit the data. Therefore, the measure of
interest changes from probe accuracy to the description length of labels given
representations. In addition to probe quality, the description length evaluates
"the amount of effort" needed to achieve the quality. This amount of effort
characterizes either (i) size of a probing model, or (ii) the amount of data
needed to achieve the high quality. We consider two methods for estimating MDL
which can be easily implemented on top of the standard probing pipelines:
variational coding and online coding. We show that these methods agree in
results and are more informative and stable than the standard probes. | http://arxiv.org/pdf/2003.12298 | Elena Voita, Ivan Titov | cs.CL | null | null | cs.CL | 20200327 | 20200327 | 0 2 0 2
r a M 7 2 ] L C . s c [
1 v 8 9 2 2 1 . 3 0 0 2 : v i X r a
# Information-Theoretic Probing with Minimum Description Length
# Elena Voita1,2
# Ivan Titov1,2
# Elena Voita!â Ivan Titov!
# 1University of Edinburgh, Scotland 2University of Amsterdam, Netherlands
[email protected]
[email protected]
[email protected] [email protected]
# Abstract
To measure how well pretrained representa- tions encode some linguistic property, it is common to use accuracy of a probe, i.e. a classiï¬er trained to predict the property from the representations. Despite widespread adop- tion of probes, differences in their accuracy fail to adequately reï¬ect differences in repre- sentations. For example, they do not substan- tially favour pretrained representations over randomly initialized ones. Analogously, their accuracy can be similar when probing for gen- uine linguistic labels and probing for random synthetic tasks. To see reasonable differences in accuracy with respect to these random base- lines, previous work had to constrain either the amount of probe training data or its model size. Instead, we propose an alternative to the standard probes, information-theoretic prob- ing with minimum description length (MDL). With MDL probing, training a probe to pre- dict labels is recast as teaching it to effectively transmit the data. Therefore, the measure of interest changes from probe accuracy to the de- scription length of labels given representations. In addition to probe quality, the description length evaluates âthe amount of effortâ needed to achieve the quality. This amount of effort characterizes either (i) size of a probing model, or (ii) the amount of data needed to achieve the high quality. We consider two methods for esti- mating MDL which can be easily implemented on top of the standard probing pipelines: varia- tional coding and online coding. We show that these methods agree in results and are more in- formative and stable than the standard probes.1
# Introduction
To estimate to what extent representations (e.g., ELMo (Peters et al., 2018) or BERT (Devlin et al., 2019)) capture a linguistic property, most previous
1We release code at https://github.com/ lena-voita/description-length-probing.
Probe: Standard â> Description Length e.g., accuracy Codelength
Figure 1: Illustration of the idea behind MDL probes.
work uses âprobing tasksâ (aka âprobesâ and âdiag- nostic classiï¬ersâ); see Belinkov and Glass (2019) for a comprehensive review. These classiï¬ers are trained to predict a linguistic property from âfrozenâ representations, and accuracy of the classiï¬er is used to measure how well these representations encode the property.
Despite widespread adoption of such probes, they fail to adequately reï¬ect differences in repre- sentations. This is clearly seen when using them to compare pretrained representations with randomly initialized ones (Zhang and Bowman, 2018). Anal- ogously, their accuracy can be similar when prob- ing for genuine linguistic labels and probing for tags randomly associated to word types (âcontrol tasksâ, Hewitt and Liang (2019)). To see differ- ences in the accuracy with respect to these random baselines, previous work had to reduce the amount of a probe training data (Zhang and Bowman, 2018) or use smaller models for probes (Hewitt and Liang, 2019).
As an alternative to the standard probing, we take an information-theoretic view at the task of measuring relations between representations and la- bels. Any regularity in representations with respect to labels can be exploited both to make predictions and to compress these labels, i.e., reduce length of the code needed to transmit them. Formally, we recast learning a model of data (i.e., training a prob- ing classiï¬er) as training it to transmit the data (i.e., labels) in as few bits as possible. This naturally leads to a change of measure: instead of evaluating
probe accuracy, we evaluate minimum description length (MDL) of labels given representations, i.e. the minimum number of bits needed to transmit the labels knowing the representations. Note that since labels are transmitted using a model, the model has to be transmitted as well (directly or indirectly). Thus, the overall codelength is a combination of the quality of ï¬t of the model (compressed data length) with the cost of transmitting the model itself.
Intuitively, codelength characterizes not only the ï¬nal quality of a probe, but also the âamount of ef- fortâ needed achieve this quality (Figure 1). If rep- resentations have some clear structure with respect to labels, the relation between the representations and the labels can be understood with less effort; for example, (i) the âruleâ predicting the label (i.e., the probing model) can be simple, and/or (ii) the amount of data needed to reveal this structure can be small. This is exactly how our vague (so far) notion of âthe amount of effortâ is translated into codelength. We explain this more formally when describing the two methods for evaluating MDL we use: variational coding and online coding; they dif- fer in a way they incorporate model cost: directly or indirectly.
Variational code explicitly incorporates cost of transmitting the model (probe weights) in addition to the cost of transmitting the labels; this joint cost is exactly the loss function of a variational learning algorithm (Honkela and Valpola, 2004). As we will see in the experiments, close probe accuracies often come at a very different model cost: the âruleâ (the probing model) explaining regularity in the data can be either simple (i.e., easy to communicate) or complicated (i.e., hard to communicate) depending on the strength of this regularity.
Online code provides a way to transmit data with- out directly transmitting the model. Intuitively, it measures the ability to learn from different amounts of data. In this setting, the data is transmitted in a sequence of portions; at each step, the data trans- mitted so far is used to understand the regularity in this data and compress the following portion. If the regularity in the data is strong, it can be revealed using a small subset of the data, i.e., early in the transmission process, and can be exploited to efï¬- ciently transmit the rest of the dataset. The online code is related to the area under the learning curve, which plots quality as a function of the number of training examples.
If we now recall that, to get reasonable differ-
ences with random baselines, previous work manu- ally tuned (i) model size and/or (ii) the amount of data, we will see that these were indirect ways of accounting for the âamount of effortâ component of (i) variational and (ii) online codes, respectively. Interestingly, since variational and online codes are different methods to estimate the same quantity (and, as we will show, they agree in the results), we can conclude that the ability of a probe to achieve good quality using a small amount of data and its ability to achieve good quality using a small probe architecture reï¬ect the same property: strength of the regularity in the data. In contrast to previous work, MDL incorporates this naturally in a theo- retically justiï¬ed way. Moreover, our experiments show that, differently from accuracy, conclusions made by MDL probes are not affected by an un- derlying probe setting, thus no manual search for settings is required.
We illustrate the effectiveness of MDL for dif- ferent kinds of random baselines. For example, when considering control tasks (Hewitt and Liang, 2019), while probes have similar accuracies, these accuracies are achieved with a small probe model for the linguistic task and a large model for the random baseline (control task); these architectures are obtained as a byproduct of MDL optimization and not by manual search.
Our contributions are as follows:
⢠we propose information-theoretic probing which measures MDL of labels given repre- sentations;
⢠we show that MDL naturally characterizes not only probe quality, but also âthe amount of effortâ needed to achieve it;
⢠we explain how to easily measure MDL on top of standard probe-training pipelines;
⢠we show that results of MDL probing are more informative and stable than those of standard probes.
# Information-Theoretic Viewpoint
Let D = {(x1, y1), (x2, y2), . . . , (xn, yn)} be a dataset, where x1:n = (x1, x2, . . . , xn) are representations from a model and y1:n = (y1, y2, . . . , yn) are labels for some linguistic task (we assume that yi â {1, 2, . . . , K}, i.e. we con- sider classiï¬cation tasks). As in standard prob- ing task, we want to measure to what extent x1:n
encode y1:n. Differently from standard probes, we propose to look at this question from the information-theoretic perspective and deï¬ne the goal of a probe as learning to effectively transmit the data.
Setting. Following the standard information the- ory notation, let us imagine that Alice has all (xi, yi) pairs in D, Bob has just the xiâs from D, and that Alice wants to communicate the yiâs to Bob. The task is to encode the labels y1:n knowing the inputs x1:n in an optimal way, i.e. with minimal codelength (in bits) needed to transmit y1:n.
Transmission: Data and Model. Alice can transmit the labels using some probabilistic model of data p(y|x) (e.g., it can be a trained probing clas- siï¬er). Since Bob does not know the precise trained model that Alice is using, some explicit or implicit transmission of the model itself is also required. In Section 2.1, we explain how to transmit data using a model p. In Section 2.2, we show direct and indirect ways of transmitting the model.
Interpretation: quality and âamount of effortâ. In Section 2.3, we show that total codelength char- acterizes both probe quality and the âamount of effortâ needed to achieve it. We draw connections between different interpretations of this âamount of effortâ part of the code and manual search for probe settings done in previous work.2
# 2.1 Transmission of Data Using a Model
Suppose that Alice and Bob have agreed in advance on a model p, and both know the inputs x1:n. Then there exists a code to transmit the labels y1:n loss- lessly with codelength3
n Ly(yin|tin) = â >> logy p(yili)- A)
This is the Shannon-Huffman code, which gives an optimal bound on the codelength if the data are independent and come from a conditional probabil- ity distribution p(y|x).
Learning is compression. The bound (1) is ex- actly the categorical cross-entropy loss evaluated on the model p. This shows that the task of com- pressing labels y1:n is equivalent to learning a
2Note that in this work, we do not consider practical im- plementations of transmission algorithms; everywhere in the text, âcodelengthâ refers to the theoretical codelength of the associated encodings.
3Up to at most one bit on the whole sequence; for datasets of reasonable size this can be ignored.
model p(y|x): quality of a learned model p(y|x) is the codelength needed to transmit the data.
Compression is usually compared against uni- form encoding which does not require any learning It assumes p(y|x) = punif (y|x) = from data. 1 K , and yields codelength Lunif (y1:n|x1:n) = n log2 K bits. Another trivial encoding ignores input x and relies on class priors p(y), resulting in codelength H(y).
Relation to Mutual Information. If the in- puts and the outputs come from a true joint distribution q(x, y), then, for any transmission method with codelength L, it holds Eq[L(y|x)] ⥠H(y|x) (Grunwald, 2004). Therefore, the gain in codelength over the trivial codelength H(y) is
H(y) â Eq[L(y|x)] ⤠H(y) â H(y|x) = I(y; x).
In other words, the compression is limited by the mutual information (MI) between inputs (i.e. pre- trained representations) and outputs (i.e. labels).
Note that total codelength includes model code- length in addition to the data code. This means that while high MI is necessary for effective compres- sion, a good representation is the one which also yields simple models predicting y from x, as we formalize in the next section.
# 2.2 Transmission of the Model (Explicit or Implicit)
We consider two compression methods that can be used with deep learning models (probing classi- ï¬ers):
⢠variational code â an instance of two-part codes, where a model is transmitted explicitly and then used to encode the data;
⢠online code â a way to encode both model and data without directly transmitting the model.
2.2.1 Variational Code We assume that Alice and Bob have agreed on a model class H = {po|9 ⬠O}. With two-part codes, for any model pg, Alice first transmits its parameters 6* and then encodes the data while re- lying on the model. The description length decom- poses accordingly: 2Q- Le ten |Z1:n) = = Lyaram(*) + how (ytn|T1n)
2Q- Le ten |Z1:n) = = Lyaram(*) + how (ytn|T1n) > logy per (yil@i)- (2) se Lyaram( (6*)
To compute the description length of the parameters Lparam(θâ), we can further assume that Alice and Bob have agreed on a prior distribution over the parameters α(θâ). Now, we can rewrite the total description length as
n â logp(a(6")eâ) â S| logy poe (yil ti), i=l
where m is the number of parameters and « is a prearranged precision for each parameter. With deep learning models, such straightforward codes for parameters are highly inefficient. Instead, in the variational approach, weights are treated as random variables, and the description length is given by the expectation
Ly" (Ytn|@1n) = n =- Egg | logga(0) -l0g,3(0)+)_ logs pa(yilx:) i=1 = KL(B || a) - n 0~8 > logs po(yil:vi), (3) i=l
where 3(0) = [Ji", 3;(9;) is a distribution encod- ing uncertainty about the parameter values. The distribution (4) is chosen by minimizing the code- length given in Expression (3). The formal jus- tification for the description length relies on the bits-back argument (Hinton and von Cramp, 1993; Honkela and Valpola, 2004; MacKay, 2003). How- ever, the underlying intuition is straightforward: parameters we are uncertain about can be transmit- ted at a lower cost as the uncertainty can be used to determine the required precision. The entropy term in Equation (3), H(8) = -Egxg logs 6(9), quantifies this discount.
β (y1:n|x1:n) is known as the evidence-lower-bound (ELBO) and used as the objective in variational inference. The distribution β(θ) approximates the intractable pos- terior distribution p(θ|x1:n, y1:n). Consequently, any variational method can in principle be used to estimate the codelength.
In our experiments, we use the network com- pression method of Louizos et al. (2017). Similarly to variational dropout (Molchanov et al., 2017), it uses sparsity-inducing priors on the parameters, pruning neurons from the probing classifier as a byproduct of optimizing the ELBO. As a result we can assess the probe complexity both using its de- scription length K L((3 || a) and by inspecting the discovered architecture.
2.2.2 Online (or Prequential) Code The online (or prequential) code (Rissanen, 1984) is a way to encode both the model and the labels without directly encoding the model weights. In the online setting, Alice and Bob agree on the form of the model pθ(y|x) with learnable parameters θ, its initial random seeds, and its learning algorithm. They also choose timesteps 1 = t0 < t1 < · · · < tS = n and encode data by blocks.4 Alice starts by communicating y1:t1 with a uniform code, then both Alice and Bob learn a model pθ1(y|x) that predicts y from x using data {(xi, yi)}t1 i=1, and Al- ice uses that model to communicate the next data block yt1+1:t2. Then both Alice and Bob learn a model pθ2(y|x) from a larger block {(xi, yi)}t2 i=1 and use it to encode yt2+1:t3. This process contin- ues until the entire dataset has been transmitted. The resulting online codelength is
online (Yiml®1n) =ty logy K- S-1 - S- logs po; (YettitizalPtetltizs) (4) i=l
In this sequential evaluation, a model that per- forms well with a limited number of training ex- amples will be rewarded by having a shorter code- length (Alice will require fewer bits to transmit the subsequent yti:ti+1 to Bob). The online code is related to the area under the learning curve, which plots quality (in case of probes, accuracy) as a func- tion of the number of training examples. We will illustrate this in Section 3.2.
# Interpretations of Codelength
Connection to previous work. To get larger dif- ferences in scores compared to random baselines, previous work tried to (i) reduce size of a prob- ing model and (ii) reduce the amount of a probe training data. Now we can see that these were in- direct ways to account for the âamount of effortâ component of (i) variational and (ii) online codes, respectively.
Online code and model size. While the online code does not incorporate model cost explicitly, we can still evaluate model cost by interpreting the difference between the cross-entropy of the model trained on all data and online codelength as the cost of the model. The former is codelength of the data
4In all experiments in this paper, the timesteps correspond to 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.25, 12.5, 25, 50, 100 percent of the dataset.
if one knows model parameters, the latter (online In codelength) â if one does not know them. Section 3.2 we will show that trends for model cost evaluated for the online code are similar to those for the variational code. It means that in terms of a code, the ability of a probe to achieve good quality using small amount of data or using a small probe architecture reï¬ect the same property: the strength of the regularity in the data.
Which code to choose? In terms of implementa- tion, the online code uses a standard probe along with its training setting: it trains the probe on in- creasing subsets of the dataset. Using the varia- tional code requires changing (i) a probing model to a Bayesian model and (ii) the loss function to the corresponding variational loss (3) (i.e. adding the model KL term to the standard data cross-entropy). As we will show later, these methods agree in re- sults. Therefore, the choice of the method can be done depending on the preferences: the variational code can be used to inspect the induced probe archi- tecture, but the online code is easier to implement.
# 3 Description Length and Control Tasks
Hewitt and Liang (2019) noted that probe accu- racy itself does not necessarily reveal if the rep- resentations encode the linguistic annotation or if the probe âitselfâ learned to predict this annotation. They introduced control tasks which associate word types with random outputs, and each word token is assigned its typeâs output, regardless of context. By construction, such tasks can only be learned by the probe itself. They argue that selectivity, i.e. difference between linguistic task accuracy and control task accuracy, reveals how much the lin- guistic probe relies on the regularities encoded in the representations. They propose to tune probe hyperparameters so that to maximize selectivity. In contrast, we will show that MDL probes do not require such tuning.
# 3.1 Experimental Setting
In all experiments, we use the data and follow the setting of Hewitt and Liang (2019); we build on top of their code and release our extended version to reproduce the experiments.
In the main text, we use a probe with default hyperparameters which was a starting point in He- witt and Liang (2019) and was shown to have low selectivity. In the appendix, we provide results for
10 different settings and show that, in contrast to accuracy, codelength is stable across settings.
Task: part of speech. Control tasks were de- signed for two tasks: part-of-speech (PoS) tagging and dependency edge prediction. In this work, we focus only on the PoS tagging task, the task of as- signing tags, such as noun, verb, and adjective, to individual word tokens. For the control task, for each word type, a PoS tag is independently sam- pled from the empirical distribution of the tags in the linguistic data.
Data. The pretrained model is the 5.5 billion- word pre-trained ELMo (Peters et al., 2018). The data comes from Penn Treebank (Marcus et al., 1993) with the traditional parsing train- ing/development/testing splits5 without extra pre- processing. Table 1 shows dataset statistics.
Probes. The probe is MLP-2 of Hewitt and Liang (2019) with the default hyperparame- is a multi-layer perceptron ters. Namely, yi â¼ with two hidden layers deï¬ned as: softmax(W3ReLU (W2ReLU (W1hi))); hidden layer size h is 1000 and no dropout is used. Ad- ditionally, in the appendix, we provide results for both MLP-2 and MLP-1 for several h values: 1000, 500, 250, 100, 50.
For the variational code, we replace dense layers with the Bayesian compression layers from Louizos et al. (2017); the loss function changes to Eq. (3).
Optimizer. All of our probing models are trained with Adam (Kingma and Ba, 2015) with learning rate 0.001. With standard probes, we follow the original paper (Hewitt and Liang, 2019) and anneal the learning rate by a factor of 0.5 once the epoch does not lead to a new minimum loss on the devel- opment set; we stop training when 4 such epochs occur in a row. With variational probes, we do not anneal learning rate and train probes for 200 epochs; long training is recommended to enable pruning (Louizos et al., 2017).
# 3.2 Experimental Results Results are shown in Table 2.6
5As given by the code of Qi and Manning (2017) at https://github.com/qipeng/arc-swift.
6Accuracies can differ from the ones reported in Hewitt and Liang (2019): we report accuracy on the test set, while they â on the development set. Since the development set is used for stopping criteria, we believe that test scores are more reliable.
Labels Number of sentences Number of targets Part-of-speech 45 39832 / 1700 / 2416 950028 / 40117 / 56684
Table 1: Dataset statistics. Numbers of sentences and targets are given for train / dev / test sets.
Accuracy Description Length variational code online code codelength compression codelength compression 93.7 / 96.3 97.5 / 91.9 97.3 / 89.4 163 / 267 85 / 470 103 / 612 31.32 / 19.09 59.76 / 10.85 49.67 / 8.33 173 / 302 96 / 515 115 / 717 29.5 / 16.87 53.06 / 9.89 44.3 / 7.11
# MLP-2, h=1000 LAYER 0 LAYER 1 LAYER 2
Table 2: Experimental results; shown in pairs: linguistic task / control task. Codelength is measured in kbits (variational codelength is given in equation (3), online â in equation (4)). Accuracy is shown for the standard probe as in Hewitt and Liang (2019); for the variational probe, scores are similar (see Table 3).
800 Variational code @@®S Data 600) [=I Model Codelength ES Ss 6 . ui o al i o 12 o 12 Base Control
Online code Ss Data [a Model 1 faa o 12 o 12 Base Control
(a) (b) (c) (d) random seeds
Accuracy _Codelength wee eeeee} 175 07| "28 |eoeee 150 96 95 wenn | 2° eae 100 ieee se eee (ee " 93 __l75 Var. Online layer ¢O m1 #2
Learning curves Loy 09 a eee 3 go7 Bog a 1E © layer 0 205) Tae = layerd oa a ® layer 2 01 0.4 16 625 2550100 Training examples (% of the full dataset)
Figure 2: (a), (b): codelength split into data and model codes; (c): learning curves corresponding to online code (solid lines for linguistic task, dashed â for control); (d): results for 5 random seeds, linguistic task (for control task, see appendix).
Different compression methods, similar results. First, we see that both compression methods show similar trends in codelength. For the linguistic task, the best layer is the ï¬rst one. For the control task, codes become larger as we move up from the em- bedding layer; this is expected since the control task measures the ability to memorize word type. Note that codelengths for control tasks are substan- tially larger than for the linguistic task (at least twice larger). This again illustrates that description length is preferable to probe accuracy: in contrast to accuracy, codelength is able to distinguish these tasks without any search for settings.
LAYER 0: MDL is correct, accuracy is not. What is even more surprising, codelength identiï¬es the control task even when accuracy indicates the opposite: for LAYER 0, accuracy for the control task is higher, but the code is twice longer than for the linguistic task. This is because codelength char- acterizes how hard it is to achieve this accuracy: for the control task, accuracy is higher, but the cost of achieving this score is very big. We will illustrate
this later in this section.
Embedding vs contextual: drastic difference. For the linguistic task, note that codelength for the embedding layer is approximately twice larger than that for the ï¬rst layer. Later in Section 4 we will see the same trends for several other tasks, and will show that even contextualized representations obtained with a randomly initialized model are a lot better than with the embedding layer alone.
Model: small for linguistic, large for control. Figure 2(a) shows data and model components of the variational code. For control tasks, model size is several times larger than for the linguistic task. This is something that probe accuracy alone is not able to reï¬ect: representations have structure with respect to the linguistic labels and this structure can be âexplainedâ with a small model. The same representations do not have structure with respect to random labels, therefore these labels can be pre- dicted only using a larger model.
Using interpretation from Section 2.3 to split
Accuracy Final probe layer 0 base control 93.5 96.3 406-33-49 427-214-137 layer 1 base control 97.7 92.2 664-55-35 824-272-260 layer 2 base control 97.3 88.7 750-75-41 815-308-481
Table 3: Pruned architecture of a trained variational probe (starting probe: 1024-1000-1000).
the online code into data and model codelength, we get Figure 2(b). The trends are similar to the ones with the variational code; but with the online code, the model component shows how easy it is to learn from small amount of data: if the represen- tations have structure with respect to some labels, this structure can be revealed with a few training ex- amples. Figure 2(c) shows learning curves showing the difference between behavior of the linguistic and control tasks. In addition to probe accuracy, such learning curves have also been used by Yo- gatama et al. (2019) and Talmor et al. (2019).
Architecture: sparse for linguistic, dense for control. The method for the variational code we use, Bayesian compression of Louizos et al. (2017), lets us assess the induced probe complexity not only by using its description length (as we did above), but also by looking at the induced architec- ture (Table 3). Probes learned for linguistic tasks are much smaller than those for control tasks, with only 33-75 neurons at the second and third lay- ers. This relates to previous work (Hewitt and Liang, 2019). The authors considered several pre- deï¬ned probe architectures and picked one of them based on a manually deï¬ned criterion. In contrast, the variational code gives probe architecture as a byproduct of training and does not need human guidance.
# 3.3 Stability and Reliability of MDL Probes
Here we discuss stability of MDL results across compression methods, underlying probing classi- ï¬er setting and random seeds.
The two compression methods agree in results. Note that the observed agreement in codelengths
97 eo. 96/2 8 ® 300 A 5a * « * 95 * | 250,* ; +t 94 93 Accuracy (Layer 0) Codelength (Layer 9). Ct 4 * ee eee | eee & © | 200 * * aes 1900" 250 50 1000° 250° 50 *°°1000 250 50 1000 250 50 MLP-2 MLP-L MLP-2 MLP-1
Figure 3: Results for 10 probe settings: accuracy is wrong for 8 out of 10 settings, MDL is always correct (for accuracy higher is better, for codelength â lower).
from different methods (Table 2) is rather surpris- ing: this contrasts to Blier and Ollivier (2018), who experimented with images (MNIST, CIFAR-10) and argued that the variational code yields very poor compression bounds compared to online code. We can speculate that their results may be due to the particular variational approach they use. The agreement between different codes is desirable and suggests sensibility and reliability of the results.
Hyperparameters: change results for accuracy, do not for MDL. While here we will discuss in detail results for the default settings, in the ap- pendix we provide results for 10 different settings; for LAYER 0, results are given in Figure 3. We see that accuracy can change greatly with the settings. For example, difference in accuracy for linguistic and control tasks varies a lot; for LAYER 0 there are settings with contradictory results: accuracy can be higher either for the linguistic or for the control task depending on the settings (Figure 3). In striking contrast to accuracy, MDL results are stable across settings, thus MDL does not require search for probe settings.
Random seed: affects accuracy but not MDL. We evaluated results from Table 2 for random seeds from 0 to 4; for the linguistic task, results are shown in Figure 2(d). We see that using accuracy can lead to different rankings of layers depending on a ran- dom seed, making it hard to draw conclusions about their relative qualities. For example, accuracy for LAYER 1 and LAYER 2 are 97.48 and 97.31 for seed 1, but 97.38 and 97.48 for seed 0. On the contrary, the MDL results are stable and the scores given to different layers are well separated.
Note that for this ârealâ task, where the true rank- ing of layers 1 and 2 is not known in advance, tun- ing a probe setting by maximizing difference with the synthetic control task (as done by Hewitt and Liang (2019)) does not help: in the tuned setting, scores for these layers remain very close (e.g., 97.3 and 97.0 (Hewitt and Liang, 2019)).
Part-of-speech Constituents Dependencies Entities SRL Coreference Rel. (SemEval) The [shaman]1 cured him with [herbs]2 . ââ Instrument-Agency(e2, e1)
Table 4: Examples of sentences, spans, and target labels for each task.
Labels Number of sentences Number of targets Part-of-speech Constituents Dependencies Entities SRL Coreference Rel. (SemEval) 48 30 49 18 66 2 19 115812 / 15680 / 12217 115812 / 15680 / 12217 12522 / 2000 / 2075 115812 / 15680 / 12217 253070 / 35297 / 26715 115812 / 15680 / 12217 6851 / 1149 / 2717
Table 5: Dataset statistics. Numbers of sentences and targets are given for train / dev / test sets.
# 4 Description Length and Random Models
Now, from random labels for word types, we come to another type of random baselines: randomly initialized models. Probes using these represen- tations show surprisingly strong performance for both token (Zhang and Bowman, 2018) and sen- tence (Wieting and Kiela, 2019) representations. This again conï¬rms that accuracy alone does not reï¬ect what a representation encodes. With MDL probes, we will see that codelength shows large dif- ference between trained and randomly initialized representations.
[010...0] us MLP | a R <â trained { 10, 2) [2, 3) hy hy - < frozen ( pretrained model | | The puppies come home
Figure 4: Probing model architecture for an edge prob- ing task. The example is for semantic role labeling; for PoS, NER and constituents, only a single span is used.
In this part, we also experiment with ELMo and compare it with a version of the ELMo model in which all weights above the lexical layer (LAYER 0) are replaced with random orthonormal matri- ces (but the embedding layer, LAYER 0, is retained from trained ELMo). We conduct a line of experi- ments using a suite of edge probing tasks (Tenney et al., 2019). In these tasks, a probing model (Fig- ure 4) can access only representations within given spans, such as a predicate-argument pair, and must predict properties, such as semantic roles.
# 4.1 Experimental Setting
Tasks and datasets. We focus on several core NLP tasks: PoS tagging, syntactic constituent and dependency labeling, named entity recognition, se-
mantic role labeling, coreference resolution, and relation classiï¬cation. Examples for each task are shown in Table 4, dataset statistics are in Table 5. See extra details in Tenney et al. (2019).
(2019) and use ELMo (Peters et al., 2018) trained on the Billion Word Benchmark dataset (Chelba et al., 2014).
Probes and optimization. Probing architecture is illustrated in Figure 4. It takes a list of con- textual vectors [e0, e1, . . . , en] and integer spans s1 = [i1, j1) and (optionally) s2 = [i2, j2) as in- puts, and uses a projection layer followed by the self-attention pooling operator of Lee et al. (2017) to compute ï¬xed-length span representations. The span representations are concatenated and fed into a two-layer MLP followed by a softmax output
# Accuracy
# Description Length
variational code online code L0 L1 L2 L0 L1 L2 L0 L1 L2 483 209 / 273 252 / 273 1181 603 / 877 719 / 875 158 80 / 103 94 / 103 23.4 462 54.0 / 41.4 192 / 294 44.7 / 41.4 216 / 294 7.5 1149 14.7 / 10.1 570 / 1081 12.3 / 10.1 680 / 1074 7.1 14.0 / 10.8 11.9 / 10.8 175 74 / 106 82 / 106 L0 L1 L2 92.3 95.0 / 93.5 95.3 / 93.6 40 27 / 34 30 / 34 13.2 19.3 / 15.4 17.7 / 15.2 40 27 / 35 26 / 35 81.1 91.9 / 84.4 90.2 / 84.5 411 228 / 306 272 / 306 8.6 381 15.5 / 11.5 212 / 365 13.0 / 11.6 245 / 363 L0 L1 L2 89.9 92.9 / 90.7 92.2 / 90.4 57.4 3.54 50.3 / 54.5 4.04 / 3.72 56.8 / 54.3 3.57 / 3.74 60 51 / 65 55 / 65 3.4 4.0 / 3.1 3.7 / 3.1 L0 L1 L2 55.8 75.2 / 69.1 77.0 / 68.9 11.5 8.0 / 9.7 8.4 / 9.7 2.48 15.9 3.56 / 2.94 8.8 / 11.8 3.40 / 2.92 8.6 / 11.7 1.79 3.2 / 2.4 3.3 / 2.4
# codelength compression codelength compression
= vassal
Variational __Online $400 so 8 âO12 12 enette 1000 a 8 â012 12 ae 10 5100 ao _ âeeee âcee e ⬠32 8 eee (âee eee! 5 5 ano g Gee ee 5" 3° an - one 2 00? 2, 2, li ii | i | &
Variational __Online $400 so 8 âO12 12 enette
1000 a 8 â012 12 ae
10 5100 ao _ âeeee âcee e
⬠32 8 eee (âee eee!
5 5 ano g Gee ee
5" 3° an one 2 00?
- 2, 2, li ii | i | & hoot a2 Oaizenie Base Rand, Base Rend.
Table 6: Experimental results; shown in pairs: trained model / randomly initial- ized model. Codelength is measured in kbits (variational codelength is given in equation (3), online â in equation (4)), compression â with respect to the corre- sponding uniform code.
Table 7: Data and model code components for the tasks from Table 6.
layer. As in the original paper, we use the standard cross-entropy loss, hidden layer size of 256 and dropout of 0.3. For further details on training, we refer the reader to the original paper by Tenney et al. (2019).7
For the variational code, the layers are replaced with that of Bayesian compression by Louizos et al. (2017); loss function changes to (3) and no dropout
is used. Similar to the experiments in the previous section, we do not anneal learning rate and train at least 200 epochs to enable pruning.
We build our experiments on top of the origi- nal code by Tenney et al. (2019) and release our extended version.
# 4.2 Experimental Results
7The differences with the original implementation by Ten- ney et al. (2019) are: softmax with the cross-entropy loss instead of sigmoid with binary cross-entropy, using the loss instead of F1 in the early stopping criterion.
Results are shown in Table 6.
LAYER 0 vs contextual. As we have already seen in the previous section, codelength shows dras-
tic difference between the embedding layer (LAYER 0) and contextualized representations: codelengths differ about twice for most of the tasks. Both com- pression methods show that even for the randomly initialized model, contextualized representations are better than lexical representations. This is be- cause context-agnostic embeddings do not contain enough information about the task, i.e., MI be- tween labels and context-agnostic representations is smaller than between labels and contextualized representations. Since compression of the labels given model (i.e., data component of the code) is limited by the MI between the representations and the labels (Section 2.1), the data component of the codelength is much bigger for the embedding layer than for contextualized representations.
Trained vs random. As expected, codelengths for the randomly initialized model are larger than for the trained one. This is more prominent when not just looking at the bare scores, but compar- ing compression against context-agnostic repre- sentations. For all tasks, compression bounds for the randomly initialized model are closer to those of context-agnostic LAYER 0 than representations from the trained model. This shows that gain from using context for the randomly initialized model is at least twice smaller than for the trained model.
Note also that randomly initialized layers do not evolve: for all tasks, MDL for layers of the ran- domly initialized model is the same. Moreover, Table 7 shows that not only total codelength but data and model components of the code for random model layers are also identical. For the trained model, this is not the case: LAYER 2 is worse than LAYER 1 for all tasks. This is one more illustra- tion of the general process explained in Voita et al. (2019a): the way representations evolve between layers is deï¬ned by the training objective. For the randomly initialized model, since no training ob- jective has been optimized, no evolution happens.
# 5 Related work
Probing classiï¬ers are the most common approach for associating neural network representations with linguistic properties (see Belinkov and Glass (2019) for a survey). Among the works highlighting limi- tations of standard probes (not mentioned earlier) is the work by Saphra and Lopez (2019), who show that diagnostic classiï¬ers are not suitable for under- standing learning dynamics.
In addition to task performance, learning curves
have also been used before by Yogatama et al. (2019) to evaluate how quickly a model learns a new task, and by Talmor et al. (2019) to understand whether the performance of a LM on a task should be attributed to the pre-trained representations or to the process of ï¬ne-tuning on the task data.
Other methods for analyzing NLP models in- clude (i) inspecting the mechanisms a model uses to encode information, such as attention weights (Voita et al., 2018; Raganato and Tiede- mann, 2018; Voita et al., 2019b; Clark et al., 2019; Kovaleva et al., 2019) or individual neurons (Karpa- thy et al., 2015; Pham et al., 2016; Bau et al., 2019), (ii) looking at model predictions using manually deï¬ned templates, either evaluating sensitivity to speciï¬c grammatical errors (Linzen et al., 2016; Gulordava et al., 2018; Tran et al., 2018; Marvin and Linzen, 2018) or understanding what language models know when applying them as knowledge bases or in question answering settings (Radford et al., 2019; Petroni et al., 2019; Poerner et al., 2019; Jiang et al., 2019).
An information-theoretic view on analysis of NLP models has been previously attempted in Voita et al. (2019a) when explaining how representations in the Transformer evolve between layers under different training objectives.
# 6 Conclusions
We propose information-theoretic probing which measures minimum description length (MDL) of labels given representations. We show that MDL naturally characterizes not only probe quality, but also âthe amount of effortâ needed to achieve it (or, intuitively, strength of the regularity in representa- tions with respect to the labels); this is done in a theoretically justiï¬ed way without manual search for settings. We explain how to easily measure MDL on top of standard probe-training pipelines. We show that results of MDL probing are more informative and stable compared to the standard probes.
# Acknowledgments
IT acknowledges support of the European Research Council (ERC StG BroadSem 678254) and the Dutch National Science Foundation (NWO VIDI 639.022.518).
# References
Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2019. Iden- tifying and controlling important neurons in neural machine translation. In International Conference on Learning Representations, New Orleans.
Yonatan Belinkov and James Glass. 2019. Analysis methods in neural language processing: A survey. Transactions of the Association for Computational Linguistics, 7:49â72.
Léonard Blier and Yann Ollivier. 2018. The descrip- In Advances tion length of deep learning models. in Neural Information Processing Systems, pages 2216â2226.
Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2014. One billion word benchmark for mea- suring progress in statistical language modeling. In Fifteenth Annual Conference of the International Speech Communication Association.
Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT In Pro- look at? an analysis of BERTâs attention. ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276â286, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171â4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Peter Grunwald. 2004. A tutorial introduction to the minimum description length principle. arXiv preprint math/0406077.
Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195â1205. Associ- ation for Computational Linguistics.
John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2733â2743, Hong Kong, China. Association for Computational Lin- guistics.
GE Hinton and D von Cramp. 1993. Keeping neu- ral networks simple by minimising the description In Proceedings of COLT-93, length of weights. pages 5â13.
Antti Honkela and Harri Valpola. 2004. Variational an information- learning and bits-back coding: theoretic view to bayesian learning. In IEEE Trans- actions on Neural Networks, volume 15, pages 800â 810.
Zhengbao Jiang, Frank F Xu, Jun Araki, and Graham Neubig. 2019. How can we know what language models know? arXiv preprint arXiv:1911.12543.
Andrej Karpathy, Justin Johnson, and Li Fei-Fei. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078.
Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Repre- sentation (ICLR 2015).
Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365â4374, Hong Kong, China. Association for Computational Linguistics.
Kenton Lee, Luheng He, Mike Lewis, and Luke Zettle- moyer. 2017. End-to-end neural coreference reso- In Proceedings of the 2017 Conference on lution. Empirical Methods in Natural Language Processing, pages 188â197, Copenhagen, Denmark. Association for Computational Linguistics.
Tal Linzen, Emmanuel Dupoux, and Yoav Goldberg. 2016. Assessing the ability of lstms to learn syntax- sensitive dependencies. Transactions of the Associa- tion for Computational Linguistics, 4:521â535.
Christos Louizos, Karen Ullrich, and Max Welling. 2017. Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, pages 3288â3298.
Information theory, infer- ence and learning algorithms. Cambridge university press.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computa- tional Linguistics, 19(2):313â330.
Rebecca Marvin and Tal Linzen. 2018. Targeted syn- In Proceed- tactic evaluation of language models. ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192â1202, Brussels, Belgium. Association for Computational Linguistics.
Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. 2017. Variational dropout sparsiï¬es deep In Proceedings of the 34th Inter- neural networks. national Conference on Machine Learning.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- In Proceedings of the 2018 Confer- resentations. ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227â2237, New Orleans, Louisiana. Association for Computational Linguistics.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- In Proceedings of the 2019 Confer- edge bases? ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463â2473, Hong Kong, China. As- sociation for Computational Linguistics.
Ngoc-Quan Pham, German Kruszewski, and Gemma Boleda. 2016. Convolutional neural network lan- In Proceedings of the 2016 Con- guage models. ference on Empirical Methods in Natural Language Processing, pages 1153â1162, Austin, Texas. Asso- ciation for Computational Linguistics.
Nina Poerner, Ulli Waltinger, and Hinrich Schütze. 2019. Bert is not a knowledge base (yet): Fac- tual knowledge vs. name-based reasoning in unsu- pervised qa. arXiv preprint arXiv:1911.03681.
Peng Qi and Christopher D. Manning. 2017. Arc-swift: A novel transition system for dependency parsing. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 110â117, Vancouver, Canada. Association for Computational Linguistics.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.
Alessandro Raganato and Jörg Tiedemann. 2018. An analysis of encoder representations in transformer- In Proceedings of the based machine translation. 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 287â297, Brussels, Belgium. Association for Com- putational Linguistics.
Jorma Rissanen. 1984. Universal coding, information, IEEE Transactions on prediction, and estimation. Information theory, 30(4):629â636.
Naomi Saphra and Adam Lopez. 2019. Understand- ing learning dynamics of language models with In Proceedings of the 2019 Conference SVCCA. of the North American Chapter of the Association
for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3257â3267, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Alon Talmor, Yanai Elazar, Yoav Goldberg, and olmpics â on what lan- Jonathan Berant. 2019. guage model pre-training captures. arXiv preprint arXiv:1912.13283.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R Bowman, Dipan- jan Das, et al. 2019. What do you learn from con- text? probing for sentence structure in contextual- ized word representations. In International Confer- ence on Learning Representations.
Ke Tran, Arianna Bisazza, and Christof Monz. 2018. The importance of being recurrent for modeling hi- In Proceedings of the 2018 erarchical structure. Conference on Empirical Methods in Natural Lan- guage Processing, pages 4731â4736, Brussels, Bel- gium. Association for Computational Linguistics.
Elena Voita, Rico Sennrich, and Ivan Titov. 2019a. The bottom-up evolution of representations in the trans- former: A study with machine translation and lan- In Proceedings of the guage modeling objectives. 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4396â4406, Hong Kong, China. Association for Computational Linguistics.
Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine trans- In Proceedings lation learns anaphora resolution. of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264â1274, Melbourne, Australia. Associa- tion for Computational Linguistics.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sen- nrich, and Ivan Titov. 2019b. Analyzing multi-head self-attention: Specialized heads do the heavy lift- In Proceedings of the ing, the rest can be pruned. 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5797â5808, Florence, Italy. Association for Computational Linguistics.
John Wieting and Douwe Kiela. 2019. No train- ing required: Exploring random encoders for sen- tence classiï¬cation. In International Conference on Learning Representations.
Dani Yogatama, Cyprien de Masson dâAutume, Jerome Connor, Tomas Kocisky, Mike Chrzanowski, Ling- peng Kong, Angeliki Lazaridou, Wang Ling, Lei Yu, Chris Dyer, and Phil Blunsom. 2019. Learning and evaluating general linguistic intelligence. arXiv preprint arXiv:1901.11373.
Kelly Zhang and Samuel Bowman. 2018. Language modeling teaches you more than translation does:
Lessons learned through auxiliary syntactic task analysis. In Proceedings of the 2018 EMNLP Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 359â361, Brussels, Bel- gium. Association for Computational Linguistics.
# A Description Length and Control Tasks
# A.1 Settings
Results are given in Table 8.
Accuracy Description Length variational code online code codelength compr. codelength compr. MLP-2, h=1000 L 0 93.7 / 96.3 163 / 267 32 / 19 173 / 302 96 / 515 L 1 97.5 / 91.9 85 / 470 115 / 717 L 2 97.3 / 89.4 103 / 612 60 / 11 50 / 8 30 / 17 53 / 10 44 / 7 MLP-2, h=500 L 0 93.5 / 96.2 161 / 268 32 / 19 170 / 313 30 / 16 93 / 547 L 1 97.8 / 92.1 84 / 470 55 / 9 112 / 755 46 / 7 L 2 97.1 / 86.5 102 / 611 61 / 11 50 / 8 MLP-2, h=250 L 0 93.6 / 96.1 161 / 274 32 / 19 169 / 328 30 / 16 91 / 582 L 1 97.7 / 90.3 84 / 470 56 / 9 112 / 799 46 / 6 L 2 97.1 / 85.2 101 / 611 61 / 11 50 / 8 MLP-2, h=100 L 0 93.7 / 95.5 161 / 261 32 / 20 167 / 367 31 / 14 91 / 678 L 1 97.6 / 86.9 84 / 492 56 / 8 112 / 901 46 / 6 L 2 97.2 / 80.9 102 / 679 61 / 10 50 / 8 MLP-2, h=50 L 0 93.7 / 93.1 161 / 314 32 / 16 166 / 416 31 / 12 61 / 8 L 1 97.6 / 82.7 84 / 605 55 / 7 93 / 781 50 / 6 116 / 1007 44 / 5 L 2 97.0 / 76.2 102 / 833 MLP-1, h=1000 L 0 93.7 / 96.8 160 / 254 32 / 20 166 / 275 31 / 19 88 / 477 58 / 11 L 1 97.7 / 92.7 82 / 468 107 / 696 48 / 7 L 2 97.0 / 86.7 100 / 618 62 / 11 51 / 8 MLP-1, h=500 L 0 93.6 / 97.2 159 / 257 32 / 20 164 / 295 31 / 17 88 / 516 58 / 10 L 1 97.5 / 91.6 82 / 468 107 / 736 48 / 7 L 2 97.0 / 86.3 100 / 619 62 / 11 51 / 8 MLP-1, h=250 L 0 93.6 / 96.6 159 / 257 32 / 20 164 / 316 31 / 16 58 / 9 87 / 574 L 1 97.5 / 89.9 82 / 473 109 / 795 47 / 6 L 2 97.1 / 84.2 99 / 632 62 / 11 51 / 8 MLP-1, h=100 L 0 93.7 / 95.3 159 / 269 32 / 19 163 / 374 31 / 14 58 / 8 L 1 97.6 / 86.4 82 / 525 87 / 683 109 / 905 47 / 6 L 2 97.1 / 80.0 100 / 731 62 / 10 51 / 7 MLP-1, h=50 L 0 93.7 / 92.7 159 / 336 32 / 15 164 / 438 31 / 11 62 / 8 L 1 97.6 / 82.0 82 / 648 56 / 7 90 / 790 51 / 6 114 / 1016 45 / 5 L 2 97.2 / 75.0 100 / 875
Table 8: Experimental results; shown in pairs: linguis- tic task / control task. Codelength is measured in kbits (variational codelength is given in equation (3), online â in equation (4)). h is the probe hidden layer size.
# A.2 Random seeds: control task
Results are shown in Figure 5.
Accurac Codelength jee, 800 96 RRR e layer0O 94 REKKK 600 @~ layerl o2iee⢠=| |eeune | TEEE® « layer2 400 90 sal * oocee | 28eee Ed x * 200 Var. Online
Figure 5: Results for 5 random seeds, control task (de- fault setting: MLP-2, h = 1000).
# B Description Length and Random Models
Accuracy Final probe layer 0 base 91.31 728-31-154 layer 1 base random 97.7 96.76 878-42-172 876-50-228 layer 2 base random 97.32 96.76 872-50-211 929-47-229
Table 9: Pruned architecture of a trained variational probe, Part of Speech (starting probe: 1024-256-256).
Accuracy Final probe layer 0 base 75.61 976-47-242 layer 1 base random 86.01 81.35 1011-53-227 1001-57-235 layer 2 base random 84.36 81.42 985-61-238 971-57-234
Table 10: Pruned architecture of a trained variational probe, constituent labeling (starting probe: 1024-256- 256).
Accuracy Final probe layer 0 base 80.11 (423+356)-36-119 layer 1 base random 92.3 89.86 (682+565)-38-85 (635+548)-40-98 layer 2 base random 90.6 89.96 (581+422)-42-104 (646+538)-38-94
Table 11: Pruned architecture of a trained vari- ational probe, dependency labeling (starting probe: (1024+1024)-512-256).
Accuracy Final probe layer 0 base 91.7 450-16-36 layer 1 base random 94.95 93.36 509-16-35 551-18-36 layer 2 base random 94.93 93.57 527-17-41 536-18-34
Table 12: Pruned architecture of a trained variational probe, named entity recognition (starting probe: 1024- 256-256).
Accuracy Final probe layer 0 base 79.1 (567+754)-46-158 layer 1 base random 90.25 86.59 (709+937)-48-140 (678+857)-55-148 layer 2 base random 88.5 86.34 (601+863)-52-142 (744+889)-53-151
Table 13: Pruned architecture of a trained varia- tional probe, semantic role labeling (starting probe: (1024+1024)-512-256).
Accuracy Final probe layer 0 base 88.87 (358+352)-16-20 layer 1 base random 91.6 90.35 (497+492)-20-22 (363+357)-23-21 layer 2 base random 90.29 90.45 (519+505)-18-19 (375+377)-21-21
Table 14: Pruned architecture of a trained varia- tional probe, coreference resolution (starting probe: (1024+1024)-512-256).
Accuracy Final probe layer 0 base 48.77 (138+137)-10-14 layer 1 base random 71.07 60.73 (116+178)-16-17 (168+135)-15-15 layer 2 base random 71.59 60.69 (123+164)-14-18 (167+125)-13-15
Table 15: Pruned architecture of a trained varia- tional probe, relation classiï¬cation (starting probe: (1024+1024)-512-256). | {
"id": "1911.03681"
} |
2003.12206 | Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program) | One of the challenges in machine learning research is to ensure that
presented and published results are sound and reliable. Reproducibility, that
is obtaining similar results as presented in a paper or talk, using the same
code and data (when available), is a necessary step to verify the reliability
of research findings. Reproducibility is also an important step to promote open
and accessible research, thereby allowing the scientific community to quickly
integrate new findings and convert ideas to practice. Reproducibility also
promotes the use of robust experimental workflows, which potentially reduce
unintentional errors. In 2019, the Neural Information Processing Systems
(NeurIPS) conference, the premier international conference for research in
machine learning, introduced a reproducibility program, designed to improve the
standards across the community for how we conduct, communicate, and evaluate
machine learning research. The program contained three components: a code
submission policy, a community-wide reproducibility challenge, and the
inclusion of the Machine Learning Reproducibility checklist as part of the
paper submission process. In this paper, we describe each of these components,
how it was deployed, as well as what we were able to learn from this
initiative. | http://arxiv.org/pdf/2003.12206 | Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivière, Alina Beygelzimer, Florence d'Alché-Buc, Emily Fox, Hugo Larochelle | cs.LG, stat.ML | To appear at JMLR, 16 pages + Appendix | null | cs.LG | 20200327 | 20201230 | 0 2 0 2
c e D 0 3 ] G L . s c [
4 v 6 0 2 2 1 . 3 0 0 2 : v i X r a
# Improving Reproducibility in Machine Learning Research (A Report from the NeurIPS 2019 Reproducibility Program)
Joelle Pineau School of Computer Science, McGill University (Mila) Facebook AI Research CIFAR [email protected] Philippe Vincent-Lamarre Ecole de biblioth`economie et des sciences de lâinformation, Universit´e de Montr´eal [email protected] Koustuv Sinha School of Computer Science, McGill University (Mila) Facebook AI Research [email protected] Vincent Larivi`ere Ecole de biblioth´economie et des sciences de lâinformation, Universit´e de Montr´eal [email protected] Alina Beygelzimer Yahoo! Research [email protected] Florence dâAlch´e-Buc T´el´ecom Paris, Institut Polytechnique de France [email protected] Emily Fox University of Washington Apple [email protected] Hugo Larochelle Google CIFAR [email protected]
# Abstract
One of the challenges in machine learning research is to ensure that presented and published results are sound and reliable. Reproducibility, that is obtaining similar results as presented in a paper or talk, using the same code and data (when available), is a necessary step to verify the reliability of research ï¬ndings. Reproducibility is also an important step to promote open and accessible research, thereby allowing the scientiï¬c community to quickly integrate new ï¬ndings and convert ideas to practice. Reproducibility also promotes the use of robust experimental workï¬ows, which potentially reduce unintentional errors. In 2019, the Neural Information Processing Systems (NeurIPS) conference, the premier international conference for research in machine learning, introduced a reproducibility program, designed to improve the standards across the community for how we conduct, communicate, and evaluate machine learning research. The program contained three components: a code submission policy, a community-wide reproducibility challenge, and the inclusion of the Machine Learning Reproducibility checklist as part of the paper submission process. In this paper, we describe each of these components, how it was deployed, as well as what we were able to learn from this initiative.
Keywords: Reproducibility, NeurIPS 2019
©2020 Joelle Pineau, Philippe Vincent-Lamarre, Koustuv Sinha, Vincent Larivi`ere, Alina Beygelzimer, Florence dâAlch´e-Buc, Emily Fox, Hugo Larochelle. Corresponding author: Joelle Pineau ([email protected]).
# Pineau et al.
# 1. Introduction
At the very foundation of scientiï¬c inquiry is the process of specifying a hypothesis, running an experiment, analyzing the results, and drawing conclusions. Time and again, over the last several centuries, scientists have used this process to build our collective understanding of the natural world and the laws that govern it. However, for the ï¬ndings to be valid and reliable, it is important that the experimental process be repeatable, and yield consistent results and conclusions. This is of course well-known, and to a large extent, the very foundation of the scientiï¬c process. Yet a 2016 survey in the journal Nature revealed that more than 70% of researchers failed in their attempt to reproduce another researcherâs experiments, and over 50% failed to reproduce one of their own experiments (Baker, 2016). In the area of computer science, while many of the ï¬ndings from early years were derived from mathematics and theoretical analysis, in recent years, new knowledge is increasingly derived from practical experiments. Compared to other ï¬elds like biology, physics or so- ciology where experiments are made in the natural or social world, the reliability and reproducibility of experiments in computer science, where the experimental apparatus for the most part consists of a computer designed and built by humans, should be much easier to achieve. Yet in a surprisingly large number of instances, researchers have had diï¬culty reproducing the work of others (Henderson et al., 2018).
Focusing more narrowly on machine learning research, where most often the experiment consists of training a model to learn to make predictions from observed data, the reasons for this gap are numerous and include:
⢠Lack of access to the same training data / diï¬erences in data distribution;
⢠Misspeciï¬cation or under-speciï¬cation of the model or training procedure;
⢠Lack of availability of the code necessary to run the experiments, or errors in the code;
⢠Under-speciï¬cation of the metrics used to report results;
⢠Improper use of statistics to analyze results, such as claiming signiï¬cance without proper statistical testing or using the wrong statistic test;
⢠Selective reporting of results and ignoring the danger of adaptive overï¬tting;
⢠Over-claiming of the results, by drawing conclusions that go beyond the evidence insuï¬cient number of experiments, mismatch between hypothesis & presented (e.g. claim).
We spend signiï¬cant time and energy (both of machines and humans), trying to over- come this gap. This is made worse by the bias in the ï¬eld towards publishing positive results (rather than negative ones). Indeed, the evidence threshold for publishing a new positive ï¬nding is much lower than that for invalidating a previous ï¬nding. In the latter case, it may require several teams showing beyond the shadow of a doubt that a result is false for the research community to revise its opinion. Perhaps the most infamous instance of this is that of the false causal link between vaccines and autism. In short, we would argue that it is always more eï¬cient to properly conduct the experiment and analysis in the ï¬rst place.
2
Improving Reproducibility in Machine Learning Research
In 2019, the Neural Information Processing Systems (NeurIPS) conference, the pre- mier international conference for research in machine learning, introduced a reproducibility program, designed to improve the standards across the community for how we conduct, communicate, and evaluate machine learning research. The program contained three com- ponents: a code submission policy, a community-wide reproducibility challenge, and the inclusion of the Machine Learning Reproducibility checklist as part of the paper submission process.
In this paper, we describe each of these components, how it was deployed, as well as what we were able to learn from this exercise. The goal is to better understand how such an approach is implemented, how it is perceived by the community (including authors and reviewers), and how it impacts the quality of the scientiï¬c work and the reliability of the ï¬ndings presented in the conferenceâs technical program. We hope that this work will inform and inspire renewed commitment towards better scientiï¬c methodology, not only in the machine learning research community, but in several other research ï¬elds.
# 2. Background
There is a growing interest in improving reproducibility across scientiï¬c disciplines. A full review of such work is beyond the scope of this paper, but the common motivation is to ensure transparency of scientiï¬c ï¬ndings, a faster and more reliable discovery process, and high conï¬dence in our scientiï¬c knowledge. In support of this direction, several biomedical journals agreed in 2014 on a set of principles and guidelines for reporting preclinical research in a way that ensured greater reproducibility (McNutt, 2014).
There are challenges regarding reproducibility that appear to be unique (or at least more pronounced) in the ï¬eld of ML compared to other disciplines. The ï¬rst is an insuï¬cient exploration of the variables (experimental conditions, hyperparameters) that might aï¬ect the conclusions of a study. In machine learning, a common goal for a model is to beat the top benchmarks scores. However, it is hard to assert if the aspect of a model claimed to have improved its performance is indeed the factor leading to the higher score. This limitation has been highlighted in a few studies reporting that new proposed methods are often not better than previous implementations when a more thorough search of hyper-parameters is performed (Lucic et al., 2018; Melis et al., 2017), or even when using diï¬erent random parameter initializations (Bouthillier et al., 2019; Henderson et al., 2018).
The second challenge refers to the proper documentation and reporting of the informa- tion necessary to reproduce the reported results (Gundersen and Kjensmo, 2018). A recent report indicated that 63.5% of the results in 255 manuscripts were successfully replicated (Raï¬, 2019). Strikingly, this study found that when the original authors provided assistance to the reproducers, 85% of results were successfully reproduced, compared to 4% when the authors didnât respond. Although a selection bias could be at play (authors who knew their results would reproduce might have been more likely to provide assistance for the reproduction), this contrasts with large-scale replication studies in other disciplines that failed to observe similar improvement when the original authors of the study were involved (Klein et al., 2019). It therefore remains to be established if the ï¬eld is having a reproduc- tion problem similar to the other ï¬elds, or if it would be better described as a reporting problem.
3
# Pineau et al.
Reproducible Replicable fa) yn = o c <x Robust Generalisable
Figure 1: Reproducible Research. Adapted from: https://github.com/WhitakerLab/ReproducibleResearch
Thirdly, as opposed to most scientiï¬c disciplines where uncertainty of the observed eï¬ects are routinely quantiï¬ed, it appears like statistical analysis is seldom conducted in ML research (Forde and Paganini, 2019; Henderson et al., 2018).
# 2.1 Deï¬ning Reproducibility
Before going any further, it is worth deï¬ning a few terms that have been used (sometimes interchangeably) to describe reproducibility & related concepts. We adopt the terminology from Figure 1, where Reproducible work consists of re-doing an experiment using the same data and same analytical tools, whereas Replicable work considers diï¬erent data (presum- ably sampled from similar distribution or method), Robust work assumes the same data but diï¬erent analysis (such as reimplementation of the code, perhaps diï¬erent computer archi- tecture), and Generalisable work leads to the same conclusions despite considering diï¬erent data and diï¬erent analytical tools. For the purposes of our work, we focus primarily on the notion of Reproducibility as deï¬ned here, and assume that any modiï¬cation in analytical tools (e.g. re-running experiments on a diï¬erent computer) was small enough as to be neg- ligible. A recent report by the National Academies of Sciences, Engineering, and Medicine, provides more in-depth discussion of these concepts, as well as several recommendations for improving reproducibility broadly across scientiï¬c ï¬elds (National Academies of Sciences, Engineering, and Medicine, 2019).
# 2.2 The Open Science movement
âOpen Science is transparent and accessible knowledge that is shared and developed through collaborative networksâ (Vicente-S´aez and Mart´ınez-Fuentes, 2018). In other words, Open science is a movement to conduct science in a more transparent way. This includes making code, data, scientiï¬c communications and any other research artifact publicly available and easily accessible over the long-term, thereby increasing the transparency of the research process and improving the reporting quality in scientiï¬c manuscripts (Sonnenburg et al., 2007). The implementation of Open science practices has been identiï¬ed as a core factor that could improve the reproducibility of science (Munaf`o et al., 2017; Gil et al., 2016). As such, the NeurIPS reproducibility program was designed to incorporate elements designed
4
Improving Reproducibility in Machine Learning Research
to encourage researchers to share the artefacts of their research (code, data), in addition to their manuscripts.
# 2.3 Code submission policies
It has become increasingly common in recent years to require the sharing of data and It is code, along with a paper, when computer experiments were used in the analysis. now standard expectation in the Nature research journals for authors to provide access to code and data to readers (Nature Research, 2021). Similarly, the policy at the journal Science speciï¬es that authors are expected to satisfy all reasonable requests for data, code or materials (Science - AAAS, 2018). Within machine learning and AI conferences, the ability to include supplementary material has now been standard for several years, and many authors have used this to provide the data and/or code used to produce the paper. More recently, ICML 2019, the second largest international conference in machine learning has also rolled-out an explicit code submission policy (ICML, 2019). During initial submission for double-blind reviewing, this can be uploaded as supplementary material. For the ï¬nal submission, it is best practice for data and code to be uploaded to a repository and given a DOI that can be cited.
# 2.4 Reproducibility challenges
Reproducibility tracks have been run prior in database systems conferences such as SIG- MOD, as early as 2008 (Manolescu et al., 2008; Bonnet et al., 2011), using the term âre- peatabilityâ in lieu of our notion of reproducibility. A notable recommendation was to focus on accepted papers (to spend eï¬ort on the work likely to have more impact). Another useful recommendation was to provide wiki page associated with each paper so the community could comment on it, post code, etc. The organizers also found that the vast majority of authorâs whose work was included in a repeatability study found the process helpful.
The 2018 ICLR reproducibility challenge ï¬rst introduced a dedicated platform to inves- tigate reproducibility of papers for the Machine Learning community. The goal of this ï¬rst iteration was to investigate reproducibility of empirical results submitted to the 2018 Inter- national Conference on Learning Representations (ICLR, 2018). The organizers chose ICLR for this challenge because the timing was right for course-based participants: most partic- ipants were drawn from graduate machine learning courses, where the challenge served as the ï¬nal course project. The choice of ICLR was motivated by the fact that papers submitted to the conference were automatically made available publicly on OpenReview, including during the review period. This means anyone in the world could access the paper prior to selection, and could interact with the authors via the message board on Open- Review. This ï¬rst challenge was followed a year later by the 2019 ICLR Reproducibility Challenge (Pineau et al., 2019), and followed by 2019 NeurIPS Reproducibility Challenge (Sinha et al., 2020) in this edition. Use of the OpenReview platform allowed a wiki-like interface to provide transparency into the replicability work and a conversation between authors and participants.
Several less formal activities, including hackathons, course projects, online blogs, open- source code packages, have participated in the eï¬ort to carry out re-implementation and
5
# Pineau et al.
replication of previous work and should be considered in the same spirit as the eï¬ort de- scribed here.
# 2.5 Checklists
The Checklist Manifesto presents a highly compelling case for the use of checklists in safety- critical systems (Gawande, 2010). It documents how pre-ï¬ight checklists were introduced at Boeing Corporation as early as 1935 following the unfortunate crash of an airplane prototype. Checklists are similarly used in surgery rooms across the world to prevent oversights. Similarly, the WHO Surgical Safety Checklist, which is employed in surgery rooms across the world, has been shown to signiï¬cantly reduce morbidity and mortality (Clay-Williams and Colligan, 2015).
In the case of scientiï¬c manuscripts, reporting checklists are meant to provide the mini- mal information that must be included in a manuscript, and are not necessarily exhaustive. The use of checklists in scientiï¬c research has been explored in a few instances. Reporting guidelines in the form of checklists have been introduced for a wide range of study design in health research (The EQUATOR Network, 2021), and the Transparency and Openness Pro- motion (TOP) guidelines have been adopted by multiple journals across disciplines (Nosek et al., 2015). There are now more than 400 checklists registered in the EQUATOR Network. CONSORT, one of the most popular guidelines used for randomized controlled trials was found to be eï¬ective and to improve the completeness of reporting for 22 checklist items (Turner et al., 2012).
The ML checklist described below was signiï¬cantly inï¬uenced by Natureâs Reporting Checklist for Life Sciences Articles (Checklist, 2021). Other guidelines are under develop- ment outside of the ML community, namely for the application of AI tools in clinical trials (Liu et al., 2019) and health-care (Collins and Moons, 2019).
Concurrently with this work, a checklist was developed for AI publications that contain several of the same elements as we outline below, in terms of documenting data, code, methods and experiments (Gundersen et al., 2018). The checklist used at NeurIPS which we describe below is more oriented speciï¬cally towards machine learning experiments, with items related to test/validation/train splits, hyper-parameter ranges.
# 2.6 Other considerations
Beyond reproducibility, there are several other factors that aï¬ect how scientiï¬c research is conducted, communicated and evaluated. One of the best practices used in many venues, including NeurIPS, is that of double-blind reviewing. It is worth remembering that in 2014, the then program chairs Neil Lawrence and Corinna Cortes ran an interesting experiment, by assigning 10% of submitted papers to be reviewed independently by two groups of review- ers (each lead by a diï¬erent area chair). The results were surprising: overall the reviewers disagreed on 25.9% of papers, but when tasked with reaching a 22.5% acceptance rate, they disagreed on 57% of the list of accepted papers. We raise this point for two reasons. First, to emphasize that the NeurIPS community has for many years already demonstrated an openness towards trying new approaches, as well as looking introspectively on the eï¬ec- tiveness of its processes. Second, to emphasize that there are several steps that come into play when a paper is written, and selected for publication at a high-proï¬le international
6
Improving Reproducibility in Machine Learning Research
venue, and that a reproducibility program is only one aspect to consider when designing community standards to improve the quality of scientiï¬c practices.
# 3. The NeurIPS 2019 code submission policy
The NeurIPS 2019 code submission policy, as deï¬ned for all authors (see Appendix, Figure 7), was drafted by the program chairs and oï¬cially approved by the NeurIPS board in winter 2019 (before the May 2019 paper submission deadline.)
The most frequent objections we heard to having a code submission policy (at all) include:
⢠Dataset conï¬dentiality: There are cases where the dataset cannot be released for legitimate privacy reasons. This arises often when looking at applications of ML, for example in healthcare or ï¬nance. One strategy to mitigate this limitation is to provide complementary empirical results on an open-source benchmark dataset, in addition to the results on the conï¬dential data.
⢠Proprietary software: The software used to derive the result contains intellectual property, or is built on top of proprietary libraries. This is of particular concern to some researchers working in industry. Nonetheless, as shown in Figure 2(a), we see that many authors from industry were indeed able to submit code, and furthermore despite the policy, the acceptance rate for papers from authors in industry remained high (higher than authors from academia (Figure 2(b))). By the camera-ready dead- line, most submissions from the industry reported having submitted code (Figure 2(a),2(b)).
⢠Computation infrastructure: Even if data and code are provided, the experiments may require so much computation (time & number of machines) that it is impractical for any reviewer, or in fact most researchers, to attempt reproducing the work. This is the case for work on training very large neural models, for example the AlphaGo game playing agent (Silver et al., 2016) or the BERT language model (Devlin et al., 2018). Nonetheless it is worth noting that both these systems have been reproduced within months (if not weeks) of their release (tia, 2019).
⢠Replication of mistakes: Having a copy of the code used to produce the exper- imental results is not a guarantee that this code is correct, and there is signiï¬cant value in reimplementing an algorithm directly from its description in a paper. This speaks more to the notion of Robustness deï¬ned above. It is indeed common that there are mistakes in code (as there may be in proofs for more theoretical papers). Nonetheless, the availability of the code (or proof) can be tremendously helpful to verify or re-implement the method. It is indeed much easier to verify a result (with the initial code or proof), then it is to produce from nothing (this is perhaps most poignantly illustrated by the longevity of the lack of proof for Fermatâs last theorem (Wikipedia, 2020).)
It is worth noting that the NeurIPS 2019 code submission policy leaves signiï¬cant time & ï¬exibility, in particular it says that it: âexpects code only for accepted papers, and
7
# Pineau et al.
(a)
First author affiliation 0.6- Tee 1880 1951 704 164 317 65 569 713 205 Industry ° ES Initial submission Proportion % Academia âLal 74 77 173° 97 Academia indy Camera ready Last author affiliation 06- § 04- 3 Qa © 0.2- a Code 1625 1676 559 202 387 78 786 918 337 provided Academia Industry fl Yes I No ; . {] Not applicable c 3 5 04- Q o a 0.2- â32 72 129 122 47 21 288 40 73 Academia Industry
Academia Authorship Iii First Industry Affiliation
(b)
Figure 2: Eï¬ect of code submission policy. 2(a) Link to code provided at initial submission and camera-ready, as a function of aï¬liation of the ï¬rst and last authors. We observe for industry aï¬liated authors code is not provided in the initial submission, but later provided after camera ready. Overall, we observe authors from the academia are more prone to release the code of their papers. 2(b) Acceptance rate of submissions as a function of aï¬liation of the ï¬rst and last authors. The red dashed line shows the acceptance rate for all submissions. We observe industry aï¬liated authors have higher chance of acceptance.
8
# Improving Reproducibility in Machine Learning Research
2 ] = a |] 2 i 2 -> wo sai Ba 3 2) E & £ 2 iJ Ss 3 a g â ⬠& a = yes| 8 O 83 yes Code provided
60 @ ° O B40 ⬠a S 7) se 20 ° 746 1868 - No Yes Confirmed code
(a) (b)
Figure 3: 3(a) Diagram representing the transition of the code availability from initial submission to camera-ready only for submissions with an author from the industry (ï¬rst or last). 3(b) Percentage of submissions reporting that they provided code on the checklist subsequently conï¬rmed by the reviewers.
only by the camera-ready deadlineâ. So code submission is not mandatory, and the code is not expected to be used during the review process to decide on the soundness of the work. Reviewers were asked as a part of their assessment to report if code was provided along the manuscript at the initial submission stage. About 40% of authors reported that they had provided code at this stage which was conï¬rmed by the reviewers (if at least one reviewer indicated that the code was provided for each submission) for 71.5% of those submissions (Figure 3(b)). Note that authors are still able to provide code (or a link to code) as part of their initial submission. In Table 1, we provide a summary of code submission frequency for ICML 2019, as well as NeurIPS 2018 and 2019. We observe a growing trend towards more papers adding a link to code, even with only soft encouragement and no coercive measures. While the value of having code extends long beyond the review period, it is useful, in those cases where code is available during the review process, to know how it is used and perceived by the reviewers. When surveying reviewers at the end of the review period, we found:
Q. Was code provided (e.g. in the supplementary material)? Yes: 5298
If provided, did you look at the code? Yes: 2255
If provided, was the code useful in guiding your review? Yes: 1315
If not provided, did you wish code had been available? Yes: 3881
We were positively surprised by the number of reviewers willing to engage with this type of artefact during the review process. Furthermore, we found that the availability of code at submission (as indicated on the checklist) was positively associated with the reviewer score (p < 1e â 08).
9
# Pineau et al.
Conference NeurIPS 2018 ICML 2019 NeurIPS 2019 # papers submitted 4856 3424 6743 % papers accepted 20.8% 22.6% 21.1% % papers w/code at submission 36% 40% % papers w/code at camera-ready <50% 67% 74.4% Code submission policy âAuthors may submit up to 100MB of supplementary ma- terial, such as proofs, deriva- tions, data, or source code.â âTo foster reproducibility, we highly encourage authors to submit code. Reproducibility of results and easy availability of code will be taken into ac- count in the decision-making process.â âWe expect (but not require) accompanying code to be sub- mitted with accepted papers that contribute and present experiments with a new algo- rithm.â See Appendix, Fig. 7
Table 1: Code submission frequency for recent ML conferences. Source for number of papers accepted and acceptance rates: https://github.com/lixin4ever/Conference-Acceptance-Rate. ICML 2019 numbers reproduced from the ICML 2019 Code-at-Submit-Time Experiment.
# 4. The NeurIPS 2019 Reproducibility Challenge
The main goal of this challenge is to provide independent veriï¬cation of the empirical claims in accepted NeurIPS papers, and to leave a public trace of the ï¬ndings from this secondary analysis (Sinha et al., 2020). The reproducibility challenge oï¬cially started on Oct.31 2019, right after the ï¬nal paper submission deadline, so that participants could have the beneï¬t of any code submission by authors. By this time, the authorsâ identity was also known, allowing collaborative interaction between participants and authors. We used OpenReview (OpenReview.net, 2021) to enable communication between authors and challenge participants.
As shown in Table 2, a total of 173 papers were claimed for reproduction. This is a 92% increase since the last reproducibility challenge at ICLR 2019 (Pineau et al., 2019). We had participants from 73 diï¬erent institutions distributed around the world (see Appendix, Figure 8), including 63 universities and 10 industrial labs. Institutions with the most par- ticipants came from 3 continents and include McGill University (Canada), KTH (Sweden), Brown University (US) and IIT Roorkee (India). In those cases (and several others), high participation rate occurred when a professor at the university used this challenge as a ï¬nal course project.
All reports submitted to the challenge are available on OpenReview 1 for the community; in many cases with a link to the reimplementation code. The goal of making these available is to two-fold: ï¬rst to give examples of reproducibility reports so that the practice becomes more widespread in the community, and second so that other researchers can beneï¬t from the knowledge, and avoid the pitfalls that invariably come with reproducing another teamâs work. Most reports produced during the challenge oï¬er a much more detailed & nuanced account of their eï¬orts, and the level of ï¬delity to which they could reproduce the methods,
1. https://openreview.net/group?id=NeurIPS.cc/2019/Reproducibility Challenge
10
# Improving Reproducibility in Machine Learning Research
Conference ICLR 2018 ICLR 2019 NeurIPS 2019 # papers submitted 981 1591 6743 Acceptance rate 32.0 31.4 21.1 # papers claimed 123 90 173 # ticipating institutions 31 35 73 par- # reports reviewed n/a 26 84
Table 2: Participation in the Reproducibility Challenge. Source for number of papers accepted and acceptance rates: https://github.com/lixin4ever/Conference-Acceptance-Rate
results & claims of each paper. Similarly, while some readers may be looking for a ârepro- ducibility scoreâ, we have not found that the ï¬ndings of most reproducibility studies lend themselves to such a coarse summary.
Once submitted, all reproducibility reports underwent a review cycle (by reviewers of the NeurIPS conference), to select a small number of high-quality reports, which are published in the Sixth edition (Issue 2) 2 of the journal ReScience (ReScience C, 2020; Sinha et al., 2020). This provides a lasting archival record for this new type of research artefact.
# 5. The NeurIPS 2019 ML reproducibility checklist
The third component of the reproducibility program involved use of the Machine Learning reproducibility checklist (see Appendix, Figure 9). This checklist was ï¬rst proposed in late 2018, at the NeurIPS conference, in response to ï¬ndings of recurrent gaps in experimental methodology found in recent machine learning papers. An earlier version (v.1.1) was ï¬rst deployed as a trial with submission of the ï¬nal camera-ready version for NeurIPS 2018 papers (due in January 2019); this initial test allowed collection of feedback from authors and some minor modiï¬cations to the content of the checklist (mostly edited for clarity and reduced some redundant questions). The edited version 1.2 was then deployed during the NeurIPS 2019 review process, and authors were obliged to ï¬ll it both at the initial paper submission phase (May 2019), and at the ï¬nal camera-ready phase (October 2019). This allowed us to analyze any change in answers, which presumably resulted from the review feedback (or authorsâ own improvements of the work). The checklist was implemented on the CMT platform; each question included a multiple choice âYes, No, not applicableâ, and an (optional) open comment ï¬eld.
Figure 4 shows the initial answers provided for each submitted paper. It is reassuring to see that 97% of submissions are said to contain Q#. A clear description of the mathematical setting, algorithm, and/or model. Since we expect all papers to contain this, the 3% no/na answers might reï¬ect margin of error in how authors interpreted the questions. Next, we notice that 89% of submissions answered to the aï¬rmative when asked Q#. For all ï¬gures and tables that present empirical results, indicate if you include: A description of how experiments were run. This is reasonably consistent with the fact that 9% of NeurIPS 2019 submissions indicated âTheoryâ as their primary subject area, and thus may not contain empirical results.
One set of responses that raises interesting questions is the following trio:
2. https://rescience.github.io/read/#issue-2-neurips-2019-reproducibility-challenge
11
# Pineau et al.
Initial submission checklist answers Camera ready checklist answers 17: FT computing 16: FT central tendency 15: FT error bars 14: FT statistics 4 13: FT description 4 12: FT number runs Answer yes No Not applicable 11: FT hyper-parameters 10: FT sample allocation 9: FT pre-processing 8: FT link data 7: FT data collection 6: T proof 5: T assumptions 4: T statement ] 3: MA ink code 2: MA complexity 1: MA description ° 2000 4000 6000 500 1000 Frequency Frequency °
Figure 4: Author responses to all checklist questions for NeurIPS 2019 submitted papers.
Q#. A clear deï¬nition of the speciï¬c measure or statistics used to report results.
Q#. Clearly deï¬ned error bars.
Q#. A description of results with central tendency (e.g. mean) & variation (e.g. stddev).
In particular, it seems surprising to have 87% of papers that see value in clearly deï¬ning the metrics and statistics used, yet 36% of papers judge that error bars are not applicable to their results.
As shown in Figure 5, many checklist answers appear to be associated with a higher acceptance rate when the answer is âyesâ. However, it is too early to rule out potential covariates (e.g. paperâs topic, reviewer expectations, etc.) At this stage, it is encouraging that answering ânoâ to any of the questions is not associated with a higher acceptance rate. There seems to be a higher acceptance rate associated with âNAâ responses on a subset of questions related to âFigures and tablesâ. Although it is still unclear at this stage why this eï¬ect is observed, it disappears when we only include manuscripts for which the reviewers indicated that the checklist was useful for the review.
Finally, it is worth considering the reviewersâ point of view on the usefulness of the ML checklist to assess the soundness of the papers. When asked âWere the Reproducibility Checklist answers useful for evaluating the submission?â, 34% responded Yes.
We also note, as shown in Figure 6, that reviewers who found the checklist useful gave higher scores. And that those who found the checklist useful or not useful were more conï¬dent in their assessment than those who had not read the checklist. Finally, papers where the checklist was assessed as useful were more likely to be accepted.
12
# Improving Reproducibility in Machine Learning Research
0.3- @ 0.2- & Answer is} Byes < [No âNot applicable 0.1- 0.0- i 2 3 4 5 6 7 8 9 fo 11 12 13 14 15 16 417 Question #
Figure 5: Acceptance rate per question. The x-axis corresponds to the question number on the checklist. The numbers within each bar show the number of submissions for each answer. See Fig. 4 (and in Appendix Fig. 9) for text corresponding to each Question # (x-axis). The red dashed line shows the acceptance rate for all submissions.
je) oO 55- y 400° 0.25- 5 fe} 0.20- Useful for 3 50- @ 375 2 the evaluation? 5 8 sso eo Yes = aol ; J No. $s 3 8 0.10 Haven't read them a5 = 3.25- < Fe 8 : 0.05 - 4.0- o882 3.00- sie 0.00- $382
Figure 6: Perceived usefulness of the ML reproducibility checklist vs the review outcomes. (a) Eï¬ect on the paper score (scale 1-10). (b) Eï¬ect on the reviewer conï¬dence score (scale of 1 to 5, where 1 is lowest). (c) Eï¬ect on the ï¬nal accept/reject decision.
13
# Pineau et al.
# 6. Discussion
We presented a summary of the activities & ï¬ndings from the NeurIPS 2019 reproducibility program. Perhaps the best way to think of this eï¬ort is as a case study showing how three diï¬erent mechanisms (code submission, reproducibility challenge, reproducibility checklist) can be incorporated into a conference program in an attempt to improve the quality of sci- entiï¬c contributions. At this stage, we do not have concluding evidence that these processes indeed have an impact on the quality of the work or of the papers that are submitted and published.
However we note several encouraging indicators:
⢠The number of submissions to NeurIPS increased by nearly 40% this year, therefore we can assume the changes introduced did not result in a signiï¬cant drop of interest by authors to submit their work to NeurIPS.
⢠The number of authors willingly submitting code is quickly increasing, from less than 50% a year ago, to nearly 75%. It seems a code submission policy based on voluntary participation is suï¬cient at this time. We are not necessarily aiming for 100% com- pliance, as there are some cases where this may not be desirable (e.g. pre-processing script for conï¬dential data).
⢠The number of reviewers indicating that they consulted the code, or wished to consult it is in the 1000âs, indicating that this is useful in the review process.
⢠The number of participants in the reproducibility challenge continues to increase, as does the number of reproducibility reports, and reviewers of reproducibility reports. This suggests that an increasing segment of the community is willing to participate voluntarily in secondary analysis of research results.
⢠One-third of reviewers found the checklist answers useful, furthermore reviewers who found the checklist useful gave higher scores to the paper, which suggests the check- listâs use is useful for both reviewers and authors.
The work leaves several questions open, which would require further investigation, and a careful study design to elucidate:
⢠What is the long-term value (e.g. reproducibility, robustness, generalization, impact of follow-up work) of the code submitted?
⢠What is the eï¬ect of diï¬erent incentive mechanisms (e.g. cash payment, conference registration, a point/badge system) on the participation rate & quality of work in the reproducibility challenge?
⢠What is the beneï¬t of using the checklist for authors?
⢠What is the accuracy of the ML checklist answers (for each question) when ï¬lled by authors?
⢠What is the measurable eï¬ect of the checklist on the quality of the ï¬nal paper, e.g. in terms of soundness of results, clarity of writing?
14
Improving Reproducibility in Machine Learning Research
⢠What is the measurable eï¬ect of the checklist on the review process, in terms of reliability (e.g. inter-rater agreement) and eï¬ciency (e.g. need for response/rebuttal, discussion time)?
A related direction to explore is the development of tools and platforms that enhance re- producibility. Throughout this work we have focused on processes & guidelines, but stayed away from prescribing any infrastructure or software tooling to support reproducibility. Many software tools, such as Docker3, ReproZip4, WholeTale5 can encapsulate operating systems components, code, experimental variables and data ï¬les into a single package. Standardization of such tools would help sharing of information and improve ease of re- producibility. A comparison of diï¬erent tools is provided in (Isdahl and Gundersen, 2019). We hope to see in the future some level of convergence on standadized tools for supporting reproducibility.
A few other CS conferences have developed reproducibility programs, and explored mechanisms beyond what we introduced at NeurIPS 2019. For example, the ACM Multi- media conference, under the guidance of a Reproducibility Committee, has outlined speciï¬c reproducibility objectives, and included in 2021 a speciï¬c call for reproducibility papers6. The ACM and its aï¬liated events has also introduced Badges7 that can been attributed to papers to indicate when they meet pre-deï¬ned standards of Artifact availability, Re- producibility and Replication. Beyond the CS community, other concrete suggestions have been provided to increase the trustworthiness in scientiï¬c ï¬ndings (Hall Jamieson et al., 2019).
One additional aspect worth emphasizing is the fact that achieving reproducible results across a research community, whether NeurIPS or another, requires a signiï¬cant cultural and organizational changes, not just a code submission policy or a checklist. The initiative described here is just one step in helping the community adopt better practices, in terms of conducting, communicating, and evaluating scientiï¬c research. The NeurIPS community is far from alone in looking at this problem. Several workshops have been held in recent years to discuss the issue as it pertains to machine learning and computer science (SIGCOMM, 2017; ICML, 2017, 2018; ICLR, 2019). Speciï¬c calls for reproducibility papers have been issued (ECIR, 2020). An open-access peer-reviewed journal is dedicated to such papers (ReScience C, 2020), which was used to publish select reports in ICLR 2019 Reproducibility Challenge (Pineau et al., 2019) and NeurIPS 2019 Reproducibility Challenge (Sinha et al., 2020). And in the process, many labs are changing their practices to improve reproducibility of their own results.
While this report focuses on the reproducibility program deployed for NeurIPS 2019, we expect many of the ï¬ndings and recommendations to be more broadly applicable to other conferences and journals that represent machine learning research. Several other venues have already started deï¬ning code submission policies, though compliance varies. The ML checklist was crafted by consulting several other checklists, and could be adapted to other venues, as was done already for ICML 2020 and EMNLP 2020. The 2020 version of the ML
3. https://www.docker.com/ 4. https://www.reprozip.org/ 5. https://wholetale.org/ 6. https://project.inria.fr/acmmmreproducibility/ 7. https://www.acm.org/publications/policies/artifact-review-and-badging-current
15
# Pineau et al.
reproducibility challenge was also extended to cover papers accepted at 7 leading conferences (NeurIPS, ICML, ICLR, CVPR, EMNLP, CVPR and ECCV). We do not foresee any major obstacles extending this to journal venues, where the longer review time and repeated interactions with the reviewers provide even more opportunity to meet high standards of reproducibility.
# Acknowledgments
We thank the NeurIPS board and the NeurIPS 2019 general chair (Hanna Wallach) their unfailing support of this initiative. Without their courage and spirit of experimentation, none of this work would have been possible. We thank the many authors who submitted their work to NeurIPS 2019 and agreed to participate in this large experiment. We thank the program committee (reviewers, area chairs) of NeurIPS 2019 who not only incorporated the reproducibility checklist into their task ï¬ow, but also provided feedback about its usefulness. We thank Zhenyu (Sherry) Xue for preparing the data on NeurIPS papers & reviews for the analysis presented here. We thank the OpenReview team (in particular Andrew McCallum, Pam Mandler, Melisa Bok, Michael Spector and Mohit Uniyal) who provided support to host the results of the reproducibility challenge. We thank CodeOcean (in particular Xu Fei) for providing free compute resources to reproducibility challenge participants. Thank you to Robert Stojnic and Yolanda Gil for valuable comments on an early version of the manuscript. Finally, we thank the several participants of the reproducibility challenge who dedicated time and eï¬ort to verify results that were not their own, to help strengthen our understanding of machine learning, and the types of problems we can solve today.
# References
Elf opengo: an analysis and open reimplementation of alphazero. In Proceedings of the 36th International Conference on Machine Learning, 2019.
1,500 scientists lift the lid on reproducibility. Nature News, 533 (7604):452, May 2016. doi: 10.1038/533452a. URL https://www.nature.com/news/ 1-500-scientists-lift-the-lid-on-reproducibility-1.19970.
Philippe Bonnet, Stefan Manegold, Matias Bjørling, Wei Cao, Javier Gonzalez, Joel Grana- dos, Nancy Hall, Stratos Idreos, Milena Ivanova, Ryan Johnson, David Koop, Tim Kraska, Ren´e M¨uller, Dan Olteanu, Paolo Papotti, Christine Reilly, Dimitris Tsirogiannis, Cong Yu, Juliana Freire, and Dennis Shasha. Repeatability and workability evaluation of sigmod 2011. SIGMOD Rec., 40(2):45â48, September 2011. ISSN 0163-5808. doi: 10.1145/2034863.2034873. URL https://doi.org/10.1145/2034863.2034873.
Xavier Bouthillier, C´esar Laurent, and Pascal Vincent. Unreproducible research is repro- ducible. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Ma- chine Learning Research, pages 725â734, Long Beach, California, USA, 09â15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/bouthillier19a.html.
16
Improving Reproducibility in Machine Learning Research
Nature Checklist. Nature checklist. 2021. URL https://media.nature.com/original/ nature-assets/ng/journal/v49/n10/extref/ng.3933-S2.pdf.
Robyn Clay-Williams and Lacey Colligan. Back to basics: Checklists in aviation and health- care. BMJ Quality & Safety, 24(7):428â431, July 2015. ISSN 2044-5415, 2044-5423. doi: 10.1136/bmjqs-2015-003957.
Gary S. Collins and Karel G. M. Moons. Reporting of artiï¬cial intelligence prediction models. The Lancet, 393(10181):1577â1579, April 2019. ISSN 0140-6736, 1474-547X. doi: 10.1016/S0140-6736(19)30037-6.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs], October 2018.
ECIR. Call for Reproducibility papers. April 2020. URL ecir2020.org.
Jessica Zosa Forde and Michela Paganini. The Scientiï¬c Method in the Science of Machine Learning. arXiv:1904.10922 [cs, stat], April 2019. URL http://arxiv.org/abs/1904. 10922.
Atul Gawande. Checklist manifesto, the (HB). Penguin Books India, 2010.
Yolanda Gil, C´edric H. David, Ibrahim Demir, Bakinam T. Essawy, Robinson W. Ful- weiler, Jonathan L. Goodall, Leif Karlstrom, Huikyo Lee, Heath J. Mills, Ji-Hyun Oh, Suzanne A. Pierce, Allen Pope, Mimi W. Tzeng, Sandra R. Villamizar, and Xuan Yu. Toward the geoscience paper of the future: Best practices for documenting and sharing re- search from data to software to provenance. Earth and Space Science, 3(10):388â415, 2016. doi: https://doi.org/10.1002/2015EA000136. URL https://agupubs.onlinelibrary. wiley.com/doi/abs/10.1002/2015EA000136.
Odd Erik Gundersen and Sigbjørn Kjensmo. State of the art: Reproducibility in artiï¬cial intelligence. In Thirty-second AAAI conference on artiï¬cial intelligence, 2018.
Odd Erik Gundersen, Yolanda Gil, and David W Aha. On reproducible ai: Towards repro- ducible research, open science and digital scholarship in ai publications. AI Magazine, 39 (3), 2018.
Kathleen Hall Jamieson, Marcia McNutt, Veronique Kiermer, and Richard Sever. Signaling the trustworthiness of science. PNAS, 116(29):19231â19236, 2019.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artiï¬cial Intelligence, 2018.
ICLR. ICLR 2018 Reproducibility Challenge. 2018. URL https://www.cs.mcgill.ca/ ~jpineau/ICLR2018-ReproducibilityChallenge.html.
ICLR. Reproducibility in Machine Learning Workshop. 2019. URL https://sites. google.com/view/icml-reproducibility-workshop/home.
17
# Pineau et al.
ICML. Reproducibility in Machine Learning Workshop. 2017. https://sites.google.com/view/icml-reproducibility-workshop/icml2017/ talks-and-abstracts. URL
ICML. Reproducibility in Machine Learning Workshop. 2018. URL https://sites. google.com/view/icml-reproducibility-workshop/icml2018/home.
ICML. Call for papers. 2019. URL https://icml.cc/Conferences/2019/CallForPapers.
Ricahrd Isdahl and Odd Erik Gundersen. Out-of-the-box reproducibility: A survey of machine learning platforms. In 15th International Conference on eScience, 2019.
Richard A Klein, Corey L Cook, Charles R Ebersole, Christine Vitiello, Brian A Nosek, Christopher R Chartier, Cody D Christopherson, Samuel Clay, Brian Collisson, Jarret Crawford, et al. Many labs 4: Failure to replicate mortality salience eï¬ect with and without original author involvement. 2019.
Xiaoxuan Liu, Livia Faes, Melanie J Calvert, and Alastair K Denniston. Extension of the consort and spirit statements. The Lancet, 394(10205):1225, 2019.
Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are gans created equal? a large-scale study. In Advances in neural information processing systems, pages 700â709, 2018.
Ioana Manolescu, Loredana Afanasiev, Andrei Arion, Jens Dittrich, Stefan Manegold, Neok- lis Polyzotis, Karl Schnaitter, Pierre Senellart, Spyros Zoupanos, and Dennis Shasha. The repeatability experiment of sigmod 2008. ACM SIGMOD Record, 37(1):39â45, 2008.
Marcia McNutt. Journals unite for reproducibility. Science, 346(6210):679, 2014.
G´abor Melis, Chris Dyer, and Phil Blunsom. On the state of the art of evaluation in neural language models. arXiv preprint arXiv:1707.05589, 2017.
Marcus R Munaf`o, Brian A Nosek, Dorothy VM Bishop, Katherine S Button, Christopher D Chambers, Nathalie Percie Du Sert, Uri Simonsohn, Eric-Jan Wagenmakers, Jennifer J Ware, and John PA Ioannidis. A manifesto for reproducible science. Nature human behaviour, 1(1):1â9, 2017.
National Academies of Sciences, Engineering, and Medicine. Reproducibility and replicability in science. National Academies Press, 2019.
Nature Research. Reporting standards and availability of data, materials, code and pro- tocols, 2021. URL https://www.nature.com/nature-research/editorial-policies/ reporting-standards.
Brian A Nosek, George Alter, George C Banks, Denny Borsboom, Sara D Bowman, Steven J Breckler, Stuart Buck, Christopher D Chambers, Gilbert Chin, Garret Christensen, et al. Promoting an open research culture. Science, 348(6242):1422â1425, 2015.
OpenReview.net. Neurips 2019 reproducibility challenge, 2021. URL https://openreview. net/group?id=NeurIPS.cc/2019/Reproducibility_Challenge.
18
Improving Reproducibility in Machine Learning Research
Joelle Pineau, Koustuv Sinha, Genevieve Fried, Rosemary Nan Ke, and Hugo Larochelle. ICLR Reproducibility Challenge 2019. ReScience C, 5(2):5, May 2019. doi: 10.5281/ zenodo.3158244. URL https://zenodo.org/record/3158244/files/article.pdf.
Edward Raï¬. A step toward quantifying independently reproducible machine learning re- search. In Advances in Neural Information Processing Systems, pages 5486â5496, 2019.
ReScience C. The rescience journal. reproducible science is good. replicated science is better., 2020. URL http://rescience.github.io/.
Science - AAAS. Science journals: Editorial policies., 2018. URL https://www. sciencemag.org/authors/science-journals-editorial-policies.
SIGCOMM. Reproducibility â17: Proceedings of the reproducibility workshop. 2017. URL https://dl.acm.org/doi/proceedings/10.1145/3097766.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016.
Koustuv Sinha, Joelle Pineau, Jessica Forde, Rosemary Nan Ke, and Hugo Larochelle. NeurIPS 2019 Reproducibility Challenge. ReScience C, 6(2):11, May 2020. doi: 10.5281/ zenodo.3818627. URL https://zenodo.org/record/3818627/files/article.pdf.
S ËAk¸ren Sonnenburg, Mikio L Braun, Cheng Soon Ong, Samy Bengio, Leon Bottou, Geoï¬rey Holmes, Yann LeCun, Klaus-Robert M ËAËzller, Fernando Pereira, Carl Edward Rasmussen, et al. The need for open source software in machine learning. Journal of Machine Learning Research, 8(Oct):2443â2466, 2007.
The EQUATOR Network. Enhancing the quality and transparency of health research., 2021. URL https://www.equator-network.org/.
Lucy Turner, Larissa Shamseer, Douglas G Altman, Laura Weeks, Jodi Peters, Thilo Kober, Soï¬a Dias, Kenneth F Schulz, Amy C Plint, and David Moher. Consolidated standards of reporting trials (consort) and the completeness of reporting of randomised controlled trials (rcts) published in medical journals. Cochrane Database of Systematic Reviews, (11), 2012.
Rub´en Vicente-S´aez and Clara Mart´ınez-Fuentes. Open science now: A systematic literature review for an integrated deï¬nition. Journal of business research, 88:428â436, 2018.
Wikipedia. Fermatâs Last Theorem. March 2020. URL https://en.wikipedia.org/wiki/ Fermat%27s_Last_Theorem. Page Version ID: 944945734.
19
# Pineau et al.
As an experiment, NeurlPS-2019 will use the following Code Submission Policy. 1. The policy only applies to papers that contribute and present experiments with a new algorithm (or a modification to an existing algorithm). That is, a paper is not covered by this policy if: a. The paper is not claiming the contribution of any novel algorithm. b. The paper presents a new algorithm but only analyzes it theoretically (i.e., no experimental results are presented). 2. Code submission for papers covered by this policy is expected but not enforced. 3. The policy accepts a reimplementation by the authors that isn't the code originally run to produce the results reported in the paper (what is instead requested is the equivalent of an Official implementation of the paper's contribution). 4. The policy accepts code that isn't âexecutableâ as is as it has dependencies going beyond the algorithm itself and that cannot be released. Such dependencies would include a. Dataset that cannot be released (e.g., for privacy reasons). b. Specialized hardware that might not be commonly accessible (e.g., specialized accelerators or robotic platforms). c. Non-open sourced or non-free libraries, which do not include the algorithm that is claimed as the scientific contribution of the paper (e.g., paid-for mathematical programming solvers, commercial simulators, MATLAB). The authors will be asked to explain what dependencies are not released and why. 5. The policy expects code only for accepted papers, and only by the camera-ready deadline (October 27, 2019). After the camera-ready deadline, NeurIPS intends to measure the percentage of accepted papers for which code was not released, despite being covered by the policy.
Figure 7: The NeurIPS 2019 code submission policy. Reproduced (with permission) from: [ADD URL
20
# Improving Reproducibility in Machine Learning Research
Reproducibility Challenge Participants
Figure 8: NeurIPS 2019 Reproducibility Challenge Participants by geographical location.
21
# Pineau et al.
The Machine Learning Reproducibility Checklist (Version 1.2, Mar.27 2019) For all models and algorithms presented, check if you include: A clear description of the mathematical setting, algorithm, and/or model. An analysis of the complexity (time, space, sample size) of any algorithm. A link to a downloadable source code, with specification of all dependencies, including external libraries. For any theoretical claim, check if you include: Qs Astatement of the result. Q_sAclear explanation of any assumptions. Q Acomplete proof of the claim. For all figures and tables that present empirical results, check if you include: Q Acomplete description of the data collection process, including sample size. A link to a downloadable version of the dataset or simulation environment. An explanation of any data that were excluded, description of any pre-processing step. An explanation of how samples were allocated for training / validation / testing. The range of hyper-parameters considered, method to select the best hyper-parameter configuration, and specification of all hyper-parameters used to generate results. The exact number of evaluation runs. A description of how experiments were run. A clear definition of the specific measure or statistics used to report results. Clearly defined error bars. A description of results with central tendency (e.g. mean) & variation (e.g. stddev). A description of the computing infrastructure used. Reproduced from: www.cs.mcgill.ca/~jpineau/ReproducibilityChecklist.pdf
Figure 9: The Machine Learning Reproducibility Checklist, version 1.2, used during the NeurIPS 2019 review process.
22 | {
"id": "1707.05589"
} |
2003.11755 | A Survey of Deep Learning for Scientific Discovery | Over the past few years, we have seen fundamental breakthroughs in core
problems in machine learning, largely driven by advances in deep neural
networks. At the same time, the amount of data collected in a wide array of
scientific domains is dramatically increasing in both size and complexity.
Taken together, this suggests many exciting opportunities for deep learning
applications in scientific settings. But a significant challenge to this is
simply knowing where to start. The sheer breadth and diversity of different
deep learning techniques makes it difficult to determine what scientific
problems might be most amenable to these methods, or which specific combination
of methods might offer the most promising first approach. In this survey, we
focus on addressing this central issue, providing an overview of many widely
used deep learning models, spanning visual, sequential and graph structured
data, associated tasks and different training methods, along with techniques to
use deep learning with less data and better interpret these complex models ---
two central considerations for many scientific use cases. We also include
overviews of the full design process, implementation tips, and links to a
plethora of tutorials, research summaries and open-sourced deep learning
pipelines and pretrained models, developed by the community. We hope that this
survey will help accelerate the use of deep learning across different
scientific domains. | http://arxiv.org/pdf/2003.11755 | Maithra Raghu, Eric Schmidt | cs.LG, stat.ML | null | null | cs.LG | 20200326 | 20200326 | 0 2 0 2
r a M 6 2 ] G L . s c [
1 v 5 5 7 1 1 . 3 0 0 2 : v i X r a
# A Survey of Deep Learning for Scientiï¬c Discovery
# Maithra Raghu1,2â
# Eric Schmidt1,3
1 Google 2 Cornell University 3 Schmidt Futures
# Abstract
Over the past few years, we have seen fundamental breakthroughs in core problems in machine learning, largely driven by advances in deep neural networks. At the same time, the amount of data collected in a wide array of scientiï¬c domains is dramatically increasing in both size and complexity. Taken together, this suggests many exciting opportunities for deep learning applications in scientiï¬c settings. But a signiï¬cant challenge to this is simply knowing where to start. The sheer breadth and diversity of diï¬erent deep learning techniques makes it diï¬cult to determine what scientiï¬c problems might be most amenable to these methods, or which speciï¬c combination of methods might oï¬er the most promising ï¬rst approach. In this survey, we focus on addressing this central issue, providing an overview of many widely used deep learning models, spanning visual, sequential and graph structured data, associated tasks and diï¬erent training methods, along with techniques to use deep learning with less data and better interpret these complex models â two central considerations for many scientiï¬c use cases. We also include overviews of the full design process, implementation tips, and links to a plethora of tutorials, research summaries and open-sourced deep learning pipelines and pretrained models, developed by the community. We hope that this survey will help accelerate the use of deep learning across diï¬erent scientiï¬c domains.
1
# 1 Introduction
The past few years have witnessed extraordinary advances in machine learning using deep neural networks. Driven by the rapid increase in available data and computational resources, these neural network models and algorithms have seen remarkable developments, and are a staple technique in tackling fundamental tasks ranging from speech recognition [70, 167], to complex tasks in computer vision such as image classiï¬cation, (instance) segmentation, action recognition [117, 78, 240], and central problems in natural language, including question answering, machine translation and summarization [186, 172, 233, 197]. Many of these fundamental tasks (with appropriate reformulation) are relevant to a much broader array of domains, and in particular have tremendous potential in aiding the investigation of central scientiï¬c questions.
However, a signiï¬cant obstacle in beginning to use deep learning is simply knowing where to start. The vast research literature, coupled with the enormous number of underlying models, tasks and training methods makes it very diï¬cult to identify which techniques might be most appropriate to try, or the best way to start implementing them.
The goal of this survey is to help address this central challenge. In particular, it has the following attributes:
⢠The survey overviews a highly diverse set of deep learning concepts, from deep neural network models for varied data modalities (CNNs for visual data, graph neural networks, RNNs and Transformers for
âCorrespondence to [email protected]
1
sequential data) to the many diï¬erent key tasks (image segmentation, super-resolution, sequence to sequence mappings and many others) to the multiple ways of training deep learning systems.
⢠But the explanation of these techniques is relatively high level and concise, to ensure the core ideas are accessible to a broad audience, and so that the entire survey can be read end to end easily.
⢠From the perspective of aiding scientiï¬c applications, the survey describes in detail (i) methods to use deep learning with less data (self-supervision, semi-supervised learning, and others) and (ii) techniques for interpretability and representation analysis (for going beyond predictive tasks). These are two exciting and rapidly developing research areas, and are also of particular signiï¬cance to possible scientiï¬c use cases.
⢠The survey also focuses on helping quickly ramp up implementation, and in addition to overviews of the entire deep learning design process and a section on implementation tips (Section 9), the survey has a plethora of open-sourced code, research summaries and tutorial references developed by the community throughout the text, including a full section (Section 3) dedicated to this.
Who is this survey for? We hope this survey will be especially helpful for those with a basic understanding of machine learning, interested in (i) getting a comprehensive but accessible overview of many fundamental deep learning concepts and (ii) references and guidance in helping ramp up implementation. Beyond the core areas of deep learning, the survey focuses on methods to develop deep learning systems with less data, and techniques for interpreting these models, which we hope will be of particular use for those interested in applying these techniques in scientiï¬c problems. However, these topics and many others presented, along with the many code/tutorial/paper references may be helpful to anyone looking to learn about and implement deep learning.
# 1.1 Outline of Survey
The survey is structured as follows:
⢠Section 2 starts with some high level considerations for using deep learning. Speciï¬cally, we ï¬rst discuss some template ways in which deep learning might be applied in scientiï¬c domains, followed by a general overview of the entire deep learning design process, and conclude with a brief discussion of other central machine learning techniques that may be better suited to some problems. The ï¬rst part may be of particular interest to those considering scientiï¬c applications, while the latter two parts may be of general interest.
⢠Section 3 provides references to tutorials, open-sourced code model/algorithm implementations, and websites with research paper summaries, all developed by the deep learning community. This section should be very helpful for many readers and we encourage skimming through the links provided.
⢠Section 4 then overviews many of the standard tasks and models in deep learning, covering convolutional networks and their many uses, graph neural networks, sequence models (RNNs, Transformers) and the many associated sequence tasks.
⢠Section 5 looks at some key variants of the supervised learning training process, such as transfer learning, domain adaptation and multitask learning. These are central to many successful applications of deep learning.
⢠Section 6 considers ways to improve the data eï¬ciency for developing deep neural network models, which has been a rapidly evolving area of research, and a core consideration for many applications, including scientiï¬c domains. It covers the many variants of self-supervision and semi-supervised learning, as well as data augmentation and data denoising.
2
⢠Section 7 overviews advances in interpretability and representational analysis, a set of techniques focused on gaining insights into the internals of the end-to-end system: identifying important features in the data, understanding its eï¬ect on model outputs and discovering properties of model hidden representations. These are very important for many scientiï¬c problems which emphasise understanding over predictive accuracy, and may be of broader interest for e.g. aiding model debugging and preemptively identifying failure modes.
⢠Section 8 provides a brief overview of more advanced deep learning methods, speciï¬cally generative modelling and reinforcement learning.
⢠Section 9 concludes with some key implementation tips when putting together an end-to-end deep learning system, which we encourage a quick read through!
# 2 High Level Considerations for Deep Learning
In this section we ï¬rst discuss some high level considerations for deep learning techniques. We start with overviews of template ways in which deep learning might be applied in scientiï¬c settings, followed by a discussion of the end-to-end design process and some brief highlights of alternate machine learning methods which may be more suited to some problems.
# 2.1 Templates for Deep Learning in Scientiï¬c Settings
What are the general ways in which we might apply deep learning techniques in scientiï¬c settings? At a very high level, we can oï¬er a few templates of ways in which deep learning might be used in such problems:
(1) Prediction Problems Arguably the most straightforward way to apply deep learning is to use it to tackle important prediction problems: mapping inputs to predicted outputs. This predictive use case of deep learning is typically how it is also used in core problems in computing and machine learning. For example, the input might be a biopsy image, and the model must output a prediction of whether the imaged tissue shows signs of cancer. We can also think of this predictive use case as getting the model to learn a target function, in our example, mapping from input visual features to the cancer/no cancer output. Using deep learning in this way also encapsulates settings where the target function is very complex, with no mathematical closed form or logical set of rules that describe how to go from input to output. For instance, we might use a deep learning model to (black-box) simulate a complex process (e.g. climate modelling), that is very challenging to explicitly model [101].
(2) From Predictions to Understanding One fundamental diï¬erence between scientiï¬c questions and core machine learning problems is the emphasis in the former on understanding the underlying mechanisms. Oftentimes, outputting an accurate prediction alone is not enough. Instead, we want to gain interpretable insights into what properties of the data or the data generative process led to the observed prediction or outcome. To gain these kinds of insights, we can turn to interpretability and representation analysis methods in deep learning, which focus on determining how the neural network model makes a speciï¬c prediction. There has been signiï¬cant work on both tools to understand what features of the input are most critical to the output prediction, as well as techniques to directly analyze the hidden representations of the neural network models, which can reveal important properties of the underlying data.
(3) Complex Transformations of Input Data In many scientiï¬c domains, the amount of generated data, particularly visual data (e.g. ï¬uorescence microscopy, spatial sequencing, specimen videos [177, 97]) has grown dramatically, and there is an urgent need for eï¬cient analysis and automated processing. Deep learning techniques, which are capable of many complex transformations of data, can be highly eï¬ective for such settings, for example, using a deep neural network based segmentation model to automatically
3
# Data
# Learning
Validation + Analysis
Collection, Preprocessing, _ Visualization, Augmentation Model » \ _ - ââ__ Convolutional Net, \ / yy) VA \ NER Performance \ 4 ) >) >) Graph Neural Net Test on hold out Collect Renee i ) Distribution shift? Raw Label and < u J) Geta âAugmentation | Task Train (Analysis + Interpretation Classification, Detection Goze Error analysis? LR schedule Ablation Study? Useful representations? Bias? Spurious Correlations? X J \ weN / a | a >) Visualization and testing simple tasks Learning Embeddings, J {_Regularize Visualization Method Supervised Leaming, Self-Supervision, Semi-Supervised, Transfer, Muttitask Learning / Se - Iterate (1) Iterate (2) | Iterate (3) [ Iterate (4) Iterate (5)
Figure 1: Schematic of a typical deep learning workï¬ow. A typical development process for deep learning applications can be viewed as consisting of three sequential stages (i) data related steps (ii) the learning component (iii) validation and analysis. Each one of these stages has several substeps and techniques associated with it, also depicted in the ï¬gure. In the survey we will overview most techniques in the learning component, as well as some techniques in the data and validation stages. Note that while a natural sequence is to ï¬rst complete steps in the data stage, followed by learning and then validation, standard development will likely result in multiple diï¬erent iterations where the techniques used or choices made in one stage are revisited based oï¬ of results of a later stage.
identify the nuclei in images of cells, or a pose estimation system to rapidly label behaviors seen in videos of mice for neuroscience analysis.
# 2.2 Deep Learning Workï¬ow
With these examples of templates for deep learning applications in science, we next look at the end to end workï¬ow for designing a deep learning system. Figure 1 illustrates what a typical workï¬ow might look like.
Having selected the overarching (predictive) problem of interest, we can broadly think of having three stages for designing and using the deep learning system: (i) data related steps, such as collection, labelling, preprocessing, visualization, etc (ii) learning focused steps, such as choice of deep neural network model, the task and method used to train the model (iii) validation and analysis steps, where performance evaluations are conducted on held out data, as well as analysis and interpretation of hidden representations and ablation studies of the overall methods.
These three stages are naturally sequential. However, almost all of the time, the ï¬rst attempt at building an end-to-end deep learning system will result in some kind of failure mode. To address these, it is important to keep in mind the iterative nature of the design process, with results from the diï¬erent stages informing the redesign and rerunning of other stages.
Figure 1 provides some examples of common iterations with the backward connecting arrows: (i) the Iterate (1) arrow, corresponding to iterations on the data collection process, e.g. having performed some data visualization, the labelling process for the raw instances might require adjusting â the ï¬rst labelling mechanism might be too noisy, or not capture the objective of interest (ii) the Iterate (2) arrow, corresponding to iterations on the learning setup, due to e.g. deciding that a diï¬erent task or method might be more appropriate, or decomposing the learning process into multiple steps â ï¬rst performing self-supervision followed by supervised learning (iii) the Iterate (3) arrow, changing the data related steps based oï¬ of the results of the learning step (iv) the Iterate (4) arrow, redesigning the learning process informed by the
4
validation results e.g. ï¬nding out the model has overï¬t on the training data at validation and hence reducing training time or using a simpler model (v) the Iterate (5) arrow, adapting the data steps based oï¬ the validation/analysis results, e.g. ï¬nding that the model is relying on spurious attributes of the data, and improving data collection/curation to mitigate this.
Focus of Survey and Nomenclature In this survey, we provide a comprehensive overview of many of the techniques in the learning stage, along with some techniques (e.g. data augmentation, interpretability and representation analysis, Section 7) in the data and validation stages.
For the learning stage, we look at popular models, tasks and methods. By models (also sometimes referred to as architecture), we mean the actual structure of the deep neural network â how many layers, of what type, and how many neurons, etc. By tasks, we mean the kind of prediction problem, speciï¬cally, the type of input and output. For example, in an image classiï¬cation task, the input consists of images and the output a probability distribution over a (discrete) set of diï¬erent categories (called classes). By methods, we refer to the type of learning process used to train the system. For example, supervised learning is a very general learning process, consisting of the neural network being given data instances with corresponding labels, with the labels providing supervision.
Unlike diï¬erent models and tasks, methods can be subsets of other methods. For example, self-supervision, a method where the neural network is trained on data instances and labels, but the labels automatically created from the data instance, can also be considered a type of supervised learning. This can be a little confusing! But it suï¬ces to keep in mind the general notions of models, tasks and methods.
# 2.3 Deep Learning or Not?
As a ï¬nal note before diving into the diï¬erent deep learning techniques, when formulating a problem, it is important to consider whether deep learning provides the right set of tools to solve it. The powerful underlying neural network models oï¬er many sophisticated functionalities, such learned complex image transforms. However, in many settings, deep learning may not be the best technique to start with or best suited to the problem. Below we very brieï¬y overview some of the most ubiquitous machine learning methods, particularly in scientiï¬c contexts.
Dimensionality Reduction and Clustering In scientiï¬c settings, the ultimate goal of data analysis is often understanding â identifying the underlying mechanisms that give rise to patterns in the data. When this is the goal, dimensionality reduction, and/or clustering are simple (unsupervised) but highly eï¬ective methods to reveal hidden properties in the data. They are often very useful in the important ï¬rst step of exploring and visualizing the data (even if more complex methods are applied later.)
Dimensionality Reduction: Dimensionality reduction methods are either linear, relying on a linear transformation to reduce data dimensionality, or non-linear, reducing dimensionality while approximately preserving the non-linear (manifold) structure of the data. Popular linear dimensionality reduction methods include PCA and non-negative matrix factorization, with some popular non-linear methods including t- SNE [141] and UMAP [148]. Most dimensionality reduction methods have high-quality implementations in packages like scikit-learn or on github, e.g. https://github.com/oreillymedia/t-SNE-tutorial or https://github.com/lmcinnes/umap.
Clustering: Often used in combination with dimensionality reduction, clustering methods provide a powerful, unsupervised way to identify similarities and diï¬erences across the data population. Commonly used clustering methods include k-means (particularly the k-means++ variant), Gaussian Mixture Models (GMMs), hierarchical clustering and spectral clustering. Like dimensionality reduction techniques, these clustering methods have robust implementations in packages like scikit-learn.
5
In Section 7.2.2, we discuss how dimensionality reduction and clustering can be used on the hidden representations of neural networks.
Linear Regression, Logistic Regression (and variants!) Arguably the most fundamental techniques for supervised problems like classiï¬cation and regression, linear and logistic regression, and their variants (e.g. Lasso, Ridge Regression) may be particularly useful when there is limited data, and a clear set of (possibly preprocessed) features (such as in tabular data.) These methods also often provide a good way to sanity check the overarching problem formulation, and may be a good starting point to test out a very simple version of the full problem. Due to their simplicity, linear and logistic regression are highly interpretable, and provide straightforward ways to perform feature attribution.
Decision Trees, Random Forests and Gradient Boosting Another popular class of methods are deci- sion trees, random forests and gradient boosting. These methods can also work with regression/classiï¬cation tasks, and are well suited to model non-linear relations between the input features and output predic- tions. Random forests, which ensemble decision trees, can often be preferred to deep learning methods in settings where the data has a low signal-to-noise ratio. These methods can typically be less inter- pretable than linear/logistic regression, but recent work [160] has looked at developing software libraries https://github.com/interpretml/interpret to address this challenge.
Other Methods and Resources: Both the aforementioned techniques and many other popular methods such as graphical models, Gaussian processes, Bayesian optimization are overviewed in detail in excellent course notes such as University of Torontoâs Machine Learning Course or Stanfordâs CS229, detailed articles at https://towardsdatascience.com/ and even interactive textbooks such as https://d2l.ai/index.html (called Dive into Deep Learning [267]) and https://github.com/rasbt/python-machine-learning-book- 2nd-edition.
# 3 Deep Learning Libraries and Resources
A remarkable aspect of advances in deep learning so far is the enormous number of resources developed and shared by the community. These range from tutorials, to overviews of research papers, to open sourced code. Throughout this survey, we will reference some of these materials in the topic speciï¬c sections, but we ï¬rst list here a few general very useful frameworks and resources.
Software Libraries for Deep Learning: Arguably the two most popular code libraries for deep learning are PyTorch (with a high level API called Lightning) and TensorFlow (which also oï¬ers Keras as a high level API.) Developing and training deep neural network models critically relies on fast, parallelized matrix and tensor operations (sped up through the use of Graphical Processing Units) and performing automatic diï¬erentiation for computing gradients and optimization (known as autodiï¬.) Both PyTorch and TensorFlow oï¬er these core utilities, as well as many other functions. Other frameworks include Chainer, ONNX, MXNET and JAX. Choosing the best framework has been the source of signiï¬cant debate. For ramping up quickly, programming experiences closest to native Python, and being able to use many existing code repositories, PyTorch (or TensorFlow with the Keras API) may be two of the best choices.
Tutorials: (i) https://course.fast.ai/ fast.ai provides a free, coding-ï¬rst course on the most important deep learning techniques as well as an intuitive and easy to use code library, https://github.com/fastai/ fastai, for model design and development. (ii) https://towardsdatascience.com/ contains some fantastic tutorials on almost every deep learning topic imaginable, crowd sourced from many contributors. (iii)
6
Many graduate deep learning courses have excellent videos and lecture notes available online, such as http://www.cs.toronto.edu/~rgrosse/courses/csc421_2019/ for Deep Learning and Neural Networks, or the more topic speciï¬c Stanfordâs CS224N NLP with Deep Learning. A nice collection of some of these topic speciï¬c lectures is provided at https://github.com/Machine-Learning-Tokyo/AI_Curriculum. There are also some basic interactive deep learning courses online, such as https://github.com/leriomaggio/deep- learning-keras-tensorflow.
(i) https://paperswithcode.com/ This excellent site keeps Research Overviews, Code, Discussion: track of new research papers and their corresponding opensourced code, trending directions and displays state of the art results (https://paperswithcode.com/sota) across many standard benchmarks. (ii) Discussion of deep learning research is very active on Twitter. http://www.arxiv-sanity.com/top keeps track of some of the top most discussed papers and comments. (iii) https://www.reddit.com/r/MachineLearning/ (iv) https://www.paperdigest.org/ is also a good forum for research and general project discussion. conference-paper-digest/ contains snippets of all the papers in many diï¬erent top machine learning conferences. (v) IPAM (Institute for Pure and Applied Mathematics) has a few programs e.g. https:// www.ipam.ucla.edu/programs/workshops/new-deep-learning-techniques/?tab=schedule and https:// www.ipam.ucla.edu/programs/workshops/deep-learning-and-medical-applications/?tab=schedule with videos overviewing deep learning applications in science.
Models, Training Code and Pretrained Models: As we discuss later in the survey, publicly available models, training code and pretrained models are very useful for techniques such as transfer learning. There are many good sources of these, here are a few that are especially comprehensive and/or accessible:
(i) Pytorch and TensorFlow have a collection of pretrained models, found at https://github.com/ tensorflow/models and https://pytorch.org/docs/stable/torchvision/models.html.
(ii) https://github.com/huggingface Hugging Face (yes, that really is the name), oï¬ers a huge collection of both pretrained neural networks and the code used to train them. Particularly impressive is their library of Transformer models, a one-stop-shop for sequential or language applications.
(iii) https://github.com/rasbt/deeplearning-models oï¬ers many standard neural network architectures, including multilayer perceptrons, convolutional neural networks, GANs and Recurrent Neural Networks.
(iv) https://github.com/hysts/pytorch_image_classification does a deep dive into image classiï¬ca- tion architectures, with training code, highly popular data augmentation techniques such as cutout, and careful speed and accuracy benchmarking. See their page for some object detection architectures also.
(v) https://github.com/openai/baselines provides implementations of many popular RL algorithms.
(vi) https://modelzoo.co/ is a little like paperswithcode, but for models, linking to implementations of neural network architectures for many diï¬erent standard problems.
(vii) https://github.com/rusty1s/pytorch_geometric. Implementations and paper links for many graph neural network architectures.
Data Collection, Curation and Labelling Resources: A crucial step in applying deep learning to a problem is collecting, curating and labelling data. This is a very important, time-intensive and often highly intricate task (e.g. labelling object boundaries in an image for segmentation.) Luckily, there are some resources and libraries to help with this, for example https://github.com/tzutalin/labelImg, https://github.com/ wkentaro/labelme, https://rectlabel.com/ for images and https://github.com/doccano/doccano for text/sequential data.
7
Training Evaluation 2 Dog t Optimize t @ @ parameters @ @ Labels ©0200 ©0008 Firetruck Volcano Dog LA Pie ©0000 ©0000 Data Instances @@@@ @@00@ @ © @ © , x Unseen Data eX
Figure 2: The Supervised Learning process for training neural networks. The ï¬gure illustrates the supervised learning process for neural networks. Data instances (in this case images) and corresponding labels are collected. During the training step, the parameters of the neural network are optimized so that when input a data instance, the neural network outputs the corresponding label. During evaluation, the neural network is given unseen data instances as input, and if trained successfully, will output a meaningful label (prediction).
Visualization, Analysis and Compute Resources: When training deep neural network models, it is critical to visualize important metrics such as loss and accuracy while the model is training. Tensorboard https: //www.tensorflow.org/tensorboard (which works with Pytorch and TensorFlow) is a very popular framework for doing this. Related is the colab eï¬ort https://colab.research.google.com/notebooks/welcome.ipynb, which, aside from providing a user-friendly, interactive way for model development and analysis (very similar to jupyter notebooks) also provides some (free!) compute resources.
# 4 Standard Neural Network Models and Tasks
In this section, we overview the standard neural network models and the kinds of tasks they can be used for, from convolutional networks for image predictions and transformations to transformer models for sequential data to graph neural networks for chemistry applications.
# 4.1 Supervised Learning
Before diving into the details of the diï¬erent deep neural network models, it is useful to brieï¬y discuss supervised learning, the most standard method to train these models. In the supervised learning framework, we are given data instances and an associated label for each data instance, i.e. (data instance, label) pairs. For example, the data instances might comprise of chest x-ray images, and the labels (one for each chest x-ray image) a binary yes/no to whether it shows the symptoms of pneumonia. Training the neural network model then consists of ï¬nding values for its parameters so that when it is fed in a data instance (chest x-ray) as input, it correctly outputs the corresponding label (yes/no on whether the chest x-ray has pneumonia.) To ï¬nd these parameter values, we perform iterative optimization to guide the neural network parameters to appropriate values, using the given labels to provide supervision. Figure 2 shows a schematic of the supervised learning setup for deep learning.
8
Supervised learning is the most basic yet most critical method for training deep neural networks. As will be seen through the subsequent sections, there can be signiï¬cant diversity in the kinds of (data, label) pairs used. Even in settings where clear (data, label) pairs are not possible to collect (Sections 6, 6.2), the training problem is often reformulated and recast into a supervised learning framework.
# 4.2 Multilayer Perceptrons
The ï¬rst and most basic kind of deep neural network is the multilayer perceptron. These models consist of a stack of fully connected layers (matrix multiplications) interleaved with a nonlinear transform.
Despite their simplicity, they are useful for problems where the data might consist of a set of distinct, (possibly categorical) features, for example, tabular data. These models have more expressive power than logistic/linear regression, though those methods would be a good ï¬rst step to try. One way to apply these models might be to ï¬rst preprocess the data to compute the distinct set of features likely to be important, and use this as input. https://github.com/rasbt/deeplearning-models provides some implementations of some example multilayer perceptron architectures.
Scientiï¬c Examples One recent scientiï¬c example is given by the use of simple MLPs for pharamaceutical formulation [256], developing variants of a drug that is stable and safe for patient use.
# 4.3 Convolutional Neural Networks
These are arguably the most well known family of neural networks, and are very useful in working with any kind of image data. They are characterized by having convolutional layers, which allow the neural network to reuse parameters across diï¬erent spatial locations of an image. This is a highly useful inductive bias for image data, and helping with eï¬ciently learning good features, some, like Gabor ï¬lters, which correspond to traditional computer vision techniques. Convolutional neural networks (CNNs) have so many possible uses that we overview some of the most ubiquitous tasks separately below.
# 4.3.1 Image Classiï¬cation
This is arguably the simplest and most well known application of convolutional neural networks. The model is given an input image, and wants to output a class â one of a (typically) mutually exclusive set of labels for that image. The earlier example, of mapping a chest x-ray image to a binary disease label, is precisely image classiï¬cation.
Convolutional neural networks for image classiï¬cation is an extremely common application of deep learning. There many diï¬erent types of CNN models for classiï¬cation: VGG â a simple stack of convolutional layers followed by a fully connected layer [214], ResNets â which are a family of convolutional networks of diï¬erent sizes and depths and skip connections [79], DenseNets â another family of models where unlike standard neural networks, every layer in a "block" is connected to every other layer [94]. More recent, complex models include ResNeXt [253] and recently Eï¬cientNets, which have separate scaling factors for network depth, width and the spatial resolution of the input image [223]. Tutorials, implementations and pretrained versions of many of these models can be found in the references given in Section 3.
Scientiï¬c Examples: Image classiï¬cation has found many varied scientiï¬c applications, such as in analyzing cryoEM data [226] (with associated code https://github.com/cramerlab/boxnet). An especially large body of work has looked at medical imaging uses of image classiï¬cation, speciï¬cally, using CNNs to predict disease labels. Examples range from ophthalmology [72], radiology (2D x-rays and 3D CT scans) [258, 5, 185],
9
Classification Semantic Segmentation Object Detection Instance Segmentation
Figure 3: Diï¬erences between Image Classiï¬cation, Object Detection, Semantic Segmentation and Instance Segmentation tasks. Image source [1] The ï¬gure illustrates the diï¬erences between classiï¬cation, object detection, semantic segmentation and instance segmentation. In classiï¬cation, the whole image gets a single label (balloons), while in object detection, each balloon is also localized with a bounding box. In semantic segmentation, all the pixels corresponding to balloon are identiï¬ed, while in instance segmentation, each individual balloon is identiï¬ed separately.
pathology [135, 55], analyzing brain scans (PET, fMRI) [202, 45]. An excellent survey of the numerous papers in this area is given by [228].
# 4.3.2 Object Detection
Image classiï¬cation can be thought of as a global summary of the image. Object detection dives into some of the lower level details of the image, and looks at identifying and localizing diï¬erent objects in the image. For example, given an input image of an outdoor scene having a dog, a person and a tree, object detection would look at both identifying the presence of the dog, person and tree and âcircle their locationâ in the image â speciï¬cally, put a bounding box around each of them. The supervised learning task is thus to take an input image and output the coordinates of these bounding boxes, as well as categorizing the kind of object they contain.
Like image classiï¬cation, there are many high performing and well established convolutional architectures for object detection. Because of the intricacy of the output task, these models tend to be more complex with a backbone component (using an image classiï¬cation model) and a region proposal component for bounding box proposals. But there are still many pretrained models available to download. One of the most successful early models was Faster R-CNN [192], which signiï¬cantly sped up the slow bounding box proposal component. Since then there have been many improved models, including YOLOv3 [191], and most recently Eï¬cientDets [224]. Arguably the most popular recent architecture however has been Mask R-CNN and its variants [78, 248]. Mask R-CNN performs some segmentation as well as object detection (see below). Besides some of the resources mentioned in Section 3, a good source of code and models is https://github.com/rbgirshick, one of the key authors in a long line of these object
10
(Note though that there are many other popular implementations, such as https: detection models. //github.com/matterport/Mask_RCNN.) This in depth article towardsdatascience object detection Faster R-CNN oï¬ers a detailed tutorial on downloading, setting up and training an object detection model, including helpful pointers to data collection and annotation (the latter using https://rectlabel.com/.) Most recently the Detectron2 system https://github.com/facebookresearch/detectron2 [248] builds on Mask R-CNN and oï¬ers many varied image task functionalities.
Scientiï¬c Examples: Object detection has also gained signiï¬cant attention across diï¬erent scientiï¬c applications. It has been used in many medical settings to localize features of interest, for example, tumor cells across diï¬erent imaging modalities [125, 269] or fractures in radiology [199, 227].
# 4.3.3 Semantic Segmentation and Instance Segmentation
Segmentation dives into the lowest possible level of detail â categorizing every single image pixel. In semantic segmentation, we want to categorize pixels according to the high level group they belong to. For example, suppose we are given an image of a street, with a road, diï¬erent vehicles, pedestrians, etc. We would like to determine if a pixel is part of any pedestrian, part of any vehicle or part of the road â i.e. label the image pixels as either pedestrian, vehicle or road. Instance segmentation is even more intricate, where not only do we want to categorize each pixel in this way, but do so separately for each instance (and provide instance speciï¬c bounding boxes like in object detection). The diï¬erences are illustrated in Figure 3 (sourced from [1].) Returning to the example of the image of the street, suppose the image has three pedestrians. In semantic segmentation, all of the pixels making up these three pedestrians would fall under the same category â pedestrian. In instance segmentation, these pixels would be further subdivided into those belonging to pedestrian one, pedestrian two or pedestrian three.
Because segmentation models must categorize every pixel, their output is not just a single class label, or a bounding box, but a full image. As a result, the neural network architectures for segmentation have a slightly diï¬erent structure that helps them better preserve spatial information about the image. A highly popular and successful architecture, particularly for scientiï¬c applications, has been the U-net [196], which also has a 3d volumetric variant [33]. Other architectures include FCNs (Fully Convolutional Networks) [136], SegNet [9] and the more recent Object Contextual Representations [260]. A couple of nice surveys on semantic segmentation methods are given by towardsdatascience Semantic Segementation with Deep Learning and https://sergioskar.github.io/Semantic_Segmentation/.
For instance segmentation, Mask R-CNN [78] and its variants [248] have been extremely popular. This tutorial Mask R-CNN tutorial with code provides a step by step example application. The recent Detectron2 package [248] (https://github.com/facebookresearch/detectron2) also oï¬ers this functionality.
Scientiï¬c Examples: Out of all of the diï¬erent types of imaging prediction problems, segmentation methods have been especially useful for (bio)medical applications. Examples include segmenting brain MR images [156, 236], identifying key regions of cells in diï¬erent tissues [254, 217] and even studying bone structure [129].
# 4.3.4 Super-Resolution
Super resolution is a technique for transforming low resolution images to high resolution images. This problem has been tackled both using convolutional neural networks and supervised learning, as well as generative models.
Super resolution formally deï¬ned is an underdetermined problem, as there may be many possible high resolution mappings for a low resolution image. Traditional techniques imposed constraints such
11
as sparsity to ï¬nd a solution. One of the ï¬rst CNNs for super resolution, SRCNN [50] outlines the correspondences between sparse coding approaches and convolutional neural networks. More recently, Residual Dense Networks [270] have been a popular approach for super-resolution on standard benchmarks (with code available https://github.com/yulunzhang/RDN), as well as Predictive Filter Flow [114], (code: https://github.com/aimerykong/predictive-filter-flow) which has also looked at image denoising and deblurring. In some of the scientiï¬c applications below, U-nets have also been successful for super resolution.
Scientiï¬c Examples: Super resolution is arguably even more useful for scientiï¬c settings than standard natural image benchmarks. Two recent papers look at U-nets for super-resolution of ï¬uorescence microscopy [245] (code: https://csbdeep.bioimagecomputing.com/) and electron microscopy [56]. Other examples include super resolution of chest CT scans [231] and Brain MRIs [31].
# 4.3.5 Image Registration
Image registration considers the problem of aligning two input images to each other. Particularly relevant to scientiï¬c applications, the two input images might be from diï¬erent imaging modalities (e.g. a 3D scan and a 2D image), or mapping a moving image to a canonical template image (such as in MRIs.) The alignment enables better identiï¬cation and analysis of features of interest.
The potential of image registration is primarily demonstrated through diï¬erent scientiï¬c applications. At the heart of the technique is a convolutional neural network, often with an encoder-decoder structure (similar to the U-net [196]) to guide the alignment of two images. Note that while this underlying model is trained through supervised learning, many registration methods do not require explicit labels, using similarity functions and smoothness constraints to provide supervision. For example, [12] develop an unsupervised method to perform alignment for Brain MRIs. The code for this and several followup papers [13, 39] provides a helpful example for building oï¬ of and applying these methods https://github.com/voxelmorph/voxelmorph. Other useful resources include https://github.com/ankurhanda/gvnn (with corresponding paper [75]) a library for learning common parametric image transformations.
# 4.3.6 Pose Estimation
Pose estimation, and most popularly human pose estimation, studies the problem of predicting the pose of a human in a given image. In particular, a deep neural network model is trained to identify the location of the main joints, the keypoints (e.g. knees, elbows, head) of the person in the image. These predictions are combined with existing body models to get the full stick-ï¬gure-esque output summarizing the pose. (See Figure 4, sourced from [218], for an illustration.)
(2D) Human pose estimation is a core problem in computer vision with multiple benchmark datasets, and has seen numerous convolutional architectures developed to tackle it. Some of the earlier models include a multi-stage neural network introduced by [244], and a stacked hourglass model [158] that alternatingly combines high and low resolutions of the intermediate representations. More recently, HRNet [218], which keeps a high resolution representation throughout the model is a top performing architecture (code at https://github.com/leoxiaobin/deep-high-resolution-net.pytorch). Also of interest might be [24] provides an end-to-end system for multiperson pose detection in the corresponding code repository https: //github.com/CMU-Perceptual-Computing-Lab/openpose.
Scientiï¬c Examples: Pose estimation has gained signiï¬cant interest in neuroscience settings, where videos of animals are recorded, and automatically predicting poses in the image can help identify important behaviors. An example is given by [146, 147], with associated code http://www.mousemotorlab.org/deeplabcut.
12
Figure 4: Pose Estimation. Image source [218] The task of pose estimation, speciï¬cally multi-person 2D (human) pose-estimation is depicted in the ï¬gure. The neural network model predicts the positions of the main joints (keypoints), which are combined with a body model to get the stick-ï¬gure like approximations of pose overlaid on the multiple humans in the image. Variants of these techniques have been used to study animal behaviors in scientiï¬c settings.
# 4.3.7 Other Tasks with Convolutional Neural Networks
In the preceding sections, we have overviewed some of the most common tasks for which convolutional neural networks are used. However, there are many additional use cases of these models that we have not covered, including video prediction [57], action recognition [52] and style transfer [64]. We hope that the provided references and resources enable future investigation into some of these methods also.
# 4.4 Graph Neural Networks
Many datasets, such as (social) network data and chemical molecules have a graph structure to them, consisting of vertices connected by edges. An active area of research, graph neural networks, has looked at developing deep learning methods to work well with this kind of data. The input graph consists of nodes v having some associated feature vector hv, and sometimes edges euv also having associated features zeuv . For example, nodes v might correspond to diï¬erent atoms, and the edges euv to the diï¬erent kinds of chemical bonds between atoms. At a high level, most graph neural networks compute useful information from the data by (i) using the feature vectors of the neighbors of each vertex v to compute information on the input graph instance (ii) using this information to update the feature vector of v. This process, which respects the connectivity of the graph, is often applied iteratively, with the ï¬nal output either at the vertex level (Are meaningful vertex feature vectors computed?) or at the level of the full input graph (Is some global property of the entire graph correctly identiï¬ed?)
Application Characteristics Problems where the data has an inherent graph structure, and the goal is to learn some function on this graph structure â either at the per vertex level or a global property of the entire graph. There are also spatio-temporal graph neural networks â performing predictions on graph
13
structures evolving over time.
Technical References Although most graph neural networks follow the high level structure of aggregating information from vertex neighbors and using this information to update feature vectors, there are many many diï¬erent architectural variants, with connections to other neural network models such as convolutional nets and recurrent models. Recent work has also looked at spatio-temporal graph networks for problems like action recognition in video [124]. A nice uniï¬cation of many of the ï¬rst popular methods, such as [53, 15, 127], is given by [67]. A more recent survey paper [250], provides an extremely comprehensive overview of the diï¬erent kinds of architectures, problems, benchmark datasets and open source resources. Some useful code repositories include https://github.com/rusty1s/pytorch_geometric, https://github.com/deepmind/graph_nets and https://github.com/dmlc/dgl, which together cover most of the popular deep learning frameworks.
Scientiï¬c Examples Graph neural networks have been very popular for several chemistry tasks, such as predicting molecular properties [53, 93, 67, 103], determining protein interfaces [60, 229] and even generating candidate molecules [41, 21]. A useful library for many of these chemistry tasks is https://github.com/ deepchem, which also has an associated benchmark task [249]. A detailed tutorial of diï¬erent graph neural networks and their use in molecule generation can be seen at https://www.youtube.com/watch?v= VXNjCAmb6Zw.
# 4.5 Neural Networks for Sequence Data
A very common attribute for data is to have a sequential structure. This might be frames in a video, amino acid sequences for a protein or words in a sentence. Developing neural network models to work with sequence data has been one of the most extensive areas of research in the past few years. A large fraction of this has been driven by progress on tasks in natural language processing, which focuses on getting computers to work with the language used by people to communicate. Two popular tasks in this area, which have seen signiï¬cant advances, have been machine translation â developing deep learning models to translate from one language to another and question answering â taking as input a (short) piece of text and answering a question about it. In the following sections, we ï¬rst overview some of the main NLP tasks that have driven forward sequence modelling and then the neural network models designed to solve these tasks.
# 4.5.1 Language Modelling (Next Token Prediction)
Language modelling is a training method where the deep learning model takes as input the tokens of the sequence up to time/position t, and then uses these to predict token t + 1. This is in fact a self-supervised training method (see Section 6), where the data provides a natural set of labels without additional labelling needed. In the NLP context, the neural network is fed in a sequence of words, corresponding to a sentence or passage of text, and it tries to predict the next word. For example, given a sentence, "The cat sat on the roof", the network would ï¬rst be given as input "The" and asked to predict "cat", then be fed in "The cat" and asked to predict "sat", and so on. (There are some additional details in implementation, but this is the high level idea.) Because of the easy availability of data/labels, and the ability to use language modelling at diï¬erent levels â for words and even for characters, it has been a popular benchmark in natural language, and also for capturing sequence dependencies in scientiï¬c applications, such as protein function prediction [77, 80], and using the hidden representations as part of a larger pipeline for protein structure prediction in AlphaFold [205] (with opensourced code https://github.com/deepmind/deepmind-research/tree/master/alphafold_casp13.)
14
Attention I I hello how geht es dir i decoder
Figure 5: Illustration of the Sequence to Sequence prediction task. Image source [267] The ï¬gure shows an illustration of a Sequence to Sequence task, translating an input sentence (sequence of tokens) in English to an output sentence in German. Note the encoder-decoder structure of the underlying neural network, with the encoder taking in the input, and the decoder generating the output, informed by the encoder representations and the previously generated output tokens. In this ï¬gure, the input tokens are fed in one by one, and the output is also generated one at a time, which is the paradigm when using Recurrent Neural Networks as the underlying model. With Transformer models, which are now extremely popular for sequence to sequence tasks, the sequence is input all at once, signiï¬cantly speeding up use.
# 4.5.2 Sequence to Sequence
Another very popular task for sequence data is sequence to sequence â transforming one sequence to another. This is precisely the setup for machine translation, where the model gets an input sentence (sequence) in say English, and must translate it to German, which forms the output sentence (sequence). Some of the ï¬rst papers framing this task and tackling it in this way are [10, 221, 234]. Sequence to sequence tasks typically rely on neural network models that have an encoder-decoder structure, with the encoder neural network taking in the input sequence and learning to extract the important features, which is then used by the decoder neural network to produce the target output. Figure 5(sourced from [267]) shows an example of this. This paradigm has also found some scientiï¬c applications as varied as biology [23] and energy forcasting [145]. Sequence to sequence models critically rely on a technique called attention, which we overview below. For more details on this task, we recommend looking at some of the tutorials and course notes highlighted in Section 3.
# 4.5.3 Question Answering
One other popular benchmark for sequence data has been question answering. Here, a neural network model is given a paragraph of text (as context) and a speciï¬c question to answer on this context as input. It must then output the part of the paragraph that answers the question. Some of the standard benchmarks for this task are [83, 186], with http://web.stanford.edu/class/cs224n/slides/cs224n-2019-lecture10-QA.pdf providing an excellent overview of the tasks and common methodologies. Question answering critically relies on the neural network model understanding the relevance and similarity of diï¬erent sets of sequences (e.g. how relevant is this part of the context to the question of interest?). This general capability (with appropriate reformulation) has the potential to be broadly useful, both for determining similarity and relevance on other datasets, and for question answering in specialized domains [61].
15
Figure 6: Diagram of a Recurrent Neural Network model, speciï¬cally a LSTM (Long-Short Term Network). Image source [163] The ï¬gure illustrates an LSTM network, a type of Recurrent Neural Network. We see that the input xt at each timestep also inform the internal network state in the next timestep (hence a recurrent neural network) through a gating mechanism. This gating mechanism is called an LSTM, and consists of sigmoid and tanh functions, which transform and recombine the input for an updated internal state, and also emit an output. The mechanics of this gating process are shown in the middle cell of the ï¬gure.
# 4.5.4 Recurrent Neural Networks
Having seen some of the core tasks in deep learning for sequence data, these next few sections look at some of the key neural network models.
Recurrent neural networks (RNNs) were the ï¬rst kind of deep learning model successfully used on many of the aforementioned tasks. Their distinguishing feature, compared to CNNs or MLPs (which are feedforward neural networks, mapping input straight to output), is that there are feedback connections, enabling e.g. the output at each timestep to become the input for the next timestep, and the preservation and modiï¬cation of an internal state across timesteps. When RNNs are used for sequential data tasks, sequences are input token by token, with each token causing an update of the internal cell state of the RNN, and also making the RNN emit a token output. Note that this enables these models to work with variable length data â often a deï¬ning characteristic of sequence data. How the input is processed, cell state updated and output emitted are controlled by gating functions â see the technical references!
Application Characteristics: Problems where the data has a sequential nature (with diï¬erent sequences of varying length), and prediction problems such as determining the next sequence token, transforming one sequence to another, or determining sequence similarities are important tasks.
Technical References: Research on sequence models and RNNs has evolved dramatically in just the past couple of years. The most successful and popular kind of RNN is a bi-LSTM with Attention, where LSTM (Long-Short Term Memory) [88] refers to the kind of gating function that controls updates in the network, bi refers to bidirectional (the neural network is run forwards and backwards on the sequence) and Attention is a very important technique that we overview separately below. (Some example papers [149, 150] and code resources https://github.com/salesforce/awd-lstm-lm.) This excellent post https: //colah.github.io/posts/2015-08-Understanding-LSTMs/ provides a great overview of RNNs and LSTMs in detail. (Figure 6 shows a diagram from the post revealing the details of the gating mechanisms in LSTMs.) The post also describes a small variant of LSTMs, Gated Recurrent Units (GRUs) which are also popular in practice [127]. While RNNs (really bi-LSTMs) have been very successful, they are often tricky to develop and train, due to their recursiveness presenting challenges with optimization (the vanishing/exploding gradients problem [87, 170, 76]), with performing fast model training (due to generating targets token by token), and challenges learning long term sequential dependencies. A new type of feedforward neural network architecture, the Transformer (overviewed below), was proposed to alleviate the ï¬rst two of these challenges.
16
ENCODER #1 Feed Forward Feed Forward Neural Network Neural Network Fe) 22 Self-Attention > Es oe | Thinking Machines
Figure 7: Image of a couple of layers from a Transformer network. Image source [3] The ï¬gure depicts the core sequence of layers that are fundamental to Transformer neural networks, a self-attention layer (sometimes called a self-attention head) followed by fully connected layers. Note that when working with sequence data, transformers take the entire input sequence all at once, along with positional information (in this case the input sequence being "Thinking Machines".)
Scientiï¬c Examples: RNNs have found several scientiï¬c applications for data with sequential structure, such as in genomics and proteomics [175, 132, 111].
# 4.5.5 Attention
A signiï¬cant problem in using RNNs and working with sequential data is the diï¬culty in capturing long range dependencies. Long range dependencies are when tokens in the sequence that are very far apart from each other must be processed together to inform the correct output. RNNs process sequences in order, token by token, which means they must remember all of the important information from the earlier tokens until much later in the sequence â very challenging as the memory of these architectures is far from perfect. Attention [32, 11] is a very important technique that introduces shortcut connections to earlier tokens, which alleviates the necessity to remember important features for the duration of the entire sequence. Instead it provides a direct way to model long term dependencies â the neural network has the ability to look back and attend to what it deems relevant information (through learning) earlier in the input. A very nice overview of attention is provided by https://lilianweng.github.io/lil-log/2018/06/24/attention-attention.html. A variant of attention, self-attention, which can be used to help predictions on a single input sequence, is the core building block of Transformer models.
# 4.5.6 Transformers
While attention helped with challenges in long range dependencies, RNNs still remained slow to train and tricky to design (due to optimization challenges with vanishing/exploding gradients.) These challenges were inherent to their recurrent, token-by-token nature, prompting the proposal of a new feedforward neural network to work with sequential data, the Transformer [233], which critically relies on attentional mechanisms (the paper is in fact titled Attention is All you Need.) During training transformers take in the entire sequence as input all at once, but have positional embeddings that respects the sequential nature of the data. Transformers have been exceptionally popular, becoming the dominant approach to many natural language tasks and sequential tasks.
17
Application Characteristics: Problems where the data has a sequential nature and long range depen- dencies that need to be modelled. Given the large number of pretrained transformer models, they can also be very useful in settings where pretrained models on standard benchmarks can be quickly adapted to the target problem.
Technical References: The original transformer paper [233] provides a nice overview of the motivations and the neural network architecture. The model was designed with machine translation tasks in mind, and so consists of an encoder neural network and a decoder neural network. With transformers being adopted for tasks very diï¬erent to machine translation, the encoder and decoder are often used in stand-alone fashions for diï¬erent tasks â for example, the encoder alone is used for question answering, while the decoder is important for text generation. Two very accessible step by step tutorials on the transformer are The Annotated Transformer and The Illustrated Transformer. A nice example of some of the language modelling capabilities of this models is given by [180].
Since the development of the transformer, there has been considerable research looking at improving the training of these models, adjusting the self-attention mechanism and other variants. A very important result using the transformer has been BERT (Pretraining of deep Bi-directional Transformers for Language understanding) [43]. This paper demonstrates that performing transfer learning (see Section 5.1) using a transformer neural network can be extremely successful for many natural language tasks. (Some of the ï¬rst papers showing the potential of transfer learning in this area were [92, 180], and since BERT, there have been followups which extend the model capabilities [257].) From a practical perspective, the development of transformers, BERT and transfer learning mean that there are many resources available online for getting hold of code and pretrained models. We refer to some of these in Section 3, but of particular note is https: //github.com/huggingface/transformers which has an excellent library for transformer models. A good overview of BERT and transfer learning in NLP is given in http://jalammar.github.io/illustrated-bert/.
Scientiï¬c Examples: There have been several interesting examples of transformers used in scientiï¬c settings, such as training on protein sequences to ï¬nd representations encoding meaningful biological properties [195], protein generation via language modelling [142], bioBERT [121] for text mining in biomedical data (with pretrained model and training code), embeddings of scientiï¬c text [18] (with code https: //github.com/allenai/scibert) and medical question answering [237].
# 4.5.7 Other Tasks with Sequence Data
In the previous sections, weâve given an overview of some of the important benchmark tasks for sequential data, and the types of deep learning models available to tackle them. As with convolutional networks, this is not a comprehensive overview, but hopefully thorough enough to help with generating ideas on possible applications and oï¬ering pointers to other useful related areas. A few other sequential data tasks that might be of interest are structured prediction, where the predicted output has some kind of structure, from tree structures (in e.g. parsing) [28, 246] to short, executable computer program structure [271] and summarization, where passages of text are summarized by a neural network [130, 273]. Weâll also discuss word embeddings later in the survey.
# 4.6 Section Summary
In this section, we have overviewed supervised learning, some of the core neural network models and the kinds of important tasks they can be used for. As previously discussed, these topics span an extremely large area of research, so there are some areas, e.g. deep neural networks for set structured data [262, 113], modelling diï¬erent invariances â invariances to speciï¬ed Lie groups for application to molecular property prediction [58], spherical invariances [35, 36] not covered. But we hope the material and references presented help inspire novel contributions to these very exciting and rapidly evolving research directions.
18
# Step 1: Pretraining
# Step 2: Finetuning
Final Model
Train on diverse, generic task, e.g. ImageNet
Train on target task
Randomly @ @ @ @ Initialize Pretrained SGOO®@ Wests: @©OOO Contain useful âS eco ee0ee
Figure 8: The Transfer Learning process for deep neural networks. Transfer learning is a two step process for training a deep neural network. Instead of intializing parameters randomly and directly training on the target task, we ï¬rst perform a pretraining step, on some diverse, generic task. This results in the neural network parameters converging to a set of values, known as the pretrained weights. If the pretraining task is diverse enough, these pretrained weights will contain useful features that can be leveraged to learn the target task more eï¬ciently. Starting from the pretrained weights, we then train the network on the target task, known as ï¬netuning, giving us the ï¬nal model.
# 5 Key (Supervised Learning) Methods
In the previous section we saw diï¬erent kinds of neural network models, and the many diï¬erent types of tasks they could be used for. To train the models for these tasks, we typically rely on the supervised learning methodology â optimize model parameters to correctly output given labels (the supervision) on a set of training data examples.
In more detail, the standard supervised learning method for deep neural networks consists of (i) collecting data instances (e.g. images) (ii) collecting labels for the data instances (e.g. is the image a cat or a dog) (iii) splitting the set of collected (data instance, label) into a training set, validation set and test set (iv) randomly initializing neural network parameters (iv) optimizing parameters so the network outputs the correct corresponding label given an input data instance on the training set (v) further tuning and validating on the validation and test sets.
In this section we overview methods that use variants of this process, for example initializing the neural network parameters diï¬erently or dealing with shifts between the training data and the test sets. In Section 6, we look at variants that reduce the dependence on collecting labels.
# 5.1 Transfer Learning
Through the preceding sections, weâve made references to using pretrained models. This is in fact referring to a very important method for training deep neural networks, known as transfer learning. Transfer learning is a two step process for training a deep neural network model, a pretraining step, followed by a ï¬netuning step, where the model in trained on the target task. More speciï¬cally, we take a neural network with parameters
19
randomly initialized, and ï¬rst train it on a standard, generic task â the pretraining step. For example, in image based tasks, a common pretraining task is ImageNet [42], which is an image classiï¬cation task on a large dataset of natural images. With an appropriate pretraining task that is generic and complex enough, the pretraining step allows the neural network to learn useful features, stored in its parameters, which can then be reused for the second step, ï¬netuning. In ï¬netuning, the pretrained neural network is further trained (with maybe some minor modiï¬cations to its output layer) on the true target task of interest. This process is illustrated in Figure 8. But being able to use the features it learned during pretraining often leads to boosts in performance and convergence speed of the target task, as well as needing less labelled data.
Because of these considerable beneï¬ts, transfer learning has been extraordinarily useful in many settings, particularly in computer vision [95], which had many early successful applications. As overviewed in Section 4.5.6, the recent development of models like ULMFiT [92] and especially BERT [43] has also made transfer learning extremely successful in natural language and sequential data settings, with recent work making the transfer learning process even more eï¬cient [90, 201]. Most importantly, the ready availability of standard neural network architectures pretrained on standard benchmarks through many open sourced code repositories on GitHub (examples given in Section 3) has meant that downloading and ï¬netuning a standard pretrained model has become the de-facto standard for most new deep learning applications.
Typically, performing transfer learning is an excellent way to start work on a new problem of interest. There is the beneï¬t of using a well-tested, standard neural network architecture, aside from the knowledge reuse, stability and convergence boosts oï¬ered by pretrained weights. Note however that the precise eï¬ects of transfer learning are not yet fully understood, and an active research area [116, 184, 266, 159, 143, 181, 235] looks at investigating its exact properties. For transfer learning in vision [116, 266, 112] may be of particular interest for their large scale studies and pretraining recommendations.
# 5.2 Domain Adaptation
Related to transfer learning is the task of domain adaptation. In (unsupervised) domain adaptation, we have training data and labels in a source domain, but want to develop a deep learning model that will also work on a target domain, where the data instances may look diï¬erent to those in the source domain, but the high level task is the same. For instance, our source domain many consist of images of handwritten digits (zero to nine) which we wish to classify as the correct number. But the target domain many have photographs of house numbers (from zero to nine), that we also wish to classify as the correct number. Domain adaptation techniques help build a model on the source domain that can also work (reasonably) well out-of-the-box on the shifted target domain.
The most dominant approach to domain adaptation in deep learning is to build a model that can (i) perform well on the source domain task, and (ii) learns features that are as invariant to the domain shift as possible. This is achieved through jointly optimizing for both of these goals. Returning to our example on handwritten digits and house number photographs, (i) corresponds to the standard supervised learning classiï¬cation problem of doing well on the (source) task of identifying handwritten digits correctly while (ii) is more subtle, and typically involves explicitly optimizing for the hidden layer representations of handwritten digits and house number photographs to look the same as each other â domain invariance. Some popular ways to implement this include gradient reversal [62], minimizing a distance function on the hidden representations [137], and even adversarial training [63, 211]. More recently, [219] look at using self-supervision (see Section 6) to jointly train on the source and target domains, enabling better adaptation.
Other approaches to domain adaptation include translating data instances from the source to the target domain, and bootstrapping/co-training approaches (see Section 6.2). Some of these methods are overviewed in tutorials such as Deep Domain Adaptation in Computer Vision.
20
# 5.3 Multitask Learning
In many supervised learning applications, ranging from machine translation [2] to scientiï¬c settings [187, 176], neural networks are trained in a multitask way â predicting several diï¬erent outputs for a single input. For example, in image classiï¬cation, given an input medical image, we might train the network not only to predict a disease of interest, but patient age, history of other related disease, etc. This often has beneï¬cial eï¬ects even if there is only one prediction of interest, as it provides the neural network with useful additional feedback that can guide it in learning the most important data features. (This can be so useful that sometimes auxiliary prediction targets are deï¬ned solely for this purpose.) Additionally, the prediction of multiple targets can mean that more data is available to train the model (only a subset of the data has the target labels of interest, but many more data instances have other auxiliary labels.) The most extreme version of this is to simultaneously train on two entirely diï¬erent datasets. For example, instead of performing a pretraining/ï¬netuing step, the model could be trained on both ImageNet and a medical imaging dataset at the same time.
Multitask learning is usually implemented in practice by giving the neural network multiple heads. The head of a neural network refers to its output layer, and a neural network with multiple heads has one head for each predictive task (e.g. one head for predicting age, one for predicting the disease of interest) but shares all of the other features and parameters, across these diï¬erent predictive tasks. This is where the beneï¬t of multitask learning comes from â the shared features, which comprise of most of the network, get many diï¬erent sources of feedback. Implementing multitask learning often also requires careful choice of the way to weight the training objectives for these diï¬erent tasks. A nice survey of some popular methods for multitask learning is given by https://ruder.io/multi-task/index.html#fn4, and a tutorial on some of the important considerations in http://hazyresearch.stanford.edu/multi-task-learning. One package for implementing multitask learning is found in https://github.com/SenWu/emmental and step-by-step example with code excerpts in towardsdatascience Multitask Learning: teach your AI more to make it better.
# 5.4 Weak Supervision (Distant Supervision)
Suppose it is very diï¬cult to collect high quality labels for the target task of interest, and neither is there an existing, standard, related dataset and corresponding pretrained model to perform transfer learning from. How might one provide the deep learning model with enough supervision during the training process? While high quality labels might be hard to obtain, noisy labels might be relatively easy to collect. Weak supervision refers to the method of training a model on a dataset with these noisy labels (typically for future ï¬netuning), where the noisy labels are often generated in an automatic process.
In computer vision (image based) tasks, some examples are: taking an image level label (for classiï¬cation) and automatically inferring pixel level labels for segmentation [171], clustering hidden representations computed by a pretrained network as pseudo-labels [255], or taking Instagram tags as labels [143] for pretraining. In language tasks, examples are given by [153, 89, 264], which provide noisy supervision by assuming all sentences mentioning two entities of interest express a particular relation (also known as distant supervision). A nice overview of weak supervision and its connection to other areas is given in https: //hazyresearch.github.io/snorkel/blog/ws_blog_post.html, with a related post looking speciï¬cally at medical and scientiï¬c applications http://hazyresearch.stanford.edu/ws4science.
# 5.5 Section Summary
In this section, we have overviewed some of the central supervised learning based methodologies for developing deep learning models. This is just a sampling of the broad collection of existing methods, and again, we hope that the descriptions and references will help facilitate further exploration of other approaches. One method not covered that might be of particular interest is multimodal learning, where neural networks are
21
Training 180° Unlabelled Data Labels from Pretext ee Instances Task (Rotation) ©0000 270° 180° 0° lita *. [ioe fe ©0000
Figure 9: Training neural networks with Self-Supervision. The ï¬gure illustrates one example of a self- supervision setup. In self-supervision, we typically have a collection of unlabelled data instances, in this case images. We deï¬ne a pretext task, that will automatically generate labels for the data instances. In this case, the pretext task is rotation â we randomly rotate the images by some amount and label them by the degree of rotation. During training, the neural network is given this rotated image and must predict the degree of rotation. Doing so also requires the neural network learn useful hidden representations of the image data in general, so after training with self-supervision, this neural network can then be successfully and eï¬ciently ï¬netuned on a downstream task.
simultaneously trained on data from diï¬erent modalities, such as images and text [139, 238, 102]. Multimodal learning also provides a good example of the fact that it is often diï¬cult to precisely categorize deep learning techniques as only being useful for a speciï¬c task or training regime. For example, we looked at language modelling for sequence tasks in this supervised learning section, but language modelling is also an example of self-supervision (Section 6) and generative models (Section 8.1). There are many rich combinations of the outlined methods in both this section and subsequent sections, which can prove very useful in the development of an end to end system.
# 6 Doing More with Less Data
Supervised learning methods, and speciï¬c variants such as transfer learning and multitask learning have been highly successful in training deep neural network models. However, a signiï¬cant limitation to their use, and thus the use of deep learning, is the dependence on large amounts of labelled data. In many specialized domains, such as medicine, collecting a large number of high quality, reliable labels can be prohibitively expensive.
Luckily, in just the past few years, weâve seen remarkable advances in methods that reduce this dependence, particularly self-supervision and semi-supervised learning. These approaches still follow the paradigm of training a neural network to map raw data instances to a speciï¬ed label, but critically, these labels are not collected separately, but automatically deï¬ned via a pretext task. For example, we might take a dataset of images, rotate some of them, and then deï¬ne the label as the degree of rotation, which is the prediction target for the neural network. This enables the use of unlabelled data in training the deep neural network. In
22
this section, we cover both self-supervision and semi-supervised learning as well as other methods such as data augmentation and denoising, all of which enable us to do more with less data.
# 6.1 Self-Supervised Learning
In self-supervision, a pretext task is deï¬ned such that labels can be automatically calculated directly from the raw data instances. For example, on images, we could rotate the image by some amount, label it by how much it was rotated, and train a neural network to predict the degree of rotation [66] â this setup is illustrated in Figure 9. This pretext task is deï¬ned without needing any labelling eï¬ort, but can be used to teach the network good representations. These representations can then be used as is or maybe with a little additional data for downstream problems. Arguably the biggest success of self-supervision has been language modelling for sequential data and speciï¬cally natural language problems, which we overviewed in Section 4.5.1. Below we outline some of the most popular and successful self-supervision examples for both image and sequential data. (A comprehensive list of self-supervision methods can also be found on this page https://github.com/jason718/awesome-self-supervised-learning.)
# 6.1.1 Self-Supervised Learning for Images
A recent, popular and simple self-supervised task for images is to predict image rotations [66]. Each image instance is transformed with one of four possible rotations and the deep learning model must classify the rotation correctly. Despite its simplicity, multiple studies have shown its success in learning good representations [266, 265, 112]. Another popular method examined in those studies is exemplar [51], which proposes a self-supervision task relying on invariance to image transformations. For example, we might take a source image of a cat, and perform a sequence of transformations, such as rotation, adjusting contrast, ï¬ipping the image horizontally, etc. We get multiple images of the cat by choosing many such sequences, and train the neural network to recognize these all as the same image.
Other methods look at using image patches as context to learn about the global image structure and important features. For example, [48] deï¬nes a pretext task where the relative locations of pairs of image patches must be determined, while [161] teaches a neural network to solve jigsaw puzzles. This latter task has been shown to be eï¬ective at large scales [69], with nice implementations and benchmarking provided by https://github.com/facebookresearch/fair_self_supervision_benchmark. A recent line of work has looked at using mutual information inspired metrics as a way to provide supervision on the relatedness of diï¬erent image patches [84, 169, 7, 154], but these may be more intricate to implement. Many of these mutual information based metrics also rely on contrastive losses [30], which, at a high level, provides supervision to the network by making representations of a pair of similar inputs more similar than representations of a pair of diï¬erent inputs. Very recently, a new self-supervision method, SimCLR [29], uses this to achieve high performance (one implementation at https://github.com/sthalles/SimCLR.)
Note that some of the image registration examples given in Section 4.3.5 are also examples of self- supervised learning, where some kind of domain speciï¬c similarity function can be automatically computed to assess the quality of the output. Such approaches may be relevant to other domains, and are useful to explore. A great set of open-sourced implementations of many of self-supervision methods is provided by https://github.com/google/revisiting-self-supervised.
# 6.1.2 Self-Supervised Learning for Sequential (Natural Language) Data
While research on self-supervision techniques for images has been extremely active, the strongest successes of this framework have arguably been with sequential data, particularly text and natural language. The sequential structure immediately gives rise to eï¬ective self-supervision pretext tasks. Two dominant classes of pretext tasks operate by either (i) using neighboring tokens of the sequence as input context for predicting a
23
target token (ii) taking in all tokens up to a particular position and predicting the next token. The latter of these is language modelling, which was overviewed in Section 4.5.1. The former is the principle behind word embeddings.
Word embeddings have been critical to solving many natural language problems. Before the recent successes of full ï¬edged transfer learning in language (Section 5.1) this simple self-supervised paradigm was where knowledge reuse was concentrated, and formed a highly important component of any deep learning system for natural language (sequential) data. From a scientiï¬c perspective, learning word embeddings for sequential data has the potential to identify previously unknown similarities in the data instances. It has already found interesting uses in aiding with the automatic analysis of scientiï¬c texts, such as drug name recognition systems [131], biomedical named entity recognition [73], identifying important concepts in materials science [230] and even detecting chemical-protein interactions [37].
The key fundamental ideas of word embeddings are captured in the word2vec framework [152, 151], the original framework relying on either a Continuous-Bag-of-Words (CBOW) neural network or a Skip-Gram neural network. Actually, both of these models are less neural networks and more two simple matrix multiplications, with the ï¬rst matrix acting as a projection, and giving the desired embedding. In CBOW, the context â deï¬ned as the neighborhood words â are input, and the model must correctly identify the target output word. In Skip-Gram, this is reversed, with the center word being input, and the context being predicted. For example, given a sentence "There is a cat on the roof", with the target word being cat, CBOW would take in the vector representations of (There, is, a, on, the, roof) and output "cat", while Skip-Gram would roughly swap the inputs and outputs. The simplicity of these methods may make them more suitable for many tasks compared to language modelling. Two nice overviews of the these methods are given by Introduction to Word Embeddings and word2vec, and https://ruder.io/word-embeddings-1/. Other embedding methods include [173, 123].
# 6.1.3 Self-Supervision Summary
In this section we have outlined many of the interesting developments in self-supervised learning, a very successful way to make use of unlabelled data to learn meaningful representations, either for analysis or other downstream tasks. Self-supervision can be eï¬ectively used along with other techniques. For example, in the language modelling application, we saw it used for transfer learning (Section 5.1), where a deep learning model is ï¬rst pretrained using the language modelling self supervision objective, and then ï¬netuned on the target task of interest. In the following section, we will other ways of combining self-supervision with labelled data.
# 6.2 Semi-Supervised Learning
While collecting large labelled datasets can be prohibitively expensive, it is often possible to collect a smaller amount of labelled data. When assembling a brand new dataset, a typical situation is having a small amount of labelled data and a (sometimes signiï¬cantly) larger number of data instances with no labels. Semi-supervised learning looks at precisely this setting, proposing techniques that enable eï¬ective learning on labelled and unlabelled data. Below we overview some of the popular methods for semi-supervised learning.
# 6.2.1 Self-Supervision with Semi-Supervised Learning
Following on from the previous section, one natural way to make use of the unlabelled data is to use a self-supervised pretext task. To combine this with the labelled data, we can design a neural network that has two diï¬erent outputs heads (exactly as in multitask learning, see Section 5.3), with one output head being used for the labelled data, and the other for the self-supervised objective on the unlabelled data. Importantly,
24
this means that the features learned by the neural network are shared between the labelled and unlabelled data, leading to better representations. This simple approach has been shown to be very eï¬ective [266, 265].
# 6.2.2 Self-Training (Bootstrapping)
Self-training, sometimes also referred to as bootstrapping or pseudo-labels, is an iterative method where a deep neural network is ï¬rst developed in a supervised fashion on the labelled data. This neural network is then used to provide (pseudo) labels to the unlabelled data, which can then be used in conjunction with the labelled data to train a new, more accurate neural network. This approach often works well and can even be repeated to get further improvements. There are a couple of common details in implementation â often when adding the neural network pseudo-labelled data, we only keep the most conï¬dently pseudo-labelled examples. These pseudo-labelled examples may also be used for training with a diï¬erent objective function compared to the labelled data. One of the early papers proposing this method was [120], with a more recent paper [252] demonstrating signiï¬cant successes at large scale. Other variants, including mean teacher [225], temporal ensembling [119] and the recent MixMatch [19] also primarily use the self-training approach, but incorporate elements of consistency (see below). There are nice open sourced implementations of these methods, such as https://github.com/CuriousAI/mean-teacher for mean teacher and https://github.com/google- research/mixmatch and https://github.com/YU1ut/MixMatch-pytorch for MixMatch.
# 6.2.3 Enforcing Consistency (Smoothness)
An important theme in many semi-supervised methods has been to provide supervision on the unlabelled data through enforcing consistency. If a human was given two images A and B, where B was a slightly perturbed version of A (maybe blurred, maybe some pixels obscured or blacked out), they would give these images the same label â consistency. We can also apply this principle to provide feedback to our neural network on the unlabelled data, combining it with the labelled data predictions as in multitask learning (Section 5.3) to form a semi-supervised learning algorithm. A popular method on enforcing consistency is virtual adversarial training [155], which enforces consistency across carefully chosen image perturbations. Another paper, unsupervised data augmentation [251], uses standard data augmentation techniques such as cutout [44] for images and back translation for text [206] to perturb images and enforces consistency across them. [265] uses consistency constraints along with other semi-supervised and self-supervised techniques in its full algorithm.
# 6.2.4 Co-training
Another way to provide feedback on unlabelled data is to train two (many) neural network models, each on a diï¬erent view of the raw data. For example, with text data, each model might see a diï¬erent part of the input sentence. These models can then be given feedback to be maximally consistent with each other, or with a diï¬erent model which sees all of the data, or even used for self-training, with each diï¬erent model providing pseudo labels on the instances it is most conï¬dent on. This post https://ruder.io/semi-supervised/ gives a nice overview of diï¬erent co-training schemes, and [34, 179, 74] are some recent papers implementing this in text and images.
# 6.2.5 Semi-Supervised Learning Summary
Semi-supervised learning is a powerful way to reduce the need for labelled data and can signiï¬cantly boost the eï¬cacy of deep learning models. Semi-supervised learning can be applied in any situation where a meaningful task can be created on the unlabelled data. In this section we have overviewed some natural ways to deï¬ne such tasks, but there may be many creative alternatives depending on the domain of interest. We hope the references will provide a helpful starting point for implementation and further exploration!
25
cat: 1.0 dog: 1.0 cat: 0.4 @ dog: 0.6
Figure 10: An illustration of the Mixup data augmentation technique. Image source [40] The ï¬gure provides an example of the Mixup data augmentation method â an image of a cat and an image of a dog are linearly combined, with 0.4 weight on the cat and 0.6 weight on the dog, to give a new input image shown in the bottom with a smoothed label of 0.4 weight on cat and 0.6 weight on dog. Mixup has been a very popular and successful data augmentation method for image tasks.
# 6.3 Data Augmentation
As depicted in Figure 1, data augmentation is an important part of the deep learning workï¬ow. Data augmentation refers to the process of artiï¬cially increasing the size and diversity of the training data by applying a variety of transformations to the raw data instances. For example, if the raw instances were to consist of images, we might artiï¬cially pad out the image borders and then perform an oï¬ center (random) crop to give us the ï¬nal augmented image instance. Aside from increasing the size and diversity of the data, data augmentation oï¬ers the additional beneï¬t of encouraging the neural network to be robust to certain kinds of common transformations of data instances. In this section, we overview some of the most popular data augmentation techniques for image and sequential data. These techniques will typically already be part of many open sourced deep learning pipelines, or easy to invoke in any mainstream deep learning software package. There are also some speciï¬c libraries written for augmentations, for example imgaug https://github.com/aleju/imgaug, nlpaug https://github.com/makcedward/nlpaug and albumentations https://github.com/albumentations-team/albumentations.
# 6.3.1 Data Augmentation for Image Data
Simple augmentations for image data consider transformations such as horizontal ï¬ips or random crops (padding the image borders and taking an oï¬ center crop.) Inspired by these simple methods are two very successful image augmentation strategies, cutout [44], which removes a patch from the input image, and RICAP [222], which combines patches from four diï¬erent input image to create a new image (with new label a combination of the original labels.) This somewhat surprising latter technique of combining images has in fact shown to be very successful in mixup [268], another data augmentation strategy where linear combinations of images (instead of patches) are used. (This strategy has also been combined with cutout in the recently proposed cutmix augmentation strategy [261], with code https://github.com/clovaai/CutMix-PyTorch.)
Other useful augmentation strategies include TANDA [188] which learns a model to compose data augmentations, the related randaugment [38], choosing a random subset of diï¬erent possible augmentations, population based augmentation [85] which randomly searches over diï¬erent augmentation policies, [91]
26
applying color distortions to the image and the recently proposed augmix [82] (code https://github.com/ google-research/augmix.)
# 6.3.2 Data Augmentation for Sequence Data
Data augmentation for sequential data typically falls into either (i) directly modifying the input sequence, or (ii) in the case of sequence to sequence tasks (Section 4.5.2), increasing the number of input-output sequences through noisy translation with the neural network. When directly modifying the input sequence, common perturbations include randomly deleting a sequence token (comparable to the masking approach used in [43]), swapping sets of sequence tokens, and replacing a token with its synonym. This latter strategy is usually guided by word embeddings [239] or contextualized word embeddings [110]. Examples of combining these transformations are given by [243, 98], with code repositories such as https://github.com/makcedward/ nlpaug providing some simple implementations.
The other dominant approach to data augmentation of sequences is using sequence-to-sequence models to generate new data instances, known as back-translation [206, 54]. Concretely, suppose we have a model to translate from English sequences to German sequences. We can take the output German sequence and use existing tools/noisy heuristics to translate it back to English. This gives us an additional English-German sequence pair.
# 6.4 Data (Image) Denoising
When measuring and collecting high dimensional data, noise can easily be introduced to the raw instances, be they images or single-cell data. As a result there has been signiï¬cant interest and development of deep learning techniques to denoise the data. Many of these recent methods work even without paired noisy and clean data samples, and many be applicable in a broad range of settings. For instance, Noise2Noise [122] uses a U-net neural network architecture to denoise images given multiple noisy copies. The recent Noise2Self [14] (with code: https://github.com/czbiohub/noise2self) frames denoising as a self-supervision problem, using diï¬erent subsets of features (with assumed independent noise properties) to perform denoising, applying it to both images as well as other high dimensional data.
# Interpretability, Model Inspection and Representation Analysis
Many standard applications of deep learning (and machine learning more broadly) focus on prediction â learning to output speciï¬c target values given an input. Scientiï¬c applications, on the other hand, are often focused on understanding â identifying underlying mechanisms giving rise to observed patterns in the data. When applying deep learning in scientiï¬c settings, we can use these observed phenomena as prediction targets, but the ultimate goal remains to understand what attributes give rise to these observations. For example, the core scientiï¬c question might be on how certain amino acid sequences (encoding a protein) give rise to particular kinds of protein function. While we might frame this as a prediction problem, training a deep neural network to take as input an amino acid sequence and output the predicted properties of the protein, we would ideally like to understand how that amino acid sequence resulted in the observed protein function.
To answer these kinds of questions, we can turn to interpretability techniques. Interpretability methods are sometimes equated to a fully understandable, step-by-step explanation of the modelâs decision process. Such detailed insights can often be intractable, especially for complex deep neural network models. Instead, research in interpretability focuses on a much broader suite of techniques that can provide insights ranging from (rough) feature attributions â determining what input features matter the most, to model inspection â determining what causes certain neurons in the network to ï¬re. In fact, these two examples also provide a rough split in the type of interpretability method.
27
âSmoothGrad Gradient
Figure 11: The output of SmoothGrad, a type of saliency map. Image source [215] The ï¬gure shows the original input image (left), raw gradients (middle), which are often too noisy for reliable feature attributions, and SmoothGrad (right), a type of saliency map that averages over perturbations to produce a more coherent feature attribution visualization the input. In particular, we can clearly see that the monument in the picture is important for the model output.
One large set of methods (which we refer to as Feature Attribution and Per Example Interpretability) concentrates on taking a speciï¬c input along with a trained deep neural network, and determining what features of the input are most important. The other broad class of techniques looks at taking a trained model, and a set of inputs, to determine what diï¬erent parts of the network have learned (referred to as Model Inspection and Representational Analysis). This latter set of methods can be very useful in revealing important, hidden patterns in the data that the model has implicitly learned through being trained on the predictive task. For example, in [118], which looks at machine translation, representation analysis techniques are used to illustrate latent linguistic structure learned by the model. We overview both sets of methods below.
# 7.1 Feature Attribution and Per Example Interpretability
We start oï¬ by overviewing some of the popular techniques used to provide feature attribution at a per example level, answering questions such as which parts of an input image are most important for a particular model prediction. These techniques can be further subcategorized as follows:
# 7.1.1 Saliency Maps and Input Masks
At a high level, saliency maps take the gradient of the output prediction with respect to the input. This gives a mask over the input, highlighting which regions have large gradients (most important for the prediction) and which have smaller gradients. First introduced by [213], there are many variants of saliency maps, such as Grad-CAM [204], SmoothGrad [215], IntGrad [220], which make the resulting feature attributions more robust. These and other methods are implemented in https://github.com/PAIR-code/saliency. Note that while these methods can be extremely useful, their predictions are not perfect [105], and must be validated further.
Closely related to these saliency methods is [166], which provides the ability to inspect the kinds of features causing neurons across diï¬erent hidden layers to ï¬re. The full, interactive paper can be read at https://distill.pub/2018/building-blocks/ with code and tutorials available at https://github.com/ tensorflow/lucid.
Many other techniques look at computing some kind of input mask, several of them using deconvolutional layers, ï¬rst proposed by [263] and built on by [106] and [20]. Other work looks at directly optimizing to ï¬nd a sparse mask that will highlight the most important input features [59] (with associated code https://github.com/jacobgil/pytorch-explain-black-box) or ï¬nding such a mask through an iterative
28
Figure 12: Visualization of the kinds of features hidden neurons have learned to detect. Image source [165] This ï¬gure, from [165], illustrates the result of optimizing inputs to show what features hidden neurons have learned to recognize. In this example, the hidden neuron has learned to detect (especially) soccer balls, tennis balls, baseballs, and even the legs of soccer players.
algorithm [25].
# 7.1.2 Feature Ablations and Perturbations
Related to some of masking approaches above, but with enough diï¬erences to categorize separately are several methods that isolate the crucial features of the input either by performing feature ablations or computing perturbations of the input and using these perturbations along with the original input to inform the importance of diï¬erent features.
Arguably the most well known of the ablation based approaches is the notion of a Shapely value, ï¬rst introduced in [207]. This estimates the importance of a particular feature x0 in the input by computing the predictive power of a subset of input features containing x0 and averaging over all possible such subsets. While Shapely values may be expensive to compute naively for deep learning, follow on work [140] has proposed more eï¬cient (and expressive) variants, with highly popular opensourced implementation: https://github.com/slundberg/shap.
The shap opensourced implementation above also uniï¬es some related approaches that use perturbations to estimate feature values. One such approach is LIME [194], which uses multiple local perturbations to enable learning an interpretable local model. Another is DeepLIFT, which uses a reference input to compare activation diï¬erences [210], and yet another approach, Layer-wise Relevance Propagation [6] looks at computing relevance scores in a layerwise manner.
Other work performing ablations to estimate feature importance includes [275] (with code https:// github.com/lmzintgraf/DeepVis-PredDiff), while [59], described in Section 7.1.1 has elements of using input perturbations.
# 7.2 Model Inspection and Representational Analysis
In this second class of interpretability methods, the focus is on gaining insights not at a single input example level, but using a set of examples (sometimes implicitly through the trained network) to understand the salient properties of the data. We overview some diï¬erent approaches below.
# 7.2.1 Probing and Activating Hidden Neurons
A large class of interpretability methods looks at either (i) probing hidden neurons in the neural network â understanding what kinds of inputs it activates for (ii) directly optimizing the input to activate a hidden neuron. Both of these techniques can provide useful insights into what the neural network has chosen to pay attention to, which in turn corresponds to important properties of the data.
29
be 7 mr ica* ° ; mg Indo-Aryan Slavic Turkic 002 4 * magsâ | Dravidian a e an si = sd * ° Ps hy bp (we = oor + F 4 . gis e 5 Baltic F e mk Ww t aa eo ee oe ge em ri e th r wy ee a e hr si SR Uralic oi o4 e ey e e eâ Celtic , © ge i © sq_20 et ee 10 eve 7 . wf? ce Niger-Congo ve, ® ht nao - at » e BU 80, oor + eg yeâ e eee iw e hal e ee coeâ ms ef w e e ea ® en mg 1 . ceb ame mi Germanic oot oF eoâ ny aw Romance A ustronesiar o Austronesian t + +t t i 0.02 0.01 ° 0.01 0.02
Figure 13: Clustering neural network hidden representations to reveal linguistic structures. Image source [118] In work on analyzing multilingual translation systems [118], representational analysis techniques are used to compute similarity of neural network (Transformer) hidden representations across diï¬erent languages. Performing clustering on the result reveals grouping of diï¬erent language representations (each language a point on the plot) according to language families, which aï¬ect linguistic structure. Importantly, this analysis uses the neural network to identify key properties of the underlying data, a mode of investigation that might be very useful in scientiï¬c domains.
Several papers falls into the probing category [259, 272], with an especially thorough study given by Network Dissection [17]. Here here hidden neurons are categorized by the kinds of features they respond to. The paper website http://netdissect.csail.mit.edu/ contains method details as well as links to the code and data.
The other broad category of methods take a neural network, ï¬x its parameters, and optimize the input to ï¬nd the kinds of features that makes some hidden neuron activate. There are several papers using this approach, but of particular note is Feature Visualization [165], with an interactive article and code at: https://distill.pub/2017/feature-visualization/. Followup work, Activation Atlases [26] (with page https://distill.pub/2019/activation-atlas/), does this across many diï¬erent concepts, providing a full mapping of the features learned by the neural network. More recently [164] has used this as a building block to further understand how certain computations are performed in a neural network. Also related is [104], which looks at ï¬nding linear combinations of hidden neurons that correspond to interpretable concepts.
# 7.2.2 Dimensionality Reduction on Neural Network Hidden Representations
In many standard scientiï¬c settings, e.g. analyzing single cell data, dimensionality reduction methods such as PCA, t-SNE [141], UMAP [148] are very useful in revealing important factors of variation and critical diï¬erences in the data subpopulations e.g. tumor cells vs healthy cells. Such methods can also be used on the hidden activations (over some input dataset) of a neural network. Through the process of being trained on some predictive task, the neural network may implicitly learn these important data attributes in its hidden representations, which can then be extracted through dimensionality reduction methods.
30
# 7.2.3 Representational Comparisons and Similarity
Related to more standard approaches of dimensionality reduction and clustering, a line of work has studied comparing hidden representations across diï¬erent neural network models. Early work applied matching algorithms [126] with follow on approaches using canonical correlation analysis [183, 157] (with associated code https://github.com/google/svcca.) This latter approach has been used to identify and understand many representational properties in natural language applications [118, 16, 235] and even in modelling the mouse visual cortex as an artiï¬cial neural network [208]. Another recent technique uses a kernel based approach to perform similarity comparisons [115] (with code https://colab.sandbox.google.com/github/ google-research/google-research/blob/master/representation_similarity/Demo.ipynb.)
# 7.3 Technical References
The preceding sections contain many useful pointers to techniques and associated open sourced code references. One additional reference of general interest may be https://christophm.github.io/interpretable-ml- book/ a fully open sourced book on interpretable machine learning. This focuses slightly more on more traditional interpretability methods, but has useful overlap with some of the techniques presented above and may suggest promising open directions.
# 8 Advanced Deep Learning Methods
The methods and tasks overviewed in the survey so far â supervised learning, fundamental neural network architectures (and their many diï¬erent tasks), diï¬erent paradigms like transfer learning as well as ways to reduce labelled data dependence such as self-supervision and semi-supervised learning â are an excellent set of ï¬rst approaches for any problem amenable to deep learning. In most such problems, these approaches will also suï¬ce in ï¬nding a good solution.
Occasionally however, it might be useful to turn to more advanced methods in deep learning, speciï¬cally generative models and reinforcement learning. We term these methods advanced as they are often more intricate to implement, and may require speciï¬c properties of the problem to be useful, for example an excellent environment model/simulator for reinforcement learning. We provide a brief overview of these methods below.
# 8.1 Generative Models
At a high level, generative modelling has two fundamental goals. Firstly, it seeks to model and enable sampling from high dimensional data distributions, such as natural images. Secondly, it looks to learn low(er) dimensional latent encodings of the data that capture key properties of interest.
To achieve the ï¬rst goal, generative models take samples of the high dimensional distribution as input, for example, images of human faces, and learn some task directly on these data instances (e.g. encoding and then decoding the instance or learning to generate synthetic instances indistinguishable from the given data samples or generating values per-pixel using neighboring pixels as context). If generative modelling achieved perfect success at this ï¬rst goal, it would make it possible to continuously sample âfreeâ data instances! Such perfect success is extremely challenging, but the past few years has seen enormous progress in the diversity and ï¬delity of samples from the data distribution.
For the second goal, learning latent encodings of the data with diï¬erent encoding dimensions correspond to meaningful factors of variation, having an explicit encoder-decoder structure in the model can be helpful in encouraging learning such representations. This is the default structure for certain kinds of generative models
31
Figure 14: Human faces generated from scratch by StyleGAN2. Image source [100] The ï¬gure shows multiple human face samples from StyleGAN2 [100]. While perfectly modelling and capture full diversity of complex data distributions like human faces remains challenging, the quality and ï¬delity of samples from recent generative models is very high.
such as variational autoencoders [109] but has also been adopted into other models, such as BigBiGAN [49], a type of generative adversarial network. In the following sections we overview some of these main types of generative models.
# 8.1.1 Generative Adversarial Networks
Arguably the most well known of all diï¬erent types of generative models, Generative Adversarial Networks, commonly known as GANs, consist of two neural networks, a generator and a discriminator, which are pitted in a game against each other. The generator takes as input a random noise vector and tries to output samples that look like the data distribution (e.g. synthesize images of peoples faces), while the discriminator tries to distinguish between true samples of the data, and those synthesized by the generator. First proposed in [68], GANs have been an exceptionally popular research area, with the most recent variations, such as BigGAN [22] (code: https://github.com/ajbrock/BigGAN-PyTorch), BigBiGAN [49] and StyleGAN(2) [100] (code: https://github.com/NVlabs/stylegan2) able to generate incredibly realistic images.
Unconditional GANs vs Conditional GANs The examples given above are all unconditional GANs, where the data is generated with only a random noise vector as input. A popular and highly useful variant are conditional GANs, where generation is conditioned on additional information, such as a label, or a âsourceâ image, which might be translated to a diï¬erent style. Examples include pix2pix [96] (code: https://phillipi.github.io/pix2pix/), cycleGAN [274], and applications of these to videos [27].
GANs have found many scientiï¬c applications, from performing data augmentation in medical image settings [65] to protein generation [193]. The âadversarialâ loss objective of GANs can make them somewhat tricky to train, and useful implementation advice is given in https://www.fast.ai/2019/05/03/decrappify/, and (for conditional GANs) is included in https://github.com/jantic/DeOldify.
# 8.1.2 Variational Autoencoders
Another type of generative model is given by the variational autoencoder, ï¬rst proposed by [109]. VAEs have an encoder decoder structure, and thus an explicit latent encoding, which can capture useful properties of the data distribution. They also enable estimation of the likelihood of a sampled datapoint â the probability of its occurrence in the data distribution. VAEs have also been extremely popular, with many variations and extensions proposed [216, 99, 107, 71]. Because of the explicit latent encoding and the ability to estimate likelihoods, they have also found use cases in various scientiï¬c settings, such as for modelling gene expression in single-cell RNA sequencing [138].
32
# 8.1.3 Autoregressive Models
Yet another type of generative model is autoregressive models, which take in inputs sequentially and use those to generate an appropriate output. For instance, such models may take in a sequence of pixel values (some of them generated at a previous timestep) and use these to generate a new pixel value for a speciï¬c spatial location. Autoregressive models such as PixelRNN [168], PixelCNN (and variants) [232, 200] and the recently proposed VQ-VAE(2) [189] (code: https://github.com/rosinality/vq-vae-2-pytorch) oï¬er very high generation quality.
# 8.1.4 Flow Models
A relatively new class of generative models, ï¬ow models, looks at performing generation using a sequence of invertible transformations, which enables the computation of exact likelihoods. First proposed in [46, 47], performing an expressive but tractable sequence of invertible transformations is an active area of research [108, 86]. A nice introduction to normalizing ï¬ows is given in this short video tutorial https: //www.youtube.com/watch?v=i7LjDvsLWCg&feature=youtu.be.
# 8.2 Reinforcement Learning
Reinforcement learning has quite a diï¬erent framing to the techniques and methods introduced so far, aiming to solve the sequential decision making problem. It is typically introduced with the notions of an environment and an agent. The agent can take a sequence of actions in the environment, each of which aï¬ect the environment state in some way, and also result in possible rewards (feedback) â âpositiveâ for good sequences of actions resulting in a âgoodâ state and ânegativeâ for bad sequences of actions leading to a âbadâ state. For example, in a game like chess, the state is the current position of all pieces in play (the game state), an action the moving of a piece, with a good sequence of actions resulting in a win, a bad sequence of actions in a loss and the reward might be one or zero depending on having a win or loss respectively.
With this being the setup, the goal of reinforcement learning is to learn, through interaction with the environment, good sequences of actions (typically referred to as a policy). Unlike supervised learning, feedback (the reward) is typically given only after performing the entire sequence of actions. Speciï¬cally, feedback is sparse and time delayed. There are a variety of diï¬erent reinforcement learning use cases depending on the speciï¬cs of the problem.
# 8.2.1 RL with an Environment Model/Simulator
Some of the most striking results with RL, such as AlphaGoZero [212], critically use an environment model/simulator. In such a setting, a variety of learning algorithms [242, 203, 128] (some code: https: //github.com/openai/baselines) can help the agent learn a good sequence of actions, often through simultaneously learning a value function â a function that determines whether a particular environment state is beneï¬cial or not. Because the beneï¬t of an environment state may depend on the entire sequence of actions (some still in the future), RL is very important in properly assessing the value of the environment state, through implicitly accounting for possible future actions. Combining value functions with traditional search algorithms has been a very powerful way to use RL, and may be broadly applicable to many domains.
Speciï¬cally, if developing a solution to the problem is multistep in nature, with even a noisy validation possible in simulation, using RL to learn a good value function and combining that with search algorithms may lead to discovering new and more eï¬ective parts of the search space. Approaches like these have gained traction in considering RL applications to fundamental problems in both computer systems, with [144] providing a survey and a new benchmark, and machine learning systems [174], in designing task-speciï¬c neural network models. The latter has recently also resulted in scientiï¬c use cases â designing neural
33
networks to emulate complex processes across astronomy, chemistry, physics, climate modelling and others [101].
# 8.2.2 RL without Simulators
In other settings, we donât have access to an environment model/simulator, and may simply have records of sequences of actions (and the ensuing states and rewards). This is the oï¬ine setting. In this case, we may still try to teach an agent a good policy, using the observed sequences of actions/states/rewards in conjunction with oï¬-policy methods [209, 182, 134], but thorough validation and evaluation can be challenging. Evaluation in oï¬-policy settings often uses a statistical technique known as oï¬-policy policy evaluation (example algorithms include [178, 133]). In robotics, reinforcement learning literature has looked at performing transfer learning between policies learned in simulation and policies learned on real data [198]. A thorough overview of deep reinforcement learning is given in http://rail.eecs.berkeley.edu/deeprlcourse/.
# Implementation Tips
In this section, we highlight some useful tips for implementing these models.
Explore Your Data Before starting with steps in the learning phase (see Figure 1), make sure to perform a thorough exploration of your data. What are the results of simple dimensionality reduction methods or clustering? Are the labels reliable? Is there imbalance amongst diï¬erent classes? Are diï¬erent subpopulations appropriately represented?
Try Simple Methods When starting oï¬ with a completely new problem, it is useful to try the simplest version possible. (It might even be worthwhile starting with no learning at all â how does the naive majority baseline perform? For datasets with large imbalances, it may be quite strong!) If the dataset is very large, is there some smaller subsampled/downscaled version that can be used for faster preliminary testing? What is the simplest model that might work well? How does a majority baseline perform? (This ties in settings where the data has class imbalance.) Does the model (as expected) overï¬t to very small subsets of the data?
Where possible, start with well tested models/tasks/methods With the plethora of standard models (many of them pretrained), data augmentation, and optimization methods readily available (Section 3), most new problems will be amenable to some standard set of these choices. Start with this! Debugging the dataset and objective function associated with a new problem at the same time as debugging the neural network model, task choice, optimization algorithm, etc is very challenging.
Additionally, many of the standard model/task/method choices are very well benchmarked, and exploring performance in these settings is an excellent ï¬rst step in understanding the inherent challenges of the new problem. Wherever possible, the easiest way to get starting with the learning phase is to clone an appropriate github repository that has the models and training code needed, and make the minimal edits needed to work with the new dataset and objective function.
First Steps in Debugging Poor Performance Having put together an end-to-end system, you observe that it is not performing well on the validation data. What is the reason? Before getting into more subtle design questions on hyperparameter choice (below), some ï¬rst things to look at might be (i) Is the model overï¬tting? If so, more regularization, data augmentation, early stopping, smaller model may help. (ii) Is there a distribution shift between the training and validation data? (iii) Is the model underï¬tting? If so, check the optimization process by seeing if the model overï¬ts when trained on a smaller subset of the training
34
data. Test out a simpler task. Check for noise in the labels or data instances and for distribution shift. (iv) Look at the instances on which the model makes errors. Is there some pattern? For imbalanced datasets, loss function reweighting or more augmentation on the rarer classes can help. (v) How stable is the model performance across multiple random reruns? (vi) What are gradient and intermediate representation norms through the training process?
Which hyperparameters matter most? A big challenge in improving deep learning performance is the multitude of hyperparameters it is possible to change. In practice, some of the simplest hyperparameters often aï¬ect performance the most, such as learning rate and learning rate schedule. Picking an optimizer with subtleties such as weight decay correctly implemented can also be very important, see this excellent article on a very popular optimizer, AdamW https://www.fast.ai/2018/07/02/adam-weight-decay/. It might also be very useful to visualize the contributions to total loss from the main objective function vs diï¬erent regularizers such as weight decay.
Other hyperparameters that can be explored include batch size and data preprocessing, though if standard setups are used for these, varying learning rate related hyperparameters is likely to be the ï¬rst most useful aspect to explore. To test diï¬erent hyperparameter settings, it can be very useful to cross-validate: hold out a portion of the training data, train diï¬erent hyperparameter settings on the remaining data, pick whichever hyperparameter setting does best when evaluated on the held out data, and then ï¬nally retrain that hyperparameter setting on the full training dataset.
Validate your model thoroughly! Deep learning models are notorious for relying on spurious correlations in the data to perform their predictions [8, 162, 247]. By spurious correlation, we mean features in the data instances that happen to co-occur with a speciï¬c label, but will not result in a robust, generalizable model. For example, suppose we have data from diï¬erent chest x-ray machines (corresponding to diï¬erent hospitals) that we put together to train a deep learning model. It might be the case that one of these machines, so happens to scan many sick patients. The deep learning model might then implicitly learn about the chest x-ray machine instead of the features of the illness. One of the best tests for ensuring the model is learning in a generalizable way is to evaluate the model on data collected separately from the training data, which will introduce some natural distribution shift and provide a more robust estimate of its accuracy. Some recent interesting papers exploring these questions include [81, 190].
Relatedly, deep neural networks will also pick up on any biases in the data, for example, learning to pay attention to gender (a sensitive attribute) when made to predict age due to class imbalances leading to spurious correlations. This can pose signiï¬cant challenges for generalizable conclusions in scientiï¬c settings where data may be collected from one population, but the predictions must be accurate across all populations. It is therefore important to perform postprocessing analysis on the model representations to identify the presence of such biases. A line of active research studies how to debias these representations [4, 241].
Implementation References Some of the general design considerations when coming to implementation (along with factors aï¬ecting larger scale deployment, not explored in this survey) are discussed in this overview https://github.com/chiphuyen/machine-learning-systems-design/blob/master/build/build1/consolidated.pdf.
For speciï¬c points on training and debugging deep learning systems, two excellent guides are given by http://josh-tobin.com/assets/pdf/troubleshooting-deep-neural-networks-01-19.pdf and http: //karpathy.github.io/2019/04/25/recipe/.
35
# 10 Conclusion
As the amount of data collected across many diverse scientiï¬c domains continues to increase in both sheer amount and complexity, deep learning methods oï¬er many exciting possibilities for both fundamental predictive problems as well as revealing subtle properties of the underlying data generation process. In this survey, we overviewed many of the highly successful deep learning models, tasks and methodologies, with references to the remarkably comprehensive open-sourced resources developed by the community. We hope that both the overviews and the references serve to accelerate applications of deep learning to many varied scientiï¬c problems!
36
# Acknowledgements
The authors would like to thank Jon Kleinberg, Samy Bengio, Yann LeCun, Chiyuan Zhang, Quoc Le, Arun Chaganty, Simon Kornblith, Aniruddh Raghu, John Platt, Richard Murray, Stu Feldman and Guy Gur-Ari for feedback and comments on earlier versions.
# References
[1] Waleed Abdulla. Splash of Color: Instance Segmen- tation with Mask R-CNN and TensorFlow, 2018. https://engineering.matterport.com/splash- of-color-instance-segmentation-with-mask-r- cnn-and-tensorflow-7c761e238b46.
[2] Roee Aharoni, Melvin Johnson, and Orhan Firat. Massively multilingual neural machine translation. arXiv preprint arXiv:1903.00089, 2019.
[3] Jay Alammar. The Illustrated Transformer, http://jalammar.github.io/illustrated- 2018. transformer/.
[4] Mohsan Alvi, Andrew Zisserman, and Christoï¬er NellÃ¥ker. Turning a blind eye: Explicit removal of biases and variation from deep neural network em- beddings. In Proceedings of the European Conference on Computer Vision (ECCV), pages 0â0, 2018.
[5] Marios Anthimopoulos, Stergios Christodoulidis, Lukas Ebner, Andreas Christe, and Stavroula Mougiakakou. Lung pattern classiï¬cation for in- terstitial lung diseases using a deep convolutional neural network. IEEE transactions on medical imag- ing, 35(5):1207â1216, 2016.
[6] Sebastian Bach, Alexander Binder, Grégoire Mon- tavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. On pixel-wise explanations for non-linear classiï¬er decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140, 2015.
[7] Philip Bachman, R Devon Hjelm, and William Buch- walter. Learning representations by maximizing mutual information across views. arXiv preprint arXiv:1906.00910, 2019.
[8] Marcus A Badgeley, John R Zech, Luke Oakden- Rayner, Benjamin S Glicksberg, Manway Liu, William Gale, Michael V McConnell, Bethany Per- cha, Thomas M Snyder, and Joel T Dudley. Deep learning predicts hip fracture using confounding pa- tient and healthcare variables. npj Digital Medicine, 2(1):31, 2019.
[9] Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla. Segnet: A deep convolutional encoder- decoder architecture for image segmentation. IEEE transactions on pattern analysis and machine intel- ligence, 39(12):2481â2495, 2017.
37
[10] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[11] Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end attention-based large vocabulary speech recognition. In 2016 IEEE international conference on acoustics, speech and signal processing (ICASSP), pages 4945â4949. IEEE, 2016.
[12] Guha Balakrishnan, Amy Zhao, Mert R Sabuncu, John Guttag, and Adrian V Dalca. An unsuper- vised learning model for deformable medical image registration. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9252â9260, 2018.
[13] Guha Balakrishnan, Amy Zhao, Mert R Sabuncu, John Guttag, and Adrian V Dalca. Voxelmorph: a learning framework for deformable medical image registration. IEEE transactions on medical imaging, 2019.
[14] Joshua Batson and Loic Royer. Noise2self: Blind arXiv preprint denoising by self-supervision. arXiv:1901.11365, 2019.
[15] Peter Battaglia, Razvan Pascanu, Matthew Lai, Danilo Jimenez Rezende, et al. Interaction networks for learning about objects, relations and physics. In Advances in neural information processing systems, pages 4502â4510, 2016.
[16] Anthony Bau, Yonatan Belinkov, Hassan Saj- jad, Nadir Durrani, Fahim Dalvi, and James Glass. Identifying and controlling important neu- rons in neural machine translation. arXiv preprint arXiv:1811.01157, 2018.
[17] David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quanti- fying interpretability of deep visual representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6541â6549, 2017.
[18] Iz Beltagy, Arman Cohan, and Kyle Lo. Scibert: Pretrained contextualized embeddings for scientiï¬c text. arXiv preprint arXiv:1903.10676, 2019.
[19] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin Raï¬el. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249, 2019.
[20] Mariusz Bojarski, Anna Choromanska, Krzysztof Choromanski, Bernhard Firner, Larry Jackel, Urs Muller, and Karol Zieba. Visualbackprop: visual- izing cnns for autonomous driving. arXiv preprint arXiv:1611.05418, 2, 2016.
[21] Xavier Bresson and Thomas Laurent. A two-step graph convolutional decoder for molecule generation. arXiv preprint arXiv:1906.03412, 2019.
[22] Andrew Brock, Jeï¬ Donahue, and Karen Simonyan. Large scale gan training for high ï¬delity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
[23] Renzhi Cao, Colton Freitas, Leong Chan, Miao Sun, Haiqing Jiang, and Zhangxin Chen. Prolango: protein function prediction using neural machine translation based on a recurrent neural network. Molecules, 22(10):1732, 2017.
[24] Zhe Cao, Gines Hidalgo, Tomas Simon, Shih-En Wei, and Yaser Sheikh. OpenPose: realtime multi-person 2D pose estimation using Part Aï¬nity Fields. In arXiv preprint arXiv:1812.08008, 2018.
[25] Brandon Carter, Jonas Mueller, Siddhartha Jain, and David Giï¬ord. What made you do this? under- standing black-box decisions with suï¬cient input subsets. arXiv preprint arXiv:1810.03805, 2018.
[26] Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, and Chris Olah. Activation atlas. Distill, 2019. https://distill.pub/2019/activation-atlas.
[27] Caroline Chan, Shiry Ginosar, Tinghui Zhou, and Alexei A Efros. Everybody dance now. In Pro- ceedings of the IEEE International Conference on Computer Vision, pages 5933â5942, 2019.
[28] Danqi Chen and Christopher Manning. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 740â750, 2014.
[29] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoï¬rey Hinton. A simple framework for con- trastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.
[30] Ting Chen, Yizhou Sun, Yue Shi, and Liangjie Hong. On sampling strategies for neural network-based col- laborative ï¬ltering. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 767â776, 2017.
[31] Yuhua Chen, Yibin Xie, Zhengwei Zhou, Feng Shi, Anthony G Christodoulou, and Debiao Li. Brain mri super resolution using 3d deep densely con- nected neural networks. In 2018 IEEE 15th Inter- national Symposium on Biomedical Imaging (ISBI 2018), pages 739â742. IEEE, 2018.
[32] Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In Advances in neural information processing systems, pages 577â585, 2015.
38
[33] Ãzgün Ãiçek, Ahmed Abdulkadir, Soeren S Lienkamp, Thomas Brox, and Olaf Ronneberger. 3d u-net: learning dense volumetric segmentation from sparse annotation. In International conference on medical image computing and computer-assisted intervention, pages 424â432. Springer, 2016.
[34] Kevin Clark, Minh-Thang Luong, Christopher D Manning, and Quoc V Le. Semi-supervised sequence modeling with cross-view training. arXiv preprint arXiv:1809.08370, 2018.
[35] Taco Cohen and Max Welling. Group equivariant convolutional networks. In International conference on machine learning, pages 2990â2999, 2016.
[36] Taco S Cohen, Mario Geiger, Jonas Köhler, and arXiv preprint Max Welling. arXiv:1801.10130, 2018. Spherical cnns.
[37] Peter Corbett and John Boyle. Improving the learn- ing of chemical-protein interactions from literature using transfer learning and specialized word embed- dings. Database, 2018, 2018.
[38] Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical data aug- mentation with no separate search. arXiv preprint arXiv:1909.13719, 2019.
[39] Adrian Dalca, Marianne Rakic, John Guttag, and Mert Sabuncu. Learning conditional deformable templates with convolutional networks. In Advances in neural information processing systems, pages 804â 816, 2019.
[40] Yann Dauphin. mixup: Beyond Empirical Risk Min- imization Image, 2017. https://www.dauphin.io/.
[41] Nicola De Cao and Thomas Kipf. Molgan: An implicit generative model for small molecular graphs. arXiv preprint arXiv:1805.11973, 2018.
[42] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierar- chical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248â255. Ieee, 2009.
[43] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidi- rectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[44] Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
[45] Yiming Ding, Jae Ho Sohn, Michael G Kawczynski, Hari Trivedi, Roy Harnish, Nathaniel W Jenkins, Dmytro Lituiev, Timothy P Copeland, Mariam S Aboian, Carina Mari Aparici, et al. A deep learning model to predict a diagnosis of alzheimer disease by using 18f-fdg pet of the brain. Radiology, 290(2):456â 464, 2018.
[46] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estima- tion. arXiv preprint arXiv:1410.8516, 2014.
[47] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016.
[48] Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by con- text prediction. In Proceedings of the IEEE In- ternational Conference on Computer Vision, pages 1422â1430, 2015.
[49] Jeï¬ Donahue and Karen Simonyan. Large scale adversarial representation learning. In Advances in Neural Information Processing Systems, pages 10541â10551, 2019.
[50] Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep con- volutional networks. IEEE transactions on pattern analysis and machine intelligence, 38(2):295â307, 2015.
[51] Alexey Dosovitskiy, Jost Tobias Springenberg, Mar- tin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in neural information processing systems, pages 766â774, 2014.
[52] Yong Du, Wei Wang, and Liang Wang. Hierarchical recurrent neural network for skeleton based action recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1110â1118, 2015.
[53] David K Duvenaud, Dougal Maclaurin, Jorge Ipar- raguirre, Rafael Bombarell, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular ï¬nger- prints. In Advances in neural information processing systems, pages 2224â2232, 2015.
[54] Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. Understanding back-translation at scale. arXiv preprint arXiv:1808.09381, 2018.
[55] Andre Esteva, Brett Kuprel, Roberto A Novoa, Justin Ko, Susan M Swetter, Helen M Blau, and Sebastian Thrun. Dermatologist-level classiï¬cation of skin cancer with deep neural networks. Nature, 542(7639):115, 2017.
[56] Linjing Fang, Fred Monroe, Sammy Weiser Novak, Lyndsey Kirk, Cara R Schiavon, B Yu Seungyoon, Tong Zhang, Melissa Wu, Kyle Kastner, Yoshiyuki Kubota, et al. Deep learning-based point-scanning bioRxiv, page 740548, super-resolution imaging. 2019.
[57] Chelsea Finn, Ian Goodfellow, and Sergey Levine. interaction Unsupervised learning for physical through video prediction. In Advances in neural information processing systems, pages 64â72, 2016.
39
[58] Marc Finzi, Samuel Stanton, Pavel Izmailov, and Andrew Gordon Wilson. Generalizing convolu- tional neural networks for equivariance to lie groups on arbitrary continuous data. arXiv preprint arXiv:2002.12880, 2020.
[59] Ruth C Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful pertur- bation. In Proceedings of the IEEE International Conference on Computer Vision, pages 3429â3437, 2017.
[60] Alex Fout, Jonathon Byrd, Basir Shariat, and Asa Ben-Hur. Protein interface prediction using graph convolutional networks. In Advances in Neural Infor- mation Processing Systems, pages 6530â6539, 2017. [61] Ferenc Galkó and Carsten Eickhoï¬. Biomedical ques- tion answering via weighted neural network passage retrieval. In European Conference on Information Retrieval, pages 523â528. Springer, 2018.
[62] Yaroslav Ganin and Victor Lempitsky. Unsuper- vised domain adaptation by backpropagation. arXiv preprint arXiv:1409.7495, 2014.
[63] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Lavi- olette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096â 2030, 2016.
[64] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE con- ference on computer vision and pattern recognition, pages 2414â2423, 2016.
[65] Amirata Ghorbani, Vivek Natarajan, David Coz, and Yuan Liu. Dermgan: Synthetic generation of clinical skin images with pathology. arXiv preprint arXiv:1911.08716, 2019.
[66] Spyros Gidaris, Praveer Singh, and Nikos Ko- modakis. Unsupervised representation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018.
[67] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning- Volume 70, pages 1263â1272. JMLR. org, 2017. [68] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversar- ial nets. In Advances in neural information process- ing systems, pages 2672â2680, 2014.
[69] Priya Goyal, Dhruv Mahajan, Abhinav Gupta, and Ishan Misra. Scaling and benchmarking self- supervised visual representation learning. arXiv preprint arXiv:1905.01235, 2019.
[70] Alex Graves, Abdel-rahman Mohamed, and Geof- frey Hinton. Speech recognition with deep recurrent neural networks. In 2013 IEEE international con- ference on acoustics, speech and signal processing, pages 6645â6649. IEEE, 2013.
[71] Aditya Grover, Aaron Zweig, and Stefano Ermon. Graphite: Iterative generative modeling of graphs. arXiv preprint arXiv:1803.10459, 2018.
[72] Varun Gulshan, Lily Peng, Marc Coram, Martin C Stumpe, Derek Wu, Arunachalam Narayanaswamy, Subhashini Venugopalan, Kasumi Widner, Tom Madams, Jorge Cuadros, et al. Development and val- idation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama, 316(22):2402â2410, 2016.
[73] Maryam Habibi, Leon Weber, Mariana Neves, David Luis Wiegandt, and Ulf Leser. Deep learning with word embeddings improves biomedical named entity recognition. Bioinformatics, 33(14):i37âi48, 2017.
[74] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in neural information processing systems, pages 8527â8537, 2018.
[75] Ankur Handa, Michael Bloesch, Viorica PÄtrÄucean, Simon Stent, John McCormac, and Andrew Davi- son. gvnn: Neural network library for geometric computer vision. In European Conference on Com- puter Vision, pages 67â82. Springer, 2016.
[76] Boris Hanin. Which neural net architectures give rise to exploding and vanishing gradients? In Advances in Neural Information Processing Systems, pages 582â591, 2018.
[77] Jack Hanson, Yuedong Yang, Kuldip Paliwal, and Yaoqi Zhou. Improving protein disorder prediction by deep bidirectional long short-term memory recur- rent neural networks. Bioinformatics, 33(5):685â692, 2016.
[78] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 2961â2969, 2017.
[79] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770â 778, 2016.
[80] Rhys Heï¬ernan, Yuedong Yang, Kuldip Paliwal, and Yaoqi Zhou. Capturing non-local interactions by long short-term memory bidirectional recurrent
40
neural networks for improving prediction of pro- tein secondary structure, backbone angles, contact numbers and solvent accessibility. Bioinformatics, 33(18):2842â2849, 2017.
[81] Dan Hendrycks and Thomas Dietterich. Bench- marking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261, 2019.
[82] Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. Augmix: A simple data processing method to im- prove robustness and uncertainty. arXiv preprint arXiv:1912.02781, 2019.
[83] Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Advances in neural information processing systems, pages 1693â1701, 2015.
[84] R Devon Hjelm, Alex Fedorov, Samuel Lavoie- Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. Learning deep repre- sentations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018.
[85] Daniel Ho, Eric Liang, Ion Stoica, Pieter Abbeel, and Xi Chen. Population based augmentation: Ef- ï¬cient learning of augmentation policy schedules. arXiv preprint arXiv:1905.05393, 2019.
[86] Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Improving Duan, and Pieter Abbeel. Flow++: ï¬ow-based generative models with variational de- quantization and architecture design. arXiv preprint arXiv:1902.00275, 2019.
[87] Sepp Hochreiter. The vanishing gradient problem during learning recurrent neural nets and problem so- lutions. International Journal of Uncertainty, Fuzzi- ness and Knowledge-Based Systems, 6(02):107â116, 1998.
[88] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â 1780, 1997.
[89] Raphael Hoï¬mann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. Knowledge-based weak supervision for information extraction of over- lapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies-Volume 1, pages 541â550. Association for Computational Lin- guistics, 2011.
[90] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzeb- ski, Bruna Morrone, Quentin De Laroussilhe, An- drea Gesmundo, Mona Attariyan, and Sylvain Gelly.
Parameter-eï¬cient transfer learning for nlp. arXiv preprint arXiv:1902.00751, 2019.
[91] Andrew G Howard. Some improvements on deep con- volutional neural network based image classiï¬cation. arXiv preprint arXiv:1312.5402, 2013.
[92] Jeremy Howard and Sebastian Ruder. Universal language model ï¬ne-tuning for text classiï¬cation. arXiv preprint arXiv:1801.06146, 2018.
[93] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay Pande, and Jure Leskovec. Pre-training graph neural networks. arXiv preprint arXiv:1905.12265, 2019.
[94] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected con- volutional networks. In Proceedings of the IEEE conference on computer vision and pattern recogni- tion, pages 4700â4708, 2017.
[95] Minyoung Huh, Pulkit Agrawal, and Alexei A Efros. What makes imagenet good for transfer learning? arXiv preprint arXiv:1608.08614, 2016.
[96] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Image-to-image translation with Alexei A Efros. conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125â1134, 2017.
[97] Na Ji. Adaptive optical ï¬uorescence microscopy. Nature methods, 14(4):374, 2017.
[98] Robin Jia and Percy Liang. Data recombina- tion for neural semantic parsing. arXiv preprint arXiv:1606.03622, 2016.
[99] Matthew Johnson, David K Duvenaud, Alex Wiltschko, Ryan P Adams, and Sandeep R Datta. Composing graphical models with neural networks for structured representations and fast inference. In Advances in neural information processing systems, pages 2946â2954, 2016.
[100] Tero Karras, Samuli Laine, and Timo Aila. A style- based generator architecture for generative adversar- ial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4401â4410, 2019.
[101] MF Kasim, D Watson-Parris, L Deaconu, S Oliver, P Hatï¬eld, DH Froula, G Gregori, M Jarvis, S Khati- wala, J Korenaga, et al. Up to two billion times accel- eration of scientiï¬c simulations with deep neural ar- chitecture search. arXiv preprint arXiv:2001.08055, 2020.
[102] Jeremy Kawahara, Sara Daneshvar, Giuseppe Argen- ziano, and Ghassan Hamarneh. Seven-point checklist and skin lesion classiï¬cation using multitask multi- modal neural nets. IEEE journal of biomedical and health informatics, 23(2):538â546, 2018.
41
[103] Steven Kearnes, Kevin McCloskey, Marc Berndl, Vijay Pande, and Patrick Riley. Molecular graph convolutions: moving beyond ï¬ngerprints. Journal of computer-aided molecular design, 30(8):595â608, 2016.
[104] Been Kim, Martin Wattenberg, Justin Gilmer, Car- rie Cai, James Wexler, Fernanda Viegas, and Rory Sayres. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). arXiv preprint arXiv:1711.11279, 2017.
[105] Pieter-Jan Kindermans, Sara Hooker, Julius Ade- bayo, Maximilian Alber, Kristof T Schütt, Sven Dähne, Dumitru Erhan, and Been Kim. The (un) reliability of saliency methods. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learn- ing, pages 267â280. Springer, 2019.
[106] Pieter-Jan Kindermans, Kristof T Schütt, Maxim- ilian Alber, Klaus-Robert Müller, Dumitru Erhan, Been Kim, and Sven Dähne. Learning how to explain neural networks: Patternnet and patternattribution. arXiv preprint arXiv:1705.05598, 2017.
[107] D Kingma, Tim Salimans, R Josefowicz, Xi Chen, Ilya Sutskever, Max Welling, et al. Improving varia- tional autoencoders with inverse autoregressive ï¬ow. 2017.
[108] Durk P Kingma and Prafulla Dhariwal. Glow: Gen- In erative ï¬ow with invertible 1x1 convolutions. Advances in Neural Information Processing Systems, pages 10215â10224, 2018.
[109] Durk P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learn- ing with deep generative models. In Advances in neural information processing systems, pages 3581â 3589, 2014.
[110] Sosuke Kobayashi. Contextual augmentation: Data augmentation by words with paradigmatic relations. arXiv preprint arXiv:1805.06201, 2018.
[111] Kaname Kojima, Shu Tadaka, Fumiki Katsuoka, Gen Tamiya, Masayuki Yamamoto, and Kengo Ki- noshita. A recurrent neural network based method for genotype imputation on phased genotype data. bioRxiv, page 821504, 2019.
[112] Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Revisiting self-supervised visual represen- tation learning. arXiv preprint arXiv:1901.09005, 2019.
[113] Patrick T Komiske, Eric M Metodiev, and Jesse Thaler. Energy ï¬ow networks: deep sets for particle jets. Journal of High Energy Physics, 2019(1):121, 2019.
Image recon- struction with predictive ï¬lter ï¬ow. arXiv preprint arXiv:1811.11482, 2018.
[115] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoï¬rey Hinton. Similarity of neural network representations revisited. arXiv preprint arXiv:1905.00414, 2019.
[116] Simon Kornblith, Jonathon Shlens, and Quoc V Le. Do better imagenet models transfer better? In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2661â2671, 2019.
[117] Alex Krizhevsky, Ilya Sutskever, and Geoï¬rey E Hin- ton. Imagenet classiï¬cation with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097â1105, 2012.
[118] Sneha Reddy Kudugunta, Ankur Bapna, Isaac Caswell, Naveen Arivazhagan, and Orhan Firat. In- vestigating multilingual nmt representations at scale. arXiv preprint arXiv:1909.02197, 2019.
[119] Samuli Laine and Timo Aila. Temporal ensem- bling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
[120] Dong-Hyun Lee. Pseudo-label: The simple and eï¬cient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, page 2, 2013.
[121] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jae- woo Kang. Biobert: pre-trained biomedical language representation model for biomedical text mining. arXiv preprint arXiv:1901.08746, 2019.
[122] Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, and Timo Aila. Noise2noise: Learning image restoration with- out clean data. arXiv preprint arXiv:1803.04189, 2018.
[123] Omer Levy and Yoav Goldberg. Neural word embed- ding as implicit matrix factorization. In Advances in neural information processing systems, pages 2177â 2185, 2014.
[124] Chaolong Li, Zhen Cui, Wenming Zheng, Chunyan Xu, and Jian Yang. Spatio-temporal graph con- volution for skeleton based action recognition. In Thirty-Second AAAI Conference on Artiï¬cial Intel- ligence, 2018.
[125] Hailiang Li, Jian Weng, Yujian Shi, Wanrong Gu, Yijun Mao, Yonghua Wang, Weiwei Liu, and Jiajie Zhang. An improved deep learning approach for detection of thyroid papillary cancer in ultrasound images. Scientiï¬c reports, 8(1):6600, 2018.
[126] Yixuan Li, Jason Yosinski, Jeï¬ Clune, Hod Lipson, and John E Hopcroft. Convergent learning: Do diï¬erent neural networks learn the same representa- tions? In Iclr, 2016.
42
[127] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural net- works. arXiv preprint arXiv:1511.05493, 2015.
[128] Timothy P Lillicrap, Jonathan J Hunt, Alexan- der Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous con- trol with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[129] Fang Liu, Zhaoye Zhou, Hyungseok Jang, Alexey Samsonov, Gengyan Zhao, and Richard Kijowski. Deep convolutional neural network and 3d de- formable approach for tissue segmentation in mus- culoskeletal magnetic resonance imaging. Magnetic resonance in medicine, 79(4):2379â2391, 2018.
[130] Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating wikipedia by summarizing long sequences. arXiv preprint arXiv:1801.10198, 2018.
[131] Shengyu Liu, Buzhou Tang, Qingcai Chen, and Xiao- long Wang. Eï¬ects of semantic features on machine learning-based drug name recognition systems: word embeddings vs. manually constructed dictionaries. Information, 6(4):848â865, 2015.
[132] Xueliang Liu. Deep recurrent neural network for protein function prediction from sequence. arXiv preprint arXiv:1701.08318, 2017.
[133] Yao Liu, Omer Gottesman, Aniruddh Raghu, Matthieu Komorowski, Aldo A Faisal, Finale Doshi- Velez, and Emma Brunskill. Representation bal- ancing mdps for oï¬-policy policy evaluation. In Advances in Neural Information Processing Systems, pages 2644â2653, 2018.
[134] Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Oï¬-policy policy gradient with state distribution correction. arXiv preprint arXiv:1904.08473, 2019.
[135] Yun Liu, Krishna Gadepalli, Mohammad Norouzi, George E Dahl, Timo Kohlberger, Aleksey Boyko, Subhashini Venugopalan, Aleksei Timofeev, Philip Q Nelson, Greg S Corrado, et al. Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442, 2017.
[136] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmen- tation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3431â 3440, 2015.
[137] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. In Proceedings of the 34th International Conference on Machine Learning- Volume 70, pages 2208â2217. JMLR. org, 2017.
[138] Romain Lopez, Jeï¬rey Regier, Michael Cole, Michael Jordan, and Nir Yosef. A deep generative model for gene expression proï¬les from single-cell rna sequenc- ing. arXiv preprint arXiv:1709.02082, 2017.
[139] Donghuan Lu, Karteek Popuri, Gavin Weiguang Ding, Rakesh Balachandar, and Mirza Faisal Beg. Multimodal and multiscale deep neural networks for the early diagnosis of alzheimerâs disease using structural mr and fdg-pet images. Scientiï¬c reports, 8(1):5697, 2018.
[140] Scott M Lundberg and Su-In Lee. A uniï¬ed approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4765â 4774, 2017.
[141] Laurens van der Maaten and Geoï¬rey Hinton. Visu- alizing data using t-sne. Journal of machine learning research, 9(Nov):2579â2605, 2008.
[142] Ali Madani, Bryan McCann, Nikhil Naik, Ni- tish Shirish Keskar, Namrata Anand, Raphael R Eguchi, Possu Huang, and Richard Socher. Progen: Language modeling for protein generation. bioRxiv, 2020.
[143] Dhruv Mahajan, Ross Girshick, Vignesh Ra- manathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised pretrain- ing. In Proceedings of the European Conference on Computer Vision (ECCV), pages 181â196, 2018.
[144] Hongzi Mao, Parimarjan Negi, Akshay Narayan, Hanrui Wang, Jiacheng Yang, Haonan Wang, Ryan Marcus, Ravichandra Addanki, Mehrdad Khani, Songtao He, et al. Park: An open platform for learning augmented computer systems. 2019.
[145] Daniel L Marino, Kasun Amarasinghe, and Milos Manic. Building energy load forecasting using deep neural networks. In IECON 2016-42nd Annual Con- ference of the IEEE Industrial Electronics Society, pages 7046â7051. IEEE, 2016.
[146] Alexander Mathis, Pranav Mamidanna, Kevin M Cury, Taiga Abe, Venkatesh N Murthy, Macken- zie Weygandt Mathis, and Matthias Bethge. Deeplabcut: markerless pose estimation of user- deï¬ned body parts with deep learning. Nature neu- roscience, 21(9):1281, 2018.
[147] Mackenzie Weygandt Mathis and Alexander Mathis. Deep learning tools for the measurement of animal behavior in neuroscience. Current Opinion in Neu- robiology, 60:1â11, 2020.
[148] Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and pro- jection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.
43
[149] Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and Optimizing LSTM Lan- guage Models. arXiv preprint arXiv:1708.02182, 2017.
[150] Stephen Merity, Nitish Shirish Keskar, and Richard Socher. An Analysis of Neural Language Modeling at Multiple Scales. arXiv preprint arXiv:1803.08240, 2018.
[151] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeï¬rey Dean. Eï¬cient estimation of word rep- resentations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[152] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeï¬ Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111â3119, 2013.
[153] Mike Mintz, Steven Bills, Rion Snow, and Dan Ju- rafsky. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Nat- ural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003â1011. Association for Com- putational Linguistics, 2009.
[154] Ishan Misra and Laurens van der Maaten. Self- supervised learning of pretext-invariant representa- tions, 2019.
[155] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. Virtual adversarial training: a regularization method for supervised and semi- supervised learning. IEEE transactions on pattern analysis and machine intelligence, 41(8):1979â1993, 2018.
[156] Pim Moeskops, Max A Viergever, Adriënne M Men- drik, Linda S de Vries, Manon JNL Benders, and Ivana IÅ¡gum. Automatic segmentation of mr brain images with a convolutional neural network. IEEE transactions on medical imaging, 35(5):1252â1261, 2016.
[157] Ari Morcos, Maithra Raghu, and Samy Bengio. In- sights on representational similarity in neural net- works with canonical correlation. In Advances in Neural Information Processing Systems, pages 5727â 5736, 2018.
[158] Alejandro Newell, Kaiyu Yang, and Jia Deng. Stacked hourglass networks for human pose esti- mation. In European conference on computer vision, pages 483â499. Springer, 2016.
[159] Jiquan Ngiam, Daiyi Peng, Vijay Vasudevan, Simon Kornblith, Quoc V Le, and Ruoming Pang. Domain adaptive transfer learning with specialist models. arXiv preprint arXiv:1811.07056, 2018.
[160] Harsha Nori, Samuel Jenkins, Paul Koch, and Rich Caruana. Interpretml: A uniï¬ed framework for machine learning interpretability. arXiv preprint arXiv:1909.09223, 2019.
[161] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In European Conference on Computer Vi- sion, pages 69â84. Springer, 2016.
[162] Luke Oakden-Rayner, Jared Dunnmon, Gustavo Carneiro, and Christopher Ré. Hidden stratiï¬ca- tion causes clinically meaningful failures in ma- chine learning for medical imaging. arXiv preprint arXiv:1909.12475, 2019.
[163] Chris Olah. 2015. Understanding-LSTMs/. Understanding LSTM Networks, https://colah.github.io/posts/2015-08-
[164] Chris Olah, Nick Cammarata, Ludwig Schubert, Gabriel Goh, Michael Petrov, and Shan Carter. Zoom in: An introduction to circuits. Distill, 5(3):e00024â001, 2020.
[165] Chris Olah, Alexander Mordvintsev, and Ludwig Schubert. Feature visualization. Distill, 2017. https://distill.pub/2017/feature-visualization.
Ian John- son, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. The build- Distill, 2018. ing blocks of https://distill.pub/2018/building-blocks.
[167] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.
[168] Aaron van den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[169] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[170] Razvan Pascanu, Tomas Mikolov, and Yoshua Ben- gio. Understanding the exploding gradient problem. CoRR, abs/1211.5063, 2, 2012.
[171] Deepak Pathak, Philipp Krahenbuhl, and Trevor Darrell. Constrained convolutional neural networks for weakly supervised segmentation. In Proceedings of the IEEE international conference on computer vision, pages 1796â1804, 2015.
[172] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summariza- tion. arXiv preprint arXiv:1705.04304, 2017.
[173] Jeï¬rey Pennington, Richard Socher, and Christo- pher Manning. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference
44
on empirical methods in natural language processing (EMNLP), pages 1532â1543, 2014.
[174] Hieu Pham, Melody Y Guan, Barret Zoph, Quoc V Le, and Jeï¬ Dean. Eï¬cient neural architecture search via parameter sharing. arXiv preprint arXiv:1802.03268, 2018.
[175] Gianluca Pollastri, Darisz Przybylski, Burkhard Rost, and Pierre Baldi. Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and proï¬les. Proteins: Structure, Function, and Bioinformatics, 47(2):228â235, 2002.
[176] Ryan Poplin, Avinash V Varadarajan, Katy Blumer, Yun Liu, Michael V McConnell, Greg S Corrado, Lily Peng, and Dale R Webster. Prediction of cardio- vascular risk factors from retinal fundus photographs via deep learning. Nature Biomedical Engineering, 2(3):158, 2018.
[177] Rory M Power and Jan Huisken. A guide to light- sheet ï¬uorescence microscopy for multiscale imaging. Nature methods, 14(4):360, 2017.
[178] Doina Precup. Eligibility traces for oï¬-policy policy evaluation. Computer Science Department Faculty Publication Series, page 80, 2000.
[179] Siyuan Qiao, Wei Shen, Zhishuai Zhang, Bo Wang, and Alan Yuille. Deep co-training for semi- supervised image recognition. In Proceedings of the European Conference on Computer Vision (ECCV), pages 135â152, 2018.
[180] Alec Radford, Jeï¬rey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019.
[181] Colin Raï¬el, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the lim- its of transfer learning with a uniï¬ed text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.
[182] Aniruddh Raghu, Matthieu Komorowski, Leo An- thony Celi, Peter Szolovits, and Marzyeh Ghassemi. Continuous state-space models for optimal sepsis treatment-a deep reinforcement learning approach. arXiv preprint arXiv:1705.08422, 2017.
[183] Maithra Raghu, Justin Gilmer, Jason Yosinski, and Jascha Sohl-Dickstein. Svcca: Singular vector canon- ical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Infor- mation Processing Systems, pages 6076â6085, 2017.
[184] Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding transfer learning for medical imaging. In Advances in Neural Information Processing Systems, pages 3342â3352, 2019.
[185] Pranav Rajpurkar, Jeremy Irvin, Kaylie Zhu, Bran- don Yang, Hershel Mehta, Tony Duan, Daisy Ding, Aarti Bagul, Curtis Langlotz, Katie Shpanskaya, et al. Chexnet: Radiologist-level pneumonia de- tection on chest x-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017.
[186] Pranav Rajpurkar, Jian Zhang, Konstantin Lopy- rev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016.
[187] Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and Vijay Pande. Massively multitask networks for drug discovery. arXiv preprint arXiv:1502.02072, 2015.
[188] Alexander Ratner, Stephen H Bach, Henry Ehren- berg, Jason Fries, Sen Wu, and Christopher Ré. Snorkel: Rapid training data creation with weak supervision. Proceedings of the VLDB Endowment, 11(3):269â282, 2017.
[189] Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-ï¬delity images with vq-vae- 2. arXiv preprint arXiv:1906.00446, 2019.
[190] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar. Do imagenet classiï¬ers gener- alize to imagenet? arXiv preprint arXiv:1902.10811, 2019.
[191] Joseph Redmon and Ali Farhadi. An incremental arXiv:1804.02767, 2018. improvement. Yolov3: arXiv preprint
[192] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detec- tion with region proposal networks. In Advances in neural information processing systems, pages 91â99, 2015.
[193] Donatas Repecka, Vykintas Jauniskis, Laurynas Karpus, Elzbieta Rembeza, Jan Zrimec, Simona Poviloniene, Irmantas Rokaitis, Audrius Laurynenas, Wissam Abuajwa, Otto Savolainen, et al. Expanding functional protein sequence space using generative adversarial networks. bioRxiv, page 789719, 2019.
[194] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why should i trust you?: Explaining the predictions of any classiï¬er. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135â 1144. ACM, 2016.
[195] Alexander Rives, Siddharth Goyal, Joshua Meier, Demi Guo, Myle Ott, C Lawrence Zitnick, Jerry Ma, and Rob Fergus. Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences. bioRxiv, page 622803, 2019.
45
[196] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234â241. Springer, 2015.
[197] Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstrac- tive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.
[198] Andrei A Rusu, Mel Vecerik, Thomas Rothörl, Nico- las Heess, Razvan Pascanu, and Raia Hadsell. Sim- to-real robot learning from pixels with progressive nets. arXiv preprint arXiv:1610.04286, 2016.
[199] Ruhan Sa, William Owens, Raymond Wiegand, Mark Studin, Donald Capoferri, Kenneth Barooha, Alexander Greaux, Robert Rattray, Adam Hutton, John Cintineo, et al. Intervertebral disc detection in x-ray images using faster r-cnn. In 2017 39th Annual International Conference of the IEEE Engi- neering in Medicine and Biology Society (EMBC), pages 564â567. IEEE, 2017.
[200] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likeli- hood and other modiï¬cations. arXiv preprint arXiv:1701.05517, 2017.
[201] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108, 2019.
[202] Saman Sarraf, Ghassem Toï¬ghi, et al. Deepad: Alzheimer disease classiï¬cation via deep convolu- tional neural networks using mri and fmri. BioRxiv, page 070441, 2016.
[203] John Schulman, Filip Wolski, Prafulla Dhari- wal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[204] Ramprasaath R Selvaraju, Michael Cogswell, Ab- hishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pages 618â626, 2017.
[205] Andrew W Senior, Richard Evans, John Jumper, James Kirkpatrick, Laurent Sifre, Tim Green, Chongli Qin, Augustin ŽÃdek, Alexander WR Nelson, Alex Bridgland, et al. Improved protein structure prediction using potentials from deep learning. Na- ture, pages 1â5, 2020.
[206] Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709, 2015.
[207] Lloyd S Shapley. A value for n-person games. Con- tributions to the Theory of Games, 2(28):307â317, 1953.
[208] Jianghong Shi, Eric Shea-Brown, and Michael Buice. Comparison against task driven artiï¬cial neural net- works reveals functional properties in mouse visual cortex. In Advances in Neural Information Process- ing Systems, pages 5765â5775, 2019.
[209] Susan M Shortreed, Eric Laber, Daniel J Lizotte, T Scott Stroup, Joelle Pineau, and Susan A Mur- phy. Informing sequential clinical decision-making through reinforcement learning: an empirical study. Machine learning, 84(1-2):109â136, 2011.
[210] Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through prop- agating activation diï¬erences. In Proceedings of the 34th International Conference on Machine Learning- Volume 70, pages 3145â3153. JMLR. org, 2017. [211] Rui Shu, Hung H Bui, Hirokazu Narui, and Stefano Ermon. A dirt-t approach to unsupervised domain adaptation. arXiv preprint arXiv:1802.08735, 2018. [212] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. nature, 550(7676):354â359, 2017. [213] Karen Simonyan, Andrea Vedaldi, and Andrew Zis- serman. Deep inside convolutional networks: Visual- ising image classiï¬cation models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
[214] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recog- nition. arXiv preprint arXiv:1409.1556, 2014. [215] Daniel Smilkov, Nikhil Thorat, Been Kim, Fer- nanda Viégas, and Martin Wattenberg. Smooth- grad: removing noise by adding noise. arXiv preprint arXiv:1706.03825, 2017.
[216] Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder In Advances in neural variational autoencoders. information processing systems, pages 3738â3746, 2016.
[217] Youyi Song, Ling Zhang, Siping Chen, Dong Ni, Baopu Li, Yongjing Zhou, Baiying Lei, and Tianfu Wang. A deep learning based framework for accurate segmentation of cervical cytoplasm and nuclei. In 2014 36th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 2903â2906. IEEE, 2014.
[218] Ke Sun, Bin Xiao, Dong Liu, and Jingdong Wang. Deep high-resolution representation learning for hu- man pose estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 5693â5703, 2019.
46
[219] Yu Sun, Eric Tzeng, Trevor Darrell, and Alexei A Efros. Unsupervised domain adaptation through self-supervision. arXiv preprint arXiv:1909.11825, 2019.
[220] Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceed- ings of the 34th International Conference on Ma- chine Learning-Volume 70, pages 3319â3328. JMLR. org, 2017.
[221] I Sutskever, O Vinyals, and QV Le. Sequence to sequence learning with neural networks. Advances in NIPS, 2014.
[222] Ryo Takahashi, Takashi Matsubara, and Kuniaki Uehara. Data augmentation using random image cropping and patching for deep cnns. IEEE Transac- tions on Circuits and Systems for Video Technology, 2019.
[223] Mingxing Tan and Quoc V Le. Eï¬cientnet: Rethink- ing model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946, 2019.
[224] Mingxing Tan, Ruoming Pang, and Quoc V Le. Eï¬cientdet: Scalable and eï¬cient object detection. arXiv preprint arXiv:1911.09070, 2019.
[225] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning re- sults. In Advances in neural information processing systems, pages 1195â1204, 2017.
[226] Dimitry Tegunov and Patrick Cramer. Real-time cryo-em data pre-processing with warp. BioRxiv, page 338558, 2018.
[227] Yee Liang Thian, Yiting Li, Pooja Jagmohan, David Sia, Vincent Ern Yao Chan, and Robby T Tan. Con- volutional neural networks for automated fracture detection and localization on wrist radiographs. Ra- diology: Artiï¬cial Intelligence, 1(1):e180001, 2019.
[228] Eric J Topol. High-performance medicine: the con- vergence of human and artiï¬cial intelligence. Nature medicine, 25(1):44â56, 2019.
[229] Raphael Townshend, Rishi Bedi, Patricia Suriana, and Ron Dror. End-to-end learning on 3d protein structure for interface prediction. In Advances in Neural Information Processing Systems, pages 15616â 15625, 2019.
[230] Vahe Tshitoyan, John Dagdelen, Leigh Weston, Alexander Dunn, Ziqin Rong, Olga Kononova, Kristin A Persson, Gerbrand Ceder, and Anubhav Jain. Unsupervised word embeddings capture latent knowledge from materials science literature. Nature, 571(7763):95â98, 2019.
[231] Kensuke Umehara, Junko Ota, and Takayuki Ishida. Application of super-resolution convolutional neural
network for enhancing image resolution in chest ct. Journal of digital imaging, 31(4):441â450, 2018.
[232] Aaron Van den Oord, Nal Kalchbrenner, Lasse Es- peholt, Oriol Vinyals, Alex Graves, et al. Condi- tional image generation with pixelcnn decoders. In Advances in neural information processing systems, pages 4790â4798, 2016.
[233] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Åukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998â6008, 2017.
[234] Oriol Vinyals and Quoc Le. A neural conversational model. arXiv preprint arXiv:1506.05869, 2015.
[235] Elena Voita, Rico Sennrich, and Ivan Titov. The bottom-up evolution of representations in the trans- former: A study with machine translation and arXiv preprint language modeling objectives. arXiv:1909.01380, 2019.
[236] Christian Wachinger, Martin Reuter, and Tassilo Klein. Deepnat: Deep convolutional neural network for segmenting neuroanatomy. NeuroImage, 170:434â 445, 2018.
[237] Kun Wang, Bite Yang, Guohai Xu, and Xiaofeng He. Medical question retrieval based on siamese neural network and transfer learning method. In International Conference on Database Systems for Advanced Applications, pages 49â64. Springer, 2019.
[238] Nancy XR Wang, Ali Farhadi, Rajesh PN Rao, and Bingni W Brunton. Ajile movement prediction: Mul- timodal deep learning for natural human neural recordings and video. In Thirty-Second AAAI Con- ference on Artiï¬cial Intelligence, 2018.
[239] William Yang Wang and Diyi Yang. Thatâs so an- noying!!!: A lexical and frame-semantic embedding based data augmentation approach to automatic cat- egorization of annoying behaviors using# petpeeve tweets. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2557â2563, 2015.
[240] Xiaolong Wang, Ross Girshick, Abhinav Gupta, and Kaiming He. Non-local neural networks. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7794â7803, 2018.
[241] Zeyu Wang, Klint Qinami, Yannis Karakozis, Kyle Genova, Prem Nair, Kenji Hata, and Olga Rus- sakovsky. Towards fairness in visual recognition: Ef- fective strategies for bias mitigation. arXiv preprint arXiv:1911.11834, 2019.
[242] Ziyu Wang, Victor Bapst, Nicolas Heess, Volodymyr Mnih, Remi Munos, Koray Kavukcuoglu, and Nando de Freitas. Sample eï¬cient actor-critic with experi- ence replay. arXiv preprint arXiv:1611.01224, 2016.
47
[243] Jason W Wei and Kai Zou. Eda: Easy data augmen- tation techniques for boosting performance on text classiï¬cation tasks. arXiv preprint arXiv:1901.11196, 2019.
[244] Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh. Convolutional pose machines. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4724â4732, 2016.
[245] Martin Weigert, Uwe Schmidt, Tobias Boothe, An- dreas Müller, Alexandr Dibrov, Akanksha Jain, Ben- jamin Wilhelm, Deborah Schmidt, Coleman Broad- dus, Siân Culley, et al. Content-aware image restora- tion: pushing the limits of ï¬uorescence microscopy. Nature methods, 15(12):1090, 2018.
[246] David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. Structured training for neural net- arXiv preprint work transition-based parsing. arXiv:1506.06158, 2015.
[247] Julia K Winkler, Christine Fink, Ferdinand Toberer, Alexander Enk, Teresa Deinlein, Rainer Hofmann- Wellenhof, Luc Thomas, Aimilios Lallas, Andreas Blum, Wilhelm Stolz, et al. Association between surgical skin markings in dermoscopic images and diagnostic performance of a deep learning convo- lutional neural network for melanoma recognition. JAMA dermatology, 155(10):1135â1141, 2019.
[248] Yuxin Wu, Alexander Kirillov, Francisco Massa, Detec- https://github.com/facebookresearch/ Wan-Yen Lo, tron2. detectron2, 2019. and Ross Girshick.
[249] Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecular machine learning. Chemi- cal science, 9(2):513â530, 2018.
[250] Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S Yu. A com- prehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596, 2019.
[251] Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Unsupervised data augmen- tation. arXiv preprint arXiv:1904.12848, 2019.
[252] Qizhe Xie, Eduard Hovy, Minh-Thang Luong, and Quoc V Le. Self-training with noisy student improves imagenet classiï¬cation. arXiv preprint arXiv:1911.04252, 2019.
[253] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. Aggregated residual transfor- mations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1492â1500, 2017.
[254] Jun Xu, Xiaofei Luo, Guanhao Wang, Hannah Gilmore, and Anant Madabhushi. A deep convolu- tional neural network for segmenting and classifying epithelial and stromal regions in histopathological images. Neurocomputing, 191:214â223, 2016. [255] Xueting Yan, Ishan Misra, Abhinav Gupta, Deepti Ghadiyaram, and Dhruv Mahajan. Clusterï¬t: Im- proving generalization of visual representations. arXiv preprint arXiv:1912.03330, 2019.
[256] Yilong Yang, Zhuyifan Ye, Yan Su, Qianqian Zhao, Xiaoshan Li, and Defang Ouyang. Deep learning for in vitro prediction of pharmaceutical formulations. Acta pharmaceutica sinica B, 9(1):177â185, 2019.
[257] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.
[258] Koichiro Yasaka, Hiroyuki Akai, Osamu Abe, and Shigeru Kiryu. Deep learning with convolutional neural network for diï¬erentiation of liver masses at dynamic contrast-enhanced ct: a preliminary study. Radiology, 286(3):887â896, 2017.
[259] Jason Yosinski, Jeï¬ Clune, Anh Nguyen, Thomas Fuchs, and Hod Lipson. Understanding neural net- works through deep visualization. arXiv preprint arXiv:1506.06579, 2015.
[260] Yuhui Yuan, Xilin Chen, and Jingdong Wang. Object-contextual representations for semantic seg- mentation. arXiv preprint arXiv:1909.11065, 2019. [261] Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong clas- siï¬ers with localizable features. In Proceedings of the IEEE International Conference on Computer Vision, pages 6023â6032, 2019.
[262] Manzil Zaheer, Satwik Kottur, Siamak Ravan- bakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. In Advances in neural information processing systems, pages 3391â 3401, 2017.
[263] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Euro- pean conference on computer vision, pages 818â833. Springer, 2014.
[264] Daojian Zeng, Kang Liu, Yubo Chen, and Jun Zhao. Distant supervision for relation extraction via piece- wise convolutional neural networks. In Proceedings of the 2015 Conference on Empirical Methods in Nat- ural Language Processing, pages 1753â1762, 2015.
[265] Xiaohua Zhai, Avital Oliver, Kolesnikov, supervised semi-supervised learning. preprint arXiv:1905.03670, 2019. and Lucas Beyer. Alexander Self- arXiv S4l:
48
Joan Puigcerver, Alexander Kolesnikov, Pierre Ruyssen, Carlos Riquelme, Mario Lucic, Josip Djolonga, Andre Susano Pinto, Maxim Neumann, Alexey Dosovitskiy, et al. The visual task adaptation benchmark. arXiv preprint arXiv:1910.04867, 2019.
[267] Aston Zhang, Zachary C Lipton, Mu Li, and Alexan- der J Smola. Dive into deep learning. Unpublished draft. Retrieved, 3:319, 2019.
[268] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
[269] Junkang Zhang, Haigen Hu, Shengyong Chen, Yu- jiao Huang, and Qiu Guan. Cancer cells detection in phase-contrast microscopy images based on faster r-cnn. In 2016 9th International Symposium on Computational Intelligence and Design (ISCID), vol- ume 1, pages 363â367. IEEE, 2016.
[270] Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, and Yun Fu. Residual dense network for image super- resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2472â2481, 2018.
[271] Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natu- ral language using reinforcement learning. arXiv preprint arXiv:1709.00103, 2017.
[272] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detec- tors emerge in deep scene cnns. arXiv preprint arXiv:1412.6856, 2014.
[273] Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, and Tiejun Zhao. Neural document summarization by jointly learning to score and select sentences. arXiv preprint arXiv:1807.02305, 2018.
[274] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image trans- lation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223â2232, 2017.
[275] Luisa M Zintgraf, Taco S Cohen, Tameem Adel, and Max Welling. Visualizing deep neural network deci- sions: Prediction diï¬erence analysis. arXiv preprint arXiv:1702.04595, 2017. | {
"id": "2002.05709"
} |
2003.11539 | Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need? | The focus of recent meta-learning research has been on the development of
learning algorithms that can quickly adapt to test time tasks with limited data
and low computational cost. Few-shot learning is widely used as one of the
standard benchmarks in meta-learning. In this work, we show that a simple
baseline: learning a supervised or self-supervised representation on the
meta-training set, followed by training a linear classifier on top of this
representation, outperforms state-of-the-art few-shot learning methods. An
additional boost can be achieved through the use of self-distillation. This
demonstrates that using a good learned embedding model can be more effective
than sophisticated meta-learning algorithms. We believe that our findings
motivate a rethinking of few-shot image classification benchmarks and the
associated role of meta-learning algorithms. Code is available at:
http://github.com/WangYueFt/rfs/. | http://arxiv.org/pdf/2003.11539 | Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B. Tenenbaum, Phillip Isola | cs.CV, cs.LG | First two authors contributed equally. Project Page:
https://people.csail.mit.edu/yuewang/projects/rfs/ Code:
http://github.com/WangYueFt/rfs/ | null | cs.CV | 20200325 | 20200617 | 0 2 0 2
n u J 7 1 ] V C . s c [
2 v 9 3 5 1 1 . 3 0 0 2 : v i X r a
# Rethinking Few-Shot Image Classiï¬cation: a Good Embedding Is All You Need?
# Yonglong Tian1* Yue Wang1* Dilip Krishnan2
# Joshua B. Tenenbaum1 1MIT CSAIL 2Google Research
# Phillip Isola1
# Abstract
The focus of recent meta-learning research has been on the development of learning algorithms that can quickly adapt to test time tasks with limited data and low compu- tational cost. Few-shot learning is widely used as one of the standard benchmarks in meta-learning. In this work, we show that a simple baseline: learning a supervised or self- supervised representation on the meta-training set, followed by training a linear classiï¬er on top of this representation, outperforms state-of-the-art few-shot learning methods. An additional boost can be achieved through the use of self- distillation. This demonstrates that using a good learned embedding model can be more effective than sophisticated meta-learning algorithms. We believe that our ï¬ndings mo- tivate a rethinking of few-shot image classiï¬cation bench- marks and the associated role of meta-learning algorithms. Code is available at: http://github.com/WangYueFt/ rfs/.
based methods aim to ï¬nd good metrics (usually kernel functions) to side-step the need for inner-loop optimization for each task.
Meta-learning is evaluated on a number of domains such as few-shot classiï¬cation and meta-reinforcement learning. Focusing on few-shot classiï¬cation tasks, a question that has been raised in recent work is whether it is the meta- learning algorithm or the learned representation that is re- sponsible for the fast adaption to test time tasks. [39] sug- gested that feature reuse is main factor for fast adaptation. Recently, [9] proposed transductive ï¬ne-tuning as a strong baseline for few-shot classiï¬cation; and even in a regu- lar, inductive, few-shot setup, they showed that ï¬ne-tuning is only slightly worse than state-of-the-art algorithms. In this setting, they ï¬ne-tuned the network on the meta-testing set and used information from the testing data. Besides, [5] shows an improved ï¬ne-tuning model performs slightly worse than meta-learning algorithms.
# 1. Introduction
Few-shot learning measures a modelâs ability to quickly adapt to new environments and tasks. This is a challeng- ing problem because only limited data is available to adapt the model. Recently, signiï¬cant advances [56, 55, 53, 12, 46, 48, 57, 34, 44, 62, 26, 28] have been made to tackle this problem using the ideas of meta-learning or âlearning to learnâ.
Meta-learning deï¬nes a family of tasks, divided into dis- joint meta-training and meta-testing sets. Each task con- sists of limited training data, which requires fast adapt- ability [45] of the learner (e.g., the deep network that is ï¬ne-tuned). During meta-training/testing, the learner is trained and evaluated on a task sampled from the task dis- tribution. The performance of the learner is evaluated by the average test accuracy across many meta-testing tasks. Methods to tackle this problem can be cast into two main categories: optimization-based methods and metric-based methods. Optimization-based methods focus on designing algorithms that can quickly adapt to each task; while metric-
In this paper, we propose an extremely simple baseline that suggests that good learned representations are more powerful for few-shot classiï¬cation tasks than the current crop of complicated meta-learning algorithms. Our baseline consists of a linear model learned on top of a pre-trained embedding. Surprisingly, we ï¬nd this outperforms all other meta-learning algorithms on few-shot classiï¬cation tasks, often by large margins. The differences between our ap- proach and that of [9] are: we do not utilize information from testing data (since we believe that inductive learning is more generally applicable to few-shot learning); and we use a ï¬xed neural network for feature extraction, rather than ï¬ne-tuning it on the meta-testing set. The ï¬ndings in con- current works [6, 20] are inline with our simple baseline.
Our model learns representations by training a neural network on the entire meta-training set: we merge all meta- training data into a single task and a neural network is asked to perform either ordinary classiï¬cation or self-supervised learning, on this combined dataset. The classiï¬cation task is equivalent to the pre-training phase of TADAM [34] and LEO [44]. After training, we keep the pre-trained network up to the penultimate layer and use it as a feature extractor. During meta-testing, for each task, we ï¬t a linear classiï¬er on the features extracted by the pre-trained network. In con- trast to [9] and [39], we do not ï¬ne-tune the neural network.
*: equal contribution.
1
Furthermore, we show that self-distillation on this base- line provides an additional boost. Self-distillation is a form of knowledge distillation [18], where the student and teacher models are identical in architecture and task. We apply self-distillation to the pre-trained network.
Contributions. Our key contributions are:
⢠A surprisingly simple baseline for few-shot learning, which achieves the state-of-the-art. This baseline sug- gests that many recent meta-learning algorithms are no better than simply learning a good representation through a proxy task, e.g., image classiï¬cation.
⢠Building upon the simple baseline, we use self- distillation to further improve performance.
⢠Our combined method achieves an average of 3% im- provement over the previous state-of-the-art on widely used benchmarks. On the new benchmark Meta- Dataset [54], our method outperforms previous best re- sults by more than 7% on average.
⢠Beyond supervised training, we show that represen- tations learned with state-of-the-art self-supervised methods achieve similar performance as fully super- vised methods. Thus we can âlearn to learnâ simply by learning a good self-supervised embedding.
# 2. Related works
Metric-based meta-learning. The core idea in metric- based meta-learning is related to nearest neighbor algo- rithms and kernel density estimation. Metric-based meth- ods embed input data into ï¬xed dimensional vectors and use them to design proper kernel functions. The predicted label of a query is the weighted sum of labels over sup- port samples. Metric-based meta-learning aims to learn a task-dependent metric. [22] used Siamese network to en- code image pairs and predict conï¬dence scores for each pair. Matching Networks [55] employed two networks for query samples and support samples respectively and used an LSTM with read-attention to encode a full context em- bedding of support samples. Prototypical Networks [46] learned to encode query samples and support samples into a shared embedding space; the metric used to classify query samples is the distance to prototype representations of each class. Instead of using distances of embeddings, Relation Networks [48] leveraged relational module to represent an appropriate metric. TADAM [34] proposed metric scaling and metric task conditioning to boost the performance of Prototypical Networks.
Optimization-based meta-learning. Deep learning models are neither designed to train with very few examples nor to converge very fast. To ï¬x that, optimization-based
methods intend to learn with a few examples. Meta-learner [40] exploited an LSTM to satisfy two main desiderata of few-shot learning: quick acquisition of task-dependent knowledge and slow extraction of transferable knowledge. MAML [12] proposed a general optimization algorithm; it aims to ï¬nd a set of model parameters, such that a small number of gradient steps with a small amount of training data from a new task will produce large improvements on that task. In that paper, ï¬rst-order MAML was also proposed, which ignored the second-order derivatives of MAML. It achieved comparable results to complete MAML with orders of magnitude speedup. To further simplify MAML, Reptile [33] removed re-initialization for each task, making it a more natural choice in certain settings. LEO [44] proposed that it is beneï¬cial to decouple the optimization-based meta-learning algorithms from high- dimensional model parameters. In particular, it learned a stochastic latent space from which the high-dimensional parameters can be generated. MetaOptNet [26] replaced the linear predictor with an SVM in the MAML framework; it incorporated a differentiable quadratic programming (QP) solver to allow end-to-end learning. For a complete list of recent works on meta-learning, we refer readers to [59].
Towards understanding MAML. To understand why MAML works in the ï¬rst place, many efforts have been made either through an optimization perspective or a gen- eralization perspective. Reptile [33] showed a variant of MAML works even without re-initialization for each task, because it tends to converge towards a solution that is close to each taskâs manifold of optimal solutions. In [39], the au- thors analyzed whether the effectiveness of MAML is due to rapid learning of each task or reusing the high quality It concluded that feature reuse is the dominant features. component in MAMLs efï¬cacy, which is reafï¬rmed by ex- periments conducted in this paper.
Meta-learning datasets. Over the past several years, many datasets have been proposed to test meta-learning or few-shot learning algorithms. Omniglot [24] was one of the earliest few-shot learning datasets; it contains thousands of handwritten characters from the worldâs alphabets, intended for one-shot âvisual Turing testâ. In [25], the authors re- ported the 3-year progress for the Omniglot challenge, con- cluding that human-level one-shot learnability is still hard for current meta-learning algorithms. [55] introduced mini- In [42], a ImageNet, which is a subset of ImageNet [8]. large portion of ImageNet was used for few-shot learning tests. Meta-dataset [54] summarized recent datasets and tested several representative methods in a uniform fashion.
Knowledge distillation. The idea of knowledge distilla- tion (KD) dates back to [4]. The original idea was to com-
press the knowledge contained in an ensemble of models into a single smaller model. In [18], the authors generalized this idea and brought it into the deep learning framework. In KD, knowledge is transferred from the teacher model to the student model by minimizing a loss in which the tar- get is the distribution of class probabilities induced by the teacher model. In was shown in [63] that KD has several beneï¬ts for optimization and knowledge transfer between tasks. BAN [13] introduced sequential distillation, which also improved the performance of teacher models. In natu- ral language processing (NLP), BAM [7] used BAN to dis- till from single-task models to a multi-task model, helping the multi-task model surpass its single-task teachers. An- other two related works are [30] which provides theoretical analysis of self-distillation and CRD [50] which shows dis- tillation improves the transferability across datasets.
# 3. Method
We establish preliminaries about the meta-learning prob- lem and related algorithms in §3.1; then we present our baseline in §3.2; ï¬nally, we introduce how knowledge dis- tillation helps few-shot learning in §3.3. For ease of com- parison to previous work, we use the same notation as [26].
# 3.1. Problem formulation
The collection of meta-training tasks is deï¬ned as T = {(Dtrain i=1, termed as meta-training set. The tu- i ple (Dtrain ) describes a training and a testing dataset i of a task, where each dataset contains a small number of examples. Training examples Dtrain = {(xt, yt)}T t=1 and testing examples Dtest = {(xq, yq)}Q q=1 are sampled from the same distribution.
A base learner A, which is given by yâ = fθ(xâ) (â de- notes t or q), is trained on Dtrain and used as a predictor on Dtest. Due to the high dimensionality of xâ, the base learner A suffers high variance. So training examples and testing examples are mapped into a feature space by an embedding model Φâ = fÏ(xâ). Assume the embedding model is ï¬xed during training the base learner on each task, then the ob- jective of the base learner is
θ = A(Dtrain; Ï) = arg min Lbase(Dtrain; θ, Ï) + R(θ), θ (1)
where L is the loss function and R is the regularization term.
The objective of the meta-learning algorithms is to learn a good embedding model, so that the average test error of the base learner on a distribution of tasks is minimized. For- mally,
Ï = arg min ET [Lmeta(Dtest; θ, Ï)], Ï (2)
Figure 1. In meta-training, we train on an image classiï¬cation task on the merged meta-training data to learn an embedding model. This model is then re-used at meta-testing time to extract embed- ding for a simple linear classiï¬er.
where θ = A(Dtrain; Ï).
Once meta-training is ï¬nished, the performance of the tasks S = j=1, called meta-testing set. The evalu- model {(Dtrain j ation is done over the distribution of the test tasks: is evaluated on a set of held-out , Dtest j )}J
ES [Lmeta(Dtest; θ, Ï), where θ = A(Dtrain; Ï)].
# 3.2. Learning embedding model through classiï¬ca- tion
As we show in §3.1, the goal of meta-training is to learn a transferrable embedding model fÏ, which generalizes to any new task. Rather than designing new meta-learning al- gorithms to learn the embedding model, we propose that a model pre-trained on a classiï¬cation task can generate pow- erful embeddings for the downstream base learner. To that end, we merge tasks from meta-training set into a single task, which is given by
Dnew = {(xi, yi)}K = âª{Dtrain 1 k=1 , . . . , Dtrain i , . . . , Dtrain I }, (4)
where Dtrain then i is the task from T . The embedding model is
Ï = arg min Lce(Dnew; Ï), Ï (5)
and Lce denotes the cross-entropy loss between predictions and ground-truth labels. We visualize the task in Figure 1. ) sam- pled from meta-testing distribution, we train a base learner on Dtrain . The base learner is instantiated as multivariate j logistic regression. Its parameters θ = {W , b} include a weight term W and a bias term b, given by
T 6 = argmin )> £i°(W fa(x:) + bye) + R(W, b). ©) {Wb} 4
We also evaluate other base learners such as nearest neigh- bor classiï¬er with L-2 distance and/or cosine distance in §4.7.
(3)
Feature Extraction [oms) (â] = fo Query @
Figure 2. We show a meta-testing case for 5-way 1-shot task: 5 support images and 1 query image are transformed into embeddings using the ï¬xed neural network; a linear model (logistic regression (LR) in this case) is trained on 5 support embeddings; the query image is tested using the linear model.
distil (â)__ bistill Distil ~â _ dfsD â> eeo â > d feb GEER useeeSEED Janel GER seendTEEAD eD Generation 0 Generation 1 Generation K
Figure 3. Sequential self-distillation: a vanilla model, termed as Generation 0, is trained with standard cross-entropy loss; then, the k-th generation is learned with knowledge distilled from the (k-1)-th generation.
In our method, the crucial difference between meta- training and meta-testing is the embedding model param- eterized by Ï is carried over from meta-training to meta- testing and kept unchanged when evaluated on tasks sam- pled from meta-testing set. The base learner is re-initialized for every task and trained on Dtrain of meta-testing task. Our method is the same with the pre-training phase of meth- ods used in [44, 34]. Unlike other methods [9, 39], we do not ï¬ne-tune the embedding model fÏ during the meta- testing stage.
dicted by fÏ:
¢! = arg min(alâ¢@(D"â¢; 6')+ # (7) BKL(s(D"": 6), F(D"":8))).
where usually β = 1 â α.
We exploit the Born-again [13] strategy to apply KD se- quentially to generate multiple generations, which is shown in Figure 3. At each step, the embedding model of k-th generation is trained with knowledge transferred from the embedding model of (k-1)-th generation:
Ïk = arg min (αLce(Dnew; Ï)+ Ï Î²KL(f (Dnew; Ï), f (Dnew; Ïkâ1))). (8)
Assume we repeat the operation K times, we use ÏK as the embedding model to extract features for meta-testing. We analyze the effects of sequential self-distillation in §4.6.
# 4. Experiments
# 3.3. Sequential self-distillation
Knowledge distillation [18] is an approach to transfer knowledge embedded in an ensemble of models to a sin- gle model, or from a larger teacher model to a smaller stu- dent model. Instead of using the embedding model directly for meta-testing, we distill the knowledge from the embed- ding model into a new model with an identical architecture, training on the same merged meta-training set. The new embedding model parameterized by 墉 is trained to mini- mize a weighted sum of the cross-entropy loss between the predictions and ground-truth labels and the KullbackLeibler divergence (KL) between predictions and soft targets pre-
We conduct experiments on four widely used few-shot image recognition benchmarks: miniImageNet [55], tiered- ImageNet [42], CIFAR-FS [3], and FC100 [34]. The ï¬rst two are derivatives of ImageNet [43], while the last two are reorganized from the standard CIFAR-100 dataset [23, 52]. Additional results on Meta-Dataset [54] is presented in §5.
# 4.1. Setup
Architecture. Following previous works [29, 34, 26, 41, 9], we use a ResNet12 as our backbone: the network con- sists of 4 residual blocks, where each has 3 convolutional layers with 3Ã3 kernel; a 2Ã2 max-pooling layer is applied after each of the ï¬rst 3 blocks; and a global average-pooling layer is on top of the fourth block to generate the feature em- bedding. Similar to [26], we use Dropblock as a regularizer
miniImageNet 5-way tieredImageNet 5-way model backbone 1-shot 5-shot 1-shot 5-shot 32-32-32-32 MAML [12] 64-64-64-64 Matching Networks [55] 64-64-64-64 IMP [2] Prototypical Networksâ [46] 64-64-64-64 64-64-64-64 TAML [21] 64-64-64-64 SAML [15] 64-64-64-64 GCR [27] 64-64-64-64 KTN(Visual) [35] 64-64-64-64 PARN[60] 64-64-128-128 Dynamic Few-shot [14] 64-96-128-256 Relation Networks [48] 96-192-384-512 R2D2 [3] ResNet-12 SNAIL [29] ResNet-12 AdaResNet [32] ResNet-12 TADAM [34] ResNet-12 Shot-Free [41] ResNet-12 TEWAM [37] ResNet-12 MTL [47] ResNet-12 Variational FSL [64] MetaOptNet [26] ResNet-12 Diversity w/ Cooperation [11] ResNet-18 Fine-tuning [9] LEO-trainvalâ [44] 48.70 ± 1.84 43.56 ± 0.84 49.2 ± 0.7 49.42 ± 0.78 51.77 ± 1.86 52.22 ± n/a 53.21 ± 0.80 54.61 ± 0.80 55.22 ± 0.84 56.20 ± 0.86 50.44 ± 0.82 51.2 ± 0.6 55.71 ± 0.99 56.88 ± 0.62 58.50 ± 0.30 59.04 ± n/a 60.07 ± n/a 61.20 ± 1.80 61.23 ± 0.26 62.64 ± 0.61 59.48 ± 0.65 57.73 ± 0.62 61.76 ± 0.08 63.11 ± 0.92 51.67 ± 1.81 55.31 ± 0.73 64.7 ± 0.7 68.20 ± 0.66 53.31 ± 0.89 66.05 ± 0.85 66.49 ± n/a 72.34 ± 0.64 71.21 ± 0.66 71.55 ± 0.66 73.00 ± 0.64 65.32 ± 0.70 54.48 ± 0.93 68.8 ± 0.1 68.88 ± 0.92 71.94 ± 0.57 76.70 ± 0.30 77.64 ± n/a 75.90 ± n/a 75.50 ± 0.80 77.69 ± 0.17 78.63 ± 0.46 65.99 ± 0.72 75.62 ± 0.48 78.17 ± 0.49 66.58 ± 0.70 77.59 ± 0.12 66.33 ± 0.05 - - - - - - - - - - - - 63.52 ± n/a - - - - 70.30 ± 1.75 - - 72.69 ± 0.74 - - - - - - 71.32 ± 0.78 - - - - 82.59 ± n/a - - - 81.56 ± 0.53 - 85.55 ± 0.48 81.44 ± 0.09 WRN-28-10 WRN-28-10 Ours-simple Ours-distill ResNet-12 ResNet-12 62.02 ± 0.63 64.82 ± 0.60 79.64 ± 0.44 69.74 ± 0.72 82.14 ± 0.43 71.52 ± 0.69 84.41 ± 0.55 86.03 ± 0.49
Table 1. Comparison to prior work on miniImageNet and tieredImageNet. Average few-shot classiï¬cation accuracies (%) with 95% conï¬dence intervals on miniImageNet and tieredImageNet meta-test splits. Results reported with input image size of 84x84. a-b-c-d denotes a 4-layer convolutional network with a, b, c, and d ï¬lters in each layer. â results obtained by training on the union of training and validation sets.
and change the number of ï¬lters from (64,128,256,512) to (64,160,320,640). As a result, our ResNet12 is identical to that used in [41, 26] .
Optimization setup. We use SGD optimizer with a mo- mentum of 0.9 and a weight decay of 5eâ4. Each batch consists of 64 samples. The learning rate is initialized as 0.05 and decayed with a factor of 0.1 by three times for all datasets, except for miniImageNet where we only decay twice as the third decay has no effect. We train 100 epochs for miniImageNet, 60 epochs for tieredImageNet, and 90 epochs for both CIFAR-FS and FC100. During distillation, we use the same learning schedule and set α = β = 0.5.
Data augmentation. When training the embedding net- work on transformed meta-training set, we adopt random crop, color jittering, and random horizontal ï¬ip as in [26]. For meta-testing stage, we train an N -way logistic regres- sion base classiï¬er. We use the implementations in scikit- learn [1] for the base classiï¬er.
# 4.2. Results on ImageNet derivatives
The miniImageNet dataset [55] is a standard bench- mark for few-shot learning algorithms for recent works. It consists of 100 classes randomly sampled from the Ima- geNet; each class contains 600 downsampled images of size 84x84. We follow the widely-used splitting protocol pro- posed in [40], which uses 64 classes for meta-training, 16 classes for meta-validation, and the remaining 20 classes for meta-testing.
The tieredImageNet dataset [42] is another subset of Im- ageNet but has more classes (608 classes). These classes are ï¬rst grouped into 34 higher-level categories, which are further divided into 20 training categories (351 classes), 6 validation categories (97 classes), and 8 testing categories (160 classes). Such construction ensures the training set is distinctive enough from the testing set and makes the prob- lem more challenging. Results. During meta-testing, we evaluate our method with 3 runs, where in each run the accuracy is the mean accuracy
CIFAR-FS 5-way FC100 5-way model backbone 1-shot 5-shot 1-shot 5-shot 32-32-32-32 MAML [12] 64-64-64-64 Prototypical Networks [46] 64-96-128-256 Relation Networks [48] 96-192-384-512 R2D2 [3] ResNet-12 TADAM [34] ResNet-12 Shot-Free [41] ResNet-12 TEWAM [37] Prototypical Networks [46] ResNet-12 ResNet-12 MetaOptNet [26] 58.9 ± 1.9 55.5 ± 0.7 55.0 ± 1.0 65.3 ± 0.2 - 69.2 ± n/a 70.4 ± n/a 72.2 ± 0.7 72.6 ± 0.7 71.5 ± 1.0 72.0 ± 0.6 35.3 ± 0.6 69.3 ± 0.8 79.4 ± 0.1 - 84.7 ± n/a 81.3 ± n/a 83.5 ± 0.5 37.5 ± 0.6 84.3 ± 0.5 41.1 ± 0.6 - - - 40.1 ± 0.4 - - - 48.6 ± 0.6 - - 56.1 ± 0.4 - - 52.5 ± 0.6 55.5 ± 0.6 Ours-simple Ours-distill ResNet-12 ResNet-12 71.5 ± 0.8 73.9 ± 0.8 86.0 ± 0.5 42.6 ± 0.7 86.9 ± 0.5 44.6 ± 0.7 59.1 ± 0.6 60.9 ± 0.6
Table 2. Comparison to prior work on CIFAR-FS and FC100. Average few-shot classiï¬cation accuracies (%) with 95% conï¬dence intervals on CIFAR-FS and FC100. a-b-c-d denotes a 4-layer convolutional network with a, b, c, and d ï¬lters in each layer.
of 1000 randomly sampled tasks. We report the median of 3 runs in Table 1. Our simple baseline with ResNet-12 is al- ready comparable with the state-of-the-art MetaOptNet [26] on miniImageNet, and outperforms all previous works by at least 3% on tieredImageNet. The network trained with dis- tillation further improves over the simple baseline by 2-3%. We notice that previous works [38, 44, 34, 47] have also leveraged the standard cross-entropy pre-training on the meta-training set. In [34, 44], a wide ResNet (WRN-28- 10) is trained to classify all classes in the meta-training set (or combined meta-training and meta-validation set), and [9] also con- then frozen during the meta-training stage. ducts pre-training but the model is ï¬ne-tuned using the sup- port images in meta-testing set, achieving 57.73 ± 0.62. We adopt the same architecture and gets 61.1 ± 0.86. So ï¬ne-tuning on small set of samples makes the performance worse. Another work [34] adopts a multi-task setting by jointly training on the standard classiï¬cation task and few- shot classiï¬cation (5-way) task. In another work [47], the ResNet-12 is pre-trained before mining hard tasks for the meta-training stage. In this work, we show standard cross- entropy pre-training is sufï¬cient to generate strong embed- dings without meta-learning techniques or any ï¬ne-tuning.
miniImageNet 5-way model backbone 1-shot 5-shot ResNet50 Supervised MoCo [16] ResNet50 ResNet50â CMC [49] 57.56 ± 0.79 54.19 ± 0.93 56.10 ± 0.89 73.81 ± 0.63 73.04 ± 0.61 73.87 ± 0.65
Comparsions of embeddings from supervised pre- Table 3. training and self-supervised pre-training (Moco and CMC). â the encoder of each view is 0.5Ã width of a normal ResNet-50.
ble 2 summarizes the results, which shows that our simple baseline is comparable to Prototypical Networks [46] and MetaOptNet [26] on CIFAR-FS dataset, and outperforms both of them on FC100 dataset. Our distillation version achieves the new state-of-the-art on both datasets. This ver- iï¬es our hypothesis that a good embedding plays an impor- tant role in few-shot recognition.
# 4.4. Embeddings from self-supervised representa- tion learning
# 4.3. Results on CIFAR derivatives
The CIFAR-FS dataset [3] is a derivative of the origi- nal CIFAR-100 dataset by randomly splitting 100 classes into 64, 16 and 20 classes for training, validation, and test- ing, respectively. The FC100 dataset [34] is also derived from CIFAR-100 dataset in a similar way to tieredImagN- net. This results in 60 classes for training, 20 classes for validation, and 20 classes for testing. Results. Similar to previous experiments, we evaluate our method with 3 runs, where in each run the accuracy is the mean accuracy of 3000 randomly sampled tasks. Ta-
Using unsupervised learning [61, 49, 16, 51] to improve the generalization of the meta-learning algorithms [58] re- moves the needs of data annotation. In addition to using embeddings from supervised pre-training, we also train a linear classiï¬er on embeddings from self-supervised repre- sentation learning. Following MoCo [16] and CMC [49] (both are inspired by InstDis [61]), we train a ResNet50 [17] (without using labels) on the merged meta-training set to learn an embedding model. We compare unsupervised ResNet50 to a supervised ResNet50. From Table 3, we ob- serve that using embeddings from self-supervised ResNet50 is only slightly worse than using embeddings from super- the results are com- vised ResNet50 (in 5-shot setting,
minilmageNet _ tieredImageNet CIFAR-FS FC100 NN LR C-2 Aug Distill | 1-shot 5-shot 1-shot 5-shot 1-shot 5-shot 1-shot 5-shot v 56.29 69.96 64.80 78.75 64.36 78.00 38.40 49.12 v 58.74 78.31 67.62 84.77 66.92 84.78 40.36 57.23 v v 61.56 79.27 69.53 85.08 71.24 85.63 42.77 58.86 v v v 62.02 79.64 69.74 85.23 71.45 85.95 42.59 59.13 v v v v 64.82 82.14 71.52 86.03 73.89 86.93 44.57 60.91
Table 4. Ablation study on four benchmarks with ResNet-12 as backbone network. âNNâ and âLRâ stand for nearest neighbour classiï¬er and logistic regression. âL-2â means feature normalization after which feature embeddings are on the unit sphere. âAugâ indicates that each support image is augmented into 5 samples to train the classiï¬er. âDistillâ represents the use of knowledge distillation.
CIFAR-FS
# minilmageNet
â S © ea 7 id > B 62 72 cS âeâ LR+Norm. g âe LR+Norm. fe} gy 60 âe= NN+Norm. 3 70 â NN+Norm. < 55 IR < â= IR ' 9 == NN 68 == NN 2 56 £ ny ? 66 +o O54 = @ a = uw 64 0 1 2 3 4 5 6 7 8 0 1 2 3 4 5 6 7 8 Number of Generation Number of Generation ES â8 S 8 SS âââââââ ae 2 80.0 a 86 Oo ¢ e 375 5 84 Q g 275° âe= LR+Norm. <u âe LR+Norm. 3 5 âe= NN+Norm. 3 = NN+Norm. ing == LR G 80 == LR 8 70.0 == NN FA == NN = = if 78 0 1 2 3 4 5 6 7 8 Number of Generation 0 1 2 3 4 5 6 7 8 Number of Generation
~
# S
7
~
# = yn wm
Figure 4. Evaluation on different generations of distilled networks. The 0-th generation (or root generation) indicates the vanilla network trained with only standard classiï¬cation cross-entropy loss. The k-th generation is trained by combining the standard classiï¬cation loss and the knowledge distillation (KD) loss using the (k-1)-th generation as the teacher model. Logistic regression (LR) and nearest neighbours (NN) are evaluated.
parable). This observation shows the potential of self- supervised learning in the scenario of few-shot learning.
# 4.5. Ablation experiments
In this section, we conduct ablation studies to analyze how each component affects the few-shot recognition per- formance. We study the following ï¬ve components of our method: (a) we chose logistic regression as our base learner, and compare it to a nearest neighbour classiï¬er with eu- clidean distance; (b) we ï¬nd that normalizing the feature vectors onto the unit sphere, e.g., L-2 normalization, could improve the classiï¬cation of the downstream base classi- ï¬er; (c) during meta-testing, we create 5 augmented sam- ples from each support image to alleviate the data insufï¬- ciency problem, and using these augmented samples to train the linear classiï¬er; (d) we distill the embedding network on the training set by following the sequential distillation [13] strategy.
Table 4 shows the results of our ablation studies on mini- ImageNet, tieredImageNet, CIFAR-FS, and FC100. In gen- eral, logistic regression signiï¬cantly outperforms the near- est neighbour classiï¬er, especially for the 5-shot case; L-2 normalization consistently improves the 1-shot accuracy by 2% on all datasets; augmenting the support images leads to marginal improvement; even with all these techniques, dis- tillation can still provide 2% extra gain.
# 4.6. Effects of distillation
We can use sequential self-distillation to get an embed- ding model, similar to the one in Born-again networks [13]. We therefore investigate the effect of this strategy on the performance of downstream few-shot classiï¬cation.
In addition to logistic regression and nearest-neighbour classiï¬ers, we also look into a cosine similarity classiï¬er, which is equivalent to the nearest-neighbour classiï¬er but with normalized features (noted as âNN+Norm.â). The
miniImageNet 5-way tieredImageNet 5-way model backbone 1-shot 5-shot 1-shot 5-shot Ours Ours-distill Ours-trainval Ours-distill-trainval 64-64-64-64 64-64-64-64 64-64-64-64 64-64-64-64 55.25 ± 0.58 55.88 ± 0.59 56.32 ± 0.58 56.64 ± 0.58 71.56 ± 0.52 56.18 ± 0.70 71.65 ± 0.51 56.76 ± 0.68 72.46 ± 0.52 56.53 ± 0.68 72.85 ± 0.50 57.35 ± 0.70 72.99 ± 0.55 73.21 ± 0.54 73.15 ± 0.58 73.98 ± 0.56 ResNet-12 Ours ResNet-12 Ours-distill Ours-trainval ResNet-12 Ours-distill-trainval ResNet-12 62.02 ± 0.63 64.82 ± 0.60 63.59 ± 0.61 66.58 ± 0.65 79.64 ± 0.44 69.74 ± 0.72 82.14 ± 0.43 71.52 ± 0.69 80.86 ± 0.47 71.12 ± 0.68 83.22 ± 0.39 72.98 ± 0.71 84.41 ± 0.55 86.03 ± 0.49 85.94 ± 0.46 87.46 ± 0.44 Ours Ours-distill Ours-trainval Ours-distill-trainval SEResNet-12 SEResNet-12 SEResNet-12 SEResNet-12 62.29 ± 0.60 65.96 ± 0.63 64.07 ± 0.61 67.73 ± 0.63 79.94 ± 0.46 70.31 ± 0.70 82.05 ± 0.46 71.72 ± 0.69 80.92 ± 0.43 71.76 ± 0.66 83.35 ± 0.41 72.55 ± 0.69 85.22 ± 0.50 86.54 ± 0.49 86.27 ± 0.45 86.72 ± 0.49
Table 5. Comparisons of different backbones on miniImageNet and tieredImageNet.
CIFAR-FS 5-way FC100 5-way model backbone 1-shot 5-shot 1-shot 5-shot Ours Ours-distill Ours-trainval Ours-distill-trainval 64-64-64-64 64-64-64-64 64-64-64-64 64-64-64-64 62.7 ± 0.8 63.8 ± 0.8 63.5 ± 0.8 64.9 ± 0.8 78.7 ± 0.5 39.6 ± 0.6 79.5 ± 0.5 40.3 ± 0.6 79.8 ± 0.5 43.2 ± 0.6 80.3 ± 0.5 44.6 ± 0.6 53.5 ± 0.5 54.1 ± 0.5 58.5 ± 0.5 59.2 ± 0.5 Ours Ours-distill Ours-trainval Ours-distill-trainval ResNet-12 ResNet-12 ResNet-12 ResNet-12 71.5 ± 0.8 73.9 ± 0.8 73.1 ± 0.8 75.4 ± 0.8 86.0 ± 0.5 42.6 ± 0.7 86.9 ± 0.5 44.6 ± 0.7 86.7 ± 0.5 49.5 ± 0.7 88.2 ± 0.5 51.6 ± 0.7 59.1 ± 0.6 60.9 ± 0.6 66.4 ± 0.6 68.4 ± 0.6 Ours Ours-distill Ours-trainval Ours-distill-trainval SEResNet-12 SEResNet-12 SEResNet-12 SEResNet-12 72.0 ± 0.8 74.2 ± 0.8 73.3 ± 0.8 75.6 ± 0.8 86.0 ± 0.6 43.4 ± 0.6 87.2 ± 0.5 44.9 ± 0.6 86.8 ± 0.5 49.9 ± 0.7 88.2 ± 0.5 52.0 ± 0.7 59.1 ± 0.6 61.4 ± 0.6 66.8 ± 0.6 68.8 ± 0.6
Table 6. Comparisons of different backbones on CIFAR-FS and FC100.
plots of 1-shot and 5-shot results on miniImageNet and CIFAR-FS are shown in Figure 4. The 0-th generation (or root generation) refers to the vanilla model trained with only standard cross-entropy loss, and the (k-1)-th generation is distilled into k-th generation. In general, few-shot recog- nition performance keeps getting better in the ï¬rst two or three generations. After certain number of generations, the accuracy starts decreasing for logistic regression and near- est neighbour. Normalizing the features can signiï¬cantly alleviate this problem.
In Table 1, Table 2, and Table 4, we evalute the model of the second generation on miniImageNet, CIFAR-FS and
FC100 datasets; we use the ï¬rst generation on tieredIma- geNet. Model selection is done on the validation set.
# 4.7. Choice of base classiï¬er
One might argue in the 1-shot case, that a linear classiï¬er should behavior similarly to a nearest-neighbour classiï¬er. However in Table 4 and Figure 4, we ï¬nd that logistic re- gression is clearly better than nearest-neighbour. We argue that this is casued by the scale of the features. After we normalize the features by the L-2 norm, logistic regression (âLR+Normâ) performs similarly to the nearest neighbour classiï¬er (âNN+Norm.â), as shown in the ï¬rst row of Fig-
ure 4. However, when increasing the size of the support set to 5, logistic regression is signiï¬cantly better than nearest- neighbour even after feature normalization
# 4.8. Comparsions of different network backbones.
Better backbone networks generally produce better re- sults; this is also obvious in few-shot learning and/or meta- learning (as shown in Table 1). To further verify our assumption that the key success of few-shot learning al- gorithms is due to the quality of embeddings, we com- pare three alternatives in Table 5 and Table 6: a Con- vNet with four four convolutional layers (64, 64, 64, 64); a ResNet12 as in Table 1; a ResNet12 with sequeeze-and- excitation [19] modules. For each model, we have four set- tings: training on meta-training set; training and distilling on meta-training set; training on meta-training set and meta- validation set; training and distilling on meta-training set and meta-validation set. The results consistently improve with more data and better networks. This is inline with our hypothesis: embeddings are the most critical factor to the performance of few-shot learning/meta learning algorithms; better embeddings will lead to better few-shot testing per- formance (even with a simple linear classier). In addition, our ConvNet model also outperforms other few-shot learn- ing and/or meta learning models using the same network. This veriï¬es that in both small model regime (ConvNet) and large model regime (ResNet), few-shot learning and meta learning algorithms are no better than learning a good em- bedding model.
# 4.9. Multi-task vs multi-way classiï¬cation?
We are interested in understanding whether the efï¬- cacy of our simple baseline is due to multi-task or multi- way classiï¬cation. We compare to training an embedding model through multi-task learning: a model with shared em- bedding network and different classiï¬cation heads is con- structed, where each head is only classifying the corre- sponding category; then we use the embedding model to extract features as we do with our baseline model. This achieves 58.53 ± 0.8 on mini-ImageNet 5-way 1-shot case, compared to our baseline model which is 62.02 ± 0.63. So we argue that the speciality of our setting, where the few- shot classiï¬cation tasks are mutually exclusive and can be merged together into a single multi-way classiï¬cation task, makes the simple model effective.
# 5. Results on Meta-Dataset
Meta-Dataset [54] is a new benchmark for evaluating few-shot methods in large-scale settings. Compared to miniImageNet and tieredImageNet, Meta-Dataset provides more diverse and realistic samples.
Setup. The ILSVRC (ImageNet) subset consists of 712 classes for training, 158 classes for validation, and
130 classes for testing. We follow the setting in Meta- Dateset [54] where the embedding model is trained solely on the ILSVRC training split. We use ResNet-18 [17] as the backbone network. The input size is 128Ã128. In the pre-training stage, we use SGD optimizer with a momen- tum of 0.9. The learning rate is initially 0.1 and decayed by a factor of 10 for every 30 epochs. We train the model for 90 epochs in total. The batch size is 256. We use stan- dard data augmentation, including randomly resized crop and horizontal ï¬ip. In the distillation stage, we set α = 0.5 and β = 1.0. We perform distillation twice and use the model from the second generation for meta-testing. We do not use test-time augmentation in meta-testing. In addition to logistic regression (LR), we also provide results of linear SVM for completeness.
We select the best results from [54] for comparison â for each testing subset, we pick the best accuracy over 7 methods and 3 different architectures including 4-layer ConvNet, Wide ResNet, and ResNet-18. As shown in Ta- ble 7, our simple baselines clearly outperform the best re- sults from [54] on 9 out of 10 testing datasets, often by a large margin. Our baseline method using LR outperforms previous best results by more than 7% on average. Also, self-distillation improves max(LR, SVM) in 7 out of the 10 testing subsets. Moreover, we notice empirically that lo- gistic regression (LR) performs better than linear SVM.
# 6. Discussion
We have proposed a simple baseline for few-shot image classiï¬cation in the meta-learning context. This approach has been underappreciated in the literature thus far. We show with numerous experiments that uch a simple baseline outperforms the current state-of-the-arts on four widely- used few-shot benchmarks. Combined with self-distillation, the performance further improves by 2-3%. Even when meta-training labels are unavailable, it may be possible to leverage state of the art self-supervised learning approaches to learn very good embeddings for meta-testing tasks. 1. What is the intuition of this paper? A: We hope this paper will shed new light on few-shot clas- siï¬cation. We believe representations play an important role. Shown by our empirical experiments, a linear model can generalize well as long as a good representation of the data is given. 2. Why does this simple baseline work? Is there anything that makes few-shot classiï¬cation special? A: Few-shot classiï¬cation is a special case of meta-learning in terms of compositionality of tasks. Each task is an K- way classiï¬cation problem, and on current benchmarks the classes, even between tasks, are all mutually exclusive. This means we can merge all N of the K-way classiï¬cation tasks into a single but harder N K-way classiï¬cation task. Our ï¬nding is that training an embedding model on this new
Trained on ILSVRC train split
Dataset Best from [54] LR (ours) SVM (ours) LR-distill (ours) SVM-distill (ours) ILSVRC Omniglot Aircraft Birds Textures Quick Draw Fungi VGG Flower Trafï¬c Signs MSCOCO 50.50 63.37 68.69 68.66 69.05 51.52 39.96 87.15 66.79 43.74 60.14 64.92 63.12 77.69 78.59 62.48 47.12 91.60 77.51 57.00 56.48 65.90 61.43 74.61 74.25 59.34 41.76 90.32 78.94 50.81 61.48 64.31 62.32 79.47 79.28 60.83 48.53 91.00 76.33 59.28 58.33 66.77 64.23 76.63 76.66 59.02 44.51 89.66 78.64 54.10 Mean Accuracy 60.94 68.02 65.38 68.28 66.86
Table 7. Results on Meta-Dataset. Average accuracy (%) is reported with variable number of ways and shots, following the setup in [54]. We compare four variants of out method (LR, SVM, LR-distill, and SVM-distill) to the best accuracy over 7 methods in [54]. In each episode, 1000 tasks are sampled for evaluation.
N K-way task turns out to transfer well to meta-testing set. On the other hand, we also ï¬nd that self-supervised embed- ding, which does not explicitly require this N K composi- tionality, achieves a similar level of performance. A con- current work [10] studies the representations for few-shot learning from the theoretical point of view. 3. Does your work negate recent progress in meta-learning? A: No. Meta-learning is much broader than just few- shot classiï¬cation. Although we show a simple baseline outperforms other complicated meta-learning algorithms in few-shot classiï¬cation, methods like MAML may still be favorable in other meta-learning domains (e.g., meta- reinforcement learning). 4. Why does distillation work? What does it suggest? A: The soft-labels [18] from the teacher model depict the fact that some classes are closer to each other than other classes. For example, a white goat is much more similar to a brown horse than to an airplane. But the one-hot label does not capture this. After being regularized by soft-labels, the network learns to capture the metric distance. From the- oretical perspective, [36] provides analysis for linear case. Ongoing work [31] argues distillation ampliï¬es regulariza- tion in Hilbert space.
[4] Cristian BuciluËa, Rich Caruana, and Alexandru Niculescu- Mizil. Model compression. In SIGKDD, 2006. 2
[5] Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Wang, and Jia-Bin Huang. A closer look at few-shot classiï¬cation. In ICLR, 2019. 1
[6] Yinbo Chen, Xiaolong Wang, Zhuang Liu, Huijuan Xu, and Trevor Darrell. A new meta-baseline for few-shot learning. ArXiv, abs/2003.04390, 2020. 1
[7] Kevin Clark, Minh-Thang Luong, Christopher D. Manning, and Quoc V. Le. Bam! born-again multi-task networks for natural language understanding. In ACL, 2019. 3
[8] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009. 2
[9] Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichan- dran, and Stefano Soatto. A baseline for few-shot image clas- siï¬cation. In ICLR, 2020. 1, 4, 5, 6
[10] Simon Shaolei Du, Wei Hu, Sham M. Kakade, Jason D. Lee, and Qi Lei. Few-shot learning via learning the representa- tion, provably. ArXiv, abs/2002.09434, 2020. 10
[11] Nikita Dvornik, Cordelia Schmid, and Julien Mairal. Diver- sity with cooperation: Ensemble methods for few-shot clas- siï¬cation. In ICCV, 2019. 5
# References
[1] Machine learning in python. https://scikit-learn. org/stable/. 5
[12] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model- agnostic meta-learning for fast adaptation of deep networks. In ICML, 2017. 1, 2, 5, 6
[13] Tommaso Furlanello, Zachary Chase Lipton, Michael Tschannen, Laurent Itti, and Anima Anandkumar. Born- again neural networks. In ICML, 2018. 3, 4, 7
[2] Kelsey Allen, Evan Shelhamer, Hanul Shin, and Joshua Tenenbaum. Inï¬nite mixture prototypes for few-shot learn- ing. In ICML, 2019. 5
[3] Luca Bertinetto, Joao F Henriques, Philip HS Torr, and An- drea Vedaldi. Meta-learning with differentiable closed-form solvers. arXiv preprint arXiv:1805.08136, 2018. 4, 5, 6
[14] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In CVPR, 2018. 5
Jun Cheng, Lei Wang, Jianzhong Cao, and Dacheng Tao. Collect and select: Se- mantic alignment metric learning for few-shot learning. In ICCV, 2019. 5
[16] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross B. Girshick. Momentum contrast for unsupervised vi- sual representation learning. ArXiv, abs/1911.05722, 2019. 6, 13
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CVPR, 2016. 6, 9
[18] Geoffrey Hinton, Oriol Vinyals, and Jeffrey Dean. Distilling the knowledge in a neural network. In NIPS Deep Learning and Representation Learning Workshop, 2015. 2, 3, 4, 10
[19] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation net- works. In CVPR, 2018. 9
[20] Shaoli Huang and Dacheng Tao. All you need is a good representation: A multi-level and classiï¬er-centric represen- tation for few-shot learning. ArXiv, abs/1911.12476, 2019. 1
[21] Muhammad Abdullah Jamal and Guo-Jun Qi. Task agnostic meta-learning for few-shot learning. In CVPR, 2019. 5 [22] Gregory Koch, Richard Zemel, and Ruslan Salakhutdinov. Siamese neural networks for one-shot image recognition. In ICML Deep Learning Workshop, 2015. 2
[23] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 4
[24] Brenden M Lake, Ruslan Salakhutdinov, and Joshua B Tenenbaum. Human-level concept learning through proba- bilistic program induction. Science, 2015. 2
[25] Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. The omniglot challenge: a 3-year progress re- port. Current Opinion in Behavioral Sciences, 2019. 2 [26] Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex op- timization. In CVPR, 2019. 1, 2, 3, 4, 5, 6
[27] Aoxue Li, Tiange Luo, Tao Xiang, Weiran Huang, and Liwei Wang. Few-shot learning with global class representations. In ICCV, 2019. 5
[28] Hongyang Li, David Eigen, Samuel Dodge, Matthew Zeiler, and Xiaogang Wang. Finding task-relevant features for few- shot learning by category traversal. In CVPR, 2019. 1 [29] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter arXiv
Abbeel. A simple neural attentive meta-learner. preprint arXiv:1707.03141, 2017. 4, 5
[30] Hossein Mobahi, Mehrdad Farajtabar, and Peter L Bartlett. Self-distillation ampliï¬es regularization in hilbert space. arXiv preprint arXiv:2002.05715, 2020. 3
[31] Hossein Mobahi, Mehrdad Farajtabar, and Peter L. Bartlett. Self-distillation ampliï¬es regularization in hilbert space. ArXiv, abs/2002.05715, 2020. 10
[32] Tsendsuren Munkhdalai, Xingdi Yuan, Soroush Mehri, and Adam Trischler. Rapid adaptation with conditionally shifted neurons. arXiv preprint arXiv:1712.09926, 2017. 5
[33] Alex Nichol, Joshua Achiam, and John Schulman. On ï¬rst-order meta-learning algorithms. ArXiv, abs/1803.02999, 2018. 2
[34] Boris Oreshkin, Pau Rodr´ıguez L´opez, and Alexandre La- coste. Tadam: Task dependent adaptive metric for improved few-shot learning. In NIPS, 2018. 1, 2, 4, 5, 6
[35] Zhimao Peng, Zechao Li, Junge Zhang, Yan Li, Guo-Jun Qi, and Jinhui Tang. Few-shot image recognition with knowl- edge transfer. In ICCV, 2019. 5
[36] Mary Phuong and Christoph Lampert. Towards understand- ing knowledge distillation. In ICML, 2019. 10
[37] Limeng Qiao, Yemin Shi, Jia Li, Yaowei Wang, Tiejun Huang, and Yonghong Tian. Transductive episodic-wise In ICCV, 2019. 5, adaptive metric for few-shot learning. 6
[38] Siyuan Qiao, Chenxi Liu, Wei Shen, and Alan L. Yuille. Few-shot image recognition by predicting parameters from activations. In CVPR, 2018. 6
[39] Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol towards un- arXiv preprint Vinyals. Rapid learning or feature reuse? derstanding the effectiveness of maml. arXiv:1909.09157, 2019. 1, 2, 4
[40] Sachin Ravi and Hugo Larochelle. Optimization as a model for few-shot learning. In ICLR, 2017. 2, 5
[41] Avinash Ravichandran, Rahul Bhotika, and Stefano Soatto. Few-shot learning with embedded class models and shot-free meta training. In ICCV, 2019. 4, 5, 6
[42] Mengye Ren, Sachin Ravi, Eleni Triantaï¬llou, Jake Snell, Kevin Swersky, Josh B. Tenenbaum, Hugo Larochelle, and Richard S. Zemel. Meta-learning for semi-supervised few- shot classiï¬cation. In ICLR, 2018. 2, 4, 5
[43] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San- jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Chal- lenge. IJCV, 2015. 4
[44] Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Had- sell. Meta-learning with latent embedding optimization. In ICLR, 2019. 1, 2, 4, 5, 6
[45] Tyler Scott, Karl Ridgeway, and Michael C Mozer. Adapted deep embeddings: A synthesis of methods for k-shot induc- tive transfer learning. In NIPS, 2018. 1
[46] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In NIPS, 2017. 1, 2, 5, 6 [47] Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. Meta-transfer learning for few-shot learning. In CVPR, 2019. 5, 6
[48] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Re- lation network for few-shot learning. In CVPR, 2018. 1, 2, 5, 6
[49] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Con- trastive multiview coding. arXiv preprint arXiv:1906.05849, 2019. 6, 13
[50] Yonglong Tian, Dilip Krishnan, Contrastive representation distillation. arXiv:1910.10699, 2019. 3 and Phillip Isola. arXiv preprint
[51] Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, and Phillip Isola. What makes for arXiv preprint good views for contrastive learning? arXiv:2005.10243, 2020. 6
[52] Antonio Torralba, Rob Fergus, and William T. Freeman. 80 million tiny images: A large data set for nonparametric ob- ject and scene recognition. TPAMI, 2008. 4
[53] Eleni Triantaï¬llou, Richard S. Zemel, and Raquel Urtasun. Few-shot learning through an information retrieval lens. In NIPS, 2017. 1
[54] Eleni Triantaï¬llou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al. Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096, 2019. 2, 4, 9, 10
[55] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, koray kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. In NIPS, 2016. 1, 2, 4, 5
[56] Yuxiong Wang and Martial Hebert. Learning to learn: Model regression networks for easy small sample learning. In ECCV, 2016. 1
[57] Yu-Xiong Wang, Ross B. Girshick, Martial Hebert, and Bharath Hariharan. Low-shot learning from imaginary data. CVPR, 2018. 1
[58] Yu-Xiong Wang and Martial Hebert. Learning from small sample sets by combining unsupervised meta-training with cnns. In Advances in Neural Information Processing Systems 29, 2016. 6
[59] Lilian Weng. Meta-learning: Learning to learn fast. lilianweng.github.io/lil-log, 2018. 2
[60] Ziyang Wu, Yuwei Li, Lihua Guo, and Kui Jia. Position-aware relation networks for few-shot learning. ICCV, 2019. 5 Parn: In
[61] Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, 2018. 6, 13
[62] Han-Jia Ye, Hexiang Hu, De-Chuan Zhan, and Fei Sha. learning. Learning embedding adaptation for CoRR, abs/1812.03664, 2018. 1 few-shot
[63] Junho Yim, Donggyu Joo, Jihoon Bae, and Junmo Kim. A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. In CVPR, 2017. 3 [64] Jian Zhang, Chenglong Zhao, Bingbing Ni, Minghao Xu, and Xiaokang Yang. Variational few-shot learning. In ICCV, 2019. 5
A. Architectures
# ResNet-12
SEResNet-12
Image Image 3x3 conv, 64 3x3 conv, 64 â3x3 conv, 64 3x3 conv, 64 3x3 conv, 64 3x3 Ew 64 pool, /2 3x3 conv, 160 ror: 2 Â¥ 3x3 conv, 160 3x3 conv, 160 Vv Â¥ 3x3 conv, 160 3x3 conv, 160 Â¥v 3x3 com 160 pool, /2 â= Por 12 3x3 conv, 320 Â¥ 3x3 conv, 320 Vv 3x3 conv, 320 3x3 conv, 320 Â¥ 3x3 conv, 320 v 3x3 cor 320 ni 3x3 conv, 640 Â¥ 3x3 conv, 640 Â¥v 3x3 eu 640 == stor pool pool, /2 3x3 conv, 640 Â¥ 3x3 conv, 640 Vv 3x3 conv, 640 global pool 5. Nework architectures of Res
Figure 5. Nework architectures of ResNet-12 and SEResNet- 12 used in this paper. The âSE, 4â stands for a Squeeze-and- Excitation layer with reduction parameter of 4. Dotted box will be removed during meta-testing stage.
The architectures of ResNet-12 ans SEResNet-12 is show in Figure 5.
# B. More Training Details
For SEResNet-12, we use the same training setup as ResNet-12 on all four benchmarks, as described in Sec 4.1. For 4-layer convnet, we also the same training setup as ResNet-12 on tieredImageNet, CIFAR-FS, and FC100, For miniImageNet, we train for 240 epochs with learning rate decayed at epochs 150, 180, and 210 with a factor of 0.1.
We found that using the logit layer as feature results in slightly better accuracy (⤠1%) on miniImageNet, so we report this number in Table 5 for miniImageNet.
# C. Unsupervised Learning Details
We adapt the ï¬rst layer of a standard ResNet-50 to take images of size 84 à 84 as input. We only train on the meta- train set of miniImageNet dataset (do not use meta-val set). We follow the training recipe in CMC [49] and MoCo [16] (which also follows InstDis [61]) except for two differences. The ï¬rst one is that we only use 2048 negatives for each positive sample as miniImageNet contains less than 40k im- ages in total. The second difference is that we train for 2000 epochs, with a learning rate initialized as 0.03 and decayed by consine annealing. | {
"id": "2002.05715"
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.